repo | pull_number | instance_id | issue_numbers | base_commit | patch | test_patch | problem_statement | hints_text | created_at
---|---|---|---|---|---|---|---|---|---|
pytorch/pytorch | 5,642 | pytorch__pytorch-5642 | [
"1777"
] | e9fffb5579e570d31a256fde7e387d3d8d40b845 | diff --git a/tools/autograd/gen_python_functions.py b/tools/autograd/gen_python_functions.py
--- a/tools/autograd/gen_python_functions.py
+++ b/tools/autograd/gen_python_functions.py
@@ -17,7 +17,7 @@
'alias', 'contiguous', 'clamp.*', 'is_cuda', 'is_sparse', 'size', 'stride',
'.*_backward', '.*_backward_out', '.*_forward', '.*_forward_out',
'sparse_raw_resize_', '_unsafe_view', 'tensor', 'sparse_coo_tensor',
- '_arange.*', '_range.*', '_linspace.*', '_logspace.*'
+ '_arange.*', '_range.*', '_linspace.*', '_logspace.*', '_indexCopy_',
]
PY_VARIABLE_METHODS_CPP = CodeTemplate.from_file(template_path + '/python_variable_methods.cpp')
| index_copy inconsistent behavior on CPU and GPU
`index_copy` behaves inconsistently between CPU and GPU with respect to the shape of the source tensor. In the following example, `index_copy_` raises an error on CPU due to the invalid shape of `y`, but it works on GPU for the same tensors `x`, `y`, `z`.
```
In [1]: import torch
In [2]: from torch.autograd import Variable
In [3]: x = torch.ones(3,1)
In [4]: y = torch.Tensor([4])
In [5]: z = torch.LongTensor([2])
In [6]: x.index_copy_(0, z, y)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-a1189d076165> in <module>()
----> 1 x.index_copy_(0, z, y)
RuntimeError: cannot select on a vector at /users1/xwgeng/pytorch/torch/lib/TH/generic/THTensor.c:403
In [7]: x = x.cuda()
In [8]: y = y.cuda()
In [9]: z = z.cuda()
In [10]: x.index_copy_(0, z, y)
Out[10]:
1
1
4
[torch.cuda.FloatTensor of size 3x1 (GPU 0)]
```
The pytorch version is `0.1.12+d7db75c`
| 2018-03-08T19:51:25 |
||
pytorch/pytorch | 5,662 | pytorch__pytorch-5662 | [
"5661"
] | a33aeed1dce5d2ccab0662375a9bbe203cba994d | diff --git a/torch/nn/modules/rnn.py b/torch/nn/modules/rnn.py
--- a/torch/nn/modules/rnn.py
+++ b/torch/nn/modules/rnn.py
@@ -319,7 +319,7 @@ class LSTM(RNNBase):
\begin{array}{ll}
i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\
f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\
- g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hc} h_{(t-1)} + b_{hg}) \\
+ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) \\
o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\
c_t = f_t c_{(t-1)} + i_t g_t \\
h_t = o_t \tanh(c_t)
| Tiny bug in the documentation of the LSTM formula
Hi,
Looking at the formulas of LSTM in [documentation](http://pytorch.org/docs/master/nn.html#torch.nn.LSTM), in the third equation (g_t), the suffix of the second W should be changed from W_{hc} to W_{hg}. The correct formula:
g_t = tanh(W_{ig}x_t + b_{ig} + W_{h**g**}h_{(t-1)} + b_{hg})
Cheers,
Navid
| 2018-03-09T14:51:46 |
||
pytorch/pytorch | 5,726 | pytorch__pytorch-5726 | [
"5718"
] | 542fbcc127e2b6a230901a77bdfd68365b780bf1 | diff --git a/torch/nn/modules/pooling.py b/torch/nn/modules/pooling.py
--- a/torch/nn/modules/pooling.py
+++ b/torch/nn/modules/pooling.py
@@ -777,8 +777,8 @@ class LPPool2d(Module):
.. math::
f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}
- - At p = infinity, one gets Max Pooling
- - At p = 1, one gets Average Pooling
+ - At p = infinity, one gets Max Pooling
+ - At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)
The parameters :attr:`kernel_size`, :attr:`stride` can either be:
@@ -841,8 +841,8 @@ class LPPool1d(Module):
.. math::
f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}
- - At p = infinity, one gets Max Pooling
- - At p = 1, one gets Average Pooling
+ - At p = infinity, one gets Max Pooling
+ - At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)
Args:
kernel_size: a single int, the size of the window
| Small bug in LPPool2d documentation
The math rendering doesn't end where it should, making the text after it difficult to read:
http://pytorch.org/docs/master/nn.html#torch.nn.LPPool2d
<img width="719" alt="screen shot 2018-03-12 at 5 07 03 pm" src="https://user-images.githubusercontent.com/6576216/37309613-d339a60c-2617-11e8-80a9-96e7a7bc1e93.png">
| Thanks for the report! Would you be interested in submitting a PR that fixes it?
Sure, I can give it a try! | 2018-03-13T01:19:50 |
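For illustration (not part of the PR; shapes and values are arbitrary), a quick check of the corrected wording: with `p = 1`, LPPool sums each window, which is `kernel_size` times average pooling:
```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8)
lp = nn.LPPool1d(norm_type=1, kernel_size=2)   # p = 1
avg = nn.AvgPool1d(kernel_size=2)
# difference should be ~0: sum pooling == 2 * average pooling for kernel_size=2
print((lp(x) - 2 * avg(x)).abs().max())
```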
|
pytorch/pytorch | 5,746 | pytorch__pytorch-5746 | [
"5598"
] | e9fffb5579e570d31a256fde7e387d3d8d40b845 | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -1775,6 +1775,40 @@ def callable(a, b) -> number
In-place version of :meth:`~Tensor.trunc`
""")
+add_docstr_all('type',
+ r"""
+type(dtype=None, non_blocking=False, **kwargs) -> str or Tensor
+Returns the type if `dtype` is not provided, else casts this object to
+the specified type.
+
+If this is already of the correct type, no copy is performed and the
+original object is returned.
+
+Args:
+ dtype (type or string): The desired type
+ non_blocking (bool): If ``True``, and the source is in pinned memory
+ and destination is on the GPU or vice versa, the copy is performed
+ asynchronously with respect to the host. Otherwise, the argument
+ has no effect.
+ **kwargs: For compatibility, may contain the key ``async`` in place of
+ the ``non_blocking`` argument. The ``async`` arg is deprecated.
+""")
+
+add_docstr_all('type_as',
+ r"""
+type_as(tensor) -> Tensor
+
+Returns this tensor cast to the type of the given tensor.
+
+This is a no-op if the tensor is already of the correct type. This is
+equivalent to::
+
+ self.type(tensor.type())
+
+Params:
+ tensor (Tensor): the tensor which has the desired type
+""")
+
add_docstr_all('unfold',
r"""
unfold(dim, size, step) -> Tensor
diff --git a/torch/_utils.py b/torch/_utils.py
--- a/torch/_utils.py
+++ b/torch/_utils.py
@@ -18,7 +18,7 @@ def _type(self, dtype=None, non_blocking=False, **kwargs):
asynchronously with respect to the host. Otherwise, the argument
has no effect.
**kwargs: For compatibility, may contain the key ``async`` in place of
- the ``non_blocking`` argument.
+ the ``non_blocking`` argument. The ``async`` arg is deprecated.
"""
non_blocking = _get_async_or_non_blocking('type', non_blocking, kwargs)
if dtype is None:
| Type() and type_as() are missing documentation
As can be seen in http://pytorch.org/docs/master/tensors.html
Best :)
| Note that we had docs for these functions in PyTorch 0.3. Most likely, I accidentally deleted them in the tensor & variable merge. | 2018-03-13T19:21:25 |
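A brief usage sketch of the two methods being documented (values are illustrative; behavior mirrors the docstrings added in the patch above):
```python
import torch

x = torch.randn(3)
print(x.type())                 # returns the type string when no dtype is given
d = x.type(torch.DoubleTensor)  # casts to the requested type (no-op if already that type)
z = torch.zeros(3).type_as(d)   # equivalent to torch.zeros(3).type(d.type())
print(z.type())                 # 'torch.DoubleTensor'
```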
|
pytorch/pytorch | 5,747 | pytorch__pytorch-5747 | [
"5741"
] | effc568ceee2ee8cc2a5bdff85bc904e557919cf | diff --git a/tools/autograd/load_derivatives.py b/tools/autograd/load_derivatives.py
--- a/tools/autograd/load_derivatives.py
+++ b/tools/autograd/load_derivatives.py
@@ -289,6 +289,11 @@ def saved_variables(formula, args):
'suffix': lambda m: '_argsize_{}'.format(*m.groups()),
'type': 'int64_t',
}),
+ # replace self.numel() with self_numel
+ (r'{}.numel\(\)', {
+ 'suffix': '_numel',
+ 'type': 'int64_t',
+ }),
# replace to_arg_sizes(self, 2) with self_argsizes_2
(r'to_arg_sizes\({}, (\w+)\)', {
'suffix': lambda m: '_sizes_{}'.format(*m.groups()),
| Save `self.numel()` for backwards computation
Right now, the following derivatives.yaml declaration:
```
- name: mean(Tensor self)
self: grad.expand(self.sizes()) / self.numel()
```
produces the following backwards:
```
variable_list MeanBackward1::apply(const variable_list& grads) {
IndexRangeGenerator gen;
auto self_ix = gen.range(1);
variable_list grad_inputs(gen.size());
auto& grad = grads[0];
auto self = self_.unpack();
if (should_compute_output({ self_ix })) {
auto grad_result = grad.expand(self_sizes) / self.numel();
copy_range(grad_inputs, self_ix, grad_result);
}
return grad_inputs;
}
```
It looks like `self.sizes()` is being saved for the backward pass, but `self.numel()` isn't, requiring saving `self` for backwards.
| 2018-03-13T20:15:12 |
||
pytorch/pytorch | 5,764 | pytorch__pytorch-5764 | [
"5601"
] | 1709484a40785b87c545b6d4c354ce19ee9d5813 | diff --git a/torch/nn/_functions/linear.py b/torch/nn/_functions/linear.py
deleted file mode 100644
--- a/torch/nn/_functions/linear.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import torch
-from torch.autograd import Function, Variable
-
-
-class Bilinear(Function):
-
- @staticmethod
- def forward(ctx, input1, input2, weight, bias=None):
- ctx.save_for_backward(input1, input2, weight, bias)
-
- output = input1.new(input1.size(0), weight.size(0))
-
- buff = input1.new()
-
- # compute output scores:
- for k, w in enumerate(weight):
- torch.mm(input1, w, out=buff)
- buff.mul_(input2)
- torch.sum(buff, 1, keepdim=True, out=output.narrow(1, k, 1))
-
- if bias is not None:
- output.add_(bias.expand_as(output))
-
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- input1, input2, weight, bias = ctx.saved_variables
- grad_input1 = grad_input2 = grad_weight = grad_bias = None
-
- buff = Variable(input1.data.new())
-
- if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
- grad_input1 = torch.mm(input2, weight[0].t())
- grad_input1 = grad_input1.mul(grad_output.narrow(1, 0, 1).expand(grad_input1.size()))
- grad_input2 = torch.mm(input1, weight[0])
- grad_input2 = grad_input2.mul(grad_output.narrow(1, 0, 1).expand(grad_input2.size()))
-
- for k in range(1, weight.size(0)):
- buff = input2.mm(weight[k].t())
- buff = buff.mul(grad_output.narrow(1, k, 1).expand(grad_input1.size()))
- grad_input1.add_(buff)
-
- buff = input1.mm(weight[k])
- buff = buff.mul(grad_output.narrow(1, k, 1).expand(grad_input2.size()))
- grad_input2.add_(buff)
-
- grad_weight = Variable(weight.data.new(weight.size()))
- if ctx.needs_input_grad[2]:
- # accumulate parameter gradients:
- for k in range(weight.size(0)):
- buff = input1.mul(grad_output.narrow(1, k, 1).expand_as(input1))
- grad_weight[k] = torch.mm(buff.t(), input2)
-
- if bias is not None and ctx.needs_input_grad[3]:
- grad_bias = grad_output.sum(0, keepdim=False)
-
- return grad_input1, grad_input2, grad_weight, grad_bias
diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -9,7 +9,6 @@
from torch._C import _infer_size, _add_docstr
from . import _functions
from .modules import utils
-from ._functions.linear import Bilinear
from ._functions.padding import ConstantPadNd
from ._functions import vision
from ._functions.thnn.fold import Col2Im, Im2Col
@@ -1001,10 +1000,7 @@ def linear(input, weight, bias=None):
def bilinear(input1, input2, weight, bias=None):
- if bias is None:
- return Bilinear.apply(input1, input2, weight)
- else:
- return Bilinear.apply(input1, input2, weight, bias)
+ return torch._C._VariableFunctions.bilinear(input1, input2, weight, bias)
def embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2,
diff --git a/torch/nn/modules/linear.py b/torch/nn/modules/linear.py
--- a/torch/nn/modules/linear.py
+++ b/torch/nn/modules/linear.py
@@ -73,8 +73,11 @@ class Bilinear(Module):
Default: ``True``
Shape:
- - Input: :math:`(N, \text{in1_features})`, :math:`(N, \text{in2_features})`
- - Output: :math:`(N, \text{out_features})`
+ - Input: :math:`(N, *, \text{in1_features})`, :math:`(N, *, \text{in2_features})`
+ where :math:`*` means any number of additional dimensions. All but the last
+ dimension of the inputs should be the same.
+ - Output: :math:`(N, *, \text{out_features})` where all but the last dimension
+ are the same shape as the input.
Attributes:
weight: the learnable weights of the module of shape
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -4145,6 +4145,13 @@ def test_bilinear(self):
_assertGradAndGradgradChecks(self, lambda x1, x2: F.bilinear(x1, x2, module.weight, module.bias),
(input1_1, input2_1))
+ def test_bilinear_broadcasting(self):
+ m = nn.Bilinear(5, 6, 8)
+ input1 = torch.randn(2, 3, 5)
+ input2 = torch.randn(2, 3, 6)
+ expected = m(input1.view(6, 5), input2.view(6, 6)).view(2, 3, 8)
+ self.assertEqual(expected, m(input1, input2))
+
def test_conv_tbc(self):
inp = Variable(torch.randn(9, 4, 5), requires_grad=True)
weight = Variable(torch.randn(3, 5, 6), requires_grad=True)
| I think Bilinear() should support 3D tensor like Linear() does
When I execute the following code, I get
> RuntimeError: matrices expected, got 3D, 2D tensors
`import torch.nn as nn`
`m = nn.Bilinear(2, 3, 4)`
`input1 = Variable(torch.randn(5, 3, 2))`
`input2 = Variable(torch.randn(5, 3, 3))`
`output = m(input1, input2)`
`print(output.size())`
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-23-b247cd5ad730> in <module>()
----> 1 output = m(input1, input2)
2 print(output.size())
~/anaconda3/envs/deepchem/lib/python3.5/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
323 for hook in self._forward_pre_hooks.values():
324 hook(self, input)
--> 325 result = self.forward(*input, **kwargs)
326 for hook in self._forward_hooks.values():
327 hook_result = hook(self, input, result)
~/anaconda3/envs/deepchem/lib/python3.5/site-packages/torch/nn/modules/linear.py in forward(self, input1, input2)
110
111 def forward(self, input1, input2):
--> 112 return F.bilinear(input1, input2, self.weight, self.bias)
113
114 def __repr__(self):
~/anaconda3/envs/deepchem/lib/python3.5/site-packages/torch/nn/functional.py in bilinear(input1, input2, weight, bias)
845 return Bilinear.apply(input1, input2, weight)
846 else:
--> 847 return Bilinear.apply(input1, input2, weight, bias)
848
849
~/anaconda3/envs/deepchem/lib/python3.5/site-packages/torch/nn/_functions/linear.py in forward(ctx, input1, input2, weight, bias)
15 # compute output scores:
16 for k, w in enumerate(weight):
---> 17 torch.mm(input1, w, out=buff)
18 buff.mul_(input2)
19 torch.sum(buff, 1, keepdim=True, out=output.narrow(1, k, 1))
RuntimeError: matrices expected, got 3D, 2D tensors at /opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/TH/generic/THTensorMath.c:1411
| 2018-03-14T02:15:41 |
|
pytorch/pytorch | 5,819 | pytorch__pytorch-5819 | [
"5552"
] | 940a0ab67bc933b0d6c24540a9e3100d213eec49 | diff --git a/tools/autograd/load_derivatives.py b/tools/autograd/load_derivatives.py
--- a/tools/autograd/load_derivatives.py
+++ b/tools/autograd/load_derivatives.py
@@ -294,10 +294,10 @@ def saved_variables(formula, args):
'suffix': '_numel',
'type': 'int64_t',
}),
- # replace to_arg_sizes(self, 2) with self_argsizes_2
- (r'to_arg_sizes\({}, (\w+)\)', {
- 'suffix': lambda m: '_sizes_{}'.format(*m.groups()),
- 'type': 'IntList',
+ # replace to_args_sizes(self) with self_args_sizes
+ (r'to_args_sizes\({}\)', {
+ 'suffix': '_args_sizes',
+ 'type': 'std::vector<std::vector<int64_t>>',
}),
# replace TensorGeometry(self) with self_geometry
(r'TensorGeometry\({}\)', {
| diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -2305,7 +2305,9 @@ def test_cat_bad_input_sizes(self):
def test_cat_scalars(self):
x = torch.tensor(0)
y = torch.tensor(1)
- self.assertRaises(RuntimeError, lambda: torch.cat([x, y]))
+ with self.assertRaisesRegexp(RuntimeError,
+ 'zero-dimensional.*cannot be concatenated'):
+ torch.cat([x, y])
def test_stack(self):
x = torch.rand(2, 3, 4)
| RuntimeError: dimension specified as 0 but tensor has no dimensions
My code ran well under torch 0.2; however, the following error occurs when I use torch 0.3.1. Why?
Traceback (most recent call last):
File "../train.py", line 149, in <module>
loss.backward()
File "/home/jxliu/.local/lib/python3.6/site-packages/torch/autograd/variable.py", line 167, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/jxliu/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
variables, grad_variables, retain_graph)
RuntimeError: dimension specified as 0 but tensor has no dimensions
| Is this easily reproducible? It's likely because we changed the shape of reduction operations (and indexing a 1-dimensional tensor) to be 0-dimensional (with a value) instead of 1-dimensional.
I ran into this problem when using `torch.cat` on a list of 0-dim Tensors. It worked in previous versions; what should I do to fix this?
@zou3519 you looked at this right?
I looked at it but haven't thought about it much yet. There's code right now on master that prevents a 0-dim tensor from going into `torch.cat` (did we have 0-dim tensors in pytorch 0.3?), but it can still take empty tensors (`tensor.dim() == 1 and tensor.numel() == 0`) that have been causing issues: #5332 #5739
@paduvi are you on master? If you could post a reproduction that would be helpful.
@gchanan yes, I'm currently on master.
This is the minimal code:
```
x = [Variable(torch.rand(1), requires_grad=True)] * 10 # input
y = [Variable(torch.rand(1), requires_grad=True)] * 10 # input
z = map(lambda a, b: torch.matmul(a, b), x, y)
# on master it print ()
# at previous version it print (1,)
print(z[0].size())
torch.cat(z, dim=0)
```
@paduvi what map are you using? This is what I get when I run your code
```
In [2]: import torch
In [3]: from torch.autograd import Variable
In [4]: x = [Variable(torch.rand(1), requires_grad=True)] * 10 # input
...: y = [Variable(torch.rand(1), requires_grad=True)] * 10 # input
...:
...: z = map(lambda a, b: torch.matmul(a, b), x, y)
...:
...: # on master it print ()
...: # at previous version it print (1,)
...: print(z[0].size())
...:
...: torch.cat(z, dim=0)
...:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-86f566c9a3d7> in <module>()
6 # on master it print ()
7 # at previous version it print (1,)
----> 8 print(z[0].size())
9
10 torch.cat(z, dim=0)
TypeError: 'map' object is not subscriptable
```
@zou3519 `map` returns a list in Python 2, so you probably need to change it to `list(map(...))` | 2018-03-15T19:33:03 |
pytorch/pytorch | 5,840 | pytorch__pytorch-5840 | [
"5586"
] | c474136ee1b8bcb7c8270f35815073cab2046d66 | diff --git a/torch/autograd/profiler.py b/torch/autograd/profiler.py
--- a/torch/autograd/profiler.py
+++ b/torch/autograd/profiler.py
@@ -536,7 +536,7 @@ def parse_nvprof_trace(path):
row['kernel_start'],
row['kernel_end'])
- functions.sort(key=lambda evt: evt.start)
+ functions.sort(key=lambda evt: evt.cpu_interval.start)
return functions
| load_nvprof error
I got the following error when I call `load_nvprof`.
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/opt/pytorch/torch/autograd/profiler.py", line 286, in load_nvprof
> return EventList(parse_nvprof_trace(path))
> File "/opt/pytorch/torch/autograd/profiler.py", line 539, in parse_nvprof_trace
> functions.sort(key=lambda evt: evt.start)
> File "/opt/pytorch/torch/autograd/profiler.py", line 539, in <lambda>
> functions.sort(key=lambda evt: evt.start)
> AttributeError: 'FunctionEvent' object has no attribute 'start'
pytorch version:
0.4.0a0+b69b885
The function works properly on 0.3.1.
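For context, a minimal sketch of the call that hits the code path patched above (the trace filename is an illustrative placeholder for a file produced by nvprof):
```python
import torch

events = torch.autograd.profiler.load_nvprof("trace.prof")  # path is illustrative
print(events)
```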
| 2018-03-16T21:19:35 |
||
pytorch/pytorch | 5,941 | pytorch__pytorch-5941 | [
"5925"
] | 2a02ec6537e8028fb1db8765a266782893636a6f | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -255,6 +255,7 @@ def check_file(f):
check_file(os.path.join(lib_path, "pybind11", "CMakeLists.txt"))
check_file(os.path.join('aten', 'src', 'ATen', 'cpu', 'cpuinfo', 'CMakeLists.txt'))
check_file(os.path.join('aten', 'src', 'ATen', 'cpu', 'tbb', 'tbb_remote', 'Makefile'))
+ check_file(os.path.join('aten', 'src', 'ATen', 'utils', 'catch', 'CMakeLists.txt'))
check_pydep('yaml', 'pyyaml')
check_pydep('typing', 'typing')
| Source build issue: Missing catch.hpp
Trying to build master (a3bd7b2), I'm seeing
```
.../pytorch/aten/src/ATen/test/basic.cpp:2:21: fatal error: catch.hpp: No such file or directory
```
I tried `git submodule update` in `.../pytorch` to no avail.
| NM, also needed a `git submodule init`.
Need a check to see if this folder is checked out in `setup.py`, like the other submodules. Should be a simple patch. | 2018-03-22T15:18:03 |
|
pytorch/pytorch | 6,001 | pytorch__pytorch-6001 | [
"5973"
] | 2df578a71af3fedf84717a112ef9c18335a01d4f | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -462,11 +462,14 @@ def run(self):
extra_link_args = []
if IS_WINDOWS:
- extra_compile_args = ['/Z7', '/EHa', '/DNOMINMAX'
+ extra_compile_args = ['/Z7', '/EHa', '/DNOMINMAX', '/wd4267', '/wd4251', '/wd4522',
+ '/wd4522', '/wd4838', '/wd4305', '/wd4244', '/wd4190',
+ '/wd4101', '/wd4996', '/wd4275'
# /Z7 turns on symbolic debugging information in .obj files
# /EHa is about native C++ catch support for asynchronous
# structured exception handling (SEH)
# /DNOMINMAX removes builtin min/max functions
+ # /wdXXXX disables warning no. XXXX
]
if sys.version_info[0] == 2:
# /bigobj increases number of sections in .obj file, which is needed to link
| Windows build logs are 7M a pop
That's pretty big! In contrast, an OS X build log is only 1.3M. Is there a reason they have to be this big?
CC @peterjc123 @yf225
| My guess is that it's because of warnings. I remember having to get through thousands of them when looking for the error that failed the build.
Yes, most of them are warnings, and some of them are not fixable. We can hide them using compiler flags.
Also, this build configuration actually contains both the CPU and GPU builds, which may be another contributing factor. | 2018-03-26T04:30:11 |
|
pytorch/pytorch | 6,072 | pytorch__pytorch-6072 | [
"5554"
] | 1807bacd658d83ab3455f32b15a41654eaba8f1d | diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -1365,7 +1365,7 @@ def nll_loss(input, target, weight=None, size_average=True, ignore_index=-100, r
in case of 2D Loss, or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K > 1`
in the case of K-dimensional loss.
target: :math:`(N)` where each value is :math:`0 \leq \text{targets}[i] \leq C-1`,
- or :math:`(N, C, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for
+ or :math:`(N, d_1, d_2, ..., d_K)` where :math:`K \geq 1` for
K-dimensional loss.
weight (Tensor, optional): a manual rescaling weight given to each
class. If given, has to be a Tensor of size `C`
@@ -1388,6 +1388,12 @@ def nll_loss(input, target, weight=None, size_average=True, ignore_index=-100, r
dim = input.dim()
if torch.is_tensor(weight):
weight = weight
+ if dim < 2:
+ raise ValueError('Expected 2 or more dimensions (got {})'.format(dim))
+
+ if input.size(0) != target.size(0):
+ raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
+ .format(input.size(0), target.size(0)))
if dim == 2:
return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
elif dim == 4:
@@ -1405,8 +1411,6 @@ def nll_loss(input, target, weight=None, size_average=True, ignore_index=-100, r
return torch._C._nn.nll_loss2d(input, target, weight, size_average, ignore_index, reduce)
out = torch._C._nn.nll_loss2d(input, target, weight, size_average, ignore_index, reduce)
return out.view(out_size)
- else:
- raise ValueError('Expected 2 or more dimensions (got {})'.format(dim))
def poisson_nll_loss(input, target, log_input=True, full=False, size_average=True, eps=1e-8, reduce=True):
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -2935,6 +2935,13 @@ def _test_loss_equal_input_target_shape(self, cast):
def test_loss_equal_input_target_shape(self):
self._test_loss_equal_input_target_shape(lambda x: x)
+ def test_NLLLoss_mismatched_batch(self):
+ x = torch.randn((10, 3), requires_grad=True)
+ # t should have size (10,)
+ t = torch.zeros((3,), dtype=torch.int64)
+ with self.assertRaisesRegex(ValueError, 'Expected.*batch_size'):
+ F.nll_loss(x, t)
+
def test_RNN_cell_no_broadcasting(self):
def test(cell_module, input, hx, input_size, hidden_size):
cell = cell_module(input_size, hidden_size)
| dimension out of range (expected to be in range of [-1, 0], but got 1)
```python
criterion = nn.CrossEntropyLoss()
print(outputs.data)
print(label.data)
loss = criterion(outputs, label) # getting error at this point
```
The output that I'm getting is =>
```
0.0174
0.1866
[torch.FloatTensor of size 2]
0
1
[torch.FloatTensor of size 2]
```
This is correct as far as the documentation is concerned, but I'm still getting the following error:
```dimension out of range (expected to be in range of [-1, 0], but got 1)```
| From the docs:

I'm assuming your C = 2 and N = 2. Then target should be a tensor of size (2,) while output should be a tensor of size (2, 2).
@zou3519 is right, but let's keep this issue open until we fix the error message
Thanks @zou3519,
That means I need the `outputs` tensor to have shape (1, 2) and the label tensor shape (1,), since I am evaluating a single example at a time, right?
@pskanade you have two labels, yes? Then your output should be of size `(2, 2)`, where `outputs[0]` contains two elements that are the probability that `outputs[0]` is in either class. Same for `outputs[1]`. | 2018-03-28T16:22:34 |
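To make the shape requirement from the discussion concrete, a small sketch with correctly sized inputs (N = 2 samples, C = 2 classes; values are random):
```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
outputs = torch.randn(2, 2)      # shape (N, C): one row of class scores per sample
labels = torch.tensor([0, 1])    # shape (N,): one class index per sample
loss = criterion(outputs, labels)
```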
pytorch/pytorch | 6,084 | pytorch__pytorch-6084 | [
"6011"
] | 7e1046ce83c5e48dbb3583a964b59a08dfb98762 | diff --git a/torch/jit/__init__.py b/torch/jit/__init__.py
--- a/torch/jit/__init__.py
+++ b/torch/jit/__init__.py
@@ -524,10 +524,14 @@ def __getattr__(self, attr):
return self.module._get_method(attr)
-def script(fn):
- rcb = createResolutionCallback()
+def _script_graph(fn, frame_id=2):
+ rcb = createResolutionCallback(frame_id)
ast = get_jit_ast(fn)
- graph = _jit_script_compile(ast, rcb)
+ return _jit_script_compile(ast, rcb)
+
+
+def script(fn):
+ graph = _script_graph(fn, frame_id=3)
return torch._C.GraphExecutor(graph, True)
| diff --git a/test/expect/TestJit.test_shape_analysis_broadcast.expect b/test/expect/TestJit.test_shape_analysis_broadcast.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestJit.test_shape_analysis_broadcast.expect
@@ -0,0 +1,7 @@
+graph(%a : Double(3, 1, 5)
+ %b : Double(4, 1, 8, 5)) {
+ %3 : Double(4!, 3!, 8!, 5) = aten::expand[size=[4, 3, 8, 5]](%a)
+ %4 : Double(4!, 3!, 8, 5) = aten::expand[size=[4, 3, 8, 5]](%b)
+ %2 : Double(4, 3, 8, 5) = aten::add[alpha={1}](%3, %4)
+ return (%2);
+}
diff --git a/test/test_jit.py b/test/test_jit.py
--- a/test/test_jit.py
+++ b/test/test_jit.py
@@ -1412,6 +1412,17 @@ def test_integral_shape_inference(a):
self.assertEqual(cu.test_integral_shape_inference(*inputs), outputs)
+ def test_shape_analysis_broadcast(self):
+ def broadcast(a, b):
+ return a + b
+
+ x = torch.randn(3, 1, 5, requires_grad=True)
+ y = torch.randn(4, 1, 8, 5, requires_grad=True)
+
+ graph = torch.jit._script_graph(broadcast)
+ torch._C._jit_pass_shape_analysis(graph, (x, y), False)
+ self.assertExpected(str(graph))
+
def test_fuser_multiple_blocks(self):
cu = torch.jit.CompilationUnit('''
def test_fuser_multiple_blocks(this, that, theother, meme):
| Add expand nodes to point-wise operators generated by @script
@script doesn't insert expand nodes, but our fuser assumes that point-wise operators cannot broadcast, which is unsafe. We need to fix this in a way that makes the fuser safe, while still having it produce good fusions for @script nodes. This can be a modification to shape analysis that is aware of the (relatively small) set of point-wise operators that do broadcasting.
```
# before analysis
y = rand(1,4)
z = rand(4,4)
x = exp(add(y, z)) # broadcasting add
# using the result of shape analysis, we insert expand nodes and replace
# broadcasting add with a non-broadcasting variant:
# after
y = rand(1,4)
z = rand(4,4)
x = exp(add_nobroadcast(expand_to(y, z), z)) # non-broadcasting add, nobroadcast could be an attribute if that turns out to work.
# fuser is modified to only work on non-broadcasting adds.
```
| @apaszke | 2018-03-28T21:32:01 |
pytorch/pytorch | 6,086 | pytorch__pytorch-6086 | [
"5602"
] | a90aa5d818c994d108d5c64a9a74f0fe732af662 | diff --git a/torch/nn/modules/instancenorm.py b/torch/nn/modules/instancenorm.py
--- a/torch/nn/modules/instancenorm.py
+++ b/torch/nn/modules/instancenorm.py
@@ -11,6 +11,23 @@ def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=False,
def _check_input_dim(self, input):
return NotImplemented
+ def _load_state_dict_key_mismatch(self, full_name, name, is_missing):
+ if not is_missing and not self.track_running_stats and \
+ name in ('running_mean', 'running_var'):
+ raise KeyError(
+ 'Unexpected running stats buffer "{name}" in state_dict for '
+ '{klass} with track_running_stats=False. If you are trying to '
+ 'load a checkpoint saved before 0.4.0, this may be expected '
+ 'because {klass} does not track running stats by default '
+ 'anymore since 0.4.0. If running stats are not used, remove '
+ 'them from state_dict before calling load_state_dict. '
+ 'Otherwise, set track_running_stats=True in {klass} to use '
+ 'running stats. See the documentation of {klass} for more '
+ 'details.'
+ .format(name=full_name, klass=self.__class__.__name__))
+ super(_InstanceNorm, self)._load_state_dict_key_mismatch(
+ full_name, name, is_missing)
+
def forward(self, input):
self._check_input_dim(input)
diff --git a/torch/nn/modules/module.py b/torch/nn/modules/module.py
--- a/torch/nn/modules/module.py
+++ b/torch/nn/modules/module.py
@@ -507,6 +507,20 @@ def state_dict(self, destination=None, prefix='', keep_vars=False):
module.state_dict(destination, prefix + name + '.', keep_vars=keep_vars)
return destination
+ def _load_state_dict_key_mismatch(self, full_name, name, is_missing):
+ r"""This is called in :meth:`~torch.nn.Module.load_state_dict` when
+ there is state dict key mismatch in ``strict=True`` mode. This method
+ can be overridden by subclasses to raise class-specific errors.
+
+ When :attr:`is_missing` is ``True``, :attr:`full_name` can not be found in
+ the dict being loaded. When :attr:`is_missing` is ``False``,
+ :attr:`full_name` is unexpected in the dict being loaded.
+
+ :attr:`name` is the actual name of the parameter/buffer, i.e., the
+ substring after the last `dot` in :attr:`full_name`.
+ """
+ pass
+
def load_state_dict(self, state_dict, strict=True):
r"""Copies parameters and buffers from :attr:`state_dict` into
this module and its descendants. If :attr:`strict` is ``True`` then
@@ -520,6 +534,17 @@ def load_state_dict(self, state_dict, strict=True):
match the keys returned by this module's `:func:`state_dict()`
function.
"""
+ def submodule_key_mismatch(full_name, is_missing):
+ module = self
+ names = full_name.split(".")
+ for module_name in names[:-1]:
+ if module_name in module._modules:
+ module = module._modules[module_name]
+ else:
+ return
+ module._load_state_dict_key_mismatch(full_name, names[-1], is_missing)
+
+ unexpected = []
own_state = self.state_dict()
for name, param in state_dict.items():
if name in own_state:
@@ -534,12 +559,24 @@ def load_state_dict(self, state_dict, strict=True):
'whose dimensions in the checkpoint are {}.'
.format(name, own_state[name].size(), param.size()))
elif strict:
- raise KeyError('unexpected key "{}" in state_dict'
- .format(name))
+ unexpected.append(name)
if strict:
missing = set(own_state.keys()) - set(state_dict.keys())
+ # pass the mismatch info to submodules so that they have a chance to
+ # raise a custom class-specific error
+ for name in unexpected:
+ submodule_key_mismatch(name, False)
+ for name in missing:
+ submodule_key_mismatch(name, True)
+ error_msg = ''
+ if len(unexpected) > 0:
+ error_msg += 'Unexpected key(s) in state_dict: {}. '.format(
+ ', '.join('"{}"'.format(k) for k in unexpected))
if len(missing) > 0:
- raise KeyError('missing keys in state_dict: "{}"'.format(missing))
+ error_msg += 'Missing key(s) in state_dict: {}. '.format(
+ ', '.join('"{}"'.format(k) for k in unexpected))
+ if len(error_msg) > 0:
+ raise KeyError(error_msg)
def parameters(self):
r"""Returns an iterator over module parameters.
| [bug?] Problem with load_state_dict after installing latest pytorch by source
- OS: MacOS Sierra
- PyTorch version: 0.4.0a0+7588893
- How you installed PyTorch (conda, pip, source): source
- Python version: Python 2.7.14 |Anaconda, Inc.|
- CUDA/cuDNN version: No Cuda
- GPU models and configuration: No GPU models
- GCC version (if compiling from source): GCC 4.2.1 Compatible Clang 4.0.1
In order to use onnx on nn.InstanceNorm2d layers, I installed the latest pytorch from source following the instructions.
Before installing the latest pytorch from source, I had trained some pytorch models using the pytorch version installed through http://pytorch.org/. I was able to save the trained model as a .pth file and load it back in successfully by doing the following:
`model = myNet(parameters...)
model.load_state_dict(torch.load('xxx.pth'))`
However, after I installed the latest pytorch (from source), `load_state_dict` reports errors:
> File "/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 528, in load_state_dict
.format(name))
KeyError: 'unexpected key "pre_model.1.running_mean" in state_dict'
I did the following to check which layer was causing the problem
`model1 = myNet(parameters...)`
`model2 = torch.load('xxx.pth')`
`for name, param in model2.items():`
` print(name)`
`for name, param in model1.state_dict().items():`
` print(name)`
I found that for the instanceNorm2D layers, model2 contains xxx.running_mean and xxx.running_var, but not model1. Any idea what I should do?
| The `running_*` buffers are disabled by default for InstanceNorm* layers after #4922. You can add `track_running_stats=True` to the InstanceNorm* layer constructors.
that works. Thank you. | 2018-03-28T22:05:47 |
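A sketch of the workaround from the reply above (the channel count is illustrative): construct the layer with running stats enabled so the checkpoint's `running_mean`/`running_var` buffers have somewhere to load into:
```python
import torch.nn as nn

# track_running_stats=True restores the running_mean / running_var buffers
# that checkpoints saved with older PyTorch versions expect.
norm = nn.InstanceNorm2d(64, track_running_stats=True)
```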
|
pytorch/pytorch | 6,100 | pytorch__pytorch-6100 | [
"3164"
] | 4c511075c3782e96d49f16a32344a1aba3c0fd8f | diff --git a/tools/autograd/gen_python_functions.py b/tools/autograd/gen_python_functions.py
--- a/tools/autograd/gen_python_functions.py
+++ b/tools/autograd/gen_python_functions.py
@@ -21,7 +21,7 @@
'sparse_coo_tensor', '_arange.*', '_range.*', '_linspace.*', '_logspace.*',
'_indexCopy_', 'max_values', 'min_values', 'argmax', 'argmin',
'_cumsum.*', '_cumprod.*', '_sum.*', '_prod.*', '_th_sum.*', '_th_prod.*',
- 'arange.*', 'range.*',
+ 'arange.*', 'range.*', '_gesv.*',
]
PY_VARIABLE_METHODS_CPP = CodeTemplate.from_file(template_path + '/python_variable_methods.cpp')
diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -1713,7 +1713,7 @@ def parse_kwargs(desc):
add_docstr(torch.gesv,
r"""
-gesv(B, A, out=None) -> (Tensor, Tensor)
+torch.gesv(B, A) -> (Tensor, Tensor)
This function returns the solution to the system of linear
equations represented by :math:`AX = B` and the LU factorization of
@@ -1721,21 +1721,28 @@ def parse_kwargs(desc):
`LU` contains `L` and `U` factors for LU factorization of `A`.
-:attr:`A` has to be a square and non-singular matrix (2-D tensor).
+`torch.gesv(B, A)` can take in 2D inputs `B, A` or inputs that are
+batches of 2D matrices. If the inputs are batches, then returns
+batched outputs `X, LU`.
-If `A` is an :math:`(m \times m)` matrix and `B` is :math:`(m \times k)`,
-the result `LU` is :math:`(m \times m)` and `X` is :math:`(m \times k)`.
+.. note::
+
+ The `out` keyword only supports 2D matrix inputs, that is,
+ `B, A` must be 2D matrices.
.. note::
Irrespective of the original strides, the returned matrices
- `X` and `LU` will be transposed, i.e. with strides `(1, m)`
- instead of `(m, 1)`.
+ `X` and `LU` will be transposed, i.e. with strides like
+ `B.contiguous().transpose(-1, -2).strides()` and
+ `A.contiguous().transpose(-1, -2).strides()` respectively.
Args:
- B (Tensor): input matrix of :math:`(m \times k)` dimensions
- A (Tensor): input square matrix of :math:`(m \times m)` dimensions
- out (Tensor, optional): optional output matrix
+ B (Tensor): input matrix of size :math:`(*, m, k)` , where `*`
+ is zero or more batch dimensions.
+ A (Tensor): input square matrix of size :math:`(*, m, m)`, where
+ `*` is zero or more batch dimensions.
+ out ((Tensor, Tensor), optional): optional output tuple.
Example::
@@ -1751,6 +1758,15 @@ def parse_kwargs(desc):
>>> torch.dist(B, torch.mm(A, X))
tensor(1.00000e-06 *
7.0977)
+
+ >>> # Batched solver example
+ >>> A = torch.randn(2, 3, 1, 4, 4)
+ >>> B = torch.randn(2, 3, 1, 4, 6)
+ >>> X, LU = torch.gesv(B, A)
+ >>> torch.dist(B, A.matmul(X))
+ tensor(1.00000e-06 *
+ 3.6386)
+
""")
add_docstr(torch.get_default_dtype,
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -2869,6 +2869,10 @@ class dont_convert(tuple):
('svd', lambda: random_fullrank_matrix_distinct_singular_value(M), NO_ARGS,
'large', NO_ARGS, [skipIfNoLapack]),
('gesv', (S, S), ((S, S),), '', NO_ARGS, [skipIfNoLapack]),
+ ('gesv', (S, S, S), ((S, S, S),), 'batched', NO_ARGS, [skipIfNoLapack]),
+ ('gesv', (2, 3, S, S), ((2, 3, S, S),), 'batched_dims', NO_ARGS, [skipIfNoLapack]),
+ ('gesv', (2, 2, S, S), ((1, S, S),), 'batched_broadcast_A', NO_ARGS, [skipIfNoLapack]),
+ ('gesv', (1, S, S), ((2, 2, S, S),), 'batched_broadcast_b', NO_ARGS, [skipIfNoLapack]),
('fill_', (S, S, S), (1,), 'number'),
('fill_', (), (1,), 'number_scalar'),
# FIXME: we should compute the derivative w.r.t torch.tensor(1)
diff --git a/test/test_cuda.py b/test/test_cuda.py
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -1320,6 +1320,14 @@ def _select_broadcastable_dims(dims_full=None):
def test_det_logdet_slogdet(self):
TestTorch._test_det_logdet_slogdet(self, lambda t: t.cuda())
+ @unittest.skipIf(not HAS_MAGMA, "no MAGMA library detected")
+ def test_gesv_batched(self):
+ TestTorch._test_gesv_batched(self, lambda t: t.cuda())
+
+ @unittest.skipIf(not HAS_MAGMA, "no MAGMA library detected")
+ def test_gesv_batched_dims(self):
+ TestTorch._test_gesv_batched_dims(self, lambda t: t.cuda())
+
def test_view(self):
TestTorch._test_view(self, lambda t: t.cuda())
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -3319,6 +3319,7 @@ def test_gesv(self):
res1 = torch.gesv(b, a)[0]
self.assertLessEqual(b.dist(torch.mm(a, res1)), 1e-12)
+
ta = torch.Tensor()
tb = torch.Tensor()
res2 = torch.gesv(b, a, out=(tb, ta))[0]
@@ -3337,6 +3338,103 @@ def test_gesv(self):
torch.gesv(b, a, out=(tb, ta))[0]
self.assertEqual(res1, tb)
+ @staticmethod
+ def _test_gesv_batched(self, cast):
+ # test against gesv: one batch
+ A = cast(torch.randn(1, 5, 5))
+ b = cast(torch.randn(1, 5, 10))
+ x_exp, LU_exp = torch.gesv(b.squeeze(0), A.squeeze(0))
+ x, LU = torch.gesv(b, A)
+ self.assertEqual(x, x_exp.unsqueeze(0))
+ self.assertEqual(LU, LU_exp.unsqueeze(0))
+
+ # test against gesv in a loop: four batches
+ A = cast(torch.randn(4, 5, 5))
+ b = cast(torch.randn(4, 5, 10))
+
+ x_exp_list = list()
+ LU_exp_list = list()
+ for i in range(4):
+ x_exp, LU_exp = torch.gesv(b[i], A[i])
+ x_exp_list.append(x_exp)
+ LU_exp_list.append(LU_exp)
+ x_exp = torch.stack(x_exp_list)
+ LU_exp = torch.stack(LU_exp_list)
+
+ x, LU = torch.gesv(b, A)
+ self.assertEqual(x, x_exp)
+ self.assertEqual(LU, LU_exp)
+
+ # basic correctness test
+ A = cast(torch.randn(3, 5, 5))
+ b = cast(torch.randn(3, 5, 10))
+ x, LU = torch.gesv(b, A)
+ self.assertEqual(torch.matmul(A, x), b)
+
+ # Test non-contiguous inputs.
+ if not TEST_NUMPY:
+ return
+ import numpy
+ from numpy.linalg import solve
+ A = cast(torch.randn(2, 2, 2)).permute(1, 0, 2)
+ b = cast(torch.randn(2, 2, 2)).permute(2, 1, 0)
+ x, _ = torch.gesv(b, A)
+ x_exp = torch.Tensor(solve(A.cpu().numpy(), b.cpu().numpy()))
+ self.assertEqual(x.data, cast(x_exp))
+
+ @skipIfNoLapack
+ def test_gesv_batched(self):
+ self._test_gesv_batched(self, lambda t: t)
+
+ @staticmethod
+ def _test_gesv_batched_dims(self, cast):
+ if not TEST_NUMPY:
+ return
+
+ import numpy
+ from numpy.linalg import solve
+
+ # test against numpy.linalg.solve
+ A = cast(torch.randn(2, 1, 3, 4, 4))
+ b = cast(torch.randn(2, 1, 3, 4, 6))
+ x, _ = torch.gesv(b, A)
+ x_exp = torch.Tensor(solve(A.cpu().numpy(), b.cpu().numpy()))
+ self.assertEqual(x.data, cast(x_exp))
+
+ # test column major format
+ A = cast(torch.randn(2, 1, 3, 4, 4)).transpose(-2, -1)
+ b = cast(torch.randn(2, 1, 3, 6, 4)).transpose(-2, -1)
+ assert not A.is_contiguous()
+ assert not b.is_contiguous()
+ x, _ = torch.gesv(b, A)
+ x_exp = torch.Tensor(solve(A.cpu().numpy(), b.cpu().numpy()))
+ self.assertEqual(x.data, cast(x_exp))
+
+ # broadcasting b
+ A = cast(torch.randn(2, 1, 3, 4, 4))
+ b = cast(torch.randn(4, 6))
+ x, _ = torch.gesv(b, A)
+ x_exp = torch.Tensor(solve(A.cpu().numpy(), b.cpu().numpy()))
+ self.assertEqual(x.data, cast(x_exp))
+
+ # broadcasting A
+ A = cast(torch.randn(4, 4))
+ b = cast(torch.randn(2, 1, 3, 4, 2))
+ x, _ = torch.gesv(b, A)
+ x_exp = torch.Tensor(solve(A.cpu().numpy(), b.cpu().numpy()))
+ self.assertEqual(x.data, cast(x_exp))
+
+ # broadcasting both A & b
+ A = cast(torch.randn(1, 3, 1, 4, 4))
+ b = cast(torch.randn(2, 1, 3, 4, 5))
+ x, _ = torch.gesv(b, A)
+ x_exp = torch.Tensor(solve(A.cpu().numpy(), b.cpu().numpy()))
+ self.assertEqual(x.data, cast(x_exp))
+
+ @skipIfNoLapack
+ def test_gesv_batched_dims(self):
+ self._test_gesv_batched_dims(self, lambda t: t)
+
@skipIfNoLapack
def test_qr(self):
| [Feature Request] batched gesv
I would like to have access to a batched version of gesv. MAGMA provides magma_sgesv_batched and magma_dgesv_batched methods but they, and all the MAGMA batched methods, require the creation of a magma_queue.
I am willing to help implement this feature but I don't have any experience with cuda programming and I am uncertain how to incorporate the creation of a magma_queue. If there were a general way of managing magma queues in pytorch it would ease the incorporation of other MAGMA batched routines.
| I don't think we're using MAGMA queues anywhere at the moment. Are they expensive to create?
A MAGMA queue wraps a CUDA queue and has associated handles to a CUDA stream, cuBLAS, and cuSparse. On my system with a 1080 Ti, the first time a queue is created it takes ~270 ms, subsequent queues take ~0.47 ms but they all share the same CUDA stream and cuBLAS handle. The cuSparse handle changes with each queue creation but I think that is because I am not using cuSparse so the handle leads nowhere.
Since there seems to be relatively little penalty to repeated calls to create queues, it might be reasonable to just create the queue in each wrapper for batched MAGMA functions. I will try wrapping magma_dgesv_batched with MAGMA queue creation and cleanup inside the wrapper. | 2018-03-29T03:39:41 |
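For reference, a usage sketch of the batched solver this issue asks for, mirroring the example added to the documentation in the linked PR:
```python
import torch

A = torch.randn(4, 5, 5)    # a batch of four 5x5 systems
B = torch.randn(4, 5, 10)
X, LU = torch.gesv(B, A)    # solves A[i] @ X[i] = B[i] for every i in the batch
print(torch.dist(B, A.matmul(X)))   # ~0
```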
pytorch/pytorch | 6,113 | pytorch__pytorch-6113 | [
"2992"
] | bb114bc05dfea7e455d7c64c435d8b1627b57a0b | diff --git a/torch/nn/parallel/scatter_gather.py b/torch/nn/parallel/scatter_gather.py
--- a/torch/nn/parallel/scatter_gather.py
+++ b/torch/nn/parallel/scatter_gather.py
@@ -57,6 +57,11 @@ def gather_map(outputs):
return Gather.apply(target_device, dim, *outputs)
if out is None:
return None
+ if isinstance(out, dict):
+ if not all((len(out) == len(d) for d in outputs)):
+ raise ValueError('All dicts must have the same number of keys')
+ return type(out)(((k, gather_map([d[k] for d in outputs]))
+ for k in out))
return type(out)(map(gather_map, zip(*outputs)))
# Recursive function calls like this create reference cycles.
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -2054,6 +2054,18 @@ def test_gather_cpu(self):
def test_gather_gpu(self):
self._test_gather(0)
+ @unittest.skipIf(not TEST_MULTIGPU, "multi-GPU not supported")
+ def test_gather_different_len_dicts(self):
+ inputs = (
+ {'a': Variable(torch.randn(1, 2).cuda(0), requires_grad=True)},
+ {
+ 'b': Variable(torch.randn(1, 2).cuda(1), requires_grad=True),
+ 'a': Variable(torch.randn(1, 2).cuda(1), requires_grad=True)
+ }
+ )
+ with self.assertRaises(ValueError):
+ _ = dp.gather(inputs, target_device=0)
+
def _test_broadcast_double_backwards(self, *tensors):
variables = tuple(Variable(t, requires_grad=True) for t in tensors)
_assertGradAndGradgradChecks(self, lambda *i: Broadcast.apply((0, 1), *i), variables)
@@ -2297,7 +2309,10 @@ def test_data_parallel_sparse(self):
@unittest.skipIf(not TEST_MULTIGPU, "multi-GPU not supported")
def test_data_parallel_nested_output(self):
def fn(input):
- return [input, (input.sin(), input.cos(), [input.add(1)]), input]
+ return [
+ input, (input.sin(), input.cos(), [input.add(1)]), input,
+ {'a': input, 'b': [input.sin()]}
+ ]
class Net(nn.Module):
def forward(self, input):
@@ -2314,6 +2329,13 @@ def forward(self, input):
self.assertIsInstance(output[1][2], list)
self.assertIsInstance(output[1][2][0], Variable)
self.assertIsInstance(output[2], Variable)
+ self.assertIsInstance(output[3], dict)
+ self.assertEqual(len(output[3]), 2)
+ self.assertIn('a', output[3])
+ self.assertIn('b', output[3])
+ self.assertIsInstance(output[3]['a'], Variable)
+ self.assertIsInstance(output[3]['b'], list)
+ self.assertIsInstance(output[3]['b'][0], Variable)
@unittest.skipIf(not TEST_MULTIGPU, "multi-GPU not supported")
def test_data_parallel_nested_input(self):
| DataParallel Gather works only with iterable outputs
I'm running a network across multiple GPUs and pass the input data through a dictionary. The Variables stored as items of the input dictionary are properly scattered across the batch dimension, and the forward pass terminates correctly.
However, when returning the output as a dictionary I get the following runtime error:
```
torch/nn/parallel/scatter_gather.pyc in gather_map(outputs)
47 if out is None:
48 return None
---> 49 return type(out)(map(gather_map, zip(*outputs)))
50 return gather_map(outputs)
torch/nn/parallel/scatter_gather.pyc in gather_map(outputs)
43 def gather_map(outputs):
44 out = outputs[0]
---> 45 if isinstance(out, Variable):
46 return Gather(target_device, dim=dim)(*outputs)
47 if out is None:
RuntimeError: maximum recursion depth exceeded in __instancecheck__
```
The reason is that the function `gather_map` in [scatter_gather.py](https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/scatter_gather.py#L46-L52) only supports Variables or iterables of variables as its input.
However, `scatter_map` in [scatter_gather.py](https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/scatter_gather.py#L21-L23) also supports dictionaries.
Is there a reason for this discrepancy? Would it be useful if I made a pull request and added this functionality?
I am implementing a network with multiple sub-networks whose outputs may or may not be computed (based on a config file) and it would be useful to be able to pass all of them in a compact way out of the data parallel wrapper.
**SNIPPET REPLICATING ERROR:**
```python
import torch
import torch.nn as nn
from torch.autograd import Variable

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.block1 = nn.Linear(10, 20)
        self.block2 = nn.Linear(20, 20)

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        return x

class MyModelDictInput(nn.Module):
    def __init__(self):
        super(MyModelDictInput, self).__init__()
        self.block1 = nn.Linear(10, 20)
        self.block2 = nn.Linear(20, 20)

    def forward(self, d):
        x = d['an_input']
        x = self.block1(x)
        x = self.block2(x)
        return x

class MyModelDictOutput(nn.Module):
    def __init__(self):
        super(MyModelDictOutput, self).__init__()
        self.block1 = nn.Linear(10, 20)
        self.block2 = nn.Linear(20, 20)

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        d = dict()
        d['an_output'] = x
        return d

# create random input
i = Variable(torch.rand((4, 10)))
d = {'an_input': i}

# example 1:
print('input is a Variable, output is a Variable')
net = nn.DataParallel(MyModel()).cuda()
o = net.forward(i)
print(o)

# example 2:
print('input is a dict, output is a Variable')
net = nn.DataParallel(MyModelDictInput()).cuda()
o = net.forward(d)
print(o)

# example 3:
print('input is a Variable, output is a dict')
net = nn.DataParallel(MyModelDictOutput()).cuda()
o = net.forward(i)
print(o)
```
| Has anyone ever taken a look at this? Thanks!
This would help simplify our framework.
We are now using the method mentioned in [Training network with multiple outputs with multi gpus](https://discuss.pytorch.org/t/training-network-with-multiple-outputs-with-multi-gpus/6344), but it is a bit hacky. | 2018-03-29T17:55:11 |
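A usage sketch of what the patch above enables (module, sizes, and key names are illustrative; run on a machine with one or more GPUs): a DataParallel module whose forward returns a dict is gathered back into a single dict of tensors.
```python
import torch
import torch.nn as nn


class DictNet(nn.Module):
    def __init__(self):
        super(DictNet, self).__init__()
        self.fc = nn.Linear(10, 20)

    def forward(self, x):
        return {'an_output': self.fc(x)}


net = nn.DataParallel(DictNet()).cuda()
out = net(torch.rand(4, 10).cuda())
print(out['an_output'].size())   # torch.Size([4, 20]), gathered across GPUs
```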
pytorch/pytorch | 6,152 | pytorch__pytorch-6152 | [
"2006"
] | 762eb3ddc8be9dcd9345d232c4a3b7bc278e99c2 | diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -4094,7 +4094,8 @@ def parse_kwargs(desc):
.. function:: sum(input, dim, keepdim=False, out=None) -> Tensor
Returns the sum of each row of the :attr:`input` tensor in the given
-dimension :attr:`dim`.
+dimension :attr:`dim`. If :attr::`dim` is a list of dimensions,
+reduce over all of them.
If :attr:`keepdim` is ``True``, the output tensor is of the same size
as :attr:`input` except in the dimension :attr:`dim` where it is of size 1.
@@ -4103,7 +4104,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the input tensor
- dim (int): the dimension to reduce
+ dim (int or tuple of ints): the dimension or dimensions to reduce
keepdim (bool): whether the output tensor has :attr:`dim` retained or not
out (Tensor, optional): the output tensor
@@ -4117,6 +4118,9 @@ def parse_kwargs(desc):
[ 0.3637, -0.9906, -0.4752, -1.5197]])
>>> torch.sum(a, 1)
tensor([-0.4598, -0.1381, 1.3708, -2.6217])
+ >>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
+ >>> torch.sum(b, (2, 1))
+ tensor([ 435., 1335., 2235., 3135.])
""")
add_docstr(torch.svd,
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -2621,6 +2621,8 @@ class dont_convert(tuple):
('sum', (), NO_ARGS, 'scalar'),
('sum', (), (0,), 'scalar_dim', [0]),
('sum', (), (0, True,), 'scalar_keepdim_dim', [0]),
+ ('sum', (S, S, S), ([1, 2],), 'multi_dim'),
+ ('sum', (S, S, S), ([1, 2], True,), 'multi_dim_keepdim'),
('prod', (S, S, S), NO_ARGS),
('prod', (S, S, S), (1,), 'dim', [0]),
('prod', (S, S, S), (1, True,), 'keepdim_dim', [0]),
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -1503,6 +1503,8 @@ def make_tensors(*shape):
check_sum_dim(make_tensors(50, 50, 50), 0)
check_sum_dim(make_tensors(50, 50, 50), 1)
check_sum_dim(make_tensors(50, 50, 50), 2)
+ check_sum_dim(make_tensors(50, 50, 50), (1, 2))
+ check_sum_dim(make_tensors(50, 50, 50), (1, -1))
def make_contiguous_slice(size, dtype):
contig = make_contiguous((1, size), dtype)
@@ -1522,6 +1524,11 @@ def test_sum_out(self):
res2 = torch.Tensor()
torch.sum(x, 1, out=res2)
self.assertEqual(res1, res2)
+ x = torch.rand(100, 100, 100)
+ res1 = x.sum(2).sum(1)
+ res2 = torch.Tensor()
+ torch.sum(x, (2, 1), out=res2)
+ self.assertEqual(res1, res2)
# TODO: these tests only check if it's possible to pass a return value
# it'd be good to expand them
| Tensor.sum() over multiple axes
While writing `einsum`, I found that `sum` can only reduce over a single axis. I'm going to add a simple implementation of multi-axis summation:
```
def sum(input, axes, keepdim=False):
    # probably some check for uniqueness of axes
    if keepdim:
        for ax in axes:
            input = input.sum(ax, keepdim=True)
    else:
        for ax in sorted(axes, reverse=True):
            input = input.sum(ax)
    return input
```
1. Is it ok?
2. Where should it be better placed?
| Looks very useful
- Numpy uses `keepdims`, is there a reason to use `keepdim` name for parameter?
- probably, same should be done to `torch.prod`
- does it make sense to change `tensor.sum`, `variable.sum` as well?
Right, we should rename keepdim to keepdims if we do this (and support it for both variables and tensors). Also should do it for all functions with a keepdim parameter.
If we want to add reduction over multiple axes, I think we should permute the axes so that they are consecutive, make the tensor contiguous, and view those axes as one single dimension, so that we can perform the reduction over that collapsed dimension. That would be more efficient than performing several reductions.
@fmassa, it depends. .contiguous() for 3+ dimensional tensors is very slow now.
@ngimel hum, that's a good point. Also, we will rarely perform reductions over more than 3 axes, so the number of extra calls is small.
I've created a draft. Currently it checks the number of dimensions, but it seems that choosing just one of the two approaches (permute vs. multiple calls to `.sum()`) would be a better idea.
As @arogozhnikov mentioned: do we need to add similar implementation of `torch.prod`?
~We don't need whole tensor to be contiguous.~
~If we perform 3D `tensor.sum((1, 2))`, can reduce `sum` calls like~
```python
if tensor.stride(2)*tensor.size(2) == tensor.stride(1):
    output = tensor.view(tensor.size(0), -1).sum(1)
else:
    output = tensor.sum(2).sum(1)
```
And I think `mean`, `median`, `mode`, `min`, `max`, `std`, `var`, (and `norm`?) should have same signature for consistency.
**Edit:** I didn't know `view` can be performed only on contiguous tensor...
`contiguous` call is needed as @fmassa mentioned.
While we can avoid `contiguous` call like below.
```python
order = np.argsort(tensor.stride())[::-1].tolist()
permuted = tensor.permute(*order)
if permuted.is_contiguous():
    axes = tuple(map(lambda x: order[x], axes))
    # contiguous version
else:
    # not contiguous version
``` | 2018-03-30T20:00:53 |
pytorch/pytorch | 6,185 | pytorch__pytorch-6185 | [
"6175"
] | 92a0f7835e7e2f6342fe2fd32a25299ea86aed31 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -52,6 +52,10 @@
# specify the version of PyTorch, rather than the hard-coded version
# in this file; used when we're building binaries for distribution
#
+# TORCH_CUDA_ARCH_LIST
+# specify which CUDA architectures to build for.
+# ie `TORCH_CUDA_ARCH_LIST="6.0;7.0"`
+#
# Environment variables we respect (these environment variables are
# conventional and are often understood/set by other software.)
#
| enabling older computing capabilities like 5.0
I am using Ubuntu 16.04 and I am trying to compile pytorch. I want to enable CUDA compute capability 5.0; where do I set it, in what file or maybe in an environment variable?
| The environment variable is `TORCH_CUDA_ARCH_LIST`. You can build with `TORCH_CUDA_ARCH_LIST=5.0 python setup.py build develop`.
This should probably be added to the README (or setup.py, where the other common env variables are documented). | 2018-04-02T13:54:52 |
|
pytorch/pytorch | 6,201 | pytorch__pytorch-6201 | [
"5616"
] | 6b3a4637d6e87a543b3df6edb825c07d7dabe611 | diff --git a/tools/autograd/gen_python_functions.py b/tools/autograd/gen_python_functions.py
--- a/tools/autograd/gen_python_functions.py
+++ b/tools/autograd/gen_python_functions.py
@@ -18,7 +18,7 @@
'.*_backward', '.*_backward_(out|input|weight|bias)', '.*_forward',
'.*_forward_out', 'sparse_raw_resize_', '_unsafe_view', 'tensor',
'sparse_coo_tensor', '_arange.*', '_range.*', '_linspace.*', '_logspace.*',
- '_indexCopy_',
+ '_indexCopy_', 'max_values', 'min_values', 'argmax', 'argmin'
]
PY_VARIABLE_METHODS_CPP = CodeTemplate.from_file(template_path + '/python_variable_methods.cpp')
diff --git a/torch/functional.py b/torch/functional.py
--- a/torch/functional.py
+++ b/torch/functional.py
@@ -4,6 +4,8 @@
import math
__all__ = [
+ 'argmax',
+ 'argmin',
'bartlett_window',
'btrifact',
'btriunpack',
@@ -378,3 +380,75 @@ def unique(input, sorted=False, return_inverse=False):
return output, inverse_indices
else:
return output
+
+
+def argmax(input, dim=None, keepdim=False):
+ """Returns the indices of the maximum values of a tensor across a dimension.
+
+ This is the second value returned by :meth:`torch.max`. See its
+ documentation for the exact semantics of this method.
+
+ Args:
+ input (Tensor): the input tensor
+ dim (int): the dimension to reduce. If ``None``, the argmax of the
+ flattened input is returned.
+ keepdim (bool): whether the output tensors have :attr:`dim`
+ retained or not. Ignored if ``dim=None``.
+
+ Example::
+
+ >>> a = torch.randn(4, 4)
+ >>> a
+
+ 2.3461 0.0056 1.4846 0.3911
+ -1.3584 -1.0066 0.0530 1.1754
+ -0.7929 -0.3194 -1.4865 0.4020
+ 0.1101 0.6694 1.3456 0.8235
+ [torch.FloatTensor of size (4,4)]
+
+ >>> torch.argmax(a, dim=1)
+ 0
+ 3
+ 3
+ 2
+ [torch.LongTensor of size (4,)]
+ """
+ if dim is None:
+ return torch._argmax(input.contiguous().view(-1), dim=0, keepdim=False)
+ return torch._argmax(input, dim, keepdim)
+
+
+def argmin(input, dim=None, keepdim=False):
+ """Returns the indices of the minimum values of a tensor across a dimension.
+
+ This is the second value returned by :meth:`torch.min`. See its
+ documentation for the exact semantics of this method.
+
+ Args:
+ input (Tensor): the input tensor
+ dim (int): the dimension to reduce. If ``None``, the argmin of the
+ flattened input is returned.
+ keepdim (bool): whether the output tensors have :attr:`dim`
+ retained or not. Ignored if ``dim=None``.
+
+ Example::
+
+ >>> a = torch.randn(4, 4)
+ >>> a
+
+ 2.3461 0.0056 1.4846 0.3911
+ -1.3584 -1.0066 0.0530 1.1754
+ -0.7929 -0.3194 -1.4865 0.4020
+ 0.1101 0.6694 1.3456 0.8235
+ [torch.FloatTensor of size (4,4)]
+
+ >>> torch.argmin(a, dim=1)
+ 1
+ 0
+ 2
+ 0
+ [torch.LongTensor of size (4,)]
+ """
+ if dim is None:
+ return torch._argmin(input.contiguous().view(-1), dim=0, keepdim=False)
+ return torch._argmin(input, dim, keepdim)
diff --git a/torch/tensor.py b/torch/tensor.py
--- a/torch/tensor.py
+++ b/torch/tensor.py
@@ -214,6 +214,14 @@ def share_memory_(self):
def view_as(self, tensor):
return self.view(tensor.size())
+ def argmax(self, dim=None, keepdim=False):
+ r"""See :func:`torch.argmax`"""
+ return torch.argmax(self, dim, keepdim)
+
+ def argmin(self, dim=None, keepdim=False):
+ r"""See :func:`torch.argmin`"""
+ return torch.argmin(self, dim, keepdim)
+
def btrifact(self, info=None, pivot=True):
r"""See :func:`torch.btrifact`
"""
| diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -574,6 +574,28 @@ def _test_dim_reduction(self, cast):
self.assertEqual(x.min(0), (torch.FloatTensor([-1, 2, 1]), torch.FloatTensor([0, 0, 0])))
self.assertEqual(x.min(1), (torch.FloatTensor([-1, 3]), torch.FloatTensor([0, 1])))
+ for dtype in types:
+ if dtype == torch.uint8: # Doesn't support negative values
+ continue
+ x = cast(torch.tensor(example, dtype=dtype))
+ self.assertEqual(x.argmax().item(), 5)
+ self.assertEqual(x.argmax(dim=0), torch.FloatTensor([1, 1, 1]))
+ self.assertEqual(x.argmax(dim=1), torch.FloatTensor([1, 2]))
+ self.assertEqual(x.argmax(dim=0, keepdim=True), torch.FloatTensor([[1, 1, 1]]))
+ # test that non-contiguous tensors work
+ self.assertEqual(x[:, :2].argmax().item(), 2)
+
+ for dtype in types:
+ if dtype == torch.uint8: # Doesn't support negative values
+ continue
+ x = cast(torch.tensor(example, dtype=dtype))
+ self.assertEqual(x.argmin().item(), 0)
+ self.assertEqual(x.argmin(dim=0), torch.FloatTensor([0, 0, 0]))
+ self.assertEqual(x.argmin(dim=1), torch.FloatTensor([0, 1]))
+ self.assertEqual(x.argmin(dim=1, keepdim=True), torch.FloatTensor([[0], [1]]))
+ # test that non-contiguous tensors work
+ self.assertEqual(x[:, :2].argmin().item(), 0)
+
dim_red_fns = [
"mean", "median", "mode", "norm", "prod",
"std", "sum", "var", "max", "min"]
| [feature request] torch.argmax / torch.argmin
This would save indexing into the output of torch.max / torch.min and make code more readable. NumPy / TensorFlow also have it.
| How is this feature coming along? Has it been added?
@thecortex note that you can currently obtain the argmax as the second return value of `torch.max` when a dimension is specified; the same holds for argmin (see the short sketch below). | 2018-04-02T20:37:28 |
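A minimal sketch of the workaround mentioned in the comment above (the tensor `x` is illustrative; `torch.max`/`torch.min` along a dimension return a `(values, indices)` pair):
```python
import torch

x = torch.randn(4, 4)

# The second return value acts as argmax / argmin along that dimension
_, argmax_dim1 = torch.max(x, dim=1)
_, argmin_dim1 = torch.min(x, dim=1)
```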
pytorch/pytorch | 6,229 | pytorch__pytorch-6229 | [
"5719"
] | 4375dfd0b201e4efa2e19266fce8a953a2ca2061 | diff --git a/torch/multiprocessing/reductions.py b/torch/multiprocessing/reductions.py
--- a/torch/multiprocessing/reductions.py
+++ b/torch/multiprocessing/reductions.py
@@ -98,6 +98,10 @@ def rebuild_storage_cuda(cls, device, handle, size, offset, view_size):
return storage
+def rebuild_storage_empty(cls):
+ return cls()
+
+
def reduce_storage(storage):
from . import get_sharing_strategy
if storage.is_cuda:
@@ -109,6 +113,10 @@ def reduce_storage(storage):
cache_key = metadata[1]
rebuild = rebuild_storage_filename
storage._shared_incref()
+ elif storage.size() == 0:
+ # This is special cased because Empty tensors
+ # (with size 0) cannot be mmapped.
+ return (rebuild_storage_empty, (type(storage),))
else:
fd, size = storage._share_fd_()
if sys.version_info[0] == 2:
| diff --git a/test/test_multiprocessing.py b/test/test_multiprocessing.py
--- a/test/test_multiprocessing.py
+++ b/test/test_multiprocessing.py
@@ -371,6 +371,22 @@ def test_event(self):
self.assertEqual(list(tensor), [4, 4, 4, 4])
p.join()
+ def _test_empty_tensor_sharing(self, dtype):
+ q = mp.Queue()
+ empty = torch.tensor([], dtype=dtype)
+ q.put(empty)
+ out = q.get(timeout=1)
+ self.assertEqual(out, empty)
+
+ def test_empty_tensor_sharing(self):
+ self._test_empty_tensor_sharing(torch.float32)
+ self._test_empty_tensor_sharing(torch.int64)
+
+ @unittest.skipIf(not torch.cuda.is_available(), 'CUDA not available')
+ def test_empty_tensor_sharing_cuda(self):
+ self._test_empty_tensor_sharing(torch.cuda.float32)
+ self._test_empty_tensor_sharing(torch.cuda.int64)
+
def _test_autograd_sharing(self, var):
ready = mp.Event()
master_modified = mp.Event()
| Multiprocessing: sharing a zero-sized tensor fails
- PyTorch version: 0.4.0a0+c7611f7
Repro:
```python
import torch
import torch.multiprocessing as mp
q = mp.Queue()
t = torch.tensor([])
q.put(t)
```
Result:
```
File "/data/users/sgross/python3/lib/python3.5/multiprocessing/queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "/data/users/sgross/python3/lib/python3.5/multiprocessing/reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "/data/users/sgross/pytorch/torch/multiprocessing/reductions.py", line 117, in reduce_storage
df = multiprocessing.reduction.DupFd(fd)
File "/data/users/sgross/python3/lib/python3.5/multiprocessing/reduction.py", line 190, in DupFd
return resource_sharer.DupFd(fd)
File "/data/users/sgross/python3/lib/python3.5/multiprocessing/resource_sharer.py", line 48, in __init__
new_fd = os.dup(fd)
OSError: [Errno 9] Bad file descriptor
```
| Hmm, I fail to reproduce this bug with Python 3.5.0 on macOS 10.13.3. What's your system environment?
The bug is probably specific to the `file_descriptor` sharing mode. macOS uses the filename-based strategy by default. | 2018-04-03T14:52:10 |
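The sharing-strategy difference mentioned above can be inspected directly (a small illustrative snippet; the printed defaults are what I'd expect on Linux vs. macOS, not something guaranteed by this issue):
```python
import torch.multiprocessing as mp

print(mp.get_sharing_strategy())         # typically 'file_descriptor' on Linux, 'file_system' on macOS
print(mp.get_all_sharing_strategies())   # strategies supported on this platform
mp.set_sharing_strategy('file_system')   # switch to the filename-based strategy
```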
pytorch/pytorch | 6,307 | pytorch__pytorch-6307 | [
"1889"
] | 187955b9599c162f75d19ceb65fde02bd85253b3 | diff --git a/tools/autograd/gen_python_functions.py b/tools/autograd/gen_python_functions.py
--- a/tools/autograd/gen_python_functions.py
+++ b/tools/autograd/gen_python_functions.py
@@ -216,6 +216,7 @@ def create_python_bindings(python_functions, has_self, is_module=False):
'int64_t': 'toInt64',
'bool': 'toBool',
'double': 'toDouble',
+ 'std::string': 'string',
}
unpack_with_default_methods = {
diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -1489,6 +1489,67 @@
- **v** (*Tensor*): the eigenvectors of ``a`` if ``eigenvectors`` is ``True``; otherwise an empty tensor
""")
+add_docstr(torch.einsum,
+ r"""
+einsum(equation, operands) -> Tensor
+
+This function provides a way of computing multilinear expressions (i.e. sums of products) using the
+Einstein summation convention.
+
+Args:
+ equation (string): The equation is given in terms of lower case letters (indices) to be associated
+ with each dimension of the operands and result. The left hand side lists the operands
+ dimensions, separated by commas. There should be one index letter per tensor dimension.
+ The right hand side follows after `->` and gives the indices for the output.
+ If the `->` and right hand side are omitted, it implicitly defined as the alphabetically
+ sorted list of all indices appearing exactly once in the left hand side.
+ The indices not apprearing in the output are summed over after multiplying the operands
+ entries.
+ `einsum` does not implement diagonals (multiple occurences of a single index for one tensor,
+ e.g. `ii->i`) and ellipses (`...`).
+ operands (list of Tensors): The operands to compute the Einstein sum of.
+ Note that the operands are passed as a list, not as individual arguments.
+
+Examples::
+
+ >>> x = torch.randn(5)
+ >>> y = torch.randn(4)
+ >>> torch.einsum('i,j->ij', (x,y)) # outer product
+
+ -1.0066 -2.0433 -0.8290 0.8429
+ -0.5106 -1.0365 -0.4205 0.4275
+ 0.4174 0.8473 0.3438 -0.3495
+ -0.4578 -0.9292 -0.3770 0.3833
+ -0.8996 -1.8262 -0.7409 0.7533
+ [torch.FloatTensor of size (5,4)]
+
+ >>> A = torch.randn(3,5,4)
+ >>> l = torch.randn(2,5)
+ >>> r = torch.randn(2,4)
+ >>> torch.einsum('bn,anm,bm->ba', (l,A,r)) # compare torch.nn.functional.bilinear
+
+ -1.3778 2.7663 -4.9150
+ -1.7813 -4.9015 2.4149
+ [torch.FloatTensor of size (2,3)]
+
+ >>> As = torch.randn(3,2,5)
+ >>> Bs = torch.randn(3,5,4)
+ >>> torch.einsum('bij,bjk->bik', (As, Bs)) # batch matrix multiplication
+
+ (0 ,.,.) =
+ -2.0810 4.7334 2.9593 0.5268
+ 1.8096 -4.6701 -2.4214 -2.2638
+
+ (1 ,.,.) =
+ 0.9456 -8.3309 -2.4690 -3.3164
+ 1.9580 -1.8447 -1.4268 -2.5414
+
+ (2 ,.,.) =
+ -0.1725 0.7317 -0.2110 -0.0522
+ 2.5407 -0.2854 3.8720 0.9073
+ [torch.FloatTensor of size (3,2,4)]
+""")
+
add_docstr(torch.eq,
r"""
eq(input, other, out=None) -> Tensor
| diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -1309,6 +1309,57 @@ def test_cmul(self):
def test_cpow(self):
self._test_cop(torch.pow, lambda x, y: float('nan') if x < 0 else math.pow(x, y))
+ @unittest.skipIf(not TEST_NUMPY, 'Numpy not found')
+ def test_einsum(self):
+ # test cases taken from https://gist.github.com/rockt/15ee013889d65342088e9260a377dc8f
+ x = torch.randn(5)
+ y = torch.randn(7)
+ A = torch.randn(3, 5)
+ B = torch.randn(2, 5)
+ C = torch.randn(2, 3, 5)
+ D = torch.randn(2, 5, 7)
+ E = torch.randn(7, 9)
+ F = torch.randn(2, 3, 5, 7)
+ G = torch.randn(7, 11, 13)
+ l = torch.randn(5, 10)
+ r = torch.randn(5, 20)
+ w = torch.randn(30, 10, 20)
+ test_list = [
+ # -- Vector
+ ("i->", x), # sum
+ ("i,i->", x, x), # dot
+ ("i,i->i", x, x), # vector element-wise mul
+ ("i,j->ij", x, y), # outer
+ # -- Matrix
+ ("ij->ji", A), # transpose
+ ("ij->j", A), # row sum
+ ("ij->i", A), # col sum
+ ("ij,ij->ij", A, A), # matrix element-wise mul
+ ("ij,j->i", A, x), # matrix vector multiplication
+ ("ij,kj->ik", A, B), # matmul
+ ("ij,ab->ijab", A, E), # matrix outer product
+ # -- Tensor
+ ("aij,ajk->aik", C, D), # batch matmul
+ ("ijk,jk->i", C, A), # tensor matrix contraction
+ ("aij,jk->aik", D, E), # tensor matrix contraction
+ ("abcd,dfg->abcfg", F, G), # tensor tensor contraction
+ ("ijk,jk->ik", C, A), # tensor matrix contraction with double indices
+ ("ijk,jk->ij", C, A), # tensor matrix contraction with double indices
+ ("ijk,ik->j", C, B), # non contiguous
+ ("ijk,ik->jk", C, B), # non contiguous with double indices
+ # -- Other
+ ("bn,anm,bm->ba", l, w, r), # as torch.bilinear
+ ]
+ for test in test_list:
+ actual = torch.einsum(test[0], test[1:])
+ expected = np.einsum(test[0], *[t.numpy() for t in test[1:]])
+ self.assertEqual(expected.shape, actual.shape)
+ self.assertTrue(np.allclose(expected, actual.numpy()))
+
+ def do_einsum(*args):
+ return torch.einsum(test[0], args)
+ self.assertTrue(torch.autograd.gradcheck(do_einsum, test[1:]))
+
def test_sum_all(self):
def check_sum_all(tensor):
pylist = tensor.reshape(-1).tolist()
| Support for einsum notation
As mentioned [here](https://discuss.pytorch.org/t/einstein-summation-in-pytorch/2492), support for [einsum notation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html) would be useful. It's in fact the feature I miss most after transitioning from TensorFlow. What's needed for the implementation is parsing the einsum string, deciding which PyTorch modules to use for which cases, and automatically reshaping the input tensors back and forth as done [here](https://github.com/tensorflow/tensorflow/blob/r1.2/tensorflow/python/ops/special_math_ops.py#L85).
| Yeah, PyTorch lacks this. I can implement the feature.
@vlasenkov that'd be great. Please go ahead.
Any progress, @vlasenkov?
@MadcowD unfortunately I'm not able to contribute right now; you can check the PR and complete the feature. | 2018-04-05T09:51:25 |
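A couple of illustrative calls to the `torch.einsum` API added by this patch (examples adapted from the docstring in the diff; note that the operands are passed as a single tuple/list rather than as separate arguments):
```python
import torch

x = torch.randn(5)
y = torch.randn(4)
As = torch.randn(3, 2, 5)
Bs = torch.randn(3, 5, 4)

outer = torch.einsum('i,j->ij', (x, y))       # outer product, shape (5, 4)
bmm = torch.einsum('bij,bjk->bik', (As, Bs))  # batch matrix multiplication, shape (3, 2, 4)
```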
pytorch/pytorch | 6,327 | pytorch__pytorch-6327 | [
"6299"
] | 99939b6d90cece8046f95d5e345c421c35a46176 | diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py
--- a/torch/nn/parallel/data_parallel.py
+++ b/torch/nn/parallel/data_parallel.py
@@ -52,8 +52,22 @@ class DataParallel(Module):
.. warning::
Forward and backward hooks defined on :attr:`module` and its submodules
- won't be invoked anymore, unless the hooks are initialized in the
- :meth:`forward` method.
+ will be invoked ``len(device_ids)`` times, each with inputs located on
+ a particular device. Particularly, the hooks are only guaranteed to be
+ executed in correct order with respect to operations on corresponding
+ devices. For example, it is not guaranteed that hooks set via
+ :meth:`~torch.nn.Module.register_forward_pre_hook` be executed before
+ `all` ``len(device_ids)`` :meth:`~torch.nn.Module.forward` calls, but
+ that each such hook be executed before the corresponding
+ :meth:`~torch.nn.Module.forward` call of that device.
+
+ .. note::
+ There is a subtlety in using the
+ ``pack sequence -> recurrent network -> unpack sequence`` pattern in a
+ :class:`~torch.nn.Module` wrapped in :class:`~torch.nn.DataParallel`.
+ See :ref:`this FAQ section <pack-rnn-unpack-with-data-parallelism>` for
+ details.
+
Args:
module: module to be parallelized
diff --git a/torch/nn/utils/rnn.py b/torch/nn/utils/rnn.py
--- a/torch/nn/utils/rnn.py
+++ b/torch/nn/utils/rnn.py
@@ -11,7 +11,7 @@
class PackedSequence(PackedSequence_):
- r"""Holds the data and list of batch_sizes of a packed sequence.
+ r"""Holds the data and list of :attr:`batch_sizes` of a packed sequence.
All RNN modules accept packed sequences as inputs.
@@ -21,8 +21,9 @@ class PackedSequence(PackedSequence_):
Batch sizes represent the number elements at each sequence step in
the batch, not the varying sequence lengths passed to
- :func:`pack_padded_sequence`. For instance, given data ``abc`` and `d`
- the ``PackedSequence`` would be ``adbc`` with ``batch_sizes=[2,1,1]``.
+ :func:`pack_padded_sequence`. For instance, given data ``abc`` and `x`
+ the :class:`PackedSequence` would contain data ``axbc`` with
+ ``batch_sizes=[2,1,1]``.
Attributes:
data (Variable): Variable containing packed sequence
@@ -136,7 +137,9 @@ def pack_padded_sequence(input, lengths, batch_first=False):
return PackedSequence(data, batch_sizes)
-def _symbolic_pad_packed_sequence(g, input, batch_first=False, padding_value=0.0):
+def _symbolic_pad_packed_sequence(g, input, batch_first=False, padding_value=0.0, total_length=None):
+ if total_length is not None:
+ raise ValueError("_symbolic_pad_packed_sequence only supports total_length=None")
# See comment on _symbolic_pack_padded_sequence
data, lengths = g.op("prim::PadPacked", input.data, input.batch_sizes, outputs=2)
if batch_first:
@@ -145,7 +148,7 @@ def _symbolic_pad_packed_sequence(g, input, batch_first=False, padding_value=0.0
@torch.onnx.symbolic_override_packed_sequence_based(_symbolic_pad_packed_sequence)
-def pad_packed_sequence(sequence, batch_first=False, padding_value=0):
+def pad_packed_sequence(sequence, batch_first=False, padding_value=0.0, total_length=None):
r"""Pads a packed batch of variable length sequences.
It is an inverse operation to :func:`pack_padded_sequence`.
@@ -156,11 +159,22 @@ def pad_packed_sequence(sequence, batch_first=False, padding_value=0):
Batch elements will be ordered decreasingly by their length.
+ .. note::
+ :attr:`total_length` is useful to implement the
+ ``pack sequence -> recurrent network -> unpack sequence`` pattern in a
+ :class:`~torch.nn.Module` wrapped in :class:`~torch.nn.DataParallel`.
+ See :ref:`this FAQ section <pack-rnn-unpack-with-data-parallelism>` for
+ details.
+
Arguments:
sequence (PackedSequence): batch to pad
batch_first (bool, optional): if ``True``, the output will be in ``B x T x *``
format.
padding_value (float, optional): values for padded elements.
+ total_length (int, optional): if not ``None``, the output will be padded to
+ have length :attr:`total_length`. This method will throw :class:`ValueError`
+ if :attr:`total_length` is less than the max sequence length in
+ :attr:`sequence`.
Returns:
Tuple of Variable containing the padded sequence, and Variable
@@ -169,7 +183,15 @@ def pad_packed_sequence(sequence, batch_first=False, padding_value=0):
"""
var_data, batch_sizes = sequence
max_batch_size = int(batch_sizes[0])
- output = var_data.data.new(len(batch_sizes), max_batch_size, *var_data.size()[1:]).fill_(padding_value)
+ max_seq_length = batch_sizes.size(0)
+ if total_length is not None:
+ if total_length < max_seq_length:
+ raise ValueError("Expected total_length to be at least the length "
+ "of the longest sequence in input, but got "
+ "total_length={} and max sequence length being {}"
+ .format(total_length, max_seq_length))
+ max_seq_length = total_length
+ output = var_data.data.new(max_seq_length, max_batch_size, *var_data.size()[1:]).fill_(padding_value)
output = Variable(output)
lengths = []
diff --git a/torch/onnx/__init__.py b/torch/onnx/__init__.py
--- a/torch/onnx/__init__.py
+++ b/torch/onnx/__init__.py
@@ -124,7 +124,12 @@ def symbolic_override_first_arg_based(symbolic_fn):
def might_trace(args):
import torch
- return torch._C._jit_is_tracing(args[0])
+ first_arg = args[0]
+ if not torch.is_tensor(first_arg):
+ raise ValueError('First argument of {} is expected to be a tensor, '
+ 'but got an object of type {}'
+ .format(symbolic_fn.__name__, type(first_arg)))
+ return torch._C._jit_is_tracing(first_arg)
return functools.partial(_symbolic_override_wrapper_maker, symbolic_fn, might_trace)
@@ -140,6 +145,11 @@ def symbolic_override_packed_sequence_based(symbolic_fn):
def might_trace(args):
import torch
- return torch._C._jit_is_tracing(args[0][0])
+ first_arg = args[0]
+ if not isinstance(first_arg, torch.nn.utils.rnn.PackedSequence):
+ raise ValueError('pad_packed_sequence expects sequence to be a '
+ 'PackedSequence, but got an object of type {}'
+ .format(type(first_arg)))
+ return torch._C._jit_is_tracing(first_arg[0])
return functools.partial(_symbolic_override_wrapper_maker, symbolic_fn, might_trace)
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -128,6 +128,39 @@ def test_cuda_mask(self):
unpacked, _ = rnn_utils.pad_packed_sequence(packed)
self.assertEqual(unpacked.type(), cuda_type_str)
+ def test_total_length(self):
+ padded, lengths = self._padded_sequence(torch.FloatTensor)
+ max_length = max(lengths)
+ packed = rnn_utils.pack_padded_sequence(padded, lengths)
+ # test ValueError if total_length < max_length
+ for total_length in (-1, 0, max_length - 1):
+ for batch_first in (True, False):
+ def err_fn():
+ rnn_utils.pad_packed_sequence(packed, batch_first=batch_first,
+ total_length=total_length)
+ self.assertRaisesRegex(ValueError,
+ r'Expected total_length to be at least the '
+ r'length of the longest sequence in input',
+ err_fn)
+ # test that pad_packed_sequence returns results of correct length
+ for batch_first in (True, False):
+ no_extra_pad, _ = rnn_utils.pad_packed_sequence(packed, batch_first=batch_first)
+ for total_length_delta in (0, 1, 8):
+ total_length = max_length + total_length_delta
+ unpacked, lengths_out = rnn_utils.pad_packed_sequence(packed, batch_first=batch_first,
+ total_length=total_length)
+ self.assertEqual(lengths, lengths_out)
+ self.assertEqual(unpacked.size(1 if batch_first else 0), total_length)
+ if total_length_delta == 0:
+ ref_output = no_extra_pad
+ elif batch_first:
+ extra_pad = no_extra_pad.new_zeros(self.batch_size, total_length_delta)
+ ref_output = torch.cat([no_extra_pad, extra_pad], 1)
+ else:
+ extra_pad = no_extra_pad.new_zeros(total_length_delta, self.batch_size)
+ ref_output = torch.cat([no_extra_pad, extra_pad], 0)
+ self.assertEqual(unpacked, ref_output)
+
def default_tensor_type(type):
type_str = torch.typename(type)
| [feature request] make pad_packed_sequence work in DataParallel
Currently users can not do `pack -> RNN -> unpack` in a module wrapped in `DataParallel` because the unpack operation (`pad_packed_sequence`) will only pad up to the longest input it sees, i.e., the longest on that particular device. Then the code breaks when it tries to gather the results into a single tensor afterwards. I propose to add a `total_length` option to `pad_packed_sequence` so that it can beyond the longest input. Then the following pattern should work with DataParallel:
```python
def forward(self, input, input_lengths):
total_length = input.size(-1)
packed_input = nn.utils.rnn.pack_padded_sequence(input, input_lengths)
packed_output = self.lstm(packed_input)
output, _ = nn.utils.rnn.pad_packed_sequence(packed_output, total_length=total_length)
return output
```
The current suggested workaround is to manually add the padding. However, it becomes tricky when one wants to do ONNX export.
Relevant post: https://discuss.pytorch.org/t/question-about-packed-rnn-with-dataparallel/2738
cc @goldsborough
| @apaszke do you think that this is reasonable? :) | 2018-04-05T22:21:12 |
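With the `total_length` argument added by this patch, the unpack step can be padded to a fixed length regardless of the longest sequence a particular replica saw. A small standalone sketch (shapes and lengths are illustrative):
```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

seq = torch.randn(5, 3, 8)                    # (T=5, B=3, features), lengths sorted descending
lengths = [5, 3, 2]
packed = pack_padded_sequence(seq, lengths)
out, out_lengths = pad_packed_sequence(packed, total_length=10)
print(out.shape)                              # torch.Size([10, 3, 8])
```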
pytorch/pytorch | 6,367 | pytorch__pytorch-6367 | [
"5677"
] | e45b51148a8f4cafd0716a735a301bba850755be | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -125,7 +125,7 @@
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+html_static_path = ['_static', '_images']
# html_style_path = 'css/pytorch_theme.css'
html_context = {
diff --git a/docs/source/scripts/build_activation_images.py b/docs/source/scripts/build_activation_images.py
--- a/docs/source/scripts/build_activation_images.py
+++ b/docs/source/scripts/build_activation_images.py
@@ -15,14 +15,11 @@
# Create a directory for the images, if it doesn't exist
-DOCS_PATH = os.path.realpath(os.path.join(__file__, "../../.."))
ACTIVATION_IMAGE_PATH = os.path.join(
- DOCS_PATH,
- "source/_static/img/activation/"
+ os.path.realpath(os.path.join(__file__, "..")),
+ "activation_images"
)
-print(ACTIVATION_IMAGE_PATH)
-
if not os.path.exists(ACTIVATION_IMAGE_PATH):
os.mkdir(ACTIVATION_IMAGE_PATH)
diff --git a/torch/nn/modules/activation.py b/torch/nn/modules/activation.py
--- a/torch/nn/modules/activation.py
+++ b/torch/nn/modules/activation.py
@@ -56,7 +56,7 @@ class ReLU(Threshold):
r"""Applies the rectified linear unit function element-wise
:math:`\text{ReLU}(x)= \max(0, x)`
- .. image:: _static/img/activation/ReLU.png
+ .. image:: scripts/activation_images/ReLU.png
Args:
inplace: can optionally do the operation in-place. Default: ``False``
@@ -147,7 +147,7 @@ class Hardtanh(Module):
The range of the linear region :math:`[-1, 1]` can be adjusted using
:attr:`min_val` and :attr:`max_val`.
- .. image:: _static/img/activation/Hardtanh.png
+ .. image:: scripts/activation_images/Hardtanh.png
Args:
min_val: minimum value of the linear region range. Default: -1
@@ -204,7 +204,7 @@ class ReLU6(Hardtanh):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/ReLU6.png
+ .. image:: scripts/activation_images/ReLU6.png
Examples::
@@ -229,7 +229,7 @@ class Sigmoid(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/Sigmoid.png
+ .. image:: scripts/activationscripts/activation_images/Sigmoid.png
Examples::
@@ -251,7 +251,7 @@ class Tanh(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/Tanh.png
+ .. image:: scripts/activation_images/Tanh.png
Examples::
@@ -277,7 +277,7 @@ class ELU(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/ELU.png
+ .. image:: scripts/activation_images/ELU.png
Examples::
@@ -305,7 +305,7 @@ class SELU(Module):
with :math:`\alpha = 1.6732632423543772848170429916717` and
:math:`\text{scale} = 1.0507009873554804934193349852946`.
- .. image:: _static/img/activation/SELU.png
+ .. image:: scripts/activation_images/SELU.png
More details can be found in the paper `Self-Normalizing Neural Networks`_ .
@@ -389,7 +389,7 @@ class Hardshrink(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/Hardshrink.png
+ .. image:: scripts/activation_images/Hardshrink.png
Examples::
@@ -429,7 +429,7 @@ class LeakyReLU(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/LeakyReLU.png
+ .. image:: scripts/activation_images/LeakyReLU.png
Examples::
@@ -459,7 +459,7 @@ class LogSigmoid(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/LogSigmoid.png
+ .. image:: scripts/activation_images/LogSigmoid.png
Examples::
@@ -490,7 +490,7 @@ class Softplus(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/Softplus.png
+ .. image:: scripts/activation_images/Softplus.png
Examples::
@@ -532,7 +532,7 @@ class Softshrink(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/Softshrink.png
+ .. image:: scripts/activation_images/Softshrink.png
Examples::
@@ -580,7 +580,7 @@ class PReLU(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/PReLU.png
+ .. image:: scripts/activation_images/PReLU.png
Examples::
@@ -609,7 +609,7 @@ class Softsign(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/Softsign.png
+ .. image:: scripts/activation_images/Softsign.png
Examples::
@@ -630,7 +630,7 @@ class Tanhshrink(Module):
dimensions
- Output: :math:`(N, *)`, same shape as the input
- .. image:: _static/img/activation/Tanhshrink.png
+ .. image:: scripts/activation_images/Tanhshrink.png
Examples::
| Activation doc image broken
e.g. http://pytorch.org/docs/master/nn.html#torch.nn.ELU
@pmitros
| Thank you. Hmmm... It works fine locally. I have time to debug or help out over this weekend, but I'm not quite sure how. I don't really understand (or likely have access to) the deployment infrastructure that builds pytorch.org. If you have any pointers, or know of a way I can help out, please let me know.
There are also issues with http://pytorch.org/docs/master/nn.html?highlight=shrink#torch.nn.functional.tanhshrink and http://pytorch.org/docs/master/nn.html?highlight=shrink#torch.nn.Softshrink (see https://github.com/pytorch/pytorch/issues/4819). | 2018-04-06T22:29:49 |
|
pytorch/pytorch | 6,396 | pytorch__pytorch-6396 | [
"6386"
] | 67bbf585cda43fc63a8450421aaef24e2f7b3501 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -293,6 +293,15 @@ def check_file(f):
libs += ['THD']
build_libs(libs)
+ # Use copies instead of symbolic files.
+ # Windows has very poor support for them.
+ sym_files = ['tools/shared/cwrap_common.py']
+ orig_files = ['aten/src/ATen/common_with_cwrap.py']
+ for sym_file, orig_file in zip(sym_files, orig_files):
+ if os.path.exists(sym_file):
+ os.remove(sym_file)
+ shutil.copyfile(orig_file, sym_file)
+
# Copy headers necessary to compile C++ extensions.
#
# This is not perfect solution as build does not depend on any of
diff --git a/tools/shared/cwrap_common.py b/tools/shared/cwrap_common.py
deleted file mode 120000
--- a/tools/shared/cwrap_common.py
+++ /dev/null
@@ -1 +0,0 @@
-../../aten/src/ATen/common_with_cwrap.py
\ No newline at end of file
| Make Windows build stop clobbering tools/shared/cwrap_common.py
Because this file is implemented as a symlink, on Windows we copy the source over it as part of the build process. This dirties the working copy and is generally annoying. It would be good to work out a way to make this not happen.
CC @peterjc123
| How could this be done? Copy this file in the `setup.py` script and add it to `.gitignore`? | 2018-04-08T05:30:24 |
|
pytorch/pytorch | 6,425 | pytorch__pytorch-6425 | [
"6312"
] | a91c88a34835ad79b80faf0a60f01d5cc0692c41 | diff --git a/torch/utils/bottleneck/__main__.py b/torch/utils/bottleneck/__main__.py
--- a/torch/utils/bottleneck/__main__.py
+++ b/torch/utils/bottleneck/__main__.py
@@ -218,7 +218,7 @@ def parse_args():
parser.add_argument('scriptfile', type=str,
help='Path to the script to be run. '
'Usually run with `python path/to/script`.')
- parser.add_argument('args', type=str, nargs='*',
+ parser.add_argument('args', type=str, nargs=argparse.REMAINDER,
help='Command-line arguments to be passed to the script.')
return parser.parse_args()
| diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -406,12 +406,9 @@ def _run_bottleneck(self, test_file, scriptargs=''):
curdir = os.path.dirname(os.path.abspath(__file__))
filepath = '{}/{}'.format(curdir, test_file)
if scriptargs != '':
- mark = '-- '
scriptargs = ' {}'.format(scriptargs)
- else:
- mark = ''
rc, out, err = self._run(
- 'python -m torch.utils.bottleneck {}{}{}'.format(mark, filepath, scriptargs))
+ 'python -m torch.utils.bottleneck {}{}'.format(filepath, scriptargs))
return rc, out, err
def _check_run_args(self):
@@ -463,7 +460,7 @@ def _check_cuda(self, output):
self.assertIsNone(results, self._fail_msg('Should not tell users about CUDA', output))
@unittest.skipIf(torch.cuda.is_available(), 'CPU-only test')
- def test_cpu_only(self):
+ def test_bottleneck_cpu_only(self):
rc, out, err = self._run_bottleneck('bottleneck/test.py')
self.assertEqual(rc, 0, 'Run failed with\n{}'.format(err))
@@ -475,7 +472,7 @@ def test_cpu_only(self):
@unittest.skipIf(IS_WINDOWS, "FIXME: Intermittent CUDA out-of-memory error")
@unittest.skipIf(not torch.cuda.is_available(), 'No CUDA')
- def test_cuda(self):
+ def test_bottleneck_cuda(self):
rc, out, err = self._run_bottleneck('bottleneck/test_cuda.py')
self.assertEqual(rc, 0, 'Run failed with\n{}'.format(err))
| [utils.bottleneck] Handle user script arguments better
Right now,
`python -m torch.utils.bottleneck script.py arg1 arg2 arg2`
fails with a cryptic errors to the user as their user script doesn't receive the args correctly. The recommended usage right now is `python -m torch.utils.bottleneck -- script.py arg1 arg2 arg2`.
One of the following should be expected:
- bottleneck prints out a nice error message politely telling the user they've omitted the '--'
- bottleneck should be able to run without the '--'
I'm working on fixing this
cc @fmassa for the original report
| 2018-04-09T16:22:08 |
|
pytorch/pytorch | 6,490 | pytorch__pytorch-6490 | [
"6002"
] | f70146e922873ddb4b8c2b70fe6f2ff81a8fb35d | diff --git a/torch/serialization.py b/torch/serialization.py
--- a/torch/serialization.py
+++ b/torch/serialization.py
@@ -124,8 +124,22 @@ def _with_file_like(f, mode, body):
f.close()
-def _is_real_file(f):
- """Checks if f is backed by a real file (has a fileno)"""
+def _is_compressed_file(f):
+ compress_modules = ['gzip']
+ try:
+ return f.__module__ in compress_modules
+ except AttributeError:
+ return False
+
+
+def _should_read_directly(f):
+ """
+ Checks if f is a file that should be read directly. It should be read
+ directly if it is backed by a real file (has a fileno) and is not a
+ a compressed file (e.g. gzip)
+ """
+ if _is_compressed_file(f):
+ return False
try:
return f.fileno() >= 0
except io.UnsupportedOperation:
@@ -238,7 +252,7 @@ def persistent_id(obj):
pickle_module.dump(serialized_storage_keys, f, protocol=pickle_protocol)
f.flush()
for key in serialized_storage_keys:
- serialized_storages[key]._write_file(f, _is_real_file(f))
+ serialized_storages[key]._write_file(f, _should_read_directly(f))
def load(f, map_location=None, pickle_module=pickle):
@@ -452,8 +466,8 @@ def persistent_load(saved_id):
else:
raise RuntimeError("Unknown saved id type: %s" % saved_id[0])
- f_is_real_file = _is_real_file(f)
- if f_is_real_file and f.tell() == 0:
+ f_should_read_directly = _should_read_directly(f)
+ if f_should_read_directly and f.tell() == 0:
# legacy_load requires that f has fileno()
# only if offset is zero we can attempt the legacy tar file loader
try:
@@ -476,10 +490,10 @@ def persistent_load(saved_id):
deserialized_storage_keys = pickle_module.load(f)
- offset = f.tell() if f_is_real_file else None
+ offset = f.tell() if f_should_read_directly else None
for key in deserialized_storage_keys:
assert key in deserialized_objects
- deserialized_objects[key]._set_from_file(f, offset, f_is_real_file)
+ deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
offset = None
return result
| diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -5,12 +5,14 @@
import random
import operator
import copy
+import shutil
import torch
import torch.cuda
import tempfile
import unittest
import warnings
import pickle
+import gzip
from torch.utils.dlpack import from_dlpack, to_dlpack
from torch._utils import _rebuild_tensor
from itertools import product, combinations
@@ -6168,7 +6170,7 @@ def test_parsing_intlist(self):
self.assertRaises(TypeError, lambda: torch.ones(np.array(3, 3)))
self.assertRaises(TypeError, lambda: torch.ones((np.array(3, 3))))
- def _test_serialization(self, filecontext_lambda, test_use_filename=True):
+ def _test_serialization_data(self):
a = [torch.randn(5, 5).float() for i in range(2)]
b = [a[i % 2] for i in range(4)]
b += [a[0].storage()]
@@ -6178,68 +6180,115 @@ def _test_serialization(self, filecontext_lambda, test_use_filename=True):
t2 = torch.FloatTensor().set_(a[0].storage()[1:4], 0, (3,), (1,))
b += [(t1.storage(), t1.storage(), t2.storage())]
b += [a[0].storage()[0:2]]
- if test_use_filename:
- use_name_options = (False, True)
- else:
- use_name_options = (False,)
- for use_name in use_name_options:
+ return b
+
+ def _test_serialization_assert(self, b, c):
+ self.assertEqual(b, c, 0)
+ self.assertTrue(isinstance(c[0], torch.FloatTensor))
+ self.assertTrue(isinstance(c[1], torch.FloatTensor))
+ self.assertTrue(isinstance(c[2], torch.FloatTensor))
+ self.assertTrue(isinstance(c[3], torch.FloatTensor))
+ self.assertTrue(isinstance(c[4], torch.FloatStorage))
+ c[0].fill_(10)
+ self.assertEqual(c[0], c[2], 0)
+ self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), 0)
+ c[1].fill_(20)
+ self.assertEqual(c[1], c[3], 0)
+ self.assertEqual(c[4][1:4], c[5], 0)
+
+ # check that serializing the same storage view object unpickles
+ # it as one object not two (and vice versa)
+ views = c[7]
+ self.assertEqual(views[0]._cdata, views[1]._cdata)
+ self.assertEqual(views[0], views[2])
+ self.assertNotEqual(views[0]._cdata, views[2]._cdata)
+
+ rootview = c[8]
+ self.assertEqual(rootview.data_ptr(), c[0].data_ptr())
+
+ def test_serialization(self):
+ # Test serialization with a real file
+ b = self._test_serialization_data()
+ for use_name in (False, True):
# Passing filename to torch.save(...) will cause the file to be opened twice,
# which is not supported on Windows
if sys.platform == "win32" and use_name:
continue
- with filecontext_lambda() as f:
+ with tempfile.NamedTemporaryFile() as f:
handle = f if not use_name else f.name
torch.save(b, handle)
f.seek(0)
c = torch.load(handle)
- self.assertEqual(b, c, 0)
- self.assertTrue(isinstance(c[0], torch.FloatTensor))
- self.assertTrue(isinstance(c[1], torch.FloatTensor))
- self.assertTrue(isinstance(c[2], torch.FloatTensor))
- self.assertTrue(isinstance(c[3], torch.FloatTensor))
- self.assertTrue(isinstance(c[4], torch.FloatStorage))
- c[0].fill_(10)
- self.assertEqual(c[0], c[2], 0)
- self.assertEqual(c[4], torch.FloatStorage(25).fill_(10), 0)
- c[1].fill_(20)
- self.assertEqual(c[1], c[3], 0)
- self.assertEqual(c[4][1:4], c[5], 0)
-
- # check that serializing the same storage view object unpickles
- # it as one object not two (and vice versa)
- views = c[7]
- self.assertEqual(views[0]._cdata, views[1]._cdata)
- self.assertEqual(views[0], views[2])
- self.assertNotEqual(views[0]._cdata, views[2]._cdata)
-
- rootview = c[8]
- self.assertEqual(rootview.data_ptr(), c[0].data_ptr())
-
- def test_serialization(self):
- # Test serialization with a real file
- self._test_serialization(tempfile.NamedTemporaryFile)
+ self._test_serialization_assert(b, c)
def test_serialization_filelike(self):
# Test serialization (load and save) with a filelike object
- self._test_serialization(BytesIOContext, test_use_filename=False)
+ b = self._test_serialization_data()
+ with BytesIOContext() as f:
+ torch.save(b, f)
+ f.seek(0)
+ c = torch.load(f)
+ self._test_serialization_assert(b, c)
+
+ def test_serialization_gzip(self):
+ # Test serialization with gzip file
+ b = self._test_serialization_data()
+ f1 = tempfile.NamedTemporaryFile(delete=False)
+ f2 = tempfile.NamedTemporaryFile(delete=False)
+ torch.save(b, f1)
+ with open(f1.name, 'rb') as f_in, gzip.open(f2.name, 'wb') as f_out:
+ shutil.copyfileobj(f_in, f_out)
+
+ with gzip.open(f2.name, 'rb') as f:
+ c = torch.load(f)
+ self._test_serialization_assert(b, c)
+
+ def test_serialization_offset(self):
+ a = torch.randn(5, 5)
+ i = 41
+ for use_name in (False, True):
+ # Passing filename to torch.save(...) will cause the file to be opened twice,
+ # which is not supported on Windows
+ if sys.platform == "win32" and use_name:
+ continue
+ with tempfile.NamedTemporaryFile() as f:
+ handle = f if not use_name else f.name
+ pickle.dump(i, f)
+ torch.save(a, f)
+ f.seek(0)
+ j = pickle.load(f)
+ b = torch.load(f)
+ self.assertTrue(torch.equal(a, b))
+ self.assertEqual(i, j)
- def _test_serialization_offset(self, filecontext_lambda):
+ def test_serialization_offset_filelike(self):
a = torch.randn(5, 5)
i = 41
- with tempfile.TemporaryFile() as f:
+ with BytesIOContext() as f:
pickle.dump(i, f)
torch.save(a, f)
f.seek(0)
j = pickle.load(f)
b = torch.load(f)
- self.assertTrue(torch.equal(a, b))
- self.assertEqual(i, j)
+ self.assertTrue(torch.equal(a, b))
+ self.assertEqual(i, j)
- def test_serialization_offset(self):
- self._test_serialization_offset(tempfile.TemporaryFile)
+ def test_serialization_offset_gzip(self):
+ a = torch.randn(5, 5)
+ i = 41
+ f1 = tempfile.NamedTemporaryFile(delete=False)
+ f2 = tempfile.NamedTemporaryFile(delete=False)
+ with open(f1.name, 'wb') as f:
+ pickle.dump(i, f)
+ torch.save(a, f)
+ with open(f1.name, 'rb') as f_in, gzip.open(f2.name, 'wb') as f_out:
+ shutil.copyfileobj(f_in, f_out)
- def test_serialization_offset_filelike(self):
- self._test_serialization_offset(BytesIOContext)
+ with gzip.open(f2.name, 'rb') as f:
+ j = pickle.load(f)
+ b = torch.load(f)
+ self.assertTrue(torch.equal(a, b))
+ self.assertEqual(i, j)
def test_half_tensor(self):
x = torch.randn(5, 5).float()
| Error loading gzipped weights
I'm trying to compress the weights of a network using gzip. Here is a MWE:
```
import torch, shutil, gzip
import torchvision.models as models
resnet18 = models.resnet18()
torch.save(resnet18.state_dict(), 'test.pt')
with open('test.pt', 'rb') as f_in, gzip.open('test.pt.gz', 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
#f_out.write(f_in.read())
with gzip.open('test.pt.gz', 'rb') as f:
state_dict = torch.load(f)
```
When loading the compressed file I get the error:
```
File "/home/user/.virtualenvs/py3/lib/python3.5/site-packages/torch/serialization.py", line 267, in load
return _load(f, map_location, pickle_module)
File "/home/user/.virtualenvs/py3/lib/python3.5/site-packages/torch/serialization.py", line 428, in _load
deserialized_objects[key]._set_from_file(f, offset)
RuntimeError: storage has wrong size: expected -772918636240159923 got 64
```
Even if I create the file using `f_out.write(f_in.read())` I still get the same error. The `test.pt` is identical to `test.pt.gz` when I compare them using `f.read()`. It works if I open the uncompressed file:
```
with open('test.pt', 'rb') as f:
state_dict = torch.load(f)
```
The expected size is sometimes negative, which leads me to believe it could be some sort of underflow, but it also changes each time it's run, so it must also be related to the randomly generated weights.
I'm using:
- OS: ubuntu 16.04
- PyTorch version: 0.3.1
- Installed via: conda
- Python version: 3.5.2
| Try this (it will only work on master):
```
import torch, shutil, gzip
import torchvision.models as models
import io
resnet18 = models.resnet18()
torch.save(resnet18.state_dict(), 'test.pt')
with open('test.pt', 'rb') as f_in, gzip.open('test.pt.gz', 'wb') as f_out:
shutil.copyfileobj(f_in, f_out)
#f_out.write(f_in.read())
with gzip.open('test.pt.gz', 'rb') as f:
# Use an intermediate buffer
x = io.BytesIO(f.read())
state_dict = torch.load(x)
```
I think the underlying problem is that `torch.load()` is bypassing gzip and directly reading the compressed file without uncompressing it. A workaround is to unzip the file into a new file and then load that file. Not sure how feasible it is to fix `torch.load` to support this behavior.
Eh, I think we could do hacks like inspecting `f`'s `__module__` and `__name__` (to avoid importing `gzip`), and maintain a list of commonly used file-like objects that expose `fileno`, but you shouldn't really use it... | 2018-04-10T23:39:00 |
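A stripped-down sketch of the module-inspection idea mentioned above, which is essentially what the patch's `_is_compressed_file` helper does (only `gzip` is listed here, as in the diff; the function name is mine):
```python
def looks_compressed(f):
    # gzip.GzipFile exposes fileno(), but reading the underlying fd directly
    # would yield compressed bytes, so such objects must not be read directly.
    compress_modules = ['gzip']
    try:
        return f.__module__ in compress_modules
    except AttributeError:
        return False
```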
pytorch/pytorch | 6,635 | pytorch__pytorch-6635 | [
"6111"
] | 2a628ba32f3e71f1fc6b31383e46e9b09db9abd6 | diff --git a/torch/utils/bottleneck/__main__.py b/torch/utils/bottleneck/__main__.py
--- a/torch/utils/bottleneck/__main__.py
+++ b/torch/utils/bottleneck/__main__.py
@@ -9,67 +9,16 @@
import torch
from torch.autograd import profiler
-
-PY3 = sys.version_info >= (3, 0)
-
-
-def run(command):
- """Returns (return-code, stdout, stderr)"""
- p = subprocess.Popen(command, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, shell=True)
- output, err = p.communicate()
- rc = p.returncode
- if PY3:
- output = output.decode("ascii")
- err = err.decode("ascii")
- return (rc, output, err)
+from torch.utils.collect_env import get_env_info
def redirect_argv(new_argv):
sys.argv[:] = new_argv[:]
-def check_running_cuda_version():
- (rc, out, err) = run('nvcc --version')
- if rc is not 0:
- return None
- m = re.search(r'V(.*)$', out)
- assert m is not None
- return m.group(1)
-
-
-def check_pip_packages():
- # People generally have `pip` as `pip` or `pip3`
- def run_with_pip(pip):
- rc, out, _ = run(pip + ' list --format=legacy | grep torch')
- if rc is 0:
- return out
- return None
-
- if not PY3:
- return 'pip', run_with_pip('pip')
-
- # Try to figure out if the user is running pip or pip3.
- out2 = run_with_pip('pip')
- out3 = run_with_pip('pip3')
-
- num_pips = len([x for x in [out2, out3] if x is not None])
- if num_pips is 0:
- return 'pip', out2
-
- if num_pips == 1:
- if out2 is not None:
- return 'pip', out2
- return 'pip3', out3
-
- # num_pips is 2. Return pip3 by default b/c that most likely
- # is the one associated with Python 3
- return 'pip3', out3
-
-
-def compiled_with_cuda():
- if torch.version.cuda:
- return 'compiled w/ CUDA {}'.format(torch.version.cuda)
+def compiled_with_cuda(sysinfo):
+ if sysinfo.cuda_compiled_version:
+ return 'compiled w/ CUDA {}'.format(sysinfo.cuda_compiled_version)
return 'not compiled w/ CUDA'
@@ -87,28 +36,31 @@ def compiled_with_cuda():
def run_env_analysis():
print('Running environment analysis...')
+ info = get_env_info()
+
result = []
debug_str = ''
- if torch.version.debug:
+ if info.is_debug_build:
debug_str = ' DEBUG'
cuda_avail = ''
- if torch.cuda.is_available():
- cuda = check_running_cuda_version()
+ if info.is_cuda_available:
+ cuda = info.cuda_runtime_version
if cuda is not None:
cuda_avail = 'CUDA ' + cuda
else:
cuda = 'CUDA unavailable'
- pip_version, pip_list_output = check_pip_packages()
+ pip_version = info.pip_version
+ pip_list_output = info.pip_packages
if pip_list_output is None:
pip_list_output = 'Unable to fetch'
result = {
'debug_str': debug_str,
- 'pytorch_version': torch.__version__,
- 'cuda_compiled': compiled_with_cuda(),
+ 'pytorch_version': info.torch_version,
+ 'cuda_compiled': compiled_with_cuda(info),
'py_version': '{}.{}'.format(sys.version_info[0], sys.version_info[1]),
'cuda_runtime': cuda_avail,
'pip_version': pip_version,
diff --git a/torch/utils/collect_env.py b/torch/utils/collect_env.py
new file mode 100644
--- /dev/null
+++ b/torch/utils/collect_env.py
@@ -0,0 +1,327 @@
+# This script outputs relevant system environment info
+# Run it with `python collect_env.py`.
+import re
+import subprocess
+import sys
+import time
+import datetime
+import os
+from collections import namedtuple
+
+import torch
+
+PY3 = sys.version_info >= (3, 0)
+
+# System Environment Information
+SystemEnv = namedtuple('SystemEnv', [
+ 'torch_version',
+ 'is_debug_build',
+ 'cuda_compiled_version',
+ 'gcc_version',
+ 'cmake_version',
+ 'os',
+ 'python_version',
+ 'is_cuda_available',
+ 'cuda_runtime_version',
+ 'nvidia_driver_version',
+ 'nvidia_gpu_models',
+ 'cudnn_version',
+ 'pip_version', # 'pip' or 'pip3'
+ 'pip_packages',
+ 'conda_packages',
+])
+
+
+def run(command):
+ """Returns (return-code, stdout, stderr)"""
+ p = subprocess.Popen(command, stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE, shell=True)
+ output, err = p.communicate()
+ rc = p.returncode
+ if PY3:
+ output = output.decode("ascii")
+ err = err.decode("ascii")
+ return rc, output.strip(), err.strip()
+
+
+def run_and_read_all(run_lambda, command):
+ """Runs command using run_lambda; reads and returns entire output if rc is 0"""
+ rc, out, _ = run_lambda(command)
+ if rc is not 0:
+ return None
+ return out
+
+
+def run_and_parse_first_match(run_lambda, command, regex):
+ """Runs command using run_lambda, returns the first regex match if it exists"""
+ rc, out, _ = run_lambda(command)
+ if rc is not 0:
+ return None
+ match = re.search(regex, out)
+ if match is None:
+ return None
+ return match.group(1)
+
+
+def get_conda_packages(run_lambda):
+ out = run_and_read_all(run_lambda, 'conda list | grep "torch\|soumith"')
+ if out is None:
+ return out
+ # Comment starting at beginning of line
+ comment_regex = re.compile(r'^#.*\n')
+ return re.sub(comment_regex, '', out)
+
+
+def get_gcc_version(run_lambda):
+ return run_and_parse_first_match(run_lambda, 'gcc --version', r'gcc (.*)')
+
+
+def get_cmake_version(run_lambda):
+ return run_and_parse_first_match(run_lambda, 'cmake --version', r'cmake (.*)')
+
+
+def get_nvidia_driver_version(run_lambda):
+ return run_and_parse_first_match(run_lambda, 'nvidia-smi', r'Driver Version: (.*?) ')
+
+
+def get_gpu_info(run_lambda):
+ uuid_regex = re.compile(' \(UUID: .+?\)')
+ rc, out, _ = run_lambda('nvidia-smi -L')
+ if rc is not 0:
+ return None
+ # Anonymize GPUs by removing their UUID
+ return re.sub(uuid_regex, '', out)
+
+
+def get_running_cuda_version(run_lambda):
+ return run_and_parse_first_match(run_lambda, 'nvcc --version', r'V(.*)$')
+
+
+def get_cudnn_version(run_lambda):
+ """This will return a list of libcudnn.so; it's hard to tell which one is being used"""
+ rc, out, _ = run_lambda('find /usr/local /usr/lib -type f -name "libcudnn*" 2> /dev/null')
+ # find will return 1 if there are permission errors or if not found
+ if len(out) == 0:
+ return None
+ if rc != 1 and rc != 0:
+ return None
+ # Alphabetize the result because the order is non-deterministic otherwise
+ result = '\n'.join(sorted(out.split('\n')))
+ return 'Probably one of the following:\n{}'.format(result)
+
+
+def get_platform():
+ if sys.platform.startswith('linux'):
+ return 'linux'
+ elif sys.platform.startswith('win32'):
+ return 'win32'
+ elif sys.platform.startswith('cygwin'):
+ return 'cygwin'
+ elif sys.platform.startswith('darwin'):
+ return 'darwin'
+ else:
+ return sys.platform
+
+
+def get_mac_version(run_lambda):
+ return run_and_parse_first_match(run_lambda, 'sw_vers -productVersion', r'(.*)')
+
+
+def get_windows_version(run_lambda):
+ return run_and_read_all(run_lambda, 'wmic os get Caption | findstr /v Caption')
+
+
+def get_lsb_version(run_lambda):
+ return run_and_parse_first_match(run_lambda, 'lsb_release -a', r'Description:\t(.*)')
+
+
+def check_release_file(run_lambda):
+ return run_and_parse_first_match(run_lambda, 'cat /etc/*-release',
+ r'PRETTY_NAME="(.*)"')
+
+
+def get_os(run_lambda):
+ platform = get_platform()
+
+ if platform is 'win32' or platform is 'cygwin':
+ return get_windows_version(run_lambda)
+
+ if platform == 'darwin':
+ version = get_mac_version(run_lambda)
+ if version is None:
+ return None
+ return 'Mac OSX {}'.format(version)
+
+ if platform == 'linux':
+ # Ubuntu/Debian based
+ desc = get_lsb_version(run_lambda)
+ if desc is not None:
+ return desc
+
+ # Try reading /etc/*-release
+ desc = check_release_file(run_lambda)
+ if desc is not None:
+ return desc
+
+ return platform
+
+ # Unknown platform
+ return platform
+
+
+def get_pip_packages(run_lambda):
+ # People generally have `pip` as `pip` or `pip3`
+ def run_with_pip(pip):
+ return run_and_read_all(run_lambda, pip + ' list --format=legacy | grep "torch\|numpy"')
+
+ if not PY3:
+ return 'pip', run_with_pip('pip')
+
+ # Try to figure out if the user is running pip or pip3.
+ out2 = run_with_pip('pip')
+ out3 = run_with_pip('pip3')
+
+ num_pips = len([x for x in [out2, out3] if x is not None])
+ if num_pips is 0:
+ return 'pip', out2
+
+ if num_pips == 1:
+ if out2 is not None:
+ return 'pip', out2
+ return 'pip3', out3
+
+ # num_pips is 2. Return pip3 by default b/c that most likely
+ # is the one associated with Python 3
+ return 'pip3', out3
+
+
+def get_env_info():
+ run_lambda = run
+ pip_version, pip_list_output = get_pip_packages(run_lambda)
+
+ return SystemEnv(
+ torch_version=torch.__version__,
+ is_debug_build=torch.version.debug,
+ python_version='{}.{}'.format(sys.version_info[0], sys.version_info[1]),
+ is_cuda_available=torch.cuda.is_available(),
+ cuda_compiled_version=torch.version.cuda,
+ cuda_runtime_version=get_running_cuda_version(run_lambda),
+ nvidia_gpu_models=get_gpu_info(run_lambda),
+ nvidia_driver_version=get_nvidia_driver_version(run_lambda),
+ cudnn_version=get_cudnn_version(run_lambda),
+ pip_version=pip_version,
+ pip_packages=pip_list_output,
+ conda_packages=get_conda_packages(run_lambda),
+ os=get_os(run_lambda),
+ gcc_version=get_gcc_version(run_lambda),
+ cmake_version=get_cmake_version(run_lambda),
+ )
+
+env_info_fmt = """
+PyTorch version: {torch_version}
+Is debug build: {is_debug_build}
+CUDA used to build PyTorch: {cuda_compiled_version}
+
+OS: {os}
+GCC version: {gcc_version}
+CMake version: {cmake_version}
+
+Python version: {python_version}
+Is CUDA available: {is_cuda_available}
+CUDA runtime version: {cuda_runtime_version}
+GPU models and configuration: {nvidia_gpu_models}
+Nvidia driver version: {nvidia_driver_version}
+cuDNN version: {cudnn_version}
+
+Versions of relevant libraries:
+{pip_packages}
+{conda_packages}
+""".strip()
+
+
+def pretty_str(envinfo):
+ def replace_nones(dct, replacement='Could not collect'):
+ for key in dct.keys():
+ if dct[key] is not None:
+ continue
+ dct[key] = replacement
+ return dct
+
+ def replace_bools(dct, true='Yes', false='No'):
+ for key in dct.keys():
+ if dct[key] is True:
+ dct[key] = true
+ elif dct[key] is False:
+ dct[key] = false
+ return dct
+
+ def prepend(text, tag='[prepend]'):
+ lines = text.split('\n')
+ updated_lines = [tag + line for line in lines]
+ return '\n'.join(updated_lines)
+
+ def replace_if_empty(text, replacement='No relevant packages'):
+ if text is not None and len(text) == 0:
+ return replacement
+ return text
+
+ def maybe_start_on_next_line(string):
+ # If `string` is multiline, prepend a \n to it.
+ if string is not None and len(string.split('\n')) > 1:
+ return '\n{}\n'.format(string)
+ return string
+
+ mutable_dict = envinfo._asdict()
+
+ # If nvidia_gpu_models is multiline, start on the next line
+ mutable_dict['nvidia_gpu_models'] = \
+ maybe_start_on_next_line(envinfo.nvidia_gpu_models)
+
+ # If the machine doesn't have CUDA, report some fields as 'No CUDA'
+ dynamic_cuda_fields = [
+ 'cuda_runtime_version',
+ 'nvidia_gpu_models',
+ 'nvidia_driver_version',
+ ]
+ all_cuda_fields = dynamic_cuda_fields + ['cudnn_version']
+ all_dynamic_cuda_fields_missing = all(
+ mutable_dict[field] is None for field in dynamic_cuda_fields)
+ if not torch.cuda.is_available() and all_dynamic_cuda_fields_missing:
+ for field in all_cuda_fields:
+ mutable_dict[field] = 'No CUDA'
+ if envinfo.cuda_compiled_version is None:
+ mutable_dict['cuda_compiled_version'] = 'None'
+
+ # Replace True with Yes, False with No
+ mutable_dict = replace_bools(mutable_dict)
+
+ # Replace all None objects with 'Could not collect'
+ mutable_dict = replace_nones(mutable_dict)
+
+ # If either of these are '', replace with 'No relevant packages'
+ mutable_dict['pip_packages'] = replace_if_empty(mutable_dict['pip_packages'])
+ mutable_dict['conda_packages'] = replace_if_empty(mutable_dict['conda_packages'])
+
+ # Tag conda and pip packages with a prefix
+ # If they were previously None, they'll show up as ie '[conda] Could not collect'
+ if mutable_dict['pip_packages']:
+ mutable_dict['pip_packages'] = prepend(mutable_dict['pip_packages'],
+ '[{}] '.format(envinfo.pip_version))
+ if mutable_dict['conda_packages']:
+ mutable_dict['conda_packages'] = prepend(mutable_dict['conda_packages'],
+ '[conda] ')
+ return env_info_fmt.format(**mutable_dict)
+
+
+def get_pretty_env_info():
+ return pretty_str(get_env_info())
+
+
+def main():
+ print("Collecting environment information...")
+ output = get_pretty_env_info()
+ print(output)
+
+
+if __name__ == '__main__':
+ main()
| diff --git a/test/expect/TestCollectEnv.test_pytorch_linux_trusty_py27.expect b/test/expect/TestCollectEnv.test_pytorch_linux_trusty_py27.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestCollectEnv.test_pytorch_linux_trusty_py27.expect
@@ -0,0 +1,19 @@
+PyTorch version: 0.4.0a0
+Is debug build: No
+CUDA used to build PyTorch: None
+
+OS: Ubuntu 14.04.5 LTS
+GCC version: (Ubuntu 4.8.4-2ubuntu1~14.04.4) 4.8.4
+CMake version: version 3.5.1
+
+Python version: 2.7
+Is CUDA available: No
+CUDA runtime version: No CUDA
+GPU models and configuration: No CUDA
+Nvidia driver version: No CUDA
+cuDNN version: No CUDA
+
+Versions of relevant libraries:
+[pip] numpy (1.14.2)
+[pip] torch (0.4.0a0)
+[conda] Could not collect
diff --git a/test/expect/TestCollectEnv.test_pytorch_linux_xenial_cuda9_cudnn7_py3.expect b/test/expect/TestCollectEnv.test_pytorch_linux_xenial_cuda9_cudnn7_py3.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestCollectEnv.test_pytorch_linux_xenial_cuda9_cudnn7_py3.expect
@@ -0,0 +1,25 @@
+PyTorch version: 0.4.0a0
+Is debug build: No
+CUDA used to build PyTorch: 9.0.176
+
+OS: Ubuntu 16.04.4 LTS
+GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
+CMake version: version 3.9.4
+
+Python version: 3.6
+Is CUDA available: Yes
+CUDA runtime version: 9.0.176
+GPU models and configuration:
+GPU 0: Tesla M60
+GPU 1: Tesla M60
+
+Nvidia driver version: 384.111
+cuDNN version: Probably one of the following:
+/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.2
+/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
+
+Versions of relevant libraries:
+[pip] numpy (1.14.2)
+[pip] torch (0.4.0a0)
+[conda] magma-cuda90 2.3.0 1 soumith
+[conda] torch 0.4.0a0 <pip>
diff --git a/test/expect/TestCollectEnv.test_pytorch_macos_1013_py3.expect b/test/expect/TestCollectEnv.test_pytorch_macos_1013_py3.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestCollectEnv.test_pytorch_macos_1013_py3.expect
@@ -0,0 +1,19 @@
+PyTorch version: 0.4.0a0
+Is debug build: No
+CUDA used to build PyTorch: None
+
+OS: Mac OSX 10.13.3
+GCC version: Could not collect
+CMake version: version 3.9.4
+
+Python version: 3.6
+Is CUDA available: No
+CUDA runtime version: No CUDA
+GPU models and configuration: No CUDA
+Nvidia driver version: No CUDA
+cuDNN version: No CUDA
+
+Versions of relevant libraries:
+[pip] numpy (1.14.2)
+[pip] torch (0.4.0a0)
+[conda] torch 0.4.0a0 <pip>
diff --git a/test/expect/TestCollectEnv.test_pytorch_win_ws2016_cuda9_cudnn7_py3.expect b/test/expect/TestCollectEnv.test_pytorch_win_ws2016_cuda9_cudnn7_py3.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestCollectEnv.test_pytorch_win_ws2016_cuda9_cudnn7_py3.expect
@@ -0,0 +1,18 @@
+PyTorch version: 0.4.0a0
+Is debug build: No
+CUDA used to build PyTorch: 9.0
+
+OS: Microsoft Windows Server 2012 R2 Standard
+GCC version: Could not collect
+CMake version: version 3.10.2
+
+Python version: 3.6
+Is CUDA available: Yes
+CUDA runtime version: 9.0.176
+GPU models and configuration: Could not collect
+Nvidia driver version: Could not collect
+cuDNN version: Could not collect
+
+Versions of relevant libraries:
+[pip] numpy (1.14.2)
+[conda] Could not collect
diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -515,7 +515,6 @@ def _run(self, command):
return (rc, output, err)
def _run_bottleneck(self, test_file, scriptargs=''):
- import os
curdir = os.path.dirname(os.path.abspath(__file__))
filepath = '{}/{}'.format(curdir, test_file)
if scriptargs != '':
@@ -596,6 +595,53 @@ def test_bottleneck_cuda(self):
self._check_cuda(out)
+from torch.utils.collect_env import get_pretty_env_info
+
+
+class TestCollectEnv(TestCase):
+
+ def _build_env_to_expect(self, build_env):
+ return 'expect/TestCollectEnv.test_{}.expect'.format(
+ build_env.replace('.', '').replace('-', '_'))
+
+ def _preprocess_info_for_test(self, info_output):
+ # Remove the version hash
+ version_hash_regex = re.compile(r'(a\d+)\+.......')
+ return re.sub(version_hash_regex, r'\1', info_output).strip()
+
+ def assertExpectedOutput(self, info_output, build_env):
+ processed_info = self._preprocess_info_for_test(info_output)
+ expect_filename = self._build_env_to_expect(build_env)
+
+ ci_warning = ('This test will error out if the CI config was recently '
+ 'updated. If this is the case, please update the expect '
+ 'files to match the CI machines\' system config.')
+
+ with open(expect_filename, 'r') as f:
+ expected_info = f.read().strip()
+ self.assertEqual(processed_info, expected_info, ci_warning)
+
+ def test_smoke(self):
+ info_output = get_pretty_env_info()
+ self.assertTrue(info_output.count('\n') >= 17)
+
+ @unittest.skipIf('BUILD_ENVIRONMENT' not in os.environ.keys(), 'CI-only test')
+ def test_expect(self):
+ info_output = get_pretty_env_info()
+
+ ci_build_envs = [
+ 'pytorch-linux-trusty-py2.7',
+ 'pytorch-linux-xenial-cuda9-cudnn7-py3',
+ 'pytorch-macos-10.13-py3',
+ 'pytorch-win-ws2016-cuda9-cudnn7-py3'
+ ]
+ build_env = os.environ['BUILD_ENVIRONMENT']
+ if build_env not in ci_build_envs:
+ return
+
+ self.assertExpectedOutput(info_output, build_env)
+
+
class TestONNXUtils(TestCase):
def test_prepare_onnx_paddings(self):
sizes = [2, 3, 4]
| Report bug script
In our issue submission checklist, we collect a lot of information that can be gotten mechanically. We should have a simple way of collecting this info as a script and uploading it to GitHub.
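Based on the patch above, a minimal sketch of how such a report could be produced (the `torch.utils.collect_env` module and `get_pretty_env_info` come from the diff; everything else here is illustrative):
```python
# Run the collector as a script and paste its output into a GitHub issue:
#   python -m torch.utils.collect_env
# or call it programmatically:
from torch.utils.collect_env import get_pretty_env_info

print(get_pretty_env_info())
```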
| I can work on this. `torch.utils.bottleneck` already contains an environment analyzer:
```
--------------------------------------------------------------------------------
Environment Summary
--------------------------------------------------------------------------------
PyTorch 0.4.0a0+4c89349 DEBUG compiled w/ CUDA 8.0.44
Running with Python 2.7 and CUDA 8.0.44
`pip list` truncated output:
torch (0.4.0a0+4c89349, /home/rzou/pytorch)
``` | 2018-04-16T21:19:09 |
pytorch/pytorch | 6,671 | pytorch__pytorch-6671 | [
"6461"
] | 63d42408d01c4509ff57c623b152e4a1c90673a8 | diff --git a/torch/utils/data/dataloader.py b/torch/utils/data/dataloader.py
--- a/torch/utils/data/dataloader.py
+++ b/torch/utils/data/dataloader.py
@@ -320,17 +320,21 @@ def _shutdown_workers(self):
if not self.shutdown:
self.shutdown = True
self.done_event.set()
- # if worker_manager_thread is waiting to put, make place for it
+ for q in self.index_queues:
+ q.put(None)
+ # if some workers are waiting to put, make place for them
try:
- while not self.data_queue.empty():
- self.data_queue.get()
- except FileNotFoundError:
+ while not self.worker_result_queue.empty():
+ self.worker_result_queue.get()
+ except (FileNotFoundError, ImportError):
+ # Many weird errors can happen here due to Python
+ # shutting down. These are more like obscure Python bugs.
# FileNotFoundError can happen when we rebuild the fd
# fetched from the queue but the socket is already closed
- # from the worker side (e.g. due to Python shutting down).
+ # from the worker side.
+ # ImportError can happen when the unpickler loads the
+ # resource from `get`.
pass
- for q in self.index_queues:
- q.put(None)
# done_event should be sufficient to exit worker_manager_thread,
# but be safe here and put another None
self.worker_result_queue.put(None)
| Error when exiting tutorial script
OS:
Ubuntu 18.04
PyTorch version:
0.3.1.post2
Python version:
3.6.5
How you installed PyTorch:
conda install pytorch-cpu torchvision -c pytorch
Script to reproduce the bug:
the tutorial script for cifar10 classification: http://pytorch.org/tutorials/_downloads/cifar10_tutorial.py
At the end of the script execution, I have the following error message:
```
Exception ignored in: <bound method DataLoaderIter.__del__ of <torch.utils.data.dataloader.DataLoaderIter object at 0x7f0b579b8470>>
Traceback (most recent call last):
File "/home/snakeone/anaconda3/envs/torch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 333, in __del__
File "/home/snakeone/anaconda3/envs/torch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 319, in _shutdown_workers
File "/home/snakeone/anaconda3/envs/torch/lib/python3.6/multiprocessing/queues.py", line 337, in get
ImportError: sys.meta_path is None, Python is likely shutting down
```
| I am aware of this and will have a fix. | 2018-04-17T18:32:43 |
|
pytorch/pytorch | 6,679 | pytorch__pytorch-6679 | [
"6417"
] | 459dfdc3047c997047cb9fa4e83e197a1d16212d | diff --git a/torch/nn/modules/conv.py b/torch/nn/modules/conv.py
--- a/torch/nn/modules/conv.py
+++ b/torch/nn/modules/conv.py
@@ -502,13 +502,27 @@ class ConvTranspose1d(_ConvTransposeMixin, _ConvNd):
and not a full `cross-correlation`_.
It is up to the user to add proper padding.
+ .. note::
+ The :attr:`padding` argument effectively adds ``kernel_size - 1 - padding``
+ amount of zero padding to both sizes of the input. This is set so that
+ when a :class:`~torch.nn.Conv1d` and a :class:`~torch.nn.ConvTranspose1d`
+ are initialized with same parameters, they are inverses of each other in
+ regard to the input and output shapes. However, when :attr`stride` ``>1``,
+ :class:`~torch.nn.Conv1d` maps multiple input shapes to the same output
+ shape. :attr:`output_padding` is provided to resolve this ambiguity by
+ effectively increasing the calculated output shape on one side. Note
+ that :attr:`output_padding` is only used to find output shape, but does
+ not actually add zero-padding to output.
+
Args:
in_channels (int): Number of channels in the input image
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
- padding (int or tuple, optional): Zero-padding added to both sides of the input. Default: 0
- output_padding (int or tuple, optional): Zero-padding added to one side of the output. Default: 0
+ padding (int or tuple, optional): ``kernel_size - 1 - padding`` zero-padding
+ will be added to both sides of the input. Default: 0
+ output_padding (int or tuple, optional): Additional size added to one side
+ of the output shape. Default: 0
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If ``True``, adds a learnable bias to the output. Default: ``True``
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
@@ -591,13 +605,27 @@ class ConvTranspose2d(_ConvTransposeMixin, _ConvNd):
and not a full `cross-correlation`_.
It is up to the user to add proper padding.
+ .. note::
+ The :attr:`padding` argument effectively adds ``kernel_size - 1 - padding``
+ amount of zero padding to both sizes of the input. This is set so that
+ when a :class:`~torch.nn.Conv2d` and a :class:`~torch.nn.ConvTranspose2d`
+ are initialized with same parameters, they are inverses of each other in
+ regard to the input and output shapes. However, when :attr`stride` ``>1``,
+ :class:`~torch.nn.Conv2d` maps multiple input shapes to the same output
+ shape. :attr:`output_padding` is provided to resolve this ambiguity by
+ effectively increasing the calculated output shape on one side. Note
+ that :attr:`output_padding` is only used to find output shape, but does
+ not actually add zero-padding to output.
+
Args:
in_channels (int): Number of channels in the input image
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
- padding (int or tuple, optional): Zero-padding added to both sides of the input. Default: 0
- output_padding (int or tuple, optional): Zero-padding added to one side of the output. Default: 0
+ padding (int or tuple, optional): ``kernel_size - 1 - padding`` zero-padding
+ will be added to both sides of each dimension in the input. Default: 0
+ output_padding (int or tuple, optional): Additional size added to one side
+ of each dimension in the output shape. Default: 0
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If ``True``, adds a learnable bias to the output. Default: ``True``
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
@@ -711,13 +739,27 @@ class ConvTranspose3d(_ConvTransposeMixin, _ConvNd):
and not a full `cross-correlation`_.
It is up to the user to add proper padding.
+ .. note::
+ The :attr:`padding` argument effectively adds ``kernel_size - 1 - padding``
+ amount of zero padding to both sizes of the input. This is set so that
+ when a :class:`~torch.nn.Conv3d` and a :class:`~torch.nn.ConvTranspose3d`
+ are initialized with same parameters, they are inverses of each other in
+ regard to the input and output shapes. However, when :attr`stride` ``>1``,
+ :class:`~torch.nn.Conv3d` maps multiple input shapes to the same output
+ shape. :attr:`output_padding` is provided to resolve this ambiguity by
+ effectively increasing the calculated output shape on one side. Note
+ that :attr:`output_padding` is only used to find output shape, but does
+ not actually add zero-padding to output.
+
Args:
in_channels (int): Number of channels in the input image
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
- padding (int or tuple, optional): Zero-padding added to all three sides of the input. Default: 0
- output_padding (int or tuple, optional): Zero-padding added to one side of the output. Default: 0
+ padding (int or tuple, optional): ``kernel_size - 1 - padding`` zero-padding
+ will be added to both sides of each dimension in the input. Default: 0
+ output_padding (int or tuple, optional): Additional size added to one side
+ of each dimension in the output shape. Default: 0
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If ``True``, adds a learnable bias to the output. Default: ``True``
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
| [PyTorch] Incorrect padding doc for ConvTranspose*
Incorrect `padding` doc for ConvTranspose*. Current doc says
> padding controls the amount of implicit zero-paddings on both sides for padding number of points for each dimension.
However, it really is meant to match the `padding` arg of the vanilla Conv. Therefore, it effectively adds `kernel_size - 1 - padding` of zero-padding to the input.
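A small sketch of that shape relationship (layer sizes are illustrative):
```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 10)   # (N, C, L)
conv = nn.Conv1d(8, 16, kernel_size=3, stride=2, padding=1)
deconv = nn.ConvTranspose1d(16, 8, kernel_size=3, stride=2, padding=1)

y = conv(x)        # L_out = floor((10 + 2*1 - 3) / 2) + 1 = 5
x_hat = deconv(y)  # L_out = (5 - 1)*2 - 2*1 + 3 = 9, not 10
print(y.shape, x_hat.shape)

# With stride > 1, several input lengths map to the same conv output length;
# output_padding picks which one the transposed conv should produce.
deconv_op = nn.ConvTranspose1d(16, 8, kernel_size=3, stride=2, padding=1,
                               output_padding=1)
print(deconv_op(y).shape)  # length 10 again
```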
| Apparently `output_padding` isn't documented properly either. | 2018-04-17T21:25:40 |
|
pytorch/pytorch | 6,684 | pytorch__pytorch-6684 | [
"6353"
] | c43c911662600e3d0a780ce3d23bd9d853ded590 | diff --git a/torch/utils/cpp_extension.py b/torch/utils/cpp_extension.py
--- a/torch/utils/cpp_extension.py
+++ b/torch/utils/cpp_extension.py
@@ -11,6 +11,7 @@
import warnings
import torch
+from .file_baton import FileBaton
from setuptools.command.build_ext import build_ext
@@ -30,7 +31,8 @@ def _find_cuda_home():
# Guess #3
try:
which = 'where' if sys.platform == 'win32' else 'which'
- nvcc = subprocess.check_output([which, 'nvcc']).decode().rstrip('\r\n')
+ nvcc = subprocess.check_output(
+ [which, 'nvcc']).decode().rstrip('\r\n')
cuda_home = os.path.dirname(os.path.dirname(nvcc))
except Exception:
cuda_home = None
@@ -71,7 +73,8 @@ def check_compiler_abi_compatibility(compiler):
'''
try:
check_cmd = '{}' if sys.platform == 'win32' else '{} --version'
- info = subprocess.check_output(check_cmd.format(compiler).split(), stderr=subprocess.STDOUT)
+ info = subprocess.check_output(
+ check_cmd.format(compiler).split(), stderr=subprocess.STDOUT)
except Exception:
_, error, _ = sys.exc_info()
warnings.warn('Error checking compiler version: {}'.format(error))
@@ -93,7 +96,8 @@ def check_compiler_abi_compatibility(compiler):
version = re.search(r'(\d+)\.(\d+)\.(\d+)', info)
if version is not None:
major, minor, revision = version.groups()
- if (int(major), int(minor), int(revision)) >= MINIMUM_MSVC_VERSION:
+ if (int(major), int(minor),
+ int(revision)) >= MINIMUM_MSVC_VERSION:
return True
else:
# Append the detected version for the warning.
@@ -156,9 +160,14 @@ def unix_wrap_compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
# Put the original compiler back in place.
self.compiler.set_executable('compiler_so', original_compiler)
- def win_wrap_compile(sources, output_dir=None, macros=None,
- include_dirs=None, debug=0, extra_preargs=None,
- extra_postargs=None, depends=None):
+ def win_wrap_compile(sources,
+ output_dir=None,
+ macros=None,
+ include_dirs=None,
+ debug=0,
+ extra_preargs=None,
+ extra_postargs=None,
+ depends=None):
self.cflags = copy.deepcopy(extra_postargs)
extra_postargs = None
@@ -168,16 +177,22 @@ def spawn(cmd):
# Using regex to match src, obj and include files
src_regex = re.compile('/T(p|c)(.*)')
- src_list = [m.group(2) for m in (
- src_regex.match(elem) for elem in cmd) if m]
+ src_list = [
+ m.group(2) for m in (src_regex.match(elem) for elem in cmd)
+ if m
+ ]
obj_regex = re.compile('/Fo(.*)')
- obj_list = [m.group(1) for m in (
- obj_regex.match(elem) for elem in cmd) if m]
+ obj_list = [
+ m.group(1) for m in (obj_regex.match(elem) for elem in cmd)
+ if m
+ ]
include_regex = re.compile(r'((\-|\/)I.*)')
- include_list = [m.group(1) for m in (
- include_regex.match(elem) for elem in cmd) if m]
+ include_list = [
+ m.group(1)
+ for m in (include_regex.match(elem) for elem in cmd) if m
+ ]
if len(src_list) >= 1 and len(obj_list) >= 1:
src = src_list[0]
@@ -190,8 +205,10 @@ def spawn(cmd):
cflags = self.cflags
else:
cflags = []
- cmd = [nvcc, '-c', src, '-o', obj, '-Xcompiler',
- '/wd4819', '-Xcompiler', '/MD'] + include_list + cflags
+ cmd = [
+ nvcc, '-c', src, '-o', obj, '-Xcompiler',
+ '/wd4819', '-Xcompiler', '/MD'
+ ] + include_list + cflags
elif isinstance(self.cflags, dict):
cflags = self.cflags['cxx']
cmd += cflags
@@ -203,9 +220,9 @@ def spawn(cmd):
try:
self.compiler.spawn = spawn
- return original_compile(sources,
- output_dir, macros, include_dirs, debug,
- extra_preargs, extra_postargs, depends)
+ return original_compile(sources, output_dir, macros,
+ include_dirs, debug, extra_preargs,
+ extra_postargs, depends)
finally:
self.compiler.spawn = original_spawn
@@ -454,7 +471,57 @@ def load(name,
if build_directory is None:
build_directory = _get_build_directory(name, verbose)
- extra_ldflags = extra_ldflags or []
+ baton = FileBaton(os.path.join(build_directory, 'lock'))
+
+ if baton.try_acquire():
+ try:
+ with_cuda = any(map(_is_cuda_file, sources))
+ extra_ldflags = _prepare_ldflags(
+ extra_ldflags or [],
+ with_cuda,
+ verbose)
+ build_file_path = os.path.join(build_directory, 'build.ninja')
+ if verbose:
+ print(
+ 'Emitting ninja build file {}...'.format(build_file_path))
+ # NOTE: Emitting a new ninja build file does not cause re-compilation if
+ # the sources did not change, so it's ok to re-emit (and it's fast).
+ _write_ninja_file(
+ path=build_file_path,
+ name=name,
+ sources=sources,
+ extra_cflags=extra_cflags or [],
+ extra_cuda_cflags=extra_cuda_cflags or [],
+ extra_ldflags=extra_ldflags or [],
+ extra_include_paths=extra_include_paths or [],
+ with_cuda=with_cuda)
+
+ if verbose:
+ print('Building extension module {}...'.format(name))
+ _build_extension_module(name, build_directory)
+ finally:
+ baton.release()
+ else:
+ baton.wait()
+
+ if verbose:
+ print('Loading extension module {}...'.format(name))
+ return _import_module_from_library(name, build_directory)
+
+
+def verify_ninja_availability():
+ '''
+ Returns ``True`` if the `ninja <https://ninja-build.org/>`_ build system is
+ available on the system.
+ '''
+ with open(os.devnull, 'wb') as devnull:
+ try:
+ subprocess.check_call('ninja --version'.split(), stdout=devnull)
+ except OSError:
+ raise RuntimeError("Ninja is required to load C++ extensions")
+
+
+def _prepare_ldflags(extra_ldflags, with_cuda, verbose):
if sys.platform == 'win32':
python_path = os.path.dirname(sys.executable)
python_lib_path = os.path.join(python_path, 'libs')
@@ -468,51 +535,18 @@ def load(name,
extra_ldflags.append('/LIBPATH:{}'.format(python_lib_path))
extra_ldflags.append('/LIBPATH:{}'.format(lib_path))
- with_cuda = any(map(_is_cuda_file, sources))
if with_cuda:
if verbose:
print('Detected CUDA files, patching ldflags')
if sys.platform == 'win32':
- extra_ldflags.append('/LIBPATH:{}'.format(_join_cuda_home('lib/x64')))
+ extra_ldflags.append('/LIBPATH:{}'.format(
+ _join_cuda_home('lib/x64')))
extra_ldflags.append('cudart.lib')
else:
extra_ldflags.append('-L{}'.format(_join_cuda_home('lib64')))
extra_ldflags.append('-lcudart')
- build_file_path = os.path.join(build_directory, 'build.ninja')
- if verbose:
- print('Emitting ninja build file {}...'.format(build_file_path))
- # NOTE: Emitting a new ninja build file does not cause re-compilation if
- # the sources did not change, so it's ok to re-emit (and it's fast).
- _write_ninja_file(
- path=build_file_path,
- name=name,
- sources=sources,
- extra_cflags=extra_cflags or [],
- extra_cuda_cflags=extra_cuda_cflags or [],
- extra_ldflags=extra_ldflags or [],
- extra_include_paths=extra_include_paths or [],
- with_cuda=with_cuda)
-
- if verbose:
- print('Building extension module {}...'.format(name))
- _build_extension_module(name, build_directory)
-
- if verbose:
- print('Loading extension module {}...'.format(name))
- return _import_module_from_library(name, build_directory)
-
-
-def verify_ninja_availability():
- '''
- Returns ``True`` if the `ninja <https://ninja-build.org/>`_ build system is
- available on the system.
- '''
- with open(os.devnull, 'wb') as devnull:
- try:
- subprocess.check_call('ninja --version'.split(), stdout=devnull)
- except OSError:
- raise RuntimeError("Ninja is required to load C++ extensions")
+ return extra_ldflags
def _get_build_directory(name, verbose):
@@ -631,12 +665,15 @@ def _write_ninja_file(path,
link_rule = ['rule link']
if sys.platform == 'win32':
- cl_paths = subprocess.check_output(['where', 'cl']).decode().split('\r\n')
+ cl_paths = subprocess.check_output(['where',
+ 'cl']).decode().split('\r\n')
if len(cl_paths) >= 1:
cl_path = os.path.dirname(cl_paths[0]).replace(':', '$:')
else:
raise RuntimeError("MSVC is required to load C++ extensions")
- link_rule.append(' command = "{}/link.exe" $in /nologo $ldflags /out:$out'.format(cl_path))
+ link_rule.append(
+ ' command = "{}/link.exe" $in /nologo $ldflags /out:$out'.format(
+ cl_path))
else:
link_rule.append(' command = $cxx $ldflags $in -o $out')
diff --git a/torch/utils/file_baton.py b/torch/utils/file_baton.py
new file mode 100644
--- /dev/null
+++ b/torch/utils/file_baton.py
@@ -0,0 +1,47 @@
+import os
+import time
+
+
+class FileBaton:
+ '''A primitive, file-based synchronization utility.'''
+
+ def __init__(self, lock_file_path, wait_seconds=0.1):
+ '''
+ Creates a new :class:`FileBaton`.
+
+ Args:
+ lock_file_path: The path to the file used for locking.
+ wait_seconds: The seconds to periorically sleep (spin) when
+ calling ``wait()``.
+ '''
+ self.lock_file_path = lock_file_path
+ self.wait_seconds = wait_seconds
+ self.fd = None
+
+ def try_acquire(self):
+ '''
+ Tries to atomically create a file under exclusive access.
+
+ Returns:
+ True if the file could be created, else False.
+ '''
+ try:
+ self.fd = os.open(self.lock_file_path, os.O_CREAT | os.O_EXCL)
+ return True
+ except FileExistsError:
+ return False
+
+ def wait(self):
+ '''
+ Periodically sleeps for a certain amount until the baton is released.
+
+ The amount of time slept depends on the ``wait_seconds`` parameter
+ passed to the constructor.
+ '''
+ while os.path.exists(self.lock_file_path):
+ time.sleep(self.wait_seconds)
+
+ def release(self):
+ '''Releaes the baton and removes its file.'''
+ os.close(self.fd)
+ os.remove(self.lock_file_path)
| [jit cpp extensions] Possible file corruption in distributed setting
I believe there is a race condition that can occur when compiling JIT cpp extensions in a distributed setting.
Here is a summary of what happens:
- I launch my code from multiple different processes (to use with DistributedDataParallel, via `torch.distributed.launch`)
- each process starts to compile the extension
- they all try to save the extension to the same file
- sometimes, the file gets corrupted because of that
Here is part of the stack trace of the error I obtain (the rest is just the same error repeated):
```
1 0: Traceback (most recent call last):
2 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/training_faster_rcnn.py", line 190, in <module>
3 0: from lib.faster_rcnn import fasterrcnn_resnet18, C2ResNetFasterRCNN
4 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/lib/faster_rcnn.py", line 13, in <module>
5 0: from lib.layers import ROIAlign, ROIPool, FixedBatchNorm2d
6 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/lib/layers/__init__.py", line 28, in <module>
7 0: _C = _load_C_extensions()
8 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/lib/layers/__init__.py", line 26, in _load_C_extensions
9 0: '-gencode arch=compute_70,code=sm_70'])
10 0: File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 330, in load
11 0: build_directory = _get_build_directory(name, verbose)
12 0: File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 392, in _get_build_directory
13 0: os.makedirs(build_directory)
14 0: File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/os.py", line 220, in makedirs
15 0: mkdir(name, mode)
16 0: FileExistsError: [Errno 17] File exists: '/tmp/torch_extensions/detectron_modules'
17 0: Traceback (most recent call last):
18 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/training_faster_rcnn.py", line 190, in <module>
19 0: from lib.faster_rcnn import fasterrcnn_resnet18, C2ResNetFasterRCNN
20 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/lib/faster_rcnn.py", line 13, in <module>
21 0: from lib.layers import ROIAlign, ROIPool, FixedBatchNorm2d
22 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/lib/layers/__init__.py", line 28, in <module>
23 0: _C = _load_C_extensions()
24 0: File "/private/home/fmassa/github/detectron.pytorch/torch_detectron/lib/layers/__init__.py", line 26, in _load_C_extensions
25 0: '-gencode arch=compute_70,code=sm_70'])
26 0: File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 361, in load
27 0: return _import_module_from_library(name, build_directory)
28 0: File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 414, in _import_module_from_library
29 0: return imp.load_module(module_name, file, path, description)
30 0: File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/imp.py", line 243, in load_module
31 0: return load_dynamic(name, filename, file)
32 0: File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/imp.py", line 343, in load_dynamic
33 0: return _load(spec)
34 0: ImportError: /tmp/torch_extensions/detectron_modules/detectron_modules.so: file too short
...
```
We can see that two types of errors pop out: `FileExistsError` and `ImportError: modules.so: file too short`.
Let me know if you need a minimal reproducible example.
pytorch version `'0.4.0a0+b21e135'`
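One way to avoid the race (and roughly what the `FileBaton` added in the patch above does) is to let exactly one process build while the others wait on a lock file. A minimal sketch (Python 3; the paths are hypothetical), not the actual PyTorch implementation:
```python
import os
import time

LOCK = '/tmp/my_extension.lock'  # hypothetical lock file next to the build dir

def try_acquire(lock_path):
    """Atomically create the lock file; exactly one process can succeed."""
    try:
        return os.open(lock_path, os.O_CREAT | os.O_EXCL)
    except FileExistsError:
        return None

fd = try_acquire(LOCK)
if fd is not None:
    try:
        print('winner: write build.ninja and run ninja here')
    finally:
        os.close(fd)
        os.remove(LOCK)              # release: waiters may now proceed
else:
    while os.path.exists(LOCK):      # losers simply wait for the winner
        time.sleep(0.1)
print('every process imports the already-built .so here')
```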
| Here is a `mwe.py` (note: you need to modify the path to `pytorch_dir`)
```python
import argparse
def compile_ext():
import os.path
from torch.utils.cpp_extension import load as load_ext
pytorch_dir = '/private/home/fmassa/github/pytorch/test/cpp_extensions'
source = [
'extension.cpp',
]
source = [os.path.join(pytorch_dir, s) for s in source]
return load_ext('multiprocess_bug', source)
_C = compile_ext()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Bug!')
parser.add_argument('--local_rank', default=0)
args = parser.parse_args()
print(args)
```
Run it with
```
python -m torch.distributed.launch --nproc_per_node=8 mwe.py
```
On the first run I tried, I got:
```
Traceback (most recent call last):
File "mwe.py", line 16, in <module>
_C = compile_ext()
File "mwe.py", line 13, in compile_ext
return load_ext('multiprocess_bug', source)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 361, in load
return _import_module_from_library(name, build_directory)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 411, in _import_module_from_library
file, path, description = imp.find_module(module_name, [path])
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/imp.py", line 297, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'multiprocess_bug'
Traceback (most recent call last):
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 400, in _build_extension_module
['ninja', '-v'], stderr=subprocess.STDOUT, cwd=build_directory)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/subprocess.py", line 336, in check_output
**kwargs).stdout
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "mwe.py", line 16, in <module>
_C = compile_ext()
File "mwe.py", line 13, in compile_ext
return load_ext('multiprocess_bug', source)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 357, in load
_build_extension_module(name, build_directory)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 406, in _build_extension_module
name, error.output.decode()))
RuntimeError: Error building extension 'multiprocess_bug': [1/2] c++ -MMD -MF extension.o.d -DTORCH_EXTENSION_NAME=multiprocess_bug -I/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/lib/include -I/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/lib/include/TH -I/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/lib/include/THC -I/private/home/fmassa/.conda/envs/detectron_v2/include/python3.6m -fPIC -std=c++11 -c /private/home/fmassa/github/pytorch/test/cpp_extensions/extension.cpp -o extension.o
[2/2] c++ -shared extension.o -o multiprocess_bug.so
FAILED: multiprocess_bug.so
c++ -shared extension.o -o multiprocess_bug.so
extension.o: file not recognized: File truncated
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "mwe.py", line 16, in <module>
_C = compile_ext()
File "mwe.py", line 13, in compile_ext
return load_ext('multiprocess_bug', source)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 361, in load
return _import_module_from_library(name, build_directory)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 411, in _import_module_from_library
file, path, description = imp.find_module(module_name, [path])
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/imp.py", line 297, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'multiprocess_bug'
Traceback (most recent call last):
File "mwe.py", line 16, in <module>
_C = compile_ext()
File "mwe.py", line 13, in compile_ext
return load_ext('multiprocess_bug', source)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 361, in load
return _import_module_from_library(name, build_directory)
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 411, in _import_module_from_library
file, path, description = imp.find_module(module_name, [path])
File "/private/home/fmassa/.conda/envs/detectron_v2/lib/python3.6/imp.py", line 297, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'multiprocess_bug'
``` | 2018-04-17T23:53:28 |
|
pytorch/pytorch | 6,718 | pytorch__pytorch-6718 | [
"6479"
] | a4dbd374038803e0db1c01d1096eb83a3a2897a8 | diff --git a/tools/autograd/gen_autograd.py b/tools/autograd/gen_autograd.py
--- a/tools/autograd/gen_autograd.py
+++ b/tools/autograd/gen_autograd.py
@@ -19,7 +19,7 @@
deprecated_path = os.path.join(os.path.dirname(__file__), 'deprecated.yaml')
VIEW_FUNCTIONS = {
- 'alias', 'as_strided', 'expand', 'narrow', 'permute', 'select', 'slice',
+ 'alias', 'as_strided', 'diagonal', 'expand', 'narrow', 'permute', 'select', 'slice',
'squeeze', 't', 'transpose', 'unfold', 'unsqueeze', 'view',
}
diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -1267,9 +1267,11 @@
add_docstr(torch.diagonal,
r"""
-diagonal(input, offset=0) -> Tensor
+diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor
-Returns a 1-D tensor with the diagonal elements of :attr:`input`.
+Returns a partial view of :attr:`input` with the its diagonal elements
+with respect to :attr:`dim1` and :attr:`dim2` appended as a dimension
+at the end of the shape.
The argument :attr:`offset` controls which diagonal to consider:
@@ -1278,9 +1280,15 @@
- If :attr:`offset` < 0, it is below the main diagonal.
Args:
- input (Tensor): the input tensor. Must be 2-dimensional.
+ input (Tensor): the input tensor. Must be at least 2-dimensional.
offset (int, optional): which diagonal to consider. Default: 0
(main diagonal).
+ dim1 (int, optional): first dimension with respect to which to
+ take diagonal. Default: 0.
+ dim2 (int, optional): second dimension with respect to which to
+ take diagonal. Default: 1.
+
+.. note:: To take a batch diagonal, pass in dim1=-2, dim2=-1.
Examples::
@@ -1305,6 +1313,17 @@
-0.2239
[torch.FloatTensor of size 2]
+ >>> x = torch.randn(2, 5, 4, 2)
+ >>> torch.diagonal(x, offset=-1, dim1=1, dim2=2)
+
+ (0 ,.,.) =
+ -0.6806 -0.0281 -0.6595 -0.4199
+ 0.8741 -0.1793 -0.6997 0.6265
+
+ (1 ,.,.) =
+ 0.6182 1.3069 1.6503 1.7627
+ -0.2122 -0.2250 0.0990 -2.6433
+ [torch.FloatTensor of size (2,2,4)]
""")
add_docstr(torch.dist,
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -2116,6 +2116,18 @@ def test_mul_out_result_requires_grad(self):
# we should throw an exception if the output requires grad
self.assertRaisesRegex(RuntimeError, 'out=', lambda: torch.mul(a, b, out=x))
+ def test_diagonal_derivative_requires_grad(self):
+ # test that the backward requires grad
+ # we do this is because diagonal_backward uses inplace
+ # operations and gradgradcheck does not catch whether
+ # they works as expected (it will succeed even if
+ # the gradient has requires_grad == False
+ a = torch.randn(5, 6, requires_grad=True)
+ b = torch.diagonal(a)**2
+ c = b.sum()
+ d, = torch.autograd.grad(c, a, retain_graph=True, create_graph=True)
+ self.assertTrue(d.requires_grad)
+
def index_variable(shape, max_indices):
if not isinstance(shape, tuple):
@@ -2630,6 +2642,18 @@ class dont_convert(tuple):
('diag', (M,), NO_ARGS, '1d'),
('diag', (M, M), (1,), '2d_1'),
('diag', (M, M), (2,), '2d_2'),
+ ('diagonal', (M, M), NO_ARGS, '2d'),
+ ('diagonal', (3, 5), NO_ARGS, '2d_wide'),
+ ('diagonal', (3, 5), (2,), '2d_wide_pos'),
+ ('diagonal', (3, 5), (-2,), '2d_wide_neg'),
+ ('diagonal', (5, 3), NO_ARGS, '2d_tall'),
+ ('diagonal', (5, 3), (2,), '2d_tall_pos'),
+ ('diagonal', (5, 3), (-2,), '2d_tall_neg'),
+ ('diagonal', (M, M), (1,), '2d_1'),
+ ('diagonal', (M, M), (2,), '2d_2'),
+ ('diagonal', (M, M, M), (1, 1, 2), '3d_1'),
+ ('diagonal', (M, M, M), (2, 0, 1), '3d_2'),
+ ('diagonal', (M, M, M), (-2, 0, 1), '3d_3'),
('tril', (M, M), NO_ARGS),
('tril', (M, M), (2,), 'idx'),
('triu', (M, M), NO_ARGS),
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -1909,6 +1909,25 @@ def _test_diagonal(self, dtype, device):
def test_diagonal(self):
self._test_diagonal(self, dtype=torch.float32, device='cpu')
+ @unittest.skipIf(not TEST_NUMPY, 'Numpy not found')
+ def test_diagonal_multidim(self):
+ x = torch.randn(10, 11, 12, 13)
+ xn = x.numpy()
+ for args in [(2, 2, 3),
+ (2,),
+ (-2, 1, 2),
+ (0, -2, -1)]:
+ result = torch.diagonal(x, *args)
+ expected = xn.diagonal(*args)
+ self.assertEqual(expected.shape, result.shape)
+ self.assertTrue(np.allclose(expected, result.numpy()))
+ # test non-continguous
+ xp = x.permute(1, 2, 3, 0)
+ result = torch.diagonal(xp, 0, -2, -1)
+ expected = xp.numpy().diagonal(0, -2, -1)
+ self.assertEqual(expected.shape, result.shape)
+ self.assertTrue(np.allclose(expected, result.numpy()))
+
@staticmethod
def _test_diagflat(self, dtype, device):
# Basic sanity test
| [feature request] enhance diagonal functionality
# Objective
PyTorch currently does not have a way to take diagonals with respect to arbitrary axes as e.g. `numpy.diagonal` has.
Two immediate use cases are
- Taking the diagonal of a batch of matrices (see the sketch after this list). This is handy e.g. in Gaussian processes, and I have seen it come up a number of times elsewhere.
- One of the missing `einsum` features (`torch.einsum('jii->ji', a)`) would benefit from this as well.
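A small usage sketch of the proposed interface for the batch case (assuming the `dim1`/`dim2` arguments from the patch above; shapes follow `numpy.diagonal`, with the diagonal dimension appended at the end):
```python
import torch

x = torch.randn(4, 5, 5)  # a batch of 4 square matrices

# Batch diagonal: take the diagonal over the last two dimensions.
d = torch.diagonal(x, offset=0, dim1=-2, dim2=-1)
print(d.shape)  # torch.Size([4, 5])

# Matches numpy semantics: x.numpy().diagonal(0, -2, -1) also has shape (4, 5).
```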
# Plan
I propose to do the following (see https://github.com/t-vi/pytorch/tree/diagonal_with_dim for a preliminary implementation)
- Implement `diagonal` natively in ATen instead of referring to TH/THC `diag`.
- Make `diagonal` return a view by adjusting shape, stride, and offset. `diag` copies, but this seems impractical for higher dimensional arrays and also unneeded. *This is not backward compatible.*
- Add two dimension parameters dim1 and dim2.
- Make a diagonal_backward method.
- Include the Tensor method, too.
*Edit*: Based on your feedback, I updated the code to use numpy semantics.
# ~~Limiting numpy compatibility?~~
~~NumPy's implementation has two features that might be worth to differ from:~~
- ~~NumPy's default axes are 0 and 1. I think it is more natural to use -2 and -1 ("batch thinking").~~
- ~~NumPy moves the new axis replacing the two old ones at the end of the tensor. I think it might be more natural to put it into the place of the first axis to be removed. I must admit that this is more a gut feeling than based on hard facts.~~
I would greatly appreciate your input on this, in particular regarding the potential numpy deviations.
| 2018-04-18T19:28:53 |
|
pytorch/pytorch | 6,719 | pytorch__pytorch-6719 | [
"434"
] | 96e2140ffbfd5e291baa34b347f448302de784bf | diff --git a/torch/nn/modules/rnn.py b/torch/nn/modules/rnn.py
--- a/torch/nn/modules/rnn.py
+++ b/torch/nn/modules/rnn.py
@@ -533,6 +533,7 @@ class RNNCell(RNNCellBase):
- **input** of shape `(batch, input_size)`: tensor containing input features
- **hidden** of shape `(batch, hidden_size)`: tensor containing the initial hidden
state for each element in the batch.
+ Defaults to zero if not provided.
Outputs: h'
- **h'** of shape `(batch, hidden_size)`: tensor containing the next hidden state
@@ -625,6 +626,8 @@ class LSTMCell(RNNCellBase):
- **c_0** of shape `(batch, hidden_size)`: tensor containing the initial cell state
for each element in the batch.
+ If `(h_0, c_0)` is not provided, both **h_0** and **c_0** default to zero.
+
Outputs: h_1, c_1
- **h_1** of shape `(batch, hidden_size)`: tensor containing the next hidden state
for each element in the batch
@@ -706,6 +709,7 @@ class GRUCell(RNNCellBase):
- **input** of shape `(batch, input_size)`: tensor containing input features
- **hidden** of shape `(batch, hidden_size)`: tensor containing the initial hidden
state for each element in the batch.
+ Defaults to zero if not provided.
Outputs: h'
- **h'** of shape `(batch, hidden_size)`: tensor containing the next hidden state
| Default initial hidden states for recurrent layers
The recurrent layers (nn.RNN, nn.LSTM, and nn.GRU) all expect initial hidden (+cell for LSTM) states as a required argument. Would it be possible for these to default to zero if not provided?
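For context, a short sketch of the requested behaviour (which the docs patched above now describe): omitting the state is equivalent to passing zeros.
```python
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)  # (seq_len, batch, input_size)

# No (h_0, c_0) passed: both default to zeros of the right shape.
out, (h_n, c_n) = rnn(x)

# Equivalent to passing the zeros explicitly.
h0 = torch.zeros(2, 3, 20)
c0 = torch.zeros(2, 3, 20)
out2, _ = rnn(x, (h0, c0))
print((out - out2).abs().max())  # should be 0.0
```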
| Sure, that makes sense.
@apaszke Can I work on this issue? Is there a process to do that?
Sure! As far as I know no one is working on that, so go ahead if you want.
Can we add this to the RNN documentation page as well? It took me a while to find this information here.
@amrsharaf a change adding that has just been merged. Hopefully, no one will need to come here to find that out.
Thanks for suggesting that!
great, thanks!
It seems that the documentation for [LSTMCell](http://pytorch.org/docs/stable/nn.html#lstmcell) does not mention the default, while the documentation for [LSTM](http://pytorch.org/docs/stable/nn.html#lstm) does describe the default settings. | 2018-04-18T19:33:39 |
|
pytorch/pytorch | 6,749 | pytorch__pytorch-6749 | [
"6742"
] | fff80c2c1f81095785963b4fe4315fb36b0a0deb | diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -611,7 +611,7 @@ def threshold(input, threshold, value, inplace=False):
def relu(input, inplace=False):
- r"""relu(input, threshold, value, inplace=False) -> Tensor
+ r"""relu(input, inplace=False) -> Tensor
Applies the rectified linear unit function element-wise. See
:class:`~torch.nn.ReLU` for more details.
| incorrect doc for torch.nn.functional.relu
In lines 613-621 of `torch/nn/functional.py`, it states
```python
def relu(input, inplace=False):
r"""relu(input, threshold, value, inplace=False) -> Tensor
Applies the rectified linear unit function element-wise. See
:class:`~torch.nn.ReLU` for more details.
"""
if inplace:
return torch.relu_(input)
return torch.relu(input)
```
It looks like the doc is incorrect. I think it should be something like
```python
def relu(input, inplace=False):
r"""relu(input, inplace=False) -> Tensor
Applies the rectified linear unit function element-wise. See
:class:`~torch.nn.ReLU` for more details.
"""
if inplace:
return torch.relu_(input)
return torch.relu(input)
```
As it stands, the incorrect docstring causes the docs on the website to be incorrect as well.
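For reference, a quick sketch of calling the function with the corrected signature:
```python
import torch
import torch.nn.functional as F

x = torch.randn(4)
y = F.relu(x)            # out-of-place; signature is relu(input, inplace=False)
F.relu(x, inplace=True)  # in-place variant, modifies x directly
print((x >= 0).all())    # every entry is now non-negative
```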
| 2018-04-19T07:09:14 |
||
pytorch/pytorch | 6,873 | pytorch__pytorch-6873 | [
"5534"
] | 1b0ad8678bd899eba81f58fc00b03188de8e0fe5 | diff --git a/tools/autograd/gen_variable_type.py b/tools/autograd/gen_variable_type.py
--- a/tools/autograd/gen_variable_type.py
+++ b/tools/autograd/gen_variable_type.py
@@ -93,7 +93,7 @@
""")
ASSIGN_GRAD_FN = CodeTemplate("""\
-grad_fn = std::make_shared<${op}>(${op_ctor});
+grad_fn = std::shared_ptr<${op}>(new ${op}(${op_ctor}), deleteFunction);
grad_fn->set_next_edges(collect_next_edges( ${args_with_derivatives} ));
""")
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -611,6 +611,78 @@ def backward(ctx, grad_b):
TestFn.apply(b).sum().backward()
+ def test_free_deep_graph(self):
+ def scope():
+ depth = 150000
+ x = torch.randn(1, requires_grad=True)
+ y = x.clone()
+
+ # build a "chain" computation graph
+ for i in range(depth):
+ y = y + y * 0.000001
+
+ # triggers graph deletion
+ del x
+
+ # Should not stack overflow
+ scope()
+
+ def test_free_deep_graph_complicated(self):
+ def scope():
+ depth = 100000
+ randchoice = torch.randint(2, [depth, 2])
+ x = torch.randn(1, requires_grad=True)
+ y = x.clone()
+
+ # Hold the two previous values
+ prev_values = [None, None]
+
+ # Build a "chain with skip connections" graph
+ for i in range(depth):
+ prev_tensors = [tensor for tensor in prev_values[:-1]
+ if tensor is not None]
+ prev_values.append(y)
+ prev_values.pop(0)
+
+ # Definitely pick one tensor to add
+ y += y * 0.000001
+
+ # Possibly add other tensors
+ nprev = len(prev_tensors)
+ if nprev == 2:
+ y += randchoice[depth].mul(torch.cat(prev_tensors)).sum()
+
+ # triggers graph deletion
+ del x
+
+ # Should not stack overflow
+ scope()
+
+ def test_free_deep_graph_pyfunction(self):
+ class MyOp(Function):
+ @staticmethod
+ def forward(ctx, tensor1, tensor2):
+ return tensor1 + tensor2
+
+ @staticmethod
+ def backward(ctx, grad_output):
+ return grad_output, grad_output
+
+ def scope():
+ depth = 150000
+ x = torch.randn(1, requires_grad=True)
+ y = x.clone()
+
+ # build deeply nested computation graph
+ for i in range(depth):
+ y = MyOp.apply(y, y)
+
+ # triggers graph deletion
+ del x
+
+ # Should not stack overflow
+ scope()
+
def test_no_grad(self):
x = torch.ones(5, 5, requires_grad=True)
y = Variable(torch.ones(5, 5) * 4)
| Freeing a deep computation graph causes a stack overflow
The code below will segfault due to a stack overflow. If the computation graph is very linear, like the one below, so that freeing one Function immediately triggers the free of the next, a stack overflow can occur.
This stack overflow occurs some ~16% earlier on master than on v0.3.1.
```
import torch
from torch.autograd import Variable
def test(n):
# In the second of two loops, the computation graph from the first is freed
for j in range(0, 2):
x = Variable(torch.FloatTensor(range(9)), requires_grad=True)
time_step = 0.002
y = x.clone()
# build deeply nested computation graph
for i in range(n):
y = y + y*time_step
print('Loop: {}'.format(j))
# Smallest n such that test(n) causes a stack overflow
# On 0.3.1 from conda install
test(65461)
# On master
test(52330)
```
I'm looking into fixing it.
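The underlying idea of a fix is language-agnostic: replace destructor recursion with an explicit worklist so stack depth stays constant. A conceptual sketch in Python (illustrative only; the actual change lands in the C++ autograd engine via a custom `shared_ptr` deleter, as in the patch above):
```python
class Node:
    """Stand-in for one autograd Function in a long chain."""
    def __init__(self, next_node=None):
        self.next = next_node

head = None
for _ in range(150000):
    head = Node(head)

# Tear the chain down iteratively: pop one node at a time, move its
# successor onto the worklist, and let it be freed with O(1) stack depth.
worklist = [head]
while worklist:
    node = worklist.pop()
    if node.next is not None:
        worklist.append(node.next)
        node.next = None
```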
| To iterate is human, to recurse divine
:)
Meh, that's just how `shared_ptr`s work. We could do something like Python (basically limit the recursion depth, and accumulate remaining objects in a list once you hit the limit), but we'll need to be extra careful to construct all our pointers with the custom deleter. Shouldn't be too bad for perf if we use thread-local to-delete queues and recursion depth counters. | 2018-04-23T19:10:56 |
pytorch/pytorch | 6,982 | pytorch__pytorch-6982 | [
"6917"
] | 24d05662ead139883ac0d9545accc862196c3c75 | diff --git a/torch/utils/data/sampler.py b/torch/utils/data/sampler.py
--- a/torch/utils/data/sampler.py
+++ b/torch/utils/data/sampler.py
@@ -120,7 +120,7 @@ class BatchSampler(object):
def __init__(self, sampler, batch_size, drop_last):
if not isinstance(sampler, Sampler):
raise ValueError("sampler should be an instance of "
- "torch.utils.data.Sampler, but got sampler={}"
+ "torch.utils.data.sampler.Sampler, but got sampler={}"
.format(sampler))
if not isinstance(batch_size, _int_classes) or isinstance(batch_size, bool) or \
batch_size <= 0:
| Sampler not exposed to torch.utils.data
## Issue description
The `Sampler` class is not exposed as `torch.utils.data.Sampler`, which is inconsistent with the error message at the line linked below.
https://github.com/pytorch/pytorch/blob/master/torch/utils/data/sampler.py#L123
## Code example
```
>>> import torch
>>> torch.utils.data.Sampler
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'module' object has no attribute 'Sampler'
>>> torch.utils.data.sampler.Sampler
```
| 2018-04-26T07:22:34 |
||
pytorch/pytorch | 7,015 | pytorch__pytorch-7015 | [
"6987"
] | 902579602bc8ec8e884fe6501205008d3da629a4 | diff --git a/torch/utils/cpp_extension.py b/torch/utils/cpp_extension.py
--- a/torch/utils/cpp_extension.py
+++ b/torch/utils/cpp_extension.py
@@ -475,6 +475,7 @@ def load(name,
if baton.try_acquire():
try:
+ check_compiler_abi_compatibility(os.environ.get('CXX', 'c++'))
with_cuda = any(map(_is_cuda_file, sources))
extra_ldflags = _prepare_ldflags(
extra_ldflags or [],
| Segmentation fault with cpp_extensions example
## Issue description
Running the C++ extensions example from the PyTorch 0.4.0 release notes results in a segmentation fault.
## Code example
### C++ code
(Code was copied directly from the release notes)
```c++
// my_implementation.cpp
#include <torch/torch.h>
#include <unordered_set>
// can use templates as well. But let's keep it
// simple
using scalar_t = float;
at::Tensor unique_float(at::Tensor input_) {
// only works for floats
AT_ASSERT(input_.type().scalarType() == at::ScalarType::Float, "input must be a float tensor");
// and CPU tensors
AT_ASSERT(!input_.type().is_cuda(), "input must be a CPU tensor");
// make the input contiguous, to simplify the implementation
at::Tensor input = input_.contiguous();
// get the pointer that holds the data
scalar_t* input_data = input.data<scalar_t>();
// let's use a function from the std library to implement
// the unique function
std::unordered_set<scalar_t> set(input_data, input_data + input.numel());
// create the output tensor, with size set.size()
at::Tensor output = input.type().tensor({static_cast<int64_t>(set.size())});
scalar_t* output_data = output.data<scalar_t>();
// copy the content of the set to the output tensor
std::copy(set.begin(), set.end(), output_data);
return output;
}
// this defines the functions exposed to Python
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("unique_float", &unique_float, "Unique for float tensors");
}
```
### Python code
```python
# test_ext.py
import torch
from torch.utils.cpp_extension import load as load_ext
# pass the source files, they will be compiled on the fly
# and will return a python module
_C = load_ext('my_unique_lib', sources=['my_implementation.cpp'])
# now can use the functions implemented in C++
unique = _C.unique_float
a = torch.tensor([1.0, 2.0, 1.0])
print(unique(a))
```
### Stack trace from GDB
```
Program received signal SIGSEGV, Segmentation fault.
#0 0x00007fffabdea0ae in void __gnu_cxx::new_allocator<_object*>::construct<_object*, _object*>(_object**, _object*&&) () from /tmp/torch_extensions/my_unique_lib/my_unique_lib.so
#1 0x00007fffabde69aa in std::enable_if<std::allocator_traits<std::allocator<_object*> >::__construct_helper<_object*, _object*>::value, void>::type std::allocator_traits<std::allocator<_object*> >::_S_construct<_object*, _object*>(std::allocator<_object*>&, _object**, _object*&&) () from /tmp/torch_extensions/my_unique_lib/my_unique_lib.so
#2 0x00007fffabde21c6 in decltype (_S_construct({parm#1}, {parm#2}, (forward<_object*>)({parm#3}))) std::allocator_traits<std::allocator<_object*> >::construct<_object*, _object*>(std::allocator<_object*>&, _object**, _object*&&) () from /tmp/torch_extensions/my_unique_lib/my_unique_lib.so
#3 0x00007fffabdde355 in void std::vector<_object*, std::allocator<_object*> >::emplace_back<_object*>(_object*&&) () from /tmp/torch_extensions/my_unique_lib/my_unique_lib.so
#4 0x00007fffabddaa8a in std::vector<_object*, std::allocator<_object*> >::push_back(_object*&&) () from /tmp/torch_extensions/my_unique_lib/my_unique_lib.so
#5 0x00007fffabdd155d in pybind11::detail::loader_life_support::loader_life_support() () from /tmp/torch_extensions/my_unique_lib/my_unique_lib.so
#6 0x00007fffabdd6c99 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /tmp/torch_extensions/my_unique_lib/my_unique_lib.so
#7 0x00007ffff7997902 in _PyCFunction_FastCallDict (func_obj=0x7fffac35d360, args=0x7ffff7f75ba8, nargs=<optimised out>, kwargs=0x0) at Objects/methodobject.c:231
#8 0x00007ffff7a1cf4c in call_function (pp_stack=0x7fffffffdaa8, oparg=<optimised out>, kwnames=0x0) at Python/ceval.c:4788
#9 0x00007ffff7a1fbbd in _PyEval_EvalFrameDefault (f=<optimised out>, throwflag=<optimised out>) at Python/ceval.c:3275
#10 0x00007ffff7a1b4c0 in _PyEval_EvalCodeWithName (_co=0x7ffff7f018a0, globals=<optimised out>, locals=<optimised out>, args=<optimised out>, argcount=0, kwnames=0x0, kwargs=0x8,
kwcount=0, kwstep=2, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x0, qualname=0x0) at Python/ceval.c:4119
#11 0x00007ffff7a1b943 in PyEval_EvalCodeEx (_co=<optimised out>, globals=<optimised out>, locals=<optimised out>, args=<optimised out>, argcount=<optimised out>, kws=<optimised out>,
kwcount=0, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0) at Python/ceval.c:4140
#12 0x00007ffff7a1b98b in PyEval_EvalCode (co=<optimised out>, globals=<optimised out>, locals=<optimised out>) at Python/ceval.c:695
#13 0x00007ffff7a4e100 in run_mod (arena=0x7ffff7f5f2d0, flags=0x7fffffffde00, locals=0x7ffff7f46090, globals=0x7ffff7f46090, filename=0x7ffff66c1bb0, mod=0x6a70a0)
---Type <return> to continue, or q <return> to quit---
at Python/pythonrun.c:980
#14 PyRun_FileExFlags (fp=0x6926c0, filename_str=<optimised out>, start=<optimised out>, globals=0x7ffff7f46090, locals=0x7ffff7f46090, closeit=<optimised out>, flags=0x7fffffffde00)
at Python/pythonrun.c:933
#15 0x00007ffff7a4f6f3 in PyRun_SimpleFileExFlags (fp=0x6926c0, filename=<optimised out>, closeit=1, flags=0x7fffffffde00) at Python/pythonrun.c:396
#16 0x00007ffff7a6aa41 in run_file (p_cf=0x7fffffffde00, filename=0x6032d0 L"test_ext.py", fp=0x6926c0) at Modules/main.c:320
#17 Py_Main (argc=<optimised out>, argv=<optimised out>) at Modules/main.c:781
#18 0x0000000000400c1d in main (argc=2, argv=<optimised out>) at ./Programs/python.c:69
```
## System Info
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 8.0.61
OS: Ubuntu 14.04.5 LTS
GCC version: (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
CMake version: version 3.2.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 8.0.44
GPU models and configuration: GPU 0: GeForce GTX 980
Nvidia driver version: 384.111
cuDNN version: Probably one of the following:
/usr/local/cuda-8.0/lib64/libcudnn.so.5.1.10
/usr/local/cuda-8.0/lib64/libcudnn.so.5.1.5
/usr/local/cuda-8.0/lib64/libcudnn_static.a
/usr/local/lib/python2.7/dist-packages/torch/lib/libcudnn-900fef33.so.7.0.5
Versions of relevant libraries:
[pip3] numpy (1.14.2)
[pip3] numpydoc (0.6.0)
[pip3] torch (0.4.0)
[pip3] torchvision (0.2.0, ~/packages/vision)
[conda] cuda80 1.0 0 soumith
[conda] torch 0.4.0 <pip>
[conda] torchvision 0.2.0 <pip>
| It seems that you are using GCC 4.8
Didn't you get the following message from `cpp_extensions`?
```
Your compiler (g++ 4.8) may be ABI-incompatible with PyTorch!
Please use a compiler that is ABI-compatible with GCC 4.9 and above.
See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.
See https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6
for instructions on how to install GCC 4.9 or higher.
```
Could you try updating gcc to 4.9 or higher?
Hi @fmassa! Upgrading gcc and then reinstalling Pytorch seemed to fix it! Many thanks. No sign of the error message though, when should I have expected to see it? When installing Pytorch or when running the example?
Hum, I just realized that the JIT compilation step doesn't emit the warning, only when we compile it using `setuptools.Extension`. The `abi` check should also be present in the `load` function call if possible.
I'm keeping this issue open until we add the abi checks in `load` as well.
cc @goldsborough | 2018-04-26T23:06:47 |
|
pytorch/pytorch | 7,059 | pytorch__pytorch-7059 | [
"7012"
] | 281f095972cbd8b458909393dadf57046ebcea25 | diff --git a/torch/utils/cpp_extension.py b/torch/utils/cpp_extension.py
--- a/torch/utils/cpp_extension.py
+++ b/torch/utils/cpp_extension.py
@@ -461,20 +461,119 @@ def load(name,
extra_cflags=['-O2'],
verbose=True)
'''
+ return _jit_compile(
+ name,
+ [sources] if isinstance(sources, str) else sources,
+ extra_cflags,
+ extra_cuda_cflags,
+ extra_ldflags,
+ extra_include_paths,
+ build_directory or _get_build_directory(name, verbose),
+ verbose)
+
+
+def load_inline(name,
+ cpp_sources,
+ cuda_sources=None,
+ functions=None,
+ extra_cflags=None,
+ extra_cuda_cflags=None,
+ extra_ldflags=None,
+ extra_include_paths=None,
+ build_directory=None,
+ verbose=False):
+ '''
+ Loads a PyTorch C++ extension just-in-time (JIT) from string sources.
- verify_ninja_availability()
-
- # Allows sources to be a single path or a list of paths.
- if isinstance(sources, str):
- sources = [sources]
-
- if build_directory is None:
- build_directory = _get_build_directory(name, verbose)
+ This function behaves exactly like :func:`load`, but takes its sources as
+ strings rather than filenames. These strings are stored to files in the
+ build directory, after which the behavior of :func:`load_inline` is
+ identical to :func:`load`. Strings passed in ``cpp_sources`` (a string or
+ list of strings) are stored with a ``.cpp`` extension, and the string or list
+ of strings passed in ``cuda_sources`` are stored with a ``.cu`` extension.
+ Example:
+ >>> from torch.utils.cpp_extension import load_inline
+ >>> source = \'\'\'
+ at::Tensor sin_add(at::Tensor x, at::Tensor y) {
+ return x.sin() + y.sin();
+ }
+ \'\'\'
+ >>> module = load_inline(name='inline_extension',
+ cpp_sources=[source],
+ functions=['sin_add'])
+ '''
+ build_directory = build_directory or _get_build_directory(name, verbose)
+
+ source_files = []
+
+ if isinstance(cpp_sources, str):
+ cpp_sources = [cpp_sources]
+ cuda_sources = cuda_sources or []
+ if isinstance(cuda_sources, str):
+ cuda_sources = [cuda_sources]
+
+ cpp_sources.insert(0, '#include <torch/torch.h>')
+
+ # If `functions` is supplied, we create the pybind11 bindings for the user.
+ # Here, `functions` is (or becomes, after some processing) a map from
+ # function names to function docstrings.
+ if functions is not None:
+ cpp_sources.append('PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {')
+ if isinstance(functions, str):
+ functions = [functions]
+ if isinstance(functions, list):
+ # Make the function docstring the same as the function name.
+ functions = dict((f, f) for f in functions)
+ elif not isinstance(functions, dict):
+ raise ValueError(
+ "Expected 'functions' to be a list or dict, but was {}".format(
+ type(functions)))
+ for function_name, docstring in functions.items():
+ cpp_sources.append('m.def("{0}", &{0}, "{1}");'.format(
+ function_name, docstring))
+ cpp_sources.append('}')
+
+ cpp_source_path = os.path.join(build_directory, 'main.cpp')
+ with open(cpp_source_path, 'w') as cpp_source_file:
+ cpp_source_file.write('\n'.join(cpp_sources))
+
+ sources = [cpp_source_path]
+
+ if cuda_sources:
+ cuda_sources.insert(0, '#include <ATen/ATen.h>')
+ cuda_sources.insert(1, '#include <cuda.h>')
+ cuda_sources.insert(2, '#include <cuda_runtime.h>')
+
+ cuda_source_path = os.path.join(build_directory, 'cuda.cu')
+ with open(cuda_source_path, 'w') as cuda_source_file:
+ cuda_source_file.write('\n'.join(cuda_sources))
+
+ sources.append(cuda_source_path)
+
+ return _jit_compile(
+ name,
+ sources,
+ extra_cflags,
+ extra_cuda_cflags,
+ extra_ldflags,
+ extra_include_paths,
+ build_directory,
+ verbose)
+
+
+def _jit_compile(name,
+ sources,
+ extra_cflags,
+ extra_cuda_cflags,
+ extra_ldflags,
+ extra_include_paths,
+ build_directory,
+ verbose):
baton = FileBaton(os.path.join(build_directory, 'lock'))
-
if baton.try_acquire():
try:
+ verify_ninja_availability()
check_compiler_abi_compatibility(os.environ.get('CXX', 'c++'))
with_cuda = any(map(_is_cuda_file, sources))
extra_ldflags = _prepare_ldflags(
| diff --git a/test/test_cpp_extensions.py b/test/test_cpp_extensions.py
--- a/test/test_cpp_extensions.py
+++ b/test/test_cpp_extensions.py
@@ -72,8 +72,8 @@ def test_jit_compile_extension(self):
def test_cuda_extension(self):
import torch_test_cuda_extension as cuda_extension
- x = torch.FloatTensor(100).zero_().cuda()
- y = torch.FloatTensor(100).zero_().cuda()
+ x = torch.zeros(100, device='cuda', dtype=torch.float32)
+ y = torch.zeros(100, device='cuda', dtype=torch.float32)
z = cuda_extension.sigmoid_add(x, y).cpu()
@@ -92,8 +92,8 @@ def test_jit_cuda_extension(self):
extra_cuda_cflags=['-O2'],
verbose=True)
- x = torch.FloatTensor(100).zero_().cuda()
- y = torch.FloatTensor(100).zero_().cuda()
+ x = torch.zeros(100, device='cuda', dtype=torch.float32)
+ y = torch.zeros(100, device='cuda', dtype=torch.float32)
z = module.sigmoid_add(x, y).cpu()
@@ -106,6 +106,111 @@ def test_optional(self):
has_value = cpp_extension.function_taking_optional(None)
self.assertFalse(has_value)
+ def test_inline_jit_compile_extension_with_functions_as_list(self):
+ cpp_source = '''
+ at::Tensor tanh_add(at::Tensor x, at::Tensor y) {
+ return x.tanh() + y.tanh();
+ }
+ '''
+
+ module = torch.utils.cpp_extension.load_inline(
+ name='inline_jit_extension_with_functions_list',
+ cpp_sources=cpp_source,
+ functions='tanh_add',
+ verbose=True)
+
+ self.assertEqual(module.tanh_add.__doc__.split('\n')[2], 'tanh_add')
+
+ x = torch.randn(4, 4)
+ y = torch.randn(4, 4)
+
+ z = module.tanh_add(x, y)
+ self.assertEqual(z, x.tanh() + y.tanh())
+
+ def test_inline_jit_compile_extension_with_functions_as_dict(self):
+ cpp_source = '''
+ at::Tensor tanh_add(at::Tensor x, at::Tensor y) {
+ return x.tanh() + y.tanh();
+ }
+ '''
+
+ module = torch.utils.cpp_extension.load_inline(
+ name='inline_jit_extension_with_functions_dict',
+ cpp_sources=cpp_source,
+ functions={'tanh_add': 'Tanh and then sum :D'},
+ verbose=True)
+
+ self.assertEqual(
+ module.tanh_add.__doc__.split('\n')[2], 'Tanh and then sum :D')
+
+ def test_inline_jit_compile_extension_multiple_sources_and_no_functions(self):
+ cpp_source1 = '''
+ at::Tensor sin_add(at::Tensor x, at::Tensor y) {
+ return x.sin() + y.sin();
+ }
+ '''
+
+ cpp_source2 = '''
+ #include <torch/torch.h>
+ at::Tensor sin_add(at::Tensor x, at::Tensor y);
+ PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
+ m.def("sin_add", &sin_add, "sin(x) + sin(y)");
+ }
+ '''
+
+ module = torch.utils.cpp_extension.load_inline(
+ name='inline_jit_extension',
+ cpp_sources=[cpp_source1, cpp_source2],
+ verbose=True)
+
+ x = torch.randn(4, 4)
+ y = torch.randn(4, 4)
+
+ z = module.sin_add(x, y)
+ self.assertEqual(z, x.sin() + y.sin())
+
+ @unittest.skipIf(not TEST_CUDA, "CUDA not found")
+ def test_inline_jit_compile_extension_cuda(self):
+ cuda_source = '''
+ __global__ void cos_add_kernel(
+ const float* __restrict__ x,
+ const float* __restrict__ y,
+ float* __restrict__ output,
+ const int size) {
+ const auto index = blockIdx.x * blockDim.x + threadIdx.x;
+ if (index < size) {
+ output[index] = __cosf(x[index]) + __cosf(y[index]);
+ }
+ }
+
+ at::Tensor cos_add(at::Tensor x, at::Tensor y) {
+ auto output = at::zeros_like(x);
+ const int threads = 1024;
+ const int blocks = (output.numel() + threads - 1) / threads;
+ cos_add_kernel<<<blocks, threads>>>(x.data<float>(), y.data<float>(), output.data<float>(), output.numel());
+ return output;
+ }
+ '''
+
+ # Here, the C++ source need only declare the function signature.
+ cpp_source = 'at::Tensor cos_add(at::Tensor x, at::Tensor y);'
+
+ module = torch.utils.cpp_extension.load_inline(
+ name='inline_jit_extension_cuda',
+ cpp_sources=cpp_source,
+ cuda_sources=cuda_source,
+ functions=['cos_add'],
+ verbose=True)
+
+ self.assertEqual(module.cos_add.__doc__.split('\n')[2], 'cos_add')
+
+ x = torch.randn(4, 4, device='cuda', dtype=torch.float32)
+ y = torch.randn(4, 4, device='cuda', dtype=torch.float32)
+
+ z = module.cos_add(x, y)
+ self.assertEqual(z, x.cos() + y.cos())
-if __name__ == '__main__':
- common.run_tests()
+ def test_inline_jit_compile_extension_throws_when_functions_is_bad(self):
+ with self.assertRaises(ValueError):
+ torch.utils.cpp_extension.load_inline(
+ name='invalid_jit_extension', cpp_sources='', functions=5)
| [feature request] [pytorch] Support inline cpp/cuda JIT extensions
If I understand correctly, the JIT variant currently requires the C++/CUDA files to live on disk. It would be cool to support compiling C++/CUDA code from a Python string, like in nvrtc/cupy: https://github.com/vadimkantorov/caffemodel2pytorch (ROI pooling example).
This would reduce boilerplate files and improve code clarity for small CUDA functions.
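For concreteness, a sketch of the kind of usage being asked for; it mirrors the `load_inline` API added in the patch above, so the names come from that patch rather than from this issue:
```python
from torch.utils.cpp_extension import load_inline

cpp_source = '''
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
  return x.sin() + y.sin();
}
'''

# The C++ lives in a Python string; load_inline writes it out, builds it with the
# existing JIT machinery, and imports the resulting module.
module = load_inline(name='inline_extension',
                     cpp_sources=[cpp_source],
                     functions=['sin_add'])
```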
| That should be really easy to do by wrapping the JIT variant. wdyt @goldsborough?
Yeah that's a good idea, I'll throw it in my bucket.
From cupy usage I remember that specifying headers had to go through a special method argument to please nvrtc. It would be cool if the headers could be determined automatically from the code, but maybe this is a non-issue.
If there is only a CUDA function, will the gcc compiler still be called? (I remember a discussion saying that C++ extensions must be compiled with the same gcc version as PyTorch itself, which can be problematic if PyTorch is installed as a pre-built package.)
We don't use nvrtc, we use gcc for C++ files and nvcc for CUDA files. To be clear, the way I would implement this feature is by dumping the inline function into a file and then calling `cpp_extension.load()` on it :)
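A minimal sketch of that approach, assuming a hypothetical helper name (the patch above does essentially this before handing off to `_jit_compile`):
```python
import os
from torch.utils.cpp_extension import load

def load_inline_sketch(name, cpp_source, build_directory):
    # Write the string to disk, then reuse the existing file-based JIT path.
    path = os.path.join(build_directory, 'main.cpp')
    with open(path, 'w') as f:
        f.write('#include <torch/torch.h>\n' + cpp_source)
    return load(name=name, sources=[path], build_directory=build_directory)
```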
Also, nvcc cannot compile pybind11, so there will have to be a C++/gcc part somewhere, yes. | 2018-04-28T00:46:57 |
pytorch/pytorch | 7,182 | pytorch__pytorch-7182 | [
"7095"
] | e6330559c8769d4f9bfe4e1a11301a8cbfd63081 | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -1028,7 +1028,7 @@ def add_docstr_all(method, docstr):
item() -> number
Returns the value of this tensor as a standard Python number. This only works
-for tensors with one element.
+for tensors with one element. For other cases, see :meth:`~Tensor.tolist`.
This operation is not differentiable.
@@ -2107,6 +2107,26 @@ def callable(a, b) -> number
In-place version of :meth:`~Tensor.tanh`
""")
+add_docstr_all('tolist',
+ r""""
+tolist() -> list or number
+
+Returns the tensor as a (nested) list. For scalars, a standard
+Python number is returned, just like with :meth:`~Tensor.item`.
+Tensors are automatically moved to the CPU first if necessary.
+
+This operation is not differentiable.
+
+Examples::
+
+ >>> a = torch.randn(2, 2)
+ >>> a.tolist()
+ [[0.012766935862600803, 0.5415473580360413],
+ [-0.08909505605697632, 0.7729271650314331]]
+ >>> a[0,0].tolist()
+ 0.012766935862600803
+""")
+
add_docstr_all('topk',
r"""
topk(k, dim=None, largest=True, sorted=True) -> (Tensor, LongTensor)
| [PyTorch] tensor.tolist has no docstring
| 2018-05-02T17:47:30 |
||
pytorch/pytorch | 7,189 | pytorch__pytorch-7189 | [
"7175"
] | 88a705555ae88d2f3302b6227010b82a857b8eb1 | diff --git a/torch/_tensor_str.py b/torch/_tensor_str.py
--- a/torch/_tensor_str.py
+++ b/torch/_tensor_str.py
@@ -168,7 +168,7 @@ def _vector_str(self, indent, fmt, scale, sz, summarize):
[' ...'] +
[fmt(val / scale) for val in self[-PRINT_OPTS.edgeitems:].tolist()])
else:
- data = [fmt(val) for val in self.tolist()]
+ data = [fmt(val / scale) for val in self.tolist()]
data_lines = [data[i:i + elements_per_line] for i in range(0, len(data), elements_per_line)]
lines = [', '.join(line) for line in data_lines]
| repr of tensor doesn't rescale values
Printing a float tensor with values < 1 factors out a common multiplicative power.
But in some cases it forgets to rescale the printed values to account for the factored-out scale:
```python
torch.tensor(0.05524)
# this is right
# tensor(1.00000e-02 *
# 5.5240)
# but
torch.tensor([0.05524])
# prints
# tensor(1.00000e-02 *
# [ 0.0552])
```
Also, I think we should be a bit stricter about the cases in which we print the common multiplicative factor (in those cases we are more verbose and still lose information). NumPy doesn't factor out the cases above:
```python
np.array(0.05224)
# prints array(0.05224)
np.array([0.05224])
# prints array([0.05224])
```
I'm using `torch.__version__ = '0.5.0a0+4caea64'`
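For reference, a rough before/after of what the one-line change in the patch above does for the vector case (exact spacing of the repr may differ):
```python
import torch

torch.tensor([0.05524])
# before the fix:            after the fix:
# tensor(1.00000e-02 *       tensor(1.00000e-02 *
#        [ 0.0552])                 [ 5.5240])
```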
| CC @li-roy | 2018-05-02T18:45:33 |
|
pytorch/pytorch | 7,254 | pytorch__pytorch-7254 | [
"2591"
] | 833b1e6c74c5945883cb0622236515b5c83a288a | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -1143,6 +1143,13 @@ def add_docstr_all(method, docstr):
f(x) = \\dfrac{1}{x \\sigma \\sqrt{2\\pi}}\ e^{-\\dfrac{(\\ln x - \\mu)^2}{2\\sigma^2}}
""")
+add_docstr_all('logsumexp',
+ r"""
+logsumexp(dim, keepdim=False) -> Tensor
+
+See :func:`torch.logsumexp`
+""")
+
add_docstr_all('lt',
r"""
lt(other) -> Tensor
diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -2154,6 +2154,36 @@ def parse_kwargs(desc):
tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])
""".format(**factory_common_args))
+add_docstr(torch.logsumexp,
+ r"""
+logsumexp(input, dim, keepdim=False, out=None)
+
+Returns the log of summed exponentials of each row of the :attr:`input`
+tensor in the given dimension :attr:`dim`. The computation is numerically
+stabilized.
+
+For summation index :math:`j` given by `dim` and other indices :math:`i`, the result is
+
+ :math:`\text{logsumexp}(x)_{i} = \log \sum_j \exp(x_ij).`
+
+If :attr:`keepdim` is ``True``, the output tensor is of the same size
+as :attr:`input` except in the dimension :attr:`dim` where it is of size 1.
+Otherwise, :attr:`dim` is squeezed (see :func:`torch.squeeze`), resulting in
+the output tensor having 1 fewer dimension than :attr:`input`.
+
+Args:
+ input (Tensor): the input tensor
+ dim (int or tuple of ints): the dimension or dimensions to reduce
+ keepdim (bool): whether the output tensor has :attr:`dim` retained or not
+ out (Tensor, optional): the output tensor
+
+
+Example::
+ >>> a = torch.randn(3, 3)
+ >>> torch.logsumexp(a, 1)
+ tensor([ 0.8442, 1.4322, 0.8711])
+""")
+
add_docstr(torch.lt,
r"""
lt(input, other, out=None) -> Tensor
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -2738,6 +2738,7 @@ class dont_convert(tuple):
('addcdiv', (), (0.5, (S, S, 1), (1, S)), 'scalar_scale_broadcast_lhs'),
('zero_', (S, S, S), NO_ARGS),
('zero_', (), NO_ARGS, 'scalar'),
+ ('logsumexp', (S, S), (1,)),
('norm', (S, S), (2,)),
('norm', (S, S), (0,), '0'),
('norm', (S, S), (0.5,), '0_5'),
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -766,6 +766,17 @@ def test_multidim(x, dim):
def test_dim_reduction(self):
self._test_dim_reduction(self, lambda t: t)
+ @unittest.skipIf(not TEST_SCIPY, "Scipy not found")
+ def test_logsumexp(self):
+ from scipy.special import logsumexp
+ a = torch.randn(5, 4)
+ a[0, 0] = float('inf')
+ a[1, :] = float('-inf')
+ actual = a.logsumexp(1)
+ expected = logsumexp(a.numpy(), 1)
+ self.assertEqual(expected.shape, actual.shape)
+ self.assertTrue(np.allclose(expected, actual.numpy()))
+
@unittest.skipIf(not TEST_NUMPY, "Numpy not found")
def test_cpu_parallel(self):
# To use parallel branches we'll need to compare on tensors
| Feature request: logsumexp
The numerically stable version of `logsumexp` is simple but often useful. Thoughts on including this in PyTorch?
My thought is that it'd follow a call signature similar to that of `sum`, but `logsumexp` wouldn't have the `out` keyword, and it'd use the NumPy-consistent `keepdims` keyword instead of `sum`'s `keepdim` keyword. So
```
def logsumexp(inputs, dim=None, keepdims=False):
...
```
| Just a couple of questions, might be extremely naive:
- Is this supposed to be differentiable? As in, if it is only going to be used in the `forward()` pass, then is a library function needed? I cannot think of a scenario where I might want to call `backward()` on this, so I'm asking if that is a concern.
- Why `keepdims` instead of `keepdim`? Let's keep the notation consistent across PyTorch!
Yes it should be differentiable :). Many use cases exist, one being mixture density networks.
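For example, a hedged sketch of that use case, written against the `torch.logsumexp` this PR adds: a mixture log-likelihood needs a differentiable log-sum-exp over components.
```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, requires_grad=True)   # unnormalized mixture weights
component_log_probs = torch.randn(10, 4)       # stand-in for per-component log densities of 10 samples
log_mix = torch.logsumexp(F.log_softmax(logits, dim=0) + component_log_probs, dim=1)
loss = -log_mix.mean()
loss.backward()                                # gradients flow back into the mixture weights
```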
I'm all for consistency. My best guess is that PyTorch accidentally used keepdim, but that they will strive for numpy consistency in the future. Same happened with TensorFlow. So my thought is that any new functionality should meet this goal in advance.
We used keepdim because reduction functions in pytorch only support a single dimension. There are plans/patches to change that, after which keepdims would be more appropriate.
Assuming that `log_softmax` is numerically stable, here is a one-line workaround (it relies on the identity `log_softmax(x) = x - logsumexp(x)`, so every element of `inputs - F.log_softmax(inputs)` equals `logsumexp(inputs)` and the mean over `dim` recovers it):
```python
import torch.nn.functional as F
def logsumexp(inputs, dim=None, keepdim=False):
return (inputs - F.log_softmax(inputs)).mean(dim, keepdim=keepdim)
```
Perhaps not very efficient for large sums, but it delegates everything to `log_softmax` and replicates its behavior (#1020). Having a native implementation down the line would be nice.
Here is an implementation that we wrote, which supports both variable and tensor inputs:
```python
import math
from numbers import Number

import torch

def log_sum_exp(value, dim=None, keepdim=False):
"""Numerically stable implementation of the operation
value.exp().sum(dim, keepdim).log()
"""
# TODO: torch.max(value, dim=None) threw an error at time of writing
if dim is not None:
m, _ = torch.max(value, dim=dim, keepdim=True)
value0 = value - m
if keepdim is False:
m = m.squeeze(dim)
return m + torch.log(torch.sum(torch.exp(value0),
dim=dim, keepdim=keepdim))
else:
m = torch.max(value)
sum_exp = torch.sum(torch.exp(value - m))
if isinstance(sum_exp, Number):
return m + math.log(sum_exp)
else:
return m + torch.log(sum_exp)
```
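A quick sanity check for the helper above (assuming it is in scope), comparing against the naive formula on well-behaved input:
```python
import torch

x = torch.randn(3, 5)
# On moderate values the unstable formula agrees with the stable helper.
assert torch.allclose(log_sum_exp(x, dim=1), x.exp().sum(1).log())
```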
I agree that an implementation in C would be very helpful!
I'm looking for this function as well. I'm implementing the CTC loss, and need to add lots of log probabilities.
+1. Additionally, I have a situation where I need row-wise logsumexp on a sparse matrix (treating unstored entries as -inf rather than zero), and I can't find any way to do this efficiently on the GPU.
Throwing my version here too. Not sure if there are any tradeoffs with the version above, though the name is consistent with scipy's `logsumexp`, it's a bit shorter, and `dim=None` works.
```
def logsumexp(inputs, dim=None, keepdim=False):
"""Numerically stable logsumexp.
Args:
inputs: A Variable with any shape.
dim: An integer.
keepdim: A boolean.
Returns:
Equivalent of log(sum(exp(inputs), dim=dim, keepdim=keepdim)).
"""
# For a 1-D array x (any array along a single dimension),
# log sum exp(x) = s + log sum exp(x - s)
# with s = max(x) being a common choice.
if dim is None:
inputs = inputs.view(-1)
dim = 0
s, _ = torch.max(inputs, dim=dim, keepdim=True)
outputs = s + (inputs - s).exp().sum(dim=dim, keepdim=True).log()
if not keepdim:
outputs = outputs.squeeze(dim)
return outputs
```
Yet another version that handles `inf` and `-inf` cases properly so that no `nan` appears. Requires `torch.where` from nightly build or work-around for version 0.3:
```python
# https://github.com/pytorch/pytorch/issues/2591
import torch
def logsumexp(x, dim=None, keepdim=False):
if dim is None:
x, dim = x.view(-1), 0
xm, _ = torch.max(x, dim, keepdim=True)
x = torch.where(
(xm == float('inf')) | (xm == float('-inf')),
xm,
xm + torch.log(torch.sum(torch.exp(x - xm), dim, keepdim=True)))
return x if keepdim else x.squeeze(dim)
```
A possible `torch.where` work-around for version 0.3 (note: it does not handle broadcasting):
```python
def where(cond, xt, xf):
ret = torch.zeros_like(xt)
ret[cond] = xt[cond]
ret[cond ^ 1] = xf[cond ^ 1]
return ret
```
| 2018-05-03T18:31:39 |
pytorch/pytorch | 7,298 | pytorch__pytorch-7298 | [
"7261"
] | 833b1e6c74c5945883cb0622236515b5c83a288a | diff --git a/torch/nn/utils/spectral_norm.py b/torch/nn/utils/spectral_norm.py
--- a/torch/nn/utils/spectral_norm.py
+++ b/torch/nn/utils/spectral_norm.py
@@ -14,32 +14,34 @@ def __init__(self, name='weight', n_power_iterations=1, eps=1e-12):
self.eps = eps
def compute_weight(self, module):
- weight = module._parameters[self.name + '_org']
- u = module._buffers[self.name + '_u']
+ weight = getattr(module, self.name + '_org')
+ u = getattr(module, self.name + '_u')
height = weight.size(0)
weight_mat = weight.view(height, -1)
- for _ in range(self.n_power_iterations):
- # Spectral norm of weight equals to `u^T W v`, where `u` and `v`
- # are the first left and right singular vectors.
- # This power iteration produces approximations of `u` and `v`.
- v = normalize(torch.matmul(weight_mat.t(), u), dim=0, eps=self.eps)
- u = normalize(torch.matmul(weight_mat, v), dim=0, eps=self.eps)
-
- sigma = torch.dot(u, torch.matmul(weight_mat, v))
- weight.data /= sigma
+ with torch.no_grad():
+ for _ in range(self.n_power_iterations):
+ # Spectral norm of weight equals to `u^T W v`, where `u` and `v`
+ # are the first left and right singular vectors.
+ # This power iteration produces approximations of `u` and `v`.
+ v = normalize(torch.matmul(weight_mat.t(), u), dim=0, eps=self.eps)
+ u = normalize(torch.matmul(weight_mat, v), dim=0, eps=self.eps)
+
+ sigma = torch.dot(u, torch.matmul(weight_mat, v))
+ weight = weight / sigma
return weight, u
def remove(self, module):
weight = module._parameters[self.name + '_org']
- del module._parameters[self.name]
- del module._buffers[self.name + '_u']
- del module._parameters[self.name + '_org']
+ delattr(module, self.name)
+ delattr(module, self.name + '_u')
+ delattr(module, self.name + '_org')
module.register_parameter(self.name, weight)
def __call__(self, module, inputs):
weight, u = self.compute_weight(module)
setattr(module, self.name, weight)
- setattr(module, self.name + '_u', u)
+ with torch.no_grad():
+ getattr(module, self.name).copy_(weight)
@staticmethod
def apply(module, name, n_power_iterations, eps):
@@ -48,7 +50,9 @@ def apply(module, name, n_power_iterations, eps):
height = weight.size(0)
u = normalize(weight.new_empty(height).normal_(0, 1), dim=0, eps=fn.eps)
+ delattr(module, fn.name)
module.register_parameter(fn.name + "_org", weight)
+ module.register_buffer(fn.name, weight)
module.register_buffer(fn.name + "_u", u)
module.register_forward_pre_hook(fn)
| Possible memory leak in spectral_norm
I wanted to give a try to spectral_norm function recently included in master through commit ba046331e8cfae7c93a86a5664fcb5c25f9dbee0
I observe that the same model with spectral_norm() around its Conv2d & Linear layers keeps increasing its GPU memory usage until an OOM exception.
Removing the spectral_norm() calls makes everything run smoothly.
- PyTorch: master
- How you installed PyTorch (conda, pip, source): source
- Build command you used (if compiling from source): python setup.py install
- OS: CentOS Linux release 7.4.1708 (Core)
- PyTorch version: master c96f2624a280ae3e2d9195c35150f6aec85f6a02
- Python version: 3.6.5
- CUDA/cuDNN version: 9.1 / 7.1.3
- GPU models and configuration: V100 x 4
- GCC version (if compiling from source): 4.8
- CMake version: 3.5
| cc @crcrpar
I'm sorry to hear that, but I have used my implementation on v0.3.1 without such problems.
Of course, the implementation I have used is a little different from the one on master due to the `Tensor-Variable` merge.
- ubuntu 16.04
- cuda 9.0
- cudnn 7.0
- how i installed: pip
@mfuntowicz can you try to come up with a small self-contained script to repro the issue?
Please find the script below. The discriminator network is the same one I'm using in my project.
On the same machine described at the beginning of the issue, the GPU memory increases by ~2 MB per epoch (per nvidia-smi). If you let this script run long enough, it runs out of memory.
Removing the spectral_norm() calls in _ResidualDownSamplingBlock stabilizes memory consumption.
Let me know if I can do anything else.
Morgan
```python
from argparse import ArgumentParser
import torch
import torch.backends.cudnn as cudnn
import torch.cuda as cuda
from torch.nn import Sequential, Conv2d, ReLU, Linear, Module, AvgPool2d, BatchNorm2d
from torch.nn.functional import binary_cross_entropy_with_logits, avg_pool2d
from torch.nn.utils import spectral_norm
from torch.optim import Adam
__author__ = 'Morgan Funtowicz'
class Flatten(Module):
def forward(self, x):
x = x.view(x.size(0), -1)
return x
class _ResidualDownSamplingBlock(Module):
def __init__(self, n_in, n_out, ksize, stride=1, padding=1):
super().__init__()
self._f = Sequential(
ReLU(),
spectral_norm(Conv2d(n_in, n_out, ksize, stride, padding)),
ReLU(True),
spectral_norm(Conv2d(n_out, n_out, ksize, stride, padding)),
AvgPool2d(2, 2)
)
self._sc = spectral_norm(Conv2d(n_in, n_out, 1, padding=0))
def forward(self, x):
return avg_pool2d(self._sc(x), 2, 2) + self._f(x)
if __name__ == '__main__':
# Ensure Tensor are allocated as FloatTensor
cudnn.benchmark = True
torch.set_default_tensor_type('torch.FloatTensor')
torch.set_default_dtype(torch.float32)
# Parse provided arguments
args_parser = ArgumentParser()
args_parser.add_argument('-d', type=int, default=-1, dest='device', help='Device to use for training (-1 = CPU)')
    args_parser.add_argument('-b', type=int, default=16, dest='batch', help='Size of the minibatch')
args = args_parser.parse_args()
args.gpu = args.device >= 0 and cuda.is_available()
device = torch.device("cuda:%d" % args.device if args.gpu else "cpu")
# Define the model & Optimizer
model = Sequential(
_ResidualDownSamplingBlock(3, 64, ksize=3),
_ResidualDownSamplingBlock(64, 64, ksize=3),
_ResidualDownSamplingBlock(64, 128, ksize=3),
_ResidualDownSamplingBlock(128, 128, ksize=3),
_ResidualDownSamplingBlock(128, 128, ksize=3),
_ResidualDownSamplingBlock(128, 64, ksize=3),
BatchNorm2d(64), ReLU(True), Flatten(),
spectral_norm(Linear(256, 1))
).to(device)
opt = Adam(model.parameters())
# Train
for epoch in range(20000):
print('Starting epoch %d' % epoch)
x, y = torch.randn((args.batch, 3, 128, 128), device=device), torch.rand((args.batch, 1), device=device)
y_hat = model(x)
opt.zero_grad()
loss = binary_cross_entropy_with_logits(y_hat, y)
loss.backward()
opt.step()
```
I think I found a couple of issues with the code and I think I even have a fix. The main thing is that it tries to be too clever with the `_u`. I don't think you actually need gradients for `_u`.
Does the code actually work as expected in terms of calculation?
My (limited) understanding of the spectral norm is that you want the gradient to propagate all the way to the original parameter. The [division by sigma (with .data)](https://github.com/pytorch/pytorch/blob/master/torch/nn/utils/spectral_norm.py#L29) looks wrong to me in this respect.
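A condensed sketch of what the patch above ends up doing: run the power iteration under `no_grad`, then divide the weight out of place so the gradient still reaches the original parameter:
```python
import torch
from torch.nn.functional import normalize

def spectral_norm_weight(weight, u, n_power_iterations=1, eps=1e-12):
    weight_mat = weight.view(weight.size(0), -1)
    with torch.no_grad():
        # u and v never need gradients; they only estimate the leading singular vectors.
        for _ in range(n_power_iterations):
            v = normalize(torch.matmul(weight_mat.t(), u), dim=0, eps=eps)
            u = normalize(torch.matmul(weight_mat, v), dim=0, eps=eps)
    sigma = torch.dot(u, torch.matmul(weight_mat, v))
    # Out-of-place division keeps the autograd path to the original weight intact.
    return weight / sigma, u
```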
I'll double check that I have a patch in ~4 hours or so.
| 2018-05-04T19:02:13 |
|
pytorch/pytorch | 7,319 | pytorch__pytorch-7319 | [
"7221"
] | 94b74d2068d9333872caf82f034144cb20dde7a0 | diff --git a/torch/nn/utils/rnn.py b/torch/nn/utils/rnn.py
--- a/torch/nn/utils/rnn.py
+++ b/torch/nn/utils/rnn.py
@@ -85,6 +85,23 @@ def byte(self):
r"""Returns copy with `self.data` cast to byte type"""
return type(self)(self.data.byte(), self.batch_sizes)
+ def to(self, *args, **kwargs):
+ r"""Performs dtype and/or device conversion on `self.data`.
+
+ It has similar signature as :meth:`torch.Tensor.to`.
+
+ .. note::
+
+ If the ``self.data`` Tensor already has the correct :class:`torch.dtype`
+ and :class:`torch.device`, then ``self`` is returned.
+ Otherwise, returns a copy with the desired configuration.
+ """
+ data = self.data.to(*args, **kwargs)
+ if data is self.data:
+ return self
+ else:
+ return type(self)(data, self.batch_sizes)
+
@property
def is_cuda(self):
r"""Returns true if `self.data` stored on a gpu"""
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -161,6 +161,24 @@ def err_fn():
ref_output = torch.cat([no_extra_pad, extra_pad], 0)
self.assertEqual(unpacked, ref_output)
+ def test_to(self):
+ padded, lengths = self._padded_sequence(torch.IntTensor)
+ a = rnn_utils.pack_padded_sequence(padded, lengths).cpu()
+
+ self.assertIs(a, a.to('cpu'))
+ self.assertIs(a, a.to('cpu', dtype=torch.int32))
+ self.assertEqual(a.long(), a.to(torch.int64))
+
+ if torch.cuda.is_available():
+ for cuda in ['cuda', 'cuda:0' if torch.cuda.device_count() == 1 else 'cuda:1']:
+ b = a.cuda(device=cuda)
+ self.assertIs(b, b.to(cuda))
+ self.assertEqual(a, b.to('cpu'))
+ self.assertEqual(b, a.to(cuda))
+ self.assertEqual(a, b.to('cpu', dtype=torch.int32))
+ self.assertIs(b, b.to(dtype=torch.int32))
+ self.assertEqual(b.long(), b.to(dtype=torch.int64))
+
def default_tensor_type(type):
type_str = torch.typename(type)
| PackedSequence is missing `to` method
The following will fail because PackedSequence doesn't have a `to` method:
```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.nn.utils.rnn.pad_sequence(sequences).to(device)
```
So instead one has to fall back to the old-fashioned `.cuda()` call.
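A workaround sketch that predates the `to` method added in the patch above: rebuild the PackedSequence by hand with its data moved (shapes here are made up for illustration):
```python
import torch
import torch.nn.utils.rnn as rnn_utils

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
seqs = [torch.randn(3, 5), torch.randn(2, 5)]
packed = rnn_utils.pack_padded_sequence(rnn_utils.pad_sequence(seqs), [3, 2])
# PackedSequence is a namedtuple of (data, batch_sizes), so it can be reconstructed.
packed = type(packed)(packed.data.to(device), packed.batch_sizes)
```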
| 2018-05-05T13:56:46 |
|
pytorch/pytorch | 7,381 | pytorch__pytorch-7381 | [
"7266"
] | 67e7c244794c77d5d415c73d6cc077bf6a67e6e3 | diff --git a/torch/nn/modules/batchnorm.py b/torch/nn/modules/batchnorm.py
--- a/torch/nn/modules/batchnorm.py
+++ b/torch/nn/modules/batchnorm.py
@@ -45,7 +45,7 @@ def reset_parameters(self):
self.bias.data.zero_()
def _check_input_dim(self, input):
- return NotImplemented
+ raise NotImplementedError
def forward(self, input):
self._check_input_dim(input)
diff --git a/torch/nn/modules/instancenorm.py b/torch/nn/modules/instancenorm.py
--- a/torch/nn/modules/instancenorm.py
+++ b/torch/nn/modules/instancenorm.py
@@ -9,7 +9,7 @@ def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=False,
num_features, eps, momentum, affine, track_running_stats)
def _check_input_dim(self, input):
- return NotImplemented
+ raise NotImplementedError
def _load_from_state_dict(self, state_dict, prefix, strict, missing_keys, unexpected_keys, error_msgs):
try:
| return NotImplemented
This is minor and not urgent, but in batch normalization: https://github.com/pytorch/pytorch/blob/76d3c30783d3e808b070f8350d9f102bb2396944/torch/nn/modules/batchnorm.py#L48
It seems it should be `raise NotImplementedError` instead, according to the [python doc](https://docs.python.org/3.7/library/constants.html#NotImplemented).
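A minimal illustration of the difference, with hypothetical classes:
```python
class Base(object):
    def _check_input_dim(self, input):
        return NotImplemented        # silently hands back a constant; callers just get a value

class Fixed(object):
    def _check_input_dim(self, input):
        raise NotImplementedError    # loudly signals "subclasses must override this"

Base()._check_input_dim(None)   # no error, the input check is effectively skipped
Fixed()._check_input_dim(None)  # raises NotImplementedError
```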
| @ydawei one never uses the _BatchNorm base class. So we never ran into this :), we should correct it. | 2018-05-08T18:22:28 |
|
pytorch/pytorch | 7,383 | pytorch__pytorch-7383 | [
"7237"
] | 67e7c244794c77d5d415c73d6cc077bf6a67e6e3 | diff --git a/torch/nn/modules/loss.py b/torch/nn/modules/loss.py
--- a/torch/nn/modules/loss.py
+++ b/torch/nn/modules/loss.py
@@ -912,7 +912,7 @@ class MultiMarginLoss(_WeightedLoss):
The loss function then becomes:
.. math::
- \text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] - x[i]))^p)}{\text{x.size}(0)}
+ \text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p)}{\text{x.size}(0)}
Args:
p (int, optional): Has a default value of `1`. `1` and `2` are the only
| Minor error in nn.MultiMarginLoss documentation
Hello,
In torch.nn.MultiMarginLoss in the Pytorch documentation : https://pytorch.org/docs/stable/nn.html?highlight=multimarginloss#torch.nn.MultiMarginLoss.
The loss in the case where there are weights is:
\[\text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] - x[i]))^p)}{\text{x.size}(0)}\]
and should be
\[\text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p)}{\text{x.size}(0)}\]
The + has change to a - between the loss without weight and with weight , which is probably not what is coded.
| 2018-05-08T18:28:13 |
||
pytorch/pytorch | 7,388 | pytorch__pytorch-7388 | [
"7327"
] | cf9913d5690122013a93ae94ff5f0705275febf9 | diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -407,7 +407,7 @@ def max_unpool1d(input, indices, kernel_size, stride=None, padding=0,
See :class:`~torch.nn.MaxUnpool1d` for details.
"""
kernel_size = _single(kernel_size)
- stride = _single(stride)
+ stride = _single(stride or kernel_size)
padding = _single(padding)
output_size = _unpool_output_size(input, kernel_size, stride, padding,
output_size)
@@ -421,7 +421,7 @@ def max_unpool2d(input, indices, kernel_size, stride=None, padding=0,
See :class:`~torch.nn.MaxUnpool2d` for details.
"""
kernel_size = _pair(kernel_size)
- stride = _pair(stride)
+ stride = _pair(stride or kernel_size)
padding = _pair(padding)
output_size = _unpool_output_size(input, kernel_size, stride, padding,
output_size)
@@ -435,7 +435,7 @@ def max_unpool3d(input, indices, kernel_size, stride=None, padding=0,
See :class:`~torch.nn.MaxUnpool3d` for details.
"""
kernel_size = _triple(kernel_size)
- stride = _triple(stride)
+ stride = _triple(stride or kernel_size)
padding = _triple(padding)
output_size = _unpool_output_size(input, kernel_size, stride, padding,
output_size)
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -5018,6 +5018,19 @@ def test_eye_only_works_on_2d_inputs(self):
tensor = self._create_random_nd_tensor(dims, size_min=1, size_max=3, as_variable=as_variable)
init.eye_(tensor)
+ def test_max_unpool(self):
+ # Test 1D
+ output, indices = F.max_pool1d(torch.randn([1, 1, 4]), 2, stride=2, return_indices=True)
+ self.assertEqual(F.max_unpool1d(output, indices, 2), F.max_unpool1d(output, indices, 2, stride=2))
+
+ # Test 2D
+ output, indices = F.max_pool2d(torch.randn([1, 1, 4, 4]), 2, stride=2, return_indices=True)
+ self.assertEqual(F.max_unpool2d(output, indices, 2), F.max_unpool2d(output, indices, 2, stride=2))
+
+ # Test 3D
+ output, indices = F.max_pool3d(torch.randn([4, 4, 4, 4, 4]), 2, stride=2, return_indices=True)
+ self.assertEqual(F.max_unpool3d(output, indices, 2), F.max_unpool3d(output, indices, 2, stride=2))
+
def test_dirac_properties(self):
for as_variable in [True, False]:
for dims in [3, 4, 5]:
| Possible error in max_unpool1d in module Functional
Hi,
```
def max_unpool1d(input, indices, kernel_size, stride=None, padding=0,
output_size=None):
r"""Computes a partial inverse of :class:`MaxPool1d`.
See :class:`~torch.nn.MaxUnpool1d` for details.
"""
kernel_size = _single(kernel_size)
stride = _single(stride)
padding = _single(padding)
output_size = _unpool_output_size(input, kernel_size, stride, padding,
output_size)
return torch._C._nn.max_unpool2d(input.unsqueeze(3), indices.unsqueeze(3), output_size + [1]).squeeze(3)
.....
```
The line `stride = _single(stride)` should probably read `stride = _single(stride or kernel_size)`,
similar to the code in MaxUnpool1d in the nn module. Otherwise one gets an error message like
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
if the "stride" argument is omitted when called.
Gordon
| Thanks for the report! That makes sense, would you like to submit a PR with a fix? | 2018-05-08T20:00:07 |
pytorch/pytorch | 7,392 | pytorch__pytorch-7392 | [
"7320"
] | 64834f6fb8055cf2d633353992288a8823b915ed | diff --git a/aten/src/ATen/gen.py b/aten/src/ATen/gen.py
--- a/aten/src/ATen/gen.py
+++ b/aten/src/ATen/gen.py
@@ -141,7 +141,7 @@ def check_all_files_written(self):
},
'CUDAGenerator.h': {
'name': 'CUDA',
- 'th_generator': 'THCGenerator * generator;',
+ 'th_generator': '',
'header': 'THC/THC.h'
},
}
| Pollution of GPU-0 memory
## Issue description
Copying a tensor to a GPU other than 0 uses up memory on GPU 0 too.
## Code example
```
import torch
a = torch.zeros(1).cuda(1)
```
nvidia-smi now shows:
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     93271      C   python                                      306MiB  |
|    1     93271      C   python                                      306MiB  |
+-----------------------------------------------------------------------------+
## System Info
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 8.0.61
OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 8.0.61
GPU models and configuration:
GPU 0: Tesla K80
GPU 1: Tesla K80
GPU 2: Tesla K80
GPU 3: Tesla K80
Nvidia driver version: 367.48
cuDNN version: Probably one of the following:
/usr/local/cuda-8.0/lib64/libcudnn.so.5.1.10
/usr/local/cuda-8.0/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip3] numpy (1.14.3)
[pip3] torch (0.4.0)
[pip3] torchvision (0.2.1)
[conda] Could not collect
| I remember that we always create a context on the first visible GPU. Not sure if that is the correct behavior.
We really shouldn't, but that might be the case because we call some driver functions to do initialization before we switch to the desired GPU. A fix might be complicated.
This:
```
import torch
a = torch.zeros(1).cuda(1)
```
works nicely in 0.3.1 (memory is allocated only on GPU 1).
The problem is the `THCRandom_getGenerator` call:
https://github.com/pytorch/pytorch/blob/71626491c40c86eb1ee410e695c467486084144c/aten/src/ATen/CUDAGenerator.cpp#L17-L23
via the call chain:
```
#1 0x00007fff622869eb in at::CUDAGenerator::CUDAGenerator(at::Context*) () from /data/users/sgross/pytorch/torch/lib/libATen.so
#2 0x00007fff62286d98 in at::Context::doInitCUDA() () from /data/users/sgross/pytorch/torch/lib/libATen.so
```
That call looks incorrect: the `THCGenerator*` is per-device but we have one global default `CUDAGenerator`. It's also not really used, since the actual THC bindings look like:
```
auto generator_ = check_generator<CUDAGenerator>(generator, &context->defaultGenerator(backend()));
(void) generator_; //silence unused warning
``` | 2018-05-08T21:33:26 |
|
pytorch/pytorch | 7,407 | pytorch__pytorch-7407 | [
"7274"
] | 8dbeffab07fc7ddff3b0088df8a145cf7518b27c | diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py
--- a/torch/nn/parallel/data_parallel.py
+++ b/torch/nn/parallel/data_parallel.py
@@ -31,6 +31,9 @@ class DataParallel(Module):
device_ids: CUDA devices (default: all devices)
output_device: device location of output (default: device_ids[0])
+ Attributes:
+ module (Module): the module to be parallelized
+
Example::
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
diff --git a/torch/nn/parallel/distributed.py b/torch/nn/parallel/distributed.py
--- a/torch/nn/parallel/distributed.py
+++ b/torch/nn/parallel/distributed.py
@@ -72,6 +72,9 @@ class DistributedDataParallel(Module):
device_ids: CUDA devices (default: all devices)
output_device: device location of output (default: device_ids[0])
+ Attributes:
+ module (Module): the module to be parallelized
+
Example::
>>> torch.distributed.init_process_group(world_size=4, init_method='...')
| [feature request] make DataParallel and DistributedDataParallel's module as public
After training the DataParallel or DistributedDataParallel's model, we may resume the trained model, however [resume the parallel model while not unpack it](https://github.com/pytorch/examples/blob/83f1b5c2667c820b672852d8a5b6971835e6406f/imagenet/main.py#L104) is not a good idea while DataParallel and DistributedDataParallel are just used to accelerate training via parallel. So we would like to unpack them first and save the raw model sometimes.
However, although the `module` is public in the source code, this is not explicit in the doc. If we only reference the explicit and public code, we have to unpack the model like `a = list(b.modules())[0]` while `b` is the paralleled model, it is ugly. So I advice that make DataParallel and DistributedDataParallel's module as public so that we can unpack it comfortably.
| what do you mean by "make it public", I am not sure I understand.
Tell everyone in the doc that `module` is a public member of `DataParallel` or `DistributedDataParallel` that we can use this API .
You can safely access `.module`, we will never change that. If you'd like this clarified, please send a PR with a doc fix!
OK, please wait for a few days because I donβt have a computer at hand. | 2018-05-09T03:09:46 |
|
pytorch/pytorch | 7,537 | pytorch__pytorch-7537 | [
"7464"
] | a3b2877810f503adbdd2c69ad9a5c3475fb4086a | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -1684,8 +1684,8 @@ def callable(a, b) -> number
This is the reverse operation of the manner described in :meth:`~Tensor.gather`.
:attr:`self`, :attr:`index` and :attr:`src` should have same number of
-dimensions. It is also required that `index->size[d] <= src->size[d]` for all
-dimension `d`, and that `index->size[d] <= real->size[d]` for all dimensions
+dimensions. It is also required that `index.size(d) <= src.size(d)` for all
+dimensions `d`, and that `index.size(d) <= self.size(d)` for all dimensions
`d != dim`.
Moreover, as for :meth:`~Tensor.gather`, the values of :attr:`index` must be
diff --git a/torch/nn/modules/loss.py b/torch/nn/modules/loss.py
--- a/torch/nn/modules/loss.py
+++ b/torch/nn/modules/loss.py
@@ -262,20 +262,22 @@ class KLDivLoss(_Loss):
and is often useful when performing direct regression over the space of
(discretely sampled) continuous output distributions.
- As with `NLLLoss`, the `input` given is expected to contain
- *log-probabilities*, however unlike `ClassNLLLoss`, `input` is not
+ As with :class:`~torch.nn.NLLLoss`, the `input` given is expected to contain
+ *log-probabilities*. However, unlike :class:`~torch.nn.NLLLoss`, `input` is not
restricted to a 2D Tensor, because the criterion is applied element-wise.
+ The targets are given as *probabilities* (i.e. without taking the logarithm).
This criterion expects a `target` `Tensor` of the same size as the
`input` `Tensor`.
- The loss can be described as:
+ The unreduced (i.e. with :attr:`reduce` set to ``False``) loss can be described as:
.. math::
- \ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad
- l_n = y_n \odot \left( \log y_n - x_n \right),
+ l(x,y) = L := \{ l_1,\dots,l_N \}, \quad
+ l_n = y_n \cdot \left( \log y_n - x_n \right),
- where :math:`N` is the batch size. If reduce is ``True``, then:
+ where the index :math:`N` spans all dimensions of ``input`` and :math:`L` has the same
+ shape as ``input``. If :attr:`reduce` is ``True`` (the default), then:
.. math::
\ell(x, y) = \begin{cases}
@@ -309,7 +311,7 @@ class KLDivLoss(_Loss):
Args:
- size_average (bool, optional: By default, the losses are averaged
+ size_average (bool, optional): By default, the losses are averaged
for each minibatch over observations **as well as** over
dimensions. However, if ``False`` the losses are instead summed.
reduce (bool, optional): By default, the losses are averaged
@@ -321,8 +323,8 @@ class KLDivLoss(_Loss):
- input: :math:`(N, *)` where `*` means, any number of additional
dimensions
- target: :math:`(N, *)`, same shape as the input
- - output: scalar. If `reduce` is ``True``, then :math:`(N, *)`,
- same shape as the input
+ - output: scalar by default. If `reduce` is ``False``, then :math:`(N, *)`,
+ the same shape as the input
"""
def __init__(self, size_average=True, reduce=True):
@@ -525,7 +527,7 @@ class HingeEmbeddingLoss(_Loss):
dissimilar, e.g. using the L1 pairwise distance as `x`, and is typically
used for learning nonlinear embeddings or semi-supervised learning::
- The loss function for :math:`n`-th sample in the mini-batch is:
+ The loss function for :math:`n`-th sample in the mini-batch is
.. math::
l_n = \begin{cases}
| kldivloss doc issues
- not clearly stated that `y` should not be log probabilities
- `l(x,y)` is defined both as `L` and as `mean(L)`
- not defined that `l_n`, `y_n` and `x_n` are each tensors, associated with a single data example
(I basically had to search through discuss.pytorch.org for examples, in order to use it). Actually, in fact, what I ended up doing was write my own kl-divergence implementation, and comparing the output with that of kldivloss, until the kldivloss one matched the self-coded one :P
| 2018-05-13T14:59:13 |
||
pytorch/pytorch | 7,538 | pytorch__pytorch-7538 | [
"7532"
] | 1ce5431aaf5318e3f707961337e17924515d02e3 | diff --git a/torch/distributions/uniform.py b/torch/distributions/uniform.py
--- a/torch/distributions/uniform.py
+++ b/torch/distributions/uniform.py
@@ -71,7 +71,7 @@ def cdf(self, value):
if self._validate_args:
self._validate_sample(value)
result = (value - self.low) / (self.high - self.low)
- return result
+ return result.clamp(min=0, max=1)
def icdf(self, value):
if self._validate_args:
| diff --git a/test/test_distributions.py b/test/test_distributions.py
--- a/test/test_distributions.py
+++ b/test/test_distributions.py
@@ -1126,6 +1126,10 @@ def test_uniform(self):
self.assertEqual(uniform.log_prob(above_high).item(), -float('inf'), allow_inf=True)
self.assertEqual(uniform.log_prob(below_low).item(), -float('inf'), allow_inf=True)
+ # check cdf computation when value outside range
+ self.assertEqual(uniform.cdf(below_low).item(), 0)
+ self.assertEqual(uniform.cdf(above_high).item(), 1)
+
set_rng_seed(1)
self._gradcheck_log_prob(Uniform, (low, high))
self._gradcheck_log_prob(Uniform, (low, 1.0))
| torch.distributions.uniform.Uniform.cdf() can return negative values or values above one
## Issue description
`torch.distributions.uniform.Uniform.cdf()` can return negative values or values above one because it does not do a range check like `log_prob()` in the same class does.
Relevant code in `cdf()`:
https://github.com/pytorch/pytorch/blob/825c3ca2d6deb505b39b6b988d28f28a7bd15f4d/torch/distributions/uniform.py#L70-L74
Range checking in `log_prob()`:
https://github.com/pytorch/pytorch/blob/825c3ca2d6deb505b39b6b988d28f28a7bd15f4d/torch/distributions/uniform.py#L63-L68
I can provide a pull request based on the range checking as done in the `log_prob()` method of the same class (although `log_prob()` throws an exception with scalar arguments because of these range checks while `cdf()` accepts scalar arguments).
## Code example
```
import torch
from torch.distributions.uniform import Uniform
uniform = Uniform(1,2)
print uniform.cdf(0), uniform.cdf(5)
```
will give
```
tensor(-1.) tensor(4.)
```
## System Info
```
Collecting environment information...
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: Could not collect
OS: Mac OSX 10.13.4
GCC version: Could not collect
CMake version: version 3.11.0
Python version: 2.7
Is CUDA available: No
CUDA runtime version: 9.1.128
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] numpy (1.14.2)
[pip] root-numpy (4.4.0)
[pip] torch (0.4.0)
[pip] torchvision (0.2.1)
[conda] Could not collect
```
| We should just clamp the cdf to the unit interval. Can you please send a PR? cc @fritzo | 2018-05-13T20:21:56 |
pytorch/pytorch | 7,632 | pytorch__pytorch-7632 | [
"7213",
"6768"
] | 4485ce66c293935d36745b9b11d8927aba16d64d | diff --git a/torch/_tensor_str.py b/torch/_tensor_str.py
--- a/torch/_tensor_str.py
+++ b/torch/_tensor_str.py
@@ -12,7 +12,6 @@ class __PrinterOptions(object):
PRINT_OPTS = __PrinterOptions()
-SCALE_FORMAT = '{:.5e} *\n'
# We could use **kwargs, but this will give better docs
@@ -65,132 +64,131 @@ def set_printoptions(
PRINT_OPTS.linewidth = linewidth
-def _get_min_log_scale():
- min_positive = float_info.min * float_info.epsilon # get smallest denormal
- if min_positive == 0: # use smallest normal if DAZ/FTZ is set
- min_positive = float_info.min
- return math.ceil(math.log(min_positive, 10))
+class _Formatter(object):
+ def __init__(self, tensor):
+ self.floating_dtype = tensor.dtype.is_floating_point
+ self.int_mode = True
+ self.sci_mode = False
+ self.max_width = 1
+ if not self.floating_dtype:
+ copy = torch.empty(tensor.size(), dtype=torch.long).copy_(tensor).view(tensor.nelement())
+ for value in copy.tolist():
+ value_str = '{}'.format(value)
+ self.max_width = max(self.max_width, len(value_str))
-def _get_format_fn(format, nonfinite_format):
- return lambda x: format.format(x) if math.isinf(x) or math.isnan(x) else nonfinite_format.format(x)
-
-
-def _number_format(tensor, min_sz=-1):
- floating_dtype = tensor.dtype.is_floating_point # save this because we cast later
- _min_log_scale = _get_min_log_scale()
- min_sz = max(min_sz, 2)
- tensor = torch.DoubleTensor(tensor.size()).copy_(tensor).abs_().view(tensor.nelement())
-
- pos_inf_mask = tensor.eq(float('inf'))
- neg_inf_mask = tensor.eq(float('-inf'))
- nan_mask = tensor.ne(tensor)
- invalid_value_mask = pos_inf_mask + neg_inf_mask + nan_mask
- if invalid_value_mask.all():
- example_value = 0
- else:
- example_value = tensor[invalid_value_mask.eq(0)][0]
- tensor[invalid_value_mask] = example_value
- if invalid_value_mask.any():
- min_sz = max(min_sz, 3)
-
- int_mode = True
- # TODO: use fmod?
- for value in tensor.tolist():
- if value != math.ceil(value):
- int_mode = False
- break
-
- exp_min = tensor.min()
- if exp_min != 0:
- exp_min = math.floor(math.log10(exp_min)) + 1
- else:
- exp_min = 1
- exp_max = tensor.max()
- if exp_max != 0:
- exp_max = math.floor(math.log10(exp_max)) + 1
- else:
- exp_max = 1
- include_decimal_int_mode = floating_dtype and int_mode
-
- scale = 1
- exp_max = int(exp_max)
- prec = PRINT_OPTS.precision
- if int_mode:
- if exp_max > prec + 1:
- format = '{{:11.{}e}}'.format(prec)
- fmt_fn = format.format
- sz = max(min_sz, 7 + prec)
- else:
- sz = max(min_sz, exp_max + 1 + include_decimal_int_mode)
- format = '{:' + str(sz) + '.0f}'
- fmt_fn = format.format
- if include_decimal_int_mode:
- format = '{:' + str(sz - 1) + '.0f}'
- nonfinite_format = format + '.'
- fmt_fn = _get_format_fn(format, nonfinite_format)
- else:
- if exp_max - exp_min > prec:
- sz = 7 + prec
- if abs(exp_max) > 99 or abs(exp_min) > 99:
- sz = sz + 1
- sz = max(min_sz, sz)
- format = '{{:{}.{}e}}'.format(sz, prec)
- fmt_fn = format.format
else:
- if exp_max > prec + 1 or exp_max < 0:
- sz = max(min_sz, 7)
- scale = math.pow(10, max(exp_max - 1, _min_log_scale))
+ copy = torch.empty(tensor.size(), dtype=torch.float64).copy_(tensor).view(tensor.nelement())
+ copy_list = copy.tolist()
+ try:
+ for value in copy_list:
+ if value != math.ceil(value):
+ self.int_mode = False
+ break
+ # nonfinites will throw errors
+ except (ValueError, OverflowError):
+ self.int_mode = False
+
+ if self.int_mode:
+ for value in copy_list:
+ value_str = '{:.0f}'.format(value)
+ if math.isnan(value) or math.isinf(value):
+ self.max_width = max(self.max_width, len(value_str))
+ else:
+ # in int_mode for floats, all numbers are integers, and we append a decimal to nonfinites
+ # to indicate that the tensor is of floating type. add 1 to the len to account for this.
+ self.max_width = max(self.max_width, len(value_str) + 1)
+
else:
- if exp_max == 0:
- sz = 7
+ copy_abs = copy.abs()
+ pos_inf_mask = copy_abs.eq(float('inf'))
+ neg_inf_mask = copy_abs.eq(float('-inf'))
+ nan_mask = copy_abs.ne(copy)
+ invalid_value_mask = pos_inf_mask + neg_inf_mask + nan_mask
+ if invalid_value_mask.all():
+ example_value = 0
+ else:
+ example_value = copy_abs[invalid_value_mask.eq(0)][0]
+ copy_abs[invalid_value_mask] = example_value
+
+ exp_min = copy_abs.min()
+ if exp_min != 0:
+ exp_min = math.floor(math.log10(exp_min)) + 1
+ else:
+ exp_min = 1
+ exp_max = copy_abs.max()
+ if exp_max != 0:
+ exp_max = math.floor(math.log10(exp_max)) + 1
else:
- sz = exp_max + 6
- sz = max(min_sz, sz)
- format = '{{:{}.{}f}}'.format(sz, prec)
- fmt_fn = format.format
- return fmt_fn, scale, sz
+ exp_max = 1
+
+ # these conditions for using scientific notation are based on numpy
+ if exp_max - exp_min > PRINT_OPTS.precision or exp_max > 8 or exp_min < -4:
+ self.sci_mode = True
+ for value in copy_list:
+ value_str = ('{{:.{}e}}').format(PRINT_OPTS.precision).format(value)
+ self.max_width = max(self.max_width, len(value_str))
+ else:
+ for value in copy_list:
+ value_str = ('{{:.{}f}}').format(PRINT_OPTS.precision).format(value)
+ self.max_width = max(self.max_width, len(value_str))
+
+ def width(self):
+ return self.max_width
+
+ def format(self, value):
+ if self.floating_dtype:
+ if self.int_mode:
+ ret = '{:.0f}'.format(value)
+ if not (math.isinf(value) or math.isnan(value)):
+ ret += '.'
+ elif self.sci_mode:
+ ret = ('{{:{}.{}e}}').format(self.max_width, PRINT_OPTS.precision).format(value)
+ else:
+ ret = ('{{:.{}f}}').format(PRINT_OPTS.precision).format(value)
+ else:
+ ret = '{}'.format(value)
+ return (self.max_width - len(ret)) * ' ' + ret
-def _scalar_str(self, fmt, scale):
- scalar_str = fmt(self.item() / scale)
- # The leading space for positives is ugly on scalars, so we strip it
- return scalar_str.lstrip()
+def _scalar_str(self, formatter):
+ return formatter.format(self.item())
-def _vector_str(self, indent, fmt, scale, sz, summarize):
- element_length = sz + 3
- elements_per_line = int(math.floor((PRINT_OPTS.linewidth - indent) / (element_length)))
+def _vector_str(self, indent, formatter, summarize):
+ # length includes spaces and comma between elements
+ element_length = formatter.width() + 2
+ elements_per_line = max(1, int(math.floor((PRINT_OPTS.linewidth - indent) / (element_length))))
char_per_line = element_length * elements_per_line
if summarize and self.size(0) > 2 * PRINT_OPTS.edgeitems:
- data = ([fmt(val / scale) for val in self[:PRINT_OPTS.edgeitems].tolist()] +
+ data = ([formatter.format(val) for val in self[:PRINT_OPTS.edgeitems].tolist()] +
[' ...'] +
- [fmt(val / scale) for val in self[-PRINT_OPTS.edgeitems:].tolist()])
+ [formatter.format(val) for val in self[-PRINT_OPTS.edgeitems:].tolist()])
else:
- data = [fmt(val / scale) for val in self.tolist()]
+ data = [formatter.format(val) for val in self.tolist()]
data_lines = [data[i:i + elements_per_line] for i in range(0, len(data), elements_per_line)]
lines = [', '.join(line) for line in data_lines]
return '[' + (',' + '\n' + ' ' * (indent + 1)).join(lines) + ']'
-def _tensor_str(self, indent, fmt, scale, sz, summarize):
+def _tensor_str(self, indent, formatter, summarize):
dim = self.dim()
if dim == 0:
- return _scalar_str(self, fmt, scale)
+ return _scalar_str(self, formatter)
if dim == 1:
- return _vector_str(self, indent, fmt, scale, sz, summarize)
+ return _vector_str(self, indent, formatter, summarize)
if summarize and self.size(0) > 2 * PRINT_OPTS.edgeitems:
- slices = ([_tensor_str(self[i], indent + 1, fmt, scale, sz, summarize)
+ slices = ([_tensor_str(self[i], indent + 1, formatter, summarize)
for i in range(0, PRINT_OPTS.edgeitems)] +
['...'] +
- [_tensor_str(self[i], indent + 1, fmt, scale, sz, summarize)
+ [_tensor_str(self[i], indent + 1, formatter, summarize)
for i in range(len(self) - PRINT_OPTS.edgeitems, len(self))])
else:
- slices = [_tensor_str(self[i], indent + 1, fmt, scale, sz, summarize) for i in range(0, self.size(0))]
+ slices = [_tensor_str(self[i], indent + 1, formatter, summarize) for i in range(0, self.size(0))]
tensor_str = (',' + '\n' * (dim - 1) + ' ' * (indent + 1)).join(slices)
return '[' + tensor_str + ']'
@@ -250,10 +248,8 @@ def _str(self):
if self.dtype != torch.get_default_dtype() and self.dtype != torch.int64:
suffix += ', dtype=' + str(self.dtype)
- fmt, scale, sz = _number_format(get_summarized_data(self) if summarize else self)
- if scale != 1:
- prefix = prefix + SCALE_FORMAT.format(scale) + ' ' * indent
- tensor_str = _tensor_str(self, indent, fmt, scale, sz, summarize)
+ formatter = _Formatter(get_summarized_data(self) if summarize else self)
+ tensor_str = _tensor_str(self, indent, formatter, summarize)
if self.grad_fn is not None:
suffix += ', grad_fn=<{}>'.format(type(self.grad_fn).__name__)
| diff --git a/test/expect/TestTorch.test_print-bigint.expect b/test/expect/TestTorch.test_print-bigint.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-bigint.expect
@@ -0,0 +1 @@
+tensor(2341234123412341)
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-default_device.expect b/test/expect/TestTorch.test_print-default_device.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-default_device.expect
@@ -0,0 +1 @@
+tensor([123])
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-default_dtype.expect b/test/expect/TestTorch.test_print-default_dtype.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-default_dtype.expect
@@ -0,0 +1,2 @@
+tensor([ 0.0000e+00, 9.8813e-324, 9.8813e-323, 1.0000e+307, 1.0000e+308,
+ inf])
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-device.expect b/test/expect/TestTorch.test_print-device.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-device.expect
@@ -0,0 +1 @@
+tensor([123], device='cuda:0')
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-dtype.expect b/test/expect/TestTorch.test_print-dtype.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-dtype.expect
@@ -0,0 +1,2 @@
+tensor([ 0.0000e+00, 9.8813e-324, 9.8813e-323, 1.0000e+307, 1.0000e+308,
+ inf], dtype=torch.float64)
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-negint.expect b/test/expect/TestTorch.test_print-negint.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-negint.expect
@@ -0,0 +1 @@
+tensor([ 1, -2])
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-nonfinite.expect b/test/expect/TestTorch.test_print-nonfinite.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-nonfinite.expect
@@ -0,0 +1 @@
+tensor([4.0000, inf, 1.5000, -inf, 0.0000, nan, 1.0000])
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-posint.expect b/test/expect/TestTorch.test_print-posint.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-posint.expect
@@ -0,0 +1 @@
+tensor([1, 2])
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-requires_grad.expect b/test/expect/TestTorch.test_print-requires_grad.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-requires_grad.expect
@@ -0,0 +1 @@
+tensor([123.], requires_grad=True)
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-scimode.expect b/test/expect/TestTorch.test_print-scimode.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-scimode.expect
@@ -0,0 +1 @@
+tensor([1.0000e+28, 1.0000e-28])
\ No newline at end of file
diff --git a/test/expect/TestTorch.test_print-summary.expect b/test/expect/TestTorch.test_print-summary.expect
new file mode 100644
--- /dev/null
+++ b/test/expect/TestTorch.test_print-summary.expect
@@ -0,0 +1 @@
+tensor([0., 0., 0., ..., 0., 0., 0.])
\ No newline at end of file
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -6659,6 +6659,7 @@ def test_from_file(self):
self.assertEqual(t1, t2, 0)
def test_print(self):
+ default_type = torch.Tensor().type()
for t in torch._tensor_classes:
if t == torch.HalfTensor:
continue # HalfTensor does not support fill
@@ -6676,13 +6677,63 @@ def test_print(self):
obj.__repr__()
str(obj)
- x = torch.Tensor([4, float('inf'), 1.5, float('-inf'), 0, float('nan'), 1])
- x.__repr__()
- str(x)
+ # test big integer
+ x = torch.tensor(2341234123412341)
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='bigint')
+
+ # test scientific notation
+ x = torch.tensor([1e28, 1e-28])
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='scimode')
+
+ # test no leading space if all elements positive
+ x = torch.tensor([1, 2])
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='posint')
+
+ # test for leading space if there are negative elements
+ x = torch.tensor([1, -2])
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='negint')
+
+ # test inf and nan
+ x = torch.tensor([4, float('inf'), 1.5, float('-inf'), 0, float('nan'), 1])
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='nonfinite')
+
+ # test dtype
+ torch.set_default_dtype(torch.float)
+ x = torch.tensor([1e-324, 1e-323, 1e-322, 1e307, 1e308, 1e309], dtype=torch.float64)
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='dtype')
+
+ # test changing default dtype
+ torch.set_default_dtype(torch.float64)
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='default_dtype')
+
+ # test summary
+ x = torch.zeros(10000)
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='summary')
+
+ # test device
+ if torch.cuda.is_available():
+ x = torch.tensor([123], device='cuda:0')
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='device')
+
+ # test changing default to cuda
+ torch.set_default_tensor_type(torch.cuda.FloatTensor)
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='default_device')
+ torch.set_default_tensor_type(default_type)
- x = torch.DoubleTensor([1e-324, 1e-323, 1e-322, 1e307, 1e308, 1e309])
- x.__repr__()
- str(x),
+ # test integral floats and requires_grad
+ x = torch.tensor([123.], requires_grad=True)
+ self.assertEqual(x.__repr__(), str(x))
+ self.assertExpected(str(x), subname='requires_grad')
def test_sizeof(self):
sizeof_empty = torch.randn(0).storage().__sizeof__()
| Get rid of scaling factor in print
Suggested in #7175. Creating new issue to track.
Tensor printing should display requires_grad
I think one or more of the following would be nice:
- tensor printing displays requires_grad
- tensor printing displays "if this tensor is a leaf node" (but find a better name for this). Most of the times when I want to know if my tensor requires_grad I want to know if it gradients will accumulate in this.
cc @li-roy @gchanan @SsnL
| 2018-05-17T00:44:17 |
|
pytorch/pytorch | 7,652 | pytorch__pytorch-7652 | [
"7627"
] | ec42a1141083f1266c079756c96df287c965b18e | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -1407,6 +1407,24 @@ def callable(a, b) -> number
See :func:`torch.ormqr`
""")
+
+add_docstr_all('permute',
+ r"""
+permute(*dims) -> Tensor
+
+Permute the dimensions of this tensor.
+
+Args:
+ *dims (int...): The desired ordering of dimensions
+
+Example:
+ >>> x = torch.randn(2, 3, 5)
+ >>> x.size()
+ torch.Size([2, 3, 5])
+ >>> x.permute(2, 0, 1).size()
+ torch.Size([5, 2, 3])
+""")
+
add_docstr_all('potrf',
r"""
potrf(upper=True) -> Tensor
| [docs] Tensor.permute docs missing
https://pytorch.org/docs/master/tensors.html?highlight=permute#torch.Tensor.permute
| I also hit this when I tried to swap the axes of a tensor. Googling turned up an older doc page [here](http://pytorch-zh.readthedocs.io/en/latest/_modules/torch/tensor.html), rendered from the source code. Hope it helps. | 2018-05-17T19:06:06 |
|
pytorch/pytorch | 7,654 | pytorch__pytorch-7654 | [
"7039"
] | ec42a1141083f1266c079756c96df287c965b18e | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -599,14 +599,14 @@ def add_docstr_all(method, docstr):
add_docstr_all('cumprod',
r"""
-cumprod(dim) -> Tensor
+cumprod(dim, dtype=None) -> Tensor
See :func:`torch.cumprod`
""")
add_docstr_all('cumsum',
r"""
-cumsum(dim) -> Tensor
+cumsum(dim, dtype=None) -> Tensor
See :func:`torch.cumsum`
""")
@@ -1444,7 +1444,7 @@ def callable(a, b) -> number
add_docstr_all('prod',
r"""
-prod(dim=None, keepdim=False) -> Tensor
+prod(dim=None, keepdim=False, dtype=None) -> Tensor
See :func:`torch.prod`
""")
@@ -1943,7 +1943,7 @@ def callable(a, b) -> number
add_docstr_all('sum',
r"""
-sum(dim=None, keepdim=False) -> Tensor
+sum(dim=None, keepdim=False, dtype=None) -> Tensor
See :func:`torch.sum`
""")
diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -20,6 +20,12 @@ def parse_kwargs(desc):
return {desc.split(' ')[0]: desc for desc in kwargs}
+reduceops_common_args = parse_kwargs("""
+ dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
+ If specified, the input tensor is casted to :attr:`dtype` before the operation
+ is performed. This is useful for preventing data type overflows. Default: None.
+""")
+
factory_common_args = parse_kwargs("""
out (Tensor, optional): the output tensor
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
@@ -902,7 +908,7 @@ def parse_kwargs(desc):
add_docstr(torch.cumprod,
r"""
-cumprod(input, dim, out=None) -> Tensor
+cumprod(input, dim, dtype=None) -> Tensor
Returns the cumulative product of elements of :attr:`input` in the dimension
:attr:`dim`.
@@ -916,7 +922,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the input tensor
dim (int): the dimension to do the operation over
- out (Tensor, optional): the output tensor
+ {dtype}
Example::
@@ -932,7 +938,7 @@ def parse_kwargs(desc):
>>> torch.cumprod(a, dim=0)
tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000,
0.0000, -0.0000, -0.0000])
-""")
+""".format(**reduceops_common_args))
add_docstr(torch.cumsum,
r"""
@@ -950,7 +956,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the input tensor
dim (int): the dimension to do the operation over
- out (Tensor, optional): the output tensor
+ {dtype}
Example::
@@ -961,7 +967,7 @@ def parse_kwargs(desc):
>>> torch.cumsum(a, dim=0)
tensor([-0.8286, -1.3175, -0.8020, 0.0423, 0.2289, 0.0537, -2.0058,
-1.8209, -2.9780, -3.4022])
-""")
+""".format(**reduceops_common_args))
add_docstr(torch.diag,
r"""
@@ -3289,12 +3295,13 @@ def parse_kwargs(desc):
add_docstr(torch.prod,
r"""
-.. function:: prod(input) -> Tensor
+.. function:: prod(input, dtype=None) -> Tensor
Returns the product of all elements in the :attr:`input` tensor.
Args:
input (Tensor): the input tensor
+ {dtype}
Example::
@@ -3304,7 +3311,7 @@ def parse_kwargs(desc):
>>> torch.prod(a)
tensor(0.6902)
-.. function:: prod(input, dim, keepdim=False, out=None) -> Tensor
+.. function:: prod(input, dim, keepdim=False, dtype=None) -> Tensor
Returns the product of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`.
@@ -3318,7 +3325,7 @@ def parse_kwargs(desc):
input (Tensor): the input tensor
dim (int): the dimension to reduce
keepdim (bool): whether the output tensor has :attr:`dim` retained or not
- out (Tensor, optional): the output tensor
+ {dtype}
Example::
@@ -3330,7 +3337,7 @@ def parse_kwargs(desc):
[ 1.1131, -1.0629]])
>>> torch.prod(a, 1)
tensor([-0.2018, -0.2962, -0.0821, -1.1831])
-""")
+""".format(**reduceops_common_args))
add_docstr(torch.pstrf, r"""
pstrf(a, upper=True, out=None) -> (Tensor, Tensor)
@@ -4131,12 +4138,13 @@ def parse_kwargs(desc):
add_docstr(torch.sum,
r"""
-.. function:: sum(input) -> Tensor
+.. function:: sum(input, dtype=None) -> Tensor
Returns the sum of all elements in the :attr:`input` tensor.
Args:
input (Tensor): the input tensor
+ {dtype}
Example::
@@ -4146,7 +4154,7 @@ def parse_kwargs(desc):
>>> torch.sum(a)
tensor(-0.5475)
-.. function:: sum(input, dim, keepdim=False, out=None) -> Tensor
+.. function:: sum(input, dim, keepdim=False, dtype=None) -> Tensor
Returns the sum of each row of the :attr:`input` tensor in the given
dimension :attr:`dim`. If :attr::`dim` is a list of dimensions,
@@ -4161,7 +4169,7 @@ def parse_kwargs(desc):
input (Tensor): the input tensor
dim (int or tuple of ints): the dimension or dimensions to reduce
keepdim (bool): whether the output tensor has :attr:`dim` retained or not
- out (Tensor, optional): the output tensor
+ {dtype}
Example::
@@ -4176,7 +4184,7 @@ def parse_kwargs(desc):
>>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
>>> torch.sum(b, (2, 1))
tensor([ 435., 1335., 2235., 3135.])
-""")
+""".format(**reduceops_common_args))
add_docstr(torch.svd,
r"""
| [docs] reduce functions take dtype args
This behavior is missing from the documentation.
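A minimal sketch of the behavior in question, assuming the `dtype` keyword documented in the patch above:
```python
import torch

x = torch.full((4,), 200, dtype=torch.uint8)
x.sum()                                      # reduces in the input dtype, which may overflow
x.sum(dtype=torch.int64)                     # cast before reducing to avoid the overflow
torch.cumsum(x, dim=0, dtype=torch.float32)
```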
| 2018-05-17T20:00:37 |
||
pytorch/pytorch | 7,657 | pytorch__pytorch-7657 | [
"7650"
] | f2295494afb1436c96eba8b904175f02c8c59488 | diff --git a/torch/onnx/symbolic.py b/torch/onnx/symbolic.py
--- a/torch/onnx/symbolic.py
+++ b/torch/onnx/symbolic.py
@@ -631,7 +631,7 @@ def abs(g, self):
def pow(g, self, exponent):
- return g.op("Pow", self, exponent)
+ return g.op("Pow", self, _if_scalar_type_as(exponent, self), **_broadcast_if_scalar(exponent))
def clamp(g, self, min, max):
| Errors exporting model from PyTorch to Caffe2
## System Info
PyTorch 0.4.0 (installed with conda)
Caffe2 0.8.dev (installed with conda)
onnx-caffe2 1.0.0 (installed with pip)
macOS 10.13.4
Python 3.6.5
no CUDA
## Issue description
I'm trying to build a model with PyTorch, export it as a Caffe2 model, then use it in a C++ program. I'm pretty sure the C++ code is correct. At any rate, it works correctly if I use a model built directly with Caffe2. But I run into various problems when using a PyTorch model. Here is the code I use to generate it:
```python
import torch
import torch.nn as nn
class Compute(nn.Module):
def forward(self, x):
return torch.sum(x**2)
x = torch.rand(10, 3)
torch.onnx.export(Compute(), x, "test.onnx", verbose=True, input_names=['positions'], output_names=['energy'])
```
I convert it to a Caffe2 model with `convert-onnx-to-caffe2`, then try to execute it in my C++ program. It fails with this error:
```
exception: [enforce fail at tensor.h:495] IsType<T>(). Tensor type mismatch, caller expects elements to be float while tensor contains long long Error from operator:
input: "positions" input: "1" output: "2" name: "" type: "Pow" device_option { device_type: 0 cuda_gpu_id: 0 }
** while accessing input: 1
```
The problem appears to be the operation `x**2`. PyTorch is recording the exponent as being a `long long`, but Caffe2 insists it must be a `float`.
As a temporary workaround, I tried eliminating the power operation by changing the line to `return torch.sum(x*x)`. With that change I can run the model, but when I query the "energy" output, it's wrong. It ought to be a scalar containing the sum of squares of all the input elements. Instead, it comes out as a (10, 3) matrix containing the square of each element. That is, the sum operation is never getting run.
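A possible user-side workaround, sketched here and not verified against this Caffe2 build, is to make the exponent a floating-point tensor of the input's dtype so the exported `Pow` constant is no longer an integer:
```python
import torch
import torch.nn as nn

class Compute(nn.Module):
    def forward(self, x):
        # float exponent instead of the Python int 2
        return torch.sum(x ** torch.tensor(2.0, dtype=x.dtype))
```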
| 2018-05-17T21:50:19 |
||
pytorch/pytorch | 7,669 | pytorch__pytorch-7669 | [
"3587"
] | 06fa332e2b285b1ac82df9200a83c22821812ab9 | diff --git a/torch/nn/modules/rnn.py b/torch/nn/modules/rnn.py
--- a/torch/nn/modules/rnn.py
+++ b/torch/nn/modules/rnn.py
@@ -278,13 +278,21 @@ class RNN(RNNBase):
Defaults to zero if not provided.
Outputs: output, h_n
- - **output** of shape `(seq_len, batch, hidden_size * num_directions)`: tensor
+ - **output** of shape `(seq_len, batch, num_directions * hidden_size)`: tensor
containing the output features (`h_k`) from the last layer of the RNN,
for each `k`. If a :class:`torch.nn.utils.rnn.PackedSequence` has
been given as the input, the output will also be a packed sequence.
+
+ For the unpacked case, the directions can be separated
+ using ``output.view(seq_len, batch, num_directions, hidden_size)``,
+ with forward and backward being direction `0` and `1` respectively.
+ Similarly, the directions can be separated in the packed case.
- **h_n** (num_layers * num_directions, batch, hidden_size): tensor
containing the hidden state for `k = seq_len`.
+ Like *output*, the layers can be separated using
+ ``h_n.view(num_layers, num_directions, batch, hidden_size)``.
+
Attributes:
weight_ih_l[k]: the learnable input-hidden weights of the k-th layer,
of shape `(hidden_size * input_size)` for `k = 0`. Otherwise, the shape is
@@ -377,12 +385,20 @@ class LSTM(RNNBase):
Outputs: output, (h_n, c_n)
- - **output** of shape `(seq_len, batch, hidden_size * num_directions)`: tensor
+ - **output** of shape `(seq_len, batch, num_directions * hidden_size)`: tensor
containing the output features `(h_t)` from the last layer of the LSTM,
for each t. If a :class:`torch.nn.utils.rnn.PackedSequence` has been
given as the input, the output will also be a packed sequence.
+
+ For the unpacked case, the directions can be separated
+ using ``output.view(seq_len, batch, num_directions, hidden_size)``,
+ with forward and backward being direction `0` and `1` respectively.
+ Similarly, the directions can be separated in the packed case.
- **h_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
- containing the hidden state for `t = seq_len`
+ containing the hidden state for `t = seq_len`.
+
+ Like *output*, the layers can be separated using
+ ``h_n.view(num_layers, num_directions, batch, hidden_size)`` and similarly for *c_n*.
- **c_n** (num_layers * num_directions, batch, hidden_size): tensor
containing the cell state for `t = seq_len`
@@ -457,13 +473,21 @@ class GRU(RNNBase):
Defaults to zero if not provided.
Outputs: output, h_n
- - **output** of shape `(seq_len, batch, hidden_size * num_directions)`: tensor
+ - **output** of shape `(seq_len, batch, num_directions * hidden_size)`: tensor
containing the output features h_t from the last layer of the GRU,
for each t. If a :class:`torch.nn.utils.rnn.PackedSequence` has been
given as the input, the output will also be a packed sequence.
+ For the unpacked case, the directions can be separated
+ using ``output.view(seq_len, batch, num_directions, hidden_size)``,
+ with forward and backward being direction `0` and `1` respectively.
+
+ Similarly, the directions can be separated in the packed case.
- **h_n** of shape `(num_layers * num_directions, batch, hidden_size)`: tensor
containing the hidden state for `t = seq_len`
+ Like *output*, the layers can be separated using
+ ``h_n.view(num_layers, num_directions, batch, hidden_size)``.
+
Attributes:
weight_ih_l[k] : the learnable input-hidden weights of the :math:`\text{k}^{th}` layer
(W_ir|W_iz|W_in), of shape `(3*hidden_size x input_size)`
| Documentation: Indexing output from bidirectional RNN (GRU,LSTM)
The documentation for RNNs (including GRU and LSTM) states the dimensionality of the hidden state (num_layers * num_directions, batch, hidden_size) and of the output (seq_len, batch, hidden_size * num_directions), but I cannot figure out how to index the output to get separate vectors for the two directions. Specifically, I'd like to get the output from the last cell of the forward RNN, and the output from the first cell of the backward RNN.
From looking at the source code, you can infer that direction=0 is forward, and direction=1 is backward. I suppose that the "hidden_size * num_directions" dimension contains one forward vector concatenated with one backward vector, but I cannot find this anywhere in the documentation. Is this correct, and could this be specified in the documentation?
| From what I understand of the CuDNN API, which PyTorch's implementation is based on, the output is sorted by timestep, so `h_n` should be the concatenation of the hidden state of the forward layer for the last item of the sequence and the hidden state of the backward layer for the first item of the sequence.
I agree that it should be more explicit. Also it is really inconvenient for e.g. bidirectional attention, but I guess there is no way around it if you want to be able to use CuDNN calls.
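For fixed-length inputs this is easy to check directly; a small sketch, assuming a single layer, `bidirectional=True` and the default `batch_first=False`:
```python
import torch
import torch.nn as nn

hidden = 5
rnn = nn.GRU(input_size=3, hidden_size=hidden, bidirectional=True)
x = torch.randn(7, 2, 3)                 # (seq_len, batch, input_size)
out, h_n = rnn(x)                        # out: (seq_len, batch, 2 * hidden)

fwd_last = out[-1, :, :hidden]           # forward direction at the last step
bwd_first = out[0, :, hidden:]           # backward direction at the first step
print(torch.allclose(h_n[0], fwd_last))  # True: row 0 of h_n is the forward state
print(torch.allclose(h_n[1], bwd_first)) # True: row 1 is the backward state
```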
Hi, thank you. I think this is the most reasonable behavior. I just think it should be clarified in the docs. I think it sounds like what you'd like also for attention. Why would this be inconvenient?
Because it makes this topology
(figure not preserved in this record)
impossible to get with a bidirectional LSTM. To get per-word (or token, or whatever) hidden states instead of per-timestep ones, you have to run the forward and backward passes as separate layers and concatenate the outputs afterwards. On top of that, pytorch (as far as I know) supports neither a backward-only LSTM nor flipping tensors, so this adds some complexity to my code, and I suspect it is not great for performance either.
From what I've understood so far about the non-`Cell`-suffixed CuDNN variants:
1. You need to give them a tensor where the sequences are sorted by length in a decreasing order. If the sequences are guaranteed to always have the same length, you can skip this step. Otherwise, cf. `pack_padded_sequence()`
2. After calling the RNN, you receive a tuple of 2 items: **packed** `Variable` of all hidden states (`hs`) and a normal `Variable` containing the last hidden states (`ht`).
3. For `hs` you unpack it using `pad_packed_sequence()` to get a normal `Variable`.
4. `ht` contains the **correct** forward and backward states for each sequence, so you don't have to do anything to recover them or mask out 0's, **but** this tensor does not concatenate the forward and backward states, although `hs` returns them in a concatenated fashion.
```python
# An input of 5 timesteps and 2 sequences. The shorter one is 0-padded.
# hidden_dim = 3, bidirectional=True, num_layers=1
In [525]: input_
Out[525]:
1 3
3 5
3 2
2 0
1 0
[torch.LongTensor of size 5x2]
# hs and ht are the return values of GRU here (for LSTM you'll also have c_t)
In [526]: print(hs[:, 1], ht[:, 1])
Variable containing:
(( forward states )) (( backward states ))
-0.0982 0.0275 -0.3005 0.3609 -0.4958 0.3408
-0.1710 -0.0576 -0.3759 0.2550 -0.3478 0.2796
-0.1935 0.0484 -0.4111 0.2088 -0.2813 0.1440
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
[torch.FloatTensor of size 5x6]
Variable containing:
-0.1935 0.0484 -0.4111
0.3609 -0.4958 0.3408
[torch.FloatTensor of size 2x3]
```
Here you can see that the last state for the forward sequence (3->5->2) is the third row's first 3 elements `-0.1935 0.0484 -0.4111` that you also find in the `ht` variable in the first row.
The last state for the backward sequence (2->5->3) is the first row's second part `0.3609 -0.4958 0.3408`, which you also find in the `ht` variable in the second row.
So if you want to apply attention, the first tensor is the one you'll need. If you just want the last states, the second tensor is the one to use.
But if `num_layers > 1`, the second tensor becomes a mess :) Overall, I think this part of the PyTorch API really needs more intuitive handling.
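With more layers a `view` makes the layout explicit; a sketch matching the shapes documented by this patch:
```python
import torch
import torch.nn as nn

num_layers, num_directions, batch, hidden = 2, 2, 3, 20
rnn = nn.LSTM(10, hidden, num_layers=num_layers, bidirectional=True)
out, (h_n, c_n) = rnn(torch.randn(5, batch, 10))

h_n = h_n.view(num_layers, num_directions, batch, hidden)
last_layer_forward = h_n[-1, 0]    # (batch, hidden)
last_layer_backward = h_n[-1, 1]
```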
```
ipt = Variable(torch.from_numpy(np.asarray([0., 0, 1, 1.]).reshape(-1, 1, 1).astype(np.float32)))
h0 = Variable(torch.zeros(2, 1,1 ))
rnn = nn.RNN(1,1, 1, bidirectional=True, bias=False, nonlinearity='relu')
for k, v in rnn.named_parameters():
setattr(rnn, k, torch.nn.Parameter(torch.ones_like(v.data)))
opt, hn = rnn(ipt, h0)
print list(rnn.named_parameters())
print opt
print ipt
```
the output I got:
```
[('weight_ih_l0', Parameter containing:
1
[torch.FloatTensor of size 1x1]
), ('weight_hh_l0', Parameter containing:
1
[torch.FloatTensor of size 1x1]
), ('weight_ih_l0_reverse', Parameter containing:
1
[torch.FloatTensor of size 1x1]
), ('weight_hh_l0_reverse', Parameter containing:
1
[torch.FloatTensor of size 1x1]
)]
Variable containing:
(0 ,.,.) =
0 2
(1 ,.,.) =
0 2
(2 ,.,.) =
1 2
(3 ,.,.) =
2 1
[torch.FloatTensor of size 4x1x2]
Variable containing:
(0 ,.,.) =
0
(1 ,.,.) =
0
(2 ,.,.) =
1
(3 ,.,.) =
1
[torch.FloatTensor of size 4x1x1]
```
to make it more intuitive, I rearrange the result:
```
--forward 0 -> 0 -> 1 -> 2
backward 2 <- 2 <- 2 <- 1
----input 0 -- 0 -- 1 -- 1
wordindex 0 -- 1 -- 2 -- 3
hn: [forward 2, backward 2]
```
After this experiment:
1. forward and backward are computed independently
2. after running through f/b ward, it returns something like `torch.cat([forward, backward], dim=-1)`
3. the hidden it returns is actually the hidden of word 3 in the forward pass, and the hidden of word 0 in the backward pass.
4. therefore it is not a good idea to use rnn cell and run a for loop by yourself if you set bidirectional=True.
I think this behavior should be clarified somewhere in the official docs; unfortunately I did not find it there yet.
@apaszke could you consider adding it? Or is this something beyond pytorch that goes down to CuDNN?
> Or is this something beyond pytorch that goes down to CuDNN?
That is what I understand from the CuDNN API reference (which is not that clear either…)
Is it really true that "`h_n` should be the concatenation of the hidden state of the forward layer for the last item of the sequence and of the hidden state of the backward layer for the first item of the sequence"? There's an issue in cudnn for torch (soumith/cudnn.torch#357) that says otherwise. @ngimel mentioned there that for an example sequence [x_1, x_2, x_3, x_4, x_5], the output sequence would be:
```
[F(x_1), B(x_1)]
[F(x_2), B(x_2)]
[F(x_3), B(x_3)]
[F(x_4), B(x_4)]
[F(x_5), B(x_5)]
```
The RNN code calling cuDNN in [torch](https://github.com/soumith/cudnn.torch/blob/master/RNN.lua) and [pytorch](https://github.com/pytorch/pytorch/blob/master/torch/backends/cudnn/rnn.py) does not seem to perform any reversing of the output of the `cudnnRNNForwardTraining` call, so it's weird that the order would be different in pytorch and torch.
@sidharthms
lstm returns two things: `output, hn = lstm([x_1, x_2, x_3, x_4, x_5])`
what you point out is `output`
```
[F(x_1), B(x_1)]
[F(x_2), B(x_2)]
[F(x_3), B(x_3)]
[F(x_4), B(x_4)]
[F(x_5), B(x_5)]
```
which is the same as I said,
As regards `hn`, it is `[F(x_5), B(x_1)]`, i.e. the last thing computed in the forward pass and the last thing computed in the backward pass. | 2018-05-18T10:14:00 |
|
pytorch/pytorch | 7,708 | pytorch__pytorch-7708 | [
"7705"
] | 42e5e127506d006bee2a881f00ff9f84c906a124 | diff --git a/torch/distributions/utils.py b/torch/distributions/utils.py
--- a/torch/distributions/utils.py
+++ b/torch/distributions/utils.py
@@ -182,6 +182,7 @@ def __init__(self, wrapped):
def __get__(self, instance, obj_type=None):
if instance is None:
return self
- value = self.wrapped(instance)
+ with torch.enable_grad():
+ value = self.wrapped(instance)
setattr(instance, self.wrapped.__name__, value)
return value
| diff --git a/test/test_distributions.py b/test/test_distributions.py
--- a/test/test_distributions.py
+++ b/test/test_distributions.py
@@ -53,7 +53,7 @@
SoftmaxTransform,
StickBreakingTransform,
identity_transform)
-from torch.distributions.utils import _finfo, probs_to_logits, softmax
+from torch.distributions.utils import _finfo, probs_to_logits, softmax, lazy_property
TEST_NUMPY = True
try:
@@ -690,6 +690,31 @@ def test_enumerate_support_type(self):
except NotImplementedError:
pass
+ def test_lazy_property_grad(self):
+ x = torch.randn(1, requires_grad=True)
+
+ class Dummy(object):
+ @lazy_property
+ def y(self):
+ return x + 1
+
+ def test():
+ x.grad = None
+ Dummy().y.backward()
+ self.assertEqual(x.grad, torch.ones(1))
+
+ test()
+ with torch.no_grad():
+ test()
+
+ mean = torch.randn(2)
+ cov = torch.eye(2, requires_grad=True)
+ distn = MultivariateNormal(mean, cov)
+ with torch.no_grad():
+ distn.scale_tril
+ distn.scale_tril.sum().backward()
+ self.assertIsNotNone(cov.grad)
+
def test_has_examples(self):
distributions_with_examples = set(e.Dist for e in EXAMPLES)
for Dist in globals().values():
| [distributions] rsample().detach() and sample() yields different gradients
I have a MultivariateNormal distribution with loc defined as the output of a neural net (given an input) and a diagonal covariance matrix with trainable parameters (which do not depend on the input).
If I sample via `distr.rsample().detach()` and optimise the sum of log_probs, `.backward()` provides correct gradients w.r.t. both the loc and the cov matrix params. But if I sample via `distr.sample()`, `.backward()` leaves `None` gradients for the cov matrix params.
Here is a minimal reproducing code:
```
import torch
from torch.distributions import MultivariateNormal
torch.manual_seed(124)
inp = torch.randn(3)
W = torch.randn((2, 3), requires_grad=True)
loc = W@inp
cov = torch.tensor([[1., 0.], [0., 2.]], requires_grad=True)
distr = MultivariateNormal(loc, cov)
y = distr.rsample().detach() # difference here
loss = distr.log_prob(y)
loss.backward()
W.grad, cov.grad
```
yields
```
(tensor([[-0.1991, -1.0776, -0.6339],
[ 0.3455, 1.8700, 1.1000]]),
tensor([[-0.2678, 0.0000],
[-0.8057, 0.4491]]))
```
while
```
import torch
from torch.distributions import MultivariateNormal
torch.manual_seed(124)
inp = torch.randn(3)
W = torch.randn((2, 3), requires_grad=True)
loc = W@inp
cov = torch.tensor([[1., 0.], [0., 2.]], requires_grad=True)
distr = MultivariateNormal(loc, cov)
x = distr.sample() # difference here
loss = distr.log_prob(x)
loss.backward()
W.grad, cov.grad
```
yields
```
(tensor([[-0.1991, -1.0776, -0.6339],
[ 0.3455, 1.8700, 1.1000]]),
None)
```
I am observing this problem both on macOS and Ubuntu systems.
| Thanks for reporting. I know the problem. The `MultivariateNormal` uses a lazily calculated property `scale_tril` to compute everything, including the samples, log prob, etc. The `sample()` func in distributions is just `rsample()` wrapped in `torch.no_grad` (see [here](https://github.com/pytorch/pytorch/blob/master/torch/distributions/distribution.py#L96-L97)). So when you do `distr.sample()`, it first calculates `scale_tril` **in a `no_grad` block**, and subsequent computations, which all use `scale_tril`, lose history.
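In code, the wrapper linked above is roughly:
```python
def sample(self, sample_shape=torch.Size()):
    with torch.no_grad():
        return self.rsample(sample_shape)
```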
Therefore, if you do
```py
import torch
from torch.distributions import MultivariateNormal
torch.manual_seed(124)
inp = torch.randn(3)
W = torch.randn((2, 3), requires_grad=True)
loc = W@inp
cov = torch.tensor([[1., 0.], [0., 2.]], requires_grad=True)
distr = MultivariateNormal(loc, cov)
distr.scale_tril # <------ force it to calculate scale_tril here
x = distr.sample() # difference here
loss = distr.log_prob(x)
loss.backward()
W.grad, cov.grad
```
It will give correct gradients.
Fix incoming. | 2018-05-19T19:35:34 |
pytorch/pytorch | 7,757 | pytorch__pytorch-7757 | [
"7255"
] | bb15a0830de9577b4f6bdcded5eac864f78701c2 | diff --git a/torch/optim/lr_scheduler.py b/torch/optim/lr_scheduler.py
--- a/torch/optim/lr_scheduler.py
+++ b/torch/optim/lr_scheduler.py
@@ -22,12 +22,6 @@ def __init__(self, optimizer, last_epoch=-1):
self.step(last_epoch + 1)
self.last_epoch = last_epoch
- def __getstate__(self):
- return self.state_dict()
-
- def __setstate__(self, state):
- self.load_state_dict(state)
-
def state_dict(self):
"""Returns the state of the scheduler as a :class:`dict`.
| torch.save does not work with _LRSchedulers
## Issue description
``torch.save`` does not work with ``torch.optim.lr_scheduler._LRScheduler``
## Code example
Offending code:
```
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR
import torch
net = nn.LSTM(10, 10)
optimizer = Adam(params=filter(lambda p: p.requires_grad, net.parameters()))
scheduler = MultiStepLR(optimizer, milestones=[30, 80], gamma=0.1)
scheduler.step() # This is Fine
# Check save and load into the directory work
torch.save(scheduler, 'scheduler.pt')
scheduler = torch.load('scheduler.pt')
scheduler.step() # AttributeError: 'MultiStepLR' object has no attribute 'optimizer'
```
Stack trace:
```
Traceback (most recent call last):
File "play.py", line 17, in <module>
scheduler.step() # AttributeError: 'MultiStepLR' object has no attribute 'optimizer'
File "/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py", line 55, in step
for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()):
AttributeError: 'MultiStepLR' object has no attribute 'optimizer'
````
## System Info
- PyTorch or Caffe2: PyTorch
- How you installed PyTorch (conda, pip, source): pip
- PyTorch version: 0.4.0
- Python version: 3.6.4
| This is due to _LRScheduler.state_dict explicitly excluding the optimizer in state_dict() (see https://github.com/pytorch/pytorch/blob/master/torch/optim/lr_scheduler.py#L37). Including it in state_dict() fixes the problem. Before I go ahead and make the PR, is there any concern about putting the optimizer in state_dict()? cc: @apaszke Thanks!
Yeah, `__getstate__` and `state_dict()` should really have different logic (with the first one including optimizer, while the second one would not).
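A minimal sketch of that split; this is hypothetical and not the patch above, which instead drops `__getstate__`/`__setstate__` so that default pickling keeps the optimizer:
```python
class _LRScheduler(object):
    def __getstate__(self):
        # torch.save / pickle round-trips keep the optimizer reference
        return self.__dict__

    def state_dict(self):
        # explicit checkpoints still leave the optimizer out
        return {k: v for k, v in self.__dict__.items() if k != 'optimizer'}
```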
Is including the optimizer a good idea though? I can imagine a code like this
```python
optimizer = th.load(...)
scheduler = th.load(...)
scheduler.step() # this will not change lr of the loaded optimizer, but of the internal copy?
```
The same thing happens if you serialize a model and its optimizer in two separate checkpoints, so I'm not very concerned about that. | 2018-05-22T11:00:59 |
|
pytorch/pytorch | 7,759 | pytorch__pytorch-7759 | [
"7722"
] | bb15a0830de9577b4f6bdcded5eac864f78701c2 | diff --git a/tools/autograd/gen_autograd_functions.py b/tools/autograd/gen_autograd_functions.py
--- a/tools/autograd/gen_autograd_functions.py
+++ b/tools/autograd/gen_autograd_functions.py
@@ -63,6 +63,12 @@
}
""")
+DERIVATIVE_MULTI_COPY_RANGE = CodeTemplate("""\
+ if (should_compute_output({ ${name}_ix })) {
+ copy_range(grad_inputs, ${name}_ix, std::get<${i}>(grad_result));
+ }
+""")
+
DERIVATIVE_MULTI = CodeTemplate("""\
if (should_compute_output({ ${idx_ranges} })) {
${grad_input_mask}
@@ -173,7 +179,7 @@ def emit_derivative(derivative):
idx_ranges = ', '.join("{}_ix".format(n) for n in var_names)
copy_ranges = []
for i, n in enumerate(var_names):
- copy_ranges.append("copy_range(grad_inputs, {}_ix, std::get<{}>(grad_result));".format(n, i))
+ copy_ranges.append(DERIVATIVE_MULTI_COPY_RANGE.substitute(name=n, i=i))
return DERIVATIVE_MULTI.substitute(
idx_ranges=idx_ranges, copy_ranges=copy_ranges,
derivative=formula,
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -2266,6 +2266,19 @@ def test_set_requires_grad_only_for_floats_cuda(self):
def test_set_requires_grad_only_for_floats(self):
self._test_set_requires_grad_only_for_floats(self, False)
+ @unittest.skipIf(not torch.cuda.is_available(), "CUDA unavailable")
+ def test_rnn_backward_to_input_but_not_parameters_cuda(self):
+ # this checks whether it is possible to not require
+ # weight parameters, but require inputs, see #7722
+ dev = torch.device('cuda')
+ l = torch.nn.LSTM(2, 3).to(dev)
+ for p in l.parameters():
+ p.requires_grad = False
+ s = torch.randn(1, 1, 2, requires_grad=True, device=dev)
+ out, _ = l(s)
+ out.sum().backward()
+ self.assertFalse(s.grad is None or s.grad.abs().sum().item() == 0)
+
def index_variable(shape, max_indices):
if not isinstance(shape, tuple):
| Error in backprop when using frozen LSTM layers (new with 0.4.0)
## Issue description
The general situation is that we have a pretrained Language Model, and during a first phase we only want to train the new embedding layer we added, before fine-tuning the whole thing. This worked fine in 0.3 but now raises an error during back-propagation. A minimal reproduction is to create a simple model with a linear layer and an LSTM, freeze the second layer (by setting requires_grad=False on its parameters) and try to run a backward pass.
## Code example
See [here](https://github.com/sgugger/Deep-Learning/blob/master/Bug%20with%20frozen%20LSTM%20layer.ipynb)
## System Info
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 9.0
OS: Microsoft Windows 10 Home
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
Pytorch was installed with conda, the bug also appears on my linux instances.
Thanks for your help!
| to help give local context, the notebook error is:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-52a0569421b1> in <module>()
----> 1 loss.backward()
~\Anaconda3\envs\fastai\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
91 products. Defaults to ``False``.
92 """
---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph)
94
95 def register_hook(self, hook):
~\Anaconda3\envs\fastai\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
87 Variable._execution_engine.run_backward(
88 tensors, grad_tensors, retain_graph, create_graph,
---> 89 allow_unreachable=True) # allow_unreachable flag
90
91
RuntimeError: inconsistent range for TensorList output
```
The error seems to be thrown at `tools\autograd\templates\Functions.cpp` line 47. Not sure what's happening there.
I can repro this on master on a linux box as well.
Repro script: (I took the ipython notebook in the code example and copy pasted the lines from there):
```
import torch
from torch.autograd import Variable as V
import torch.nn as nn
import torch.nn.functional as F
model = nn.Sequential(nn.Linear(10,20), nn.ReLU(inplace=True),nn.LSTM(20,5, 1)).cuda()
for param in list(model.parameters())[2:]:
param.requires_grad=False
x = torch.randn(2,4,10).cuda()
x.requires_grad = True
z = model(x)
y = torch.Tensor([0,1,2,3, 0,1,2,3]).long().cuda()
loss = F.cross_entropy(z[0].view(-1,5),y)
loss.backward()
```
I hit this too.
@ailzhang If you are busy, I can have a look, too.
My repro is
```
import torch
print (torch.__version__)
dev = torch.device('cuda')
l = torch.nn.LSTM(2, 3).to(dev)
for p in l.parameters():
p.requires_grad = False
s = torch.randn(1, 1, 2, requires_grad=True, device=dev)
out, _ = l(s)
out.sum().backward()
```
As a workaround, you can disable the cudnn backend.
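For example, the global switch can be flipped before running the model:
```python
import torch
torch.backends.cudnn.enabled = False   # fall back to the non-cuDNN RNN path
```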
So you need to return a list of undefined tensors rather than an empty tensor list.
So one could add an else block with `dw.resize(weight.size())` to the conditional `dw` calculation in [_cudnn_rnn_backward](https://github.com/pytorch/pytorch/blob/bb15a0830de9577b4f6bdcded5eac864f78701c2/aten/src/ATen/native/cudnn/RNN.cpp#L995).
@t-vi Cool, feel free to propose a PR for it. Thanks! | 2018-05-22T12:57:12 |
pytorch/pytorch | 7,829 | pytorch__pytorch-7829 | [
"6988"
] | 147cc05cf5e397eb7e1543edfe7c772754a66d7a | diff --git a/torch/nn/modules/loss.py b/torch/nn/modules/loss.py
--- a/torch/nn/modules/loss.py
+++ b/torch/nn/modules/loss.py
@@ -53,13 +53,14 @@ class L1Loss(_Loss):
Args:
size_average (bool, optional): By default, the losses are averaged
- over observations for each minibatch. However, if the field
- size_average is set to ``False``, the losses are instead summed for
- each minibatch. Ignored when reduce is ``False``. Default: ``True``
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed
- for each minibatch. When reduce is ``False``, the loss function returns
- a loss per input/target element instead and ignores size_average.
- Default: ``True``
+ for each minibatch. When reduce is ``False``, the loss function returns
+ a loss per input/target element instead and ignores size_average.
+ Default: ``True``
Shape:
- Input: :math:`(N, *)` where `*` means, any number of additional
@@ -131,13 +132,13 @@ class NLLLoss(_WeightedLoss):
Args:
weight (Tensor, optional): a manual rescaling weight given to each
- class. If given, it has to be a Tensor of size `C`. Otherwise, it is
- treated as if having all ones.
+ class. If given, it has to be a Tensor of size `C`. Otherwise, it is
+ treated as if having all ones.
size_average (bool, optional): By default, the losses are averaged
- over observations for each minibatch with weights set by
- :attr:`weight`. However, if the field :attr:`size_average` is set to
- ``False``, the losses are instead summed for each minibatch. Ignored
- when :attr:`reduce` is ``False``. Default: ``True``
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
ignore_index (int, optional): Specifies a target value that is ignored
and does not contribute to the input gradient. When
:attr:`size_average` is ``True``, the loss is averaged over
@@ -225,9 +226,11 @@ class PoissonNLLLoss(_Loss):
.. math::
\text{target}*\log(\text{target}) - \text{target} + 0.5 * \log(2\pi\text{target}).
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field `size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
eps (float, optional): Small value to avoid evaluation of :math:`\log(0)` when
:attr:`log_input == False`. Default: 1e-8
reduce (bool, optional): By default, the losses are averaged
@@ -312,8 +315,10 @@ class KLDivLoss(_Loss):
Args:
size_average (bool, optional): By default, the losses are averaged
- for each minibatch over observations **as well as** over
- dimensions. However, if ``False`` the losses are instead summed.
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged
over observations for each minibatch, or summed, depending on
size_average. When reduce is ``False``, returns a loss per input/target
@@ -363,9 +368,10 @@ class MSELoss(_Loss):
Args:
size_average (bool, optional): By default, the losses are averaged
- over observations for each minibatch. However, if the field
- size_average is set to ``False``, the losses are instead summed for
- each minibatch. Only applies when reduce is ``True``. Default: ``True``
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged
over observations for each minibatch, or summed, depending on
size_average. When reduce is ``False``, returns a loss per input/target
@@ -419,9 +425,10 @@ class BCELoss(_WeightedLoss):
of each batch element. If given, has to be a Tensor of size
"nbatch".
size_average (bool, optional): By default, the losses are averaged
- over observations for each minibatch. However, if the field
- size_average is set to ``False``, the losses are instead summed for
- each minibatch. Default: ``True``
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on size_average. When reduce
is False, returns a loss per input/target element instead and ignores
@@ -483,9 +490,10 @@ class BCEWithLogitsLoss(_Loss):
of each batch element. If given, has to be a Tensor of size
"nbatch".
size_average (bool, optional): By default, the losses are averaged
- over observations for each minibatch. However, if the field
- size_average is set to ``False``, the losses are instead summed for
- each minibatch. Default: ``True``
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on size_average. When reduce
is False, returns a loss per input/target element instead and ignores
@@ -547,10 +555,11 @@ class HingeEmbeddingLoss(_Loss):
Args:
margin (float, optional): Has a default value of `1`.
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
@@ -591,10 +600,11 @@ class MultiLabelMarginLoss(_Loss):
This allows for different samples to have variable amounts of target classes
Args:
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
@@ -641,12 +651,14 @@ class SmoothL1Loss(_Loss):
Args:
size_average (bool, optional): By default, the losses are averaged
- over all elements. However, if the field size_average is set to ``False``,
- the losses are instead summed. Ignored when reduce is ``False``. Default: ``True``
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed
- over elements. When reduce is ``False``, the loss function returns
- a loss per input/target element instead and ignores size_average.
- Default: ``True``
+ over elements. When reduce is ``False``, the loss function returns
+ a loss per input/target element instead and ignores size_average.
+ Default: ``True``
Shape:
- Input: :math:`(N, *)` where `*` means, any number of additional
@@ -674,10 +686,11 @@ class SoftMarginLoss(_Loss):
\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}
Args:
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
@@ -737,9 +750,11 @@ class CrossEntropyLoss(_WeightedLoss):
Args:
weight (Tensor, optional): a manual rescaling weight given to each class.
If given, has to be a Tensor of size `C`
- size_average (bool, optional): By default, the losses are averaged over observations for each minibatch.
- However, if the field `size_average` is set to ``False``, the losses are
- instead summed for each minibatch. Ignored if reduce is ``False``.
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field size_average is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
ignore_index (int, optional): Specifies a target value that is ignored
and does not contribute to the input gradient. When `size_average` is
``True``, the loss is averaged over non-ignored targets.
@@ -792,12 +807,13 @@ class MultiLabelSoftMarginLoss(_WeightedLoss):
Args:
weight (Tensor, optional): a manual rescaling weight given to each
- class. If given, it has to be a Tensor of size `C`. Otherwise, it is
- treated as if having all ones.
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ class. If given, it has to be a Tensor of size `C`. Otherwise, it is
+ treated as if having all ones.
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
@@ -836,10 +852,11 @@ class CosineEmbeddingLoss(_Loss):
Args:
margin (float, optional): Should be a number from `-1` to `1`, `0` to `0.5`
is suggested. If `margin` is missing, the default value is `0`.
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
@@ -870,10 +887,11 @@ class MarginRankingLoss(_Loss):
Args:
margin (float, optional): Has a default value of `0`.
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
@@ -923,10 +941,11 @@ class MultiMarginLoss(_WeightedLoss):
weight (Tensor, optional): a manual rescaling weight given to each
class. If given, it has to be a Tensor of size `C`. Otherwise, it is
treated as if having all ones.
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
@@ -972,10 +991,11 @@ class TripletMarginLoss(_Loss):
swap (float, optional): The distance swap is described in detail in the paper
`Learning shallow convolutional feature descriptors with triplet losses` by
V. Balntas, E. Riba et al. Default: ``False``.
- size_average (bool, optional): By default, the losses are averaged over
- observations for each minibatch. However, if the field :attr:`size_average`
- is set to ``False``, the losses are instead summed for each minibatch.
- Default: ``True``
+ size_average (bool, optional): By default, the losses are averaged
+ over each loss element in the batch. Note that for some losses, there
+ multiple elements per sample. If the field :attr:`size_average` is set to
+ ``False``, the losses are instead summed for each minibatch. Ignored
+ when reduce is ``False``. Default: ``True``
reduce (bool, optional): By default, the losses are averaged or summed over
observations for each minibatch depending on :attr:`size_average`. When
:attr:`reduce` is ``False``, returns a loss per batch element instead and
| Should the loss computation with `size_average` average over batch examples or batch elements?
Traditionally, when we have batched data with shape `(N, D)`, where `N` is the batch size and `D` is the data dimension, the losses are often calculated for each training example, say
```
L_i = loss(X_i), i = 1, ..., N
```
And then the total loss is averaged over the batch size:
```
L = (1/N)*sum(L_i)
```
However, this does not seem to be what `nn.*Loss` does with the flag `size_average=True/False`,
e.g.
```python
import torch
import torch.nn.functional as F
input = torch.randn(3, 2)
target = torch.rand(3, 2)
print(input)
print(target)
full_loss = F.mse_loss(input, target, reduce=False)
print(full_loss)
loss_sum = F.mse_loss(input, target, size_average=False)
loss_mean = F.mse_loss(input, target, size_average=True)
print(loss_sum)
print(loss_mean)
batch_loss = full_loss.sum(dim=-1)
correct_loss_mean = batch_loss.mean()
correct_loss_sum = batch_loss.sum()
print(correct_loss_mean)
print(correct_loss_sum)
assert correct_loss_sum == loss_sum
assert correct_loss_mean == loss_mean
```
it seems that `size_average` does not average the loss over the batch of examples, but instead averages over all elements.
| `size_average` averages over "each atomic element for which a loss is computed". For `mse_loss`, `size_average` divides by _all_ elements. For something like `NLLLoss`, `size_average` divides by the minibatch size (`tensor.size(0)`) because each row in the tensor results in a loss.
We'll definitely make the `size_average` behavior clearer in the docs: this question has come up in a number of issues in the past. If you'd like different behavior, you can set `size_average=False` and divide by the number of batches.
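Concretely, a small sketch of that suggestion:
```python
import torch
import torch.nn.functional as F

input, target = torch.randn(3, 2), torch.rand(3, 2)
per_sample = F.mse_loss(input, target, reduce=False).sum(dim=1)   # (N,)
loss = per_sample.mean()                                          # average over the batch only
# equivalent: sum over everything, then divide by the batch size
loss_alt = F.mse_loss(input, target, size_average=False) / input.size(0)
```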
@zou3519 Thanks a lot for the explanation. Just to make sure I don't take it wrong: `size_average=True` averages over all elements by default rather than first summing up the losses for each training example and then averaging. This does not alter the gradient direction, but it does shrink the scale of the gradient. Could that slow down training? Averaging over all elements gives a smaller loss, leading to a smaller gradient norm.
We should at some point consider if we want to change this behavior. This is indeed confusing and in some cases doesn't do what you want (KL Divergence for example https://github.com/pytorch/pytorch/pull/7006).
Breaking everyone's code is not a good thing, but maybe we could deprecate the `size_average` argument in favor of a new one that consistently divides by the batch size?
@fmassa I'm a bit afraid that adding another variant of `size_average` is going to make the interface very confusing.
@apaszke I think it might be worth listing the cases where `size_average` is actually doing what most people want it to do. Maybe the current behavior of `size_average` is already good enough.
That being said, now that we have `reduce=False` for all losses (and once we get reduction over multiple dimensions merged in master), it will be super easy to let users control what they want to do.
@fmassa It seems to me that the more common behavior of `size_average` should be to first sum the losses across the data dimension and then average over the batch. But yes, as you pointed out, the flag `reduce=False` gives the user full flexibility.
I guess it might be good either to state clearly to the user that the current behavior of `size_average` is to average over all elements, or to change the behavior of `size_average` to the one described above, which is more commonly used to train NN models and thus more user-friendly, and leave the 'element-wise' averaging behavior to the user.
@apaszke
Or maybe add two flags, say `size_average_batch` and `size_average_all`, to provide both behaviors.
Does it make more sense to average over batch always? It seems like that's the functionality that most users expect and want. | 2018-05-24T23:32:12 |
|
pytorch/pytorch | 7,873 | pytorch__pytorch-7873 | [
"8333",
"7722",
"7222"
] | 607b86f6033456bcbd6864bdeff4226f38fad4cd | diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -4392,6 +4392,33 @@ def parse_kwargs(desc):
[-0.5872, 0.6932]])
""")
+add_docstr(torch.flip,
+ r"""
+flip(input, dims) -> Tensor
+
+Reverse the order of a n-D tensor along given axis in dims.
+
+Args:
+ input (Tensor): the input tensor
+ dims (a list or tuple): axis to flip on
+
+Example::
+
+ >>> x = torch.arange(8).view(2, 2, 2)
+ >>> x
+ tensor([[[ 0, 1],
+ [ 2, 3]],
+
+ [[ 4, 5],
+ [ 6, 7]]])
+ >>> torch.flip(x, [0, 1])
+ tensor([[[ 6, 7],
+ [ 4, 5]],
+
+ [[ 2, 3],
+ [ 0, 1]]])
+""")
+
add_docstr(torch.take,
r"""
take(input, indices) -> Tensor
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -2509,6 +2509,10 @@ class dont_convert(tuple):
('reshape', (S,), (S,), '1d'),
('reshape', (), (dont_convert(()),), 'scalar_to_scalar'),
('reshape', (), (1,), 'scalar_to_1d'),
+ ('flip', (S, S, S), ([0],), 'd0'),
+ ('flip', (S, S, S), ([0, 1, 2],), 'd012'),
+ ('flip', (S, S, S), ([0, 2],), 'd02'),
+ ('flip', (S, S, S), ([2, 0],), 'd20'),
('view_as', (S, S, S), (non_differentiable(torch.rand(S * S, S)),)),
('view_as', (), (non_differentiable(torch.tensor(5.5)),), 'scalar'),
('view_as', (), (non_differentiable(torch.rand(1, 1)),), 'scalar_to_dims'),
diff --git a/test/test_cuda.py b/test/test_cuda.py
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -409,6 +409,10 @@ def tmp(t):
('zero', small_3d, lambda t: [],),
('zeros', small_3d, lambda t: [1, 2, 3, 4],),
('eye', small_2d, lambda t: [3, 4],),
+ ('flip', small_3d, lambda t: [0], 'd0', types, True),
+ ('flip', small_3d, lambda t: [0, 1, 2], 'd012', types, True),
+ ('flip', small_3d, lambda t: [0, 2], 'd02', types, True),
+ ('flip', small_3d, lambda t: [2, 0], 'd20', types, True),
('rsqrt', lambda t: constant_tensor_add(1, small_3d(t)), lambda t: [], None, float_types),
('sinh', lambda t: tensor_clamp(small_3d(t), -1, 1), lambda t: [], None, float_types),
('tan', lambda t: tensor_clamp(small_3d(t), -1, 1), lambda t: [], None, float_types),
@@ -1372,6 +1376,9 @@ def test_gesv_batched_dims(self):
def test_view(self):
TestTorch._test_view(self, lambda t: t.cuda())
+ def test_flip(self):
+ TestTorch._test_flip(self, use_cuda=True)
+
def test_signal_window_functions(self):
TestTorch._test_signal_window_functions(self, device=torch.device('cuda'))
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -5953,6 +5953,51 @@ def test_permute(self):
self.assertEqual(perm, new)
self.assertEqual(x.size(), orig)
+ @staticmethod
+ def _test_flip(self, use_cuda=False):
+ if use_cuda:
+ cuda = torch.device("cuda")
+ data = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8], device=cuda).view(2, 2, 2)
+ # large data testing
+ large_data = torch.arange(0, 100000000, device=cuda).view(10000, 10000)
+ large_data.flip([0, 1])
+ else:
+ data = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(2, 2, 2)
+
+ self.assertEqual(torch.tensor([5, 6, 7, 8, 1, 2, 3, 4]).view(2, 2, 2), data.flip(0))
+ self.assertEqual(torch.tensor([3, 4, 1, 2, 7, 8, 5, 6]).view(2, 2, 2), data.flip(1))
+ self.assertEqual(torch.tensor([2, 1, 4, 3, 6, 5, 8, 7]).view(2, 2, 2), data.flip(2))
+ self.assertEqual(torch.tensor([7, 8, 5, 6, 3, 4, 1, 2]).view(2, 2, 2), data.flip(0, 1))
+ self.assertEqual(torch.tensor([8, 7, 6, 5, 4, 3, 2, 1]).view(2, 2, 2), data.flip(0, 1, 2))
+
+ # check for permute
+ self.assertEqual(torch.tensor([6, 5, 8, 7, 2, 1, 4, 3]).view(2, 2, 2), data.flip(0, 2))
+ self.assertEqual(torch.tensor([6, 5, 8, 7, 2, 1, 4, 3]).view(2, 2, 2), data.flip(2, 0))
+
+ # not allow flip on the same dim more than once
+ self.assertRaises(RuntimeError, lambda: data.flip(0, 1, 1))
+ # not allow empty list as input
+ self.assertRaises(TypeError, lambda: data.flip())
+ # not allow size of flip dim > total dims
+ self.assertRaises(RuntimeError, lambda: data.flip(0, 1, 2, 3))
+ # not allow dim < 0
+ self.assertRaises(RuntimeError, lambda: data.flip(-1))
+ # not allow dim > max dim
+ self.assertRaises(RuntimeError, lambda: data.flip(3))
+
+ # test for non-contiguous case
+ if use_cuda:
+ expanded_data = torch.arange(1, 4, device=cuda).view(3, 1).expand(3, 2)
+ tranposed_data = torch.arange(1, 9, device=cuda).view(2, 2, 2).transpose(0, 1)
+ else:
+ expanded_data = torch.arange(1, 4).view(3, 1).expand(3, 2)
+ tranposed_data = torch.arange(1, 9).view(2, 2, 2).transpose(0, 1)
+ self.assertEqual(torch.tensor([3, 3, 2, 2, 1, 1]).view(3, 2), expanded_data.flip(0))
+ self.assertEqual(torch.tensor([8, 7, 4, 3, 6, 5, 2, 1]).view(2, 2, 2), tranposed_data.flip(0, 1, 2))
+
+ def test_flip(self):
+ self._test_flip(self, use_cuda=False)
+
def test_storage(self):
v = torch.randn(3, 5)
self.assertEqual(v.storage()[0], v.data[0][0])
| Cannot print sparse tensors from ATen
Fix coming.
Error in backprop when using frozen LSTM layers (new with 0.4.0)
## Issue description
The general situation is that we have a pretrained Language Model, and during a first phase we only want to train the new embedding layer we added before fine-tuning the whole thing. This worked fine in 0.3 but now raises an error during backpropagation. A minimal reproduction is to create a simple model with a linear layer and an LSTM, freeze the LSTM (by setting `requires_grad=False` on its parameters) and try to run a backward pass.
## Code example
See [here](https://github.com/sgugger/Deep-Learning/blob/master/Bug%20with%20frozen%20LSTM%20layer.ipynb)
## System Info
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 9.0
OS: Microsoft Windows 10 Home
GCC version: Could not collect
CMake version: Could not collect
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
PyTorch was installed with conda; the bug also appears on my Linux instances.
Thanks for your help!
[Docs] [Low Priority] Pytorch docs in Google search points to master/unstable
When I googled `pytorch nn functional`, this came up as the first link: `pytorch.org/docs/master/_modules/torch/nn/functional.html`
This is, as you know, docs for the master/unstable version. On clicking the link which says it'll take me to the docs for the stable release, I get sent to the home page of the docs, and now I have to search for the page again.
This can be solved by adding a canonical link to their non-stable documentation, i.e. something like this: `<link rel="canonical" href="https://pytorch.org/docs/stable/nn.html#torch-nn-functional">`.
|
To help give local context, the notebook error is:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-52a0569421b1> in <module>()
----> 1 loss.backward()
~\Anaconda3\envs\fastai\lib\site-packages\torch\tensor.py in backward(self, gradient, retain_graph, create_graph)
91 products. Defaults to ``False``.
92 """
---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph)
94
95 def register_hook(self, hook):
~\Anaconda3\envs\fastai\lib\site-packages\torch\autograd\__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
87 Variable._execution_engine.run_backward(
88 tensors, grad_tensors, retain_graph, create_graph,
---> 89 allow_unreachable=True) # allow_unreachable flag
90
91
RuntimeError: inconsistent range for TensorList output
```
The error seems to be thrown at `tools\autograd\templates\Functions.cpp` line 47. Not sure what's happening there.
I can repro this on master on a linux box as well.
Repro script: (I took the ipython notebook in the code example and copy pasted the lines from there):
```
import torch
from torch.autograd import Variable as V
import torch.nn as nn
import torch.nn.functional as F
model = nn.Sequential(nn.Linear(10,20), nn.ReLU(inplace=True),nn.LSTM(20,5, 1)).cuda()
for param in list(model.parameters())[2:]:
param.requires_grad=False
x = torch.randn(2,4,10).cuda()
x.requires_grad = True
z = model(x)
y = torch.Tensor([0,1,2,3, 0,1,2,3]).long().cuda()
loss = F.cross_entropy(z[0].view(-1,5),y)
loss.backward()
```
I hit this too.
@ailzhang If you are busy, I can have a look, too.
My repro is
```
import torch
print (torch.__version__)
dev = torch.device('cuda')
l = torch.nn.LSTM(2, 3).to(dev)
for p in l.parameters():
p.requires_grad = False
s = torch.randn(1, 1, 2, requires_grad=True, device=dev)
out, _ = l(s)
out.sum().backward()
```
As a workaround, you can disable the cudnn backend.
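A minimal sketch of that workaround (just the global switch; you can flip it back on once the frozen phase is done):
```python
import torch

# Fall back to the native (non-cuDNN) RNN implementation, which handles
# frozen (requires_grad=False) LSTM weights without this backward error.
torch.backends.cudnn.enabled = False
```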
So you need to return a list of undefined tensors rather than an empty tensor list.
So one could add an else block with `dw.resize(weight.size())` to the conditional `dw` calculation in [_cudnn_rnn_backward](https://github.com/pytorch/pytorch/blob/bb15a0830de9577b4f6bdcded5eac864f78701c2/aten/src/ATen/native/cudnn/RNN.cpp#L995).
@t-vi Cool, feel free to propose a PR for it. Thanks!
Turns out the better fix is to check in more detail whether we actually want the grad.
I've noticed this before but I didn't know there was a way to solve it. Thanks @sherjilozair for the idea; I'll try it out and see how it goes -- this would definitely be a great improvement. (You should also feel free to submit a PR if you'd like!)
This is what the Python documentation uses. See, e.g.: view-source:https://docs.python.org/3.8/library/functions.html
It would take time for Google's SEO to update though. | 2018-05-26T04:40:26 |
pytorch/pytorch | 7,886 | pytorch__pytorch-7886 | [
"7882"
] | 45cdb63d8b8022ab26f073d3bed718e75d2aedaf | diff --git a/torch/utils/data/dataloader.py b/torch/utils/data/dataloader.py
--- a/torch/utils/data/dataloader.py
+++ b/torch/utils/data/dataloader.py
@@ -246,7 +246,7 @@ def __init__(self, loader):
self.sample_iter = iter(self.batch_sampler)
- base_seed = torch.LongTensor(1).random_()[0]
+ base_seed = torch.LongTensor(1).random_().item()
if self.num_workers > 0:
self.worker_init_fn = loader.worker_init_fn
| diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -116,6 +116,15 @@ def __len__(self):
return 10
+class RandomDatasetMock(object):
+
+ def __getitem__(self, index):
+ return torch.tensor([torch.rand(1).item(), random.uniform(0, 1)])
+
+ def __len__(self):
+ return 1000
+
+
class TestCheckpoint(TestCase):
# Test whether checkpoint is being triggered or not. For this, we check
@@ -233,6 +242,20 @@ def setUp(self):
self.dataset = torch.randn(5, 3, 3, 2)
self.batch_size = 3
+ def test_random_seed(self):
+ def run():
+ dataloader = torch.utils.data.DataLoader(RandomDatasetMock(),
+ batch_size=2,
+ num_workers=4,
+ shuffle=True)
+ return next(iter(dataloader))
+
+ torch.manual_seed(2018)
+ x1 = run()
+ torch.manual_seed(2018)
+ x2 = run()
+ self.assertEqual(x1, x2)
+
def test_single_keep(self):
dataloader = torch.utils.data.DataLoader(self.dataset,
batch_size=self.batch_size,
| [Pytorch] DataLoader and python random module
Even with seeding, the following script prints different outputs for `random.uniform` across different runs. The `random` module is even reseeded [here](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L86).
Outputs for `torch.rand` are the same though.
```python
import torch
import random
from torch.utils.data import Dataset, DataLoader
class Data(Dataset):
def __len__(self):
return 10000
def __getitem__(self, index):
print(index, torch.rand(2, 2).sum().item(), random.uniform(0, 1))
return 1
seed = 2018
random.seed(seed)
torch.manual_seed(seed)
loader = DataLoader(Data(), num_workers=4, shuffle=True)
for x in loader:
print('-'*10)
break
```
First run
```
4717 2.202341079711914 0.9952153654478976
4607 2.3166141510009766 0.6813692345925851
4194 1.9806793928146362 0.6281118075687344
2595 2.95841383934021 0.8414756141240453
4691 0.9809015393257141 0.7622458327788627
9868 2.521920680999756 0.5253262288522356
7367 2.333574056625366 0.35079311205192487
9490 3.02830171585083 0.16235006783937567
----------
6759 3.1252167224884033 0.4424384676992986
```
Next run
```
4607 2.3166141510009766 0.15198273935290807
4194 1.9806793928146362 0.36414129463658884
4691 0.9809015393257141 0.027569260048619926
4717 2.202341079711914 0.5512619092026773
7367 2.333574056625366 0.7932627754589792
9490 3.02830171585083 0.19395324967791994
9868 2.521920680999756 0.5497794735158222
2595 2.95841383934021 0.782779934368899
----------
6759 3.1252167224884033 0.7098308465010348
```
- Ubuntu 16.04
- Python 3.6
- PyTorch version: 0.4
| This is because you used workers. Seed in `worker_init_fn`.
Relevant doc (even though it is about workers returning **identical** numbers): https://pytorch.org/docs/master/notes/faq.html#my-data-loader-workers-return-identical-random-numbers
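A minimal sketch of that (reusing the `Data` dataset from the script above; this makes the value drawn for a given index reproducible, though the interleaving of worker prints can still differ between runs):
```python
import random

import torch
from torch.utils.data import DataLoader

def worker_init_fn(worker_id):
    # torch already derives a distinct per-worker seed from the base seed;
    # reuse it so Python's `random` module is seeded reproducibly as well.
    random.seed(torch.initial_seed())

loader = DataLoader(Data(), num_workers=4, shuffle=True,
                    worker_init_fn=worker_init_fn)
```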
| 2018-05-27T21:14:52 |
pytorch/pytorch | 7,921 | pytorch__pytorch-7921 | [
"7178"
] | 769f5f7cfe6432ac8fcd84e54c24bb6ffd7b877b | diff --git a/torch/serialization.py b/torch/serialization.py
--- a/torch/serialization.py
+++ b/torch/serialization.py
@@ -70,6 +70,19 @@ def _cuda_deserialize(obj, location):
device = 0
else:
device = max(int(location[5:]), 0)
+
+ if not torch.cuda.is_available():
+ raise RuntimeError('Attempting to deserialize object on a CUDA '
+ 'device but torch.cuda.is_available() is False. '
+ 'If you are running on a CPU-only machine, '
+ 'please use torch.load with map_location=\'cpu\' '
+ 'to map your storages to the CPU.')
+ if device >= torch.cuda.device_count():
+ raise RuntimeError('Attempting to deserialize object on CUDA device '
+ '{} but torch.cuda.device_count() is {}. Please use '
+ 'torch.load with map_location to map your storages '
+ 'to an existing device.'.format(
+ device, torch.cuda.device_count()))
return obj.cuda(device)
| diff --git a/test/test_cuda.py b/test/test_cuda.py
--- a/test/test_cuda.py
+++ b/test/test_cuda.py
@@ -1,3 +1,4 @@
+import io
import math
import tempfile
import re
@@ -9,7 +10,7 @@
import torch.cuda.comm as comm
from test_torch import TestTorch
-from common import TestCase, get_gpu_type, to_gpu, freeze_rng_state, run_tests
+from common import TestCase, get_gpu_type, to_gpu, freeze_rng_state, run_tests, PY3
HAS_CUDA = True
if not torch.cuda.is_available():
@@ -1099,6 +1100,20 @@ def test_cat_bad_input_sizes(self):
z = torch.randn(2, 2, 1).cuda()
self.assertRaises(RuntimeError, lambda: torch.cat([x, y, z], dim=1))
+ @unittest.skipIf(torch.cuda.device_count() >= 10, "Loading a cuda:9 tensor")
+ @unittest.skipIf(not PY3, "Tensor was serialized with Python 3")
+ def test_load_nonexistent_device(self):
+ # Setup: create a serialized file object with a 'cuda:9' restore location
+ tensor = torch.randn(2, device='cuda')
+ buf = io.BytesIO()
+ torch.save(tensor, buf)
+ # NB: this might not work in the future if serialization changes
+ buf = io.BytesIO(buf.getvalue().replace(b'cuda:0', b'cuda:9'))
+
+ msg = r'Attempting to deserialize object on CUDA device 9'
+ with self.assertRaisesRegex(RuntimeError, msg):
+ _ = torch.load(buf)
+
def test_serialization(self):
x = torch.randn(4, 4).cuda()
with tempfile.NamedTemporaryFile() as f:
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -6446,6 +6446,30 @@ def check_map_locations(map_locations, tensor_class, intended_device):
torch.device('cuda', torch.cuda.device_count() - 1)
)
+ @unittest.skipIf(torch.cuda.is_available(), "Testing torch.load on CPU-only machine")
+ @unittest.skipIf(not PY3, "Test tensors were serialized using python 3")
+ def test_load_nonexistent_device(self):
+ # Setup: create a serialized file object with a 'cuda:0' restore location
+ # The following was generated by saving a torch.randn(2, device='cuda') tensor.
+ serialized = (b'\x80\x02\x8a\nl\xfc\x9cF\xf9 j\xa8P\x19.\x80\x02M\xe9'
+ b'\x03.\x80\x02}q\x00(X\x10\x00\x00\x00protocol_versionq'
+ b'\x01M\xe9\x03X\r\x00\x00\x00little_endianq\x02\x88X\n'
+ b'\x00\x00\x00type_sizesq\x03}q\x04(X\x05\x00\x00\x00shortq'
+ b'\x05K\x02X\x03\x00\x00\x00intq\x06K\x04X\x04\x00\x00\x00'
+ b'longq\x07K\x04uu.\x80\x02ctorch._utils\n_rebuild_tensor_v2'
+ b'\nq\x00((X\x07\x00\x00\x00storageq\x01ctorch\nFloatStorage'
+ b'\nq\x02X\x0e\x00\x00\x0094919395964320q\x03X\x06\x00\x00'
+ b'\x00cuda:0q\x04K\x02Ntq\x05QK\x00K\x02\x85q\x06K\x01\x85q'
+ b'\x07\x89Ntq\x08Rq\t.\x80\x02]q\x00X\x0e\x00\x00\x00'
+ b'94919395964320q\x01a.\x02\x00\x00\x00\x00\x00\x00\x00\xbb'
+ b'\x1f\x82\xbe\xea\x81\xd1>')
+
+ buf = io.BytesIO(serialized)
+
+ error_msg = r'Attempting to deserialize object on a CUDA device'
+ with self.assertRaisesRegex(RuntimeError, error_msg):
+ _ = torch.load(buf)
+
def test_serialization_filelike_api_requirements(self):
filemock = FilelikeMock(b'', has_readinto=False)
tensor = torch.randn(3, 5)
| Should torch.load and torch.save take a device (since PyTorch 0.4)
Reading the [PyTorch 0.4.0 migration guide](https://pytorch.org/2018/04/22/0_4_0-migration-guide.html) I came across the [`torch.device`](https://pytorch.org/docs/stable/tensor_attributes.html) abstraction.
It is great for creating a device once and then passing it around and using the `.to` functions, e.g. as in:
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
inputs = data.to(device)
```
The [`torch.load`](https://github.com/pytorch/pytorch/blob/d564ecb4a515e34184c976617f57b2eb01665660/torch/serialization.py#L241) function does not take a device, though. Instead it takes a `map_location` argument which can either be a lambda, a mapping, or since https://github.com/pytorch/pytorch/pull/4203 it can be a string like `'cpu'`.
Now the question is: why are there these two different concepts, and can they be unified into one device abstraction? Otherwise we can pass the device around _except_ for serialization, where we need to transform the device abstraction into a `map_location` parameter.
Can we unify these concepts behind an API like the following?
```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
restored = torch.load('model.pth', device=device)
```
Related: https://github.com/pytorch/pytorch/issues/6630 - `torch.save` should also take a device
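For reference, a sketch of what already works today through `map_location` (the `device=` keyword proposed above does not exist; these are the string form from #4203 and the long-standing callable form):
```python
import torch

# string form: remap every storage to the CPU
model_cpu = torch.load('model.pth', map_location='cpu')

# callable form: remap storages (wherever they were saved) onto cuda:0
model_gpu = torch.load('model.pth',
                       map_location=lambda storage, loc: storage.cuda(0))
```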
| Yes, this sounds like a good idea. We also have a similar issue where torch.cuda device managers currently only take ordinals, but should be able to take devices, strings. | 2018-05-29T15:57:57 |
pytorch/pytorch | 7,959 | pytorch__pytorch-7959 | [
"7947"
] | 9b1abd2f81a3bdb653d20dca5b9e709bd5f3ed5d | diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -891,21 +891,25 @@ def _gumbel_softmax_sample(logits, tau=1, eps=1e-10):
def gumbel_softmax(logits, tau=1, hard=False, eps=1e-10):
- """
+ r"""
Sample from the Gumbel-Softmax distribution and optionally discretize.
+
Args:
- logits: `[batch_size, n_class]` unnormalized log-probs
+ logits: `[batch_size, num_features]` unnormalized log probabilities
tau: non-negative scalar temperature
- hard: if ``True``, take `argmax`, but differentiate w.r.t. soft sample y
+ hard: if ``True``, the returned samples will be discretized as one-hot vectors,
+ but will be differentiated as if it is the soft sample in autograd
+
Returns:
- [batch_size, n_class] sample from the Gumbel-Softmax distribution.
- If hard=True, then the returned sample will be one-hot, otherwise it will
- be a probability distribution that sums to 1 across classes
+ Sampled tensor of shape ``batch_size x num_features`` from the Gumbel-Softmax distribution.
+ If ``hard=True``, the returned samples will be one-hot, otherwise they will
+ be probability distributions that sum to 1 across features
Constraints:
- - this implementation only works on batch_size x num_features tensor for now
- based on
+ - Currently only work on 2D input :attr:`logits` tensor of shape ``batch_size x num_features``
+
+ Based on
https://github.com/ericjang/gumbel-softmax/blob/3c8584924603869e90ca74ac20a6a03d99a91ef9/Categorical%20VAE.ipynb ,
(MIT license)
"""
@@ -983,6 +987,7 @@ def linear(input, weight, bias=None):
Applies a linear transformation to the incoming data: :math:`y = xA^T + b`.
Shape:
+
- Input: :math:`(N, *, in\_features)` where `*` means any number of
additional dimensions
- Weight: :math:`(out\_features, in\_features)`
@@ -1011,29 +1016,28 @@ def embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2,
The input to the module is a list of indices, and the embedding matrix,
and the output is the corresponding word embeddings.
+ See :class:`torch.nn.Embedding` for more details.
+
Args:
- input: tensor, containing indices into the embedding matrix
- weight:
+ input (LongTensor): Tensor containing indices into the embedding matrix
+ weight (Tensor): The embedding matrix
Number of rows should correspond to the maximum possible index + 1,
number of columns is the embedding size
- padding_idx (int, optional): Entries at the given index do not contribute to the gradient
- max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this
- norm_type (float, optional): The p of the p-norm to compute for the max_norm option
- scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the frequency of
- the words in the mini-batch.
- sparse (boolean, optional): if ``True``, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for
- more details regarding sparse gradients.
+ padding_idx (int, optional): If given, pads the output with the embedding vector at :attr:`padding_idx`
+ (initialized to zeros) whenever it encounters the index.
+ max_norm (float, optional): If given, will renormalize the embedding vectors to have a norm lesser than
+ this before extracting. Note: this will modify :attr:`weight` in-place.
+ norm_type (float, optional): The p of the p-norm to compute for the max_norm option. Default ``2``.
+ scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the inverse of frequency of
+ the words in the mini-batch. Default ``False``.
+ sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` will be a sparse tensor. See Notes under
+ :class:`torch.nn.Embedding` for more details regarding sparse gradients.
Shape:
- - Input: LongTensor `(N, W)`, N = mini-batch, W = number of indices to extract per mini-batch
- - Embedding_matrix: FloatTensor `(V, embedding_dim)`, V = maximum index + 1, embedding_dim = embedding size
- - Output: `(N, W, embedding_dim)`
-
- Notes:
- It is advised to only use `sparse=True` if `embedding_matrix` is a leaf Tensor,
- since some autograd functions may not propagate sparse gradients correctly.
- Additionally, keep in mind that only a limited number of optimizers support
- sparse gradients: currently it's :class:`optim.SGD` (`CUDA` and `CPU`), and :class:`optim.Adagrad` (`CPU`)
+ - Input: LongTensor of arbitrary shape containing the indices to extract
+ - Weight: Embedding matrix of floating point type with shape `(V, embedding_dim)`,
+ where V = maximum index + 1 and embedding_dim = the embedding size
+ - Output: `(*, embedding_dim)`, where `*` is the input shape
Examples::
@@ -1078,87 +1082,103 @@ def embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2,
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
-def embedding_bag(embedding_matrix, indices, offsets=None,
- max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False):
+def embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2,
+ scale_grad_by_freq=False, mode='mean', sparse=False):
r"""Computes sums or means of 'bags' of embeddings, without instantiating the
- intermediate embeddings.
-
- For bags of constant length,
- * :func:`embedding_bag` with `mode=sum` is equivalent to :func:`nn.functional.embedding` followed by
- ``torch.sum(dim=1)``
- * with `mode=mean` is equivalent to :func:`nn.functional.embedding` followed by ``torch.mean(dim=1)``
- * with `mode=max` is equivalent to :func:`nn.functional.embedding` followed by ``torch.max(dim=1)``
-
- However, :func:`embedding_bag` is much more time and memory efficient than using a chain of these
- operations.
-
- Args:
- embedding_matrix: FloatTensor, where number of rows should correspond to the maximum possible index + 1,
- number of columns is the embedding size
- indices (N or BxN): LongTensor containing the indices of the embeddings to extract.
- When `input` is 1D Tensor of shape `N`, an `offsets` Tensor is given, that contains the
- starting position of each new sequence in the mini-batch.
- offsets (B or None): LongTensor containing the starting positions of each sample in a mini-batch of variable
- length sequences. If `input` is 2D (BxN), then offsets does not need to be given,
- as the `input` is treated as a mini-batch of fixed length sequences of length `N` each.
- max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this
- norm_type (float, optional): The p of the p-norm to compute for the max_norm option
- scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the frequency of
- the words in the dictionary.
- mode (string, optional): 'sum' | 'mean' | 'max'. Specifies the way to reduce the bag. Default: 'mean'
- sparse (boolean, optional): if ``True``, gradient w.r.t. weight matrix will be a sparse tensor. See Notes
- for more details regarding sparse gradients.
-
- Shape:
- - Embedding_matrix: FloatTensor `(V, embedding_dim)`,
- V = number of embeddings, embedding_dim = embedding size
- - Input: LongTensor `N`, N = number of embeddings to extract
- (or) LongTensor `BxN`, B = number of sequences in mini-batch,
- N = number of embeddings per sequence
- - Offsets: LongTensor `B`, B = number of bags. The values are the
- offsets in `input` for each bag, i.e. the cumsum of lengths.
- Offsets is not given if Input is 2D `BxN` Tensor,
- the input is considered to be of fixed-length sequences
- - Output: `(B, embedding_dim)`
-
- Examples::
-
- >>> # an Embedding module containing 10 tensors of size 3
- >>> embedding_matrix = torch.rand(10, 3)
- >>> # a batch of 2 samples of 4 indices each
- >>> input = torch.tensor([1,2,4,5,4,3,2,9])
- >>> offsets = torch.tensor([0,4])
- >>> F.embedding_bag(embedding_matrix, input, offsets)
- tensor([[ 0.3397, 0.3552, 0.5545],
- [ 0.5893, 0.4386, 0.5882]])
- """
- if indices.dim() == 2:
+ intermediate embeddings.
+
+ See :class:`torch.nn.EmbeddingBag` for more details.
+
+ Args:
+ input (LongTensor): Tensor containing bags of indices into the embedding matrix
+ weight (Tensor): The embedding matrix
+ Number of rows should correspond to the maximum possible index + 1,
+ number of columns is the embedding size
+ offsets (LongTensor, optional): Only used when :attr:`input` is 1D. :attr:`offsets` determines
+ the starting index position of each bag (sequence) in :attr:`input`.
+ max_norm (float, optional): If given, will renormalize the embedding vectors to have a norm lesser than
+ this before extracting. Note: this will modify :attr:`weight` in-place.
+ norm_type (float, optional): The ``p`` in the ``p``-norm to compute for the max_norm option. Default ``2``.
+ scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the inverse of frequency of
+ the words in the mini-batch. Default ``False``.
+ Note: this option is not supported when ``mode="max"``.
+ mode (string, optional): ``"sum"``, ``"mean"`` or ``"max"``. Specifies the way to reduce the bag.
+ Default: ``"mean"``
+ sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` will be a sparse tensor. See Notes under
+ :class:`torch.nn.Embedding` for more details regarding sparse gradients.
+ Note: this option is not supported when ``mode="max"``.
+
+ Shape:
+
+ - :attr:`input` (LongTensor) and :attr:`offsets` (LongTensor, optional)
+
+ - If :attr:`input` is 2D of shape ``B x N``,
+
+ it will be treated as ``B`` bags (sequences) each of fixed length ``N``, and
+ this will return ``B`` values aggregated in a way depending on the :attr:`mode`.
+ :attr:`offsets` is ignored and required to be ``None`` in this case.
+
+ - If :attr:`input` is 1D of shape ``N``,
+
+ it will be treated as a concatenation of multiple bags (sequences).
+ :attr:`offsets` is required to be a 1D tensor containing the
+ starting index positions of each bag in :attr:`input`. Therefore,
+ for :attr:`offsets` of shape ``B``, :attr:`input` will be viewed as
+ having ``B`` bags. Empty bags (i.e., having 0-length) will have
+ returned vectors filled by zeros.
+
+ - :attr:`weight` (Tensor): the learnable weights of the module of
+ shape ``(num_embeddings x embedding_dim)``
+
+ - :attr:`output`: aggregated embedding values of shape ``B x embedding_dim``
+
+ Examples::
+
+ >>> # an Embedding module containing 10 tensors of size 3
+ >>> embedding_matrix = torch.rand(10, 3)
+ >>> # a batch of 2 samples of 4 indices each
+ >>> input = torch.tensor([1,2,4,5,4,3,2,9])
+ >>> offsets = torch.tensor([0,4])
+ >>> F.embedding_bag(embedding_matrix, input, offsets)
+ tensor([[ 0.3397, 0.3552, 0.5545],
+ [ 0.5893, 0.4386, 0.5882]])
+ """
+ # Check for backward compatibility.
+ # Used to be embedding_bag(weight, input, ...)
+ # Now is embedding_bag(input, weight, ...)
+ if weight.dtype == torch.long and input.is_floating_point():
+ warnings.warn("Argument order of nn.functional.embedding_bag was changed. "
+ "Usage `embedding_bag(weight, input, ...)` is deprecated, "
+ "and should now be `embedding_bag(input, weight, ...)`.")
+ weight, input = input, weight
+
+ if input.dim() == 2:
if offsets is not None:
raise ValueError("if input is 2D, then offsets has to be None"
", as input is treated is a mini-batch of"
" fixed length sequences. However, found "
"offsets of type {}".format(type(offsets)))
else:
- offsets = torch.arange(0, indices.numel(), indices.size(1),
- dtype=torch.long, device=indices.device)
+ offsets = torch.arange(0, input.numel(), input.size(1),
+ dtype=torch.long, device=input.device)
- indices = indices.view(-1)
- elif indices.dim() == 1:
+ input = input.view(-1)
+ elif input.dim() == 1:
if offsets is None:
raise ValueError("offsets has to be a 1D Tensor but got None")
if offsets.dim() != 1:
raise ValueError("offsets has to be a 1D Tensor")
- if offsets[0] != 0:
- raise ValueError("offsets[0] has to be 0, i.e. the first sequence"
- " in the mini-batch has to start from position 0."
- "However, got {}".format(offsets[0]))
- if offsets[-1] > indices.size(0):
- raise ValueError("offsets[-1] has to be smaller than indices's length"
+ if offsets[0].item() != 0:
+ raise ValueError("offsets[0] has to be 0, i.e., the first sequence "
+ "in the mini-batch has to start from position 0. "
+ "However, got {}".format(offsets[0].item()))
+ if offsets[-1].item() > input.size(0):
+ raise ValueError("offsets[-1] can not be greater than input's length"
" ({}), but got offsets[-1] of {}"
- .format(indices.size(0), offsets[-1]))
+ .format(input.size(0), offsets[-1].item()))
else:
raise ValueError("input has to be 1D or 2D Tensor,"
- " but got Tensor of dimension {}".format(indices.dim()))
+ " but got Tensor of dimension {}".format(input.dim()))
if mode == 'sum':
mode = 0
@@ -1181,8 +1201,8 @@ def embedding_bag(embedding_matrix, indices, offsets=None,
torch.embedding_renorm_(weight, input, max_norm, norm_type)
ret, _, _, _ = torch.embedding_bag(
- embedding_matrix,
- indices,
+ weight,
+ input,
offsets,
scale_grad_by_freq,
mode,
diff --git a/torch/nn/modules/sparse.py b/torch/nn/modules/sparse.py
--- a/torch/nn/modules/sparse.py
+++ b/torch/nn/modules/sparse.py
@@ -17,17 +17,19 @@ class Embedding(Module):
embedding_dim (int): the size of each embedding vector
padding_idx (int, optional): If given, pads the output with the embedding vector at :attr:`padding_idx`
(initialized to zeros) whenever it encounters the index.
- max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this
- norm_type (float, optional): The p of the p-norm to compute for the max_norm option
- scale_grad_by_freq (bool, optional): if given, this will scale gradients by the frequency of
- the words in the mini-batch.
- sparse (bool, optional): if ``True``, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for
- more details regarding sparse gradients.
+ max_norm (float, optional): If given, will renormalize the embedding vectors to have a norm lesser than
+ this before extracting.
+ norm_type (float, optional): The p of the p-norm to compute for the max_norm option. Default ``2``.
+ scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the inverse of frequency of
+ the words in the mini-batch. Default ``False``.
+ sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor.
+ See Notes for more details regarding sparse gradients.
Attributes:
weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
Shape:
+
- Input: LongTensor of arbitrary shape containing the indices to extract
- Output: `(*, embedding_dim)`, where `*` is the input shape
@@ -160,53 +162,51 @@ class EmbeddingBag(Module):
r"""Computes sums or means of 'bags' of embeddings, without instantiating the
intermediate embeddings.
- For bags of constant length,
- * nn.EmbeddingBag with `mode=sum` is equivalent to nn.Embedding followed by `torch.sum(dim=1)`
- * with `mode=mean` is equivalent to nn.Embedding followed by `torch.mean(dim=1)`
- * with `mode=max` is equivalent to nn.Embedding followed by `torch.max(dim=1)`
+ For bags of constant length, this class
+
+ * with ``mode="sum"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.sum(dim=1)``,
+ * with ``mode="mean"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.mean(dim=1)``,
+ * with ``mode="max"`` is equivalent to :class:`~torch.nn.Embedding` followed by ``torch.max(dim=1)``.
- However, nn.EmbeddingBag is much more time and memory efficient than using a chain of these
+ However, :class:`~torch.nn.EmbeddingBag` is much more time and memory efficient than using a chain of these
operations.
Args:
num_embeddings (int): size of the dictionary of embeddings
embedding_dim (int): the size of each embedding vector
- max_norm (float, optional): If given, will renormalize the embeddings to always have a norm lesser than this
- norm_type (float, optional): The p of the p-norm to compute for the max_norm option
- scale_grad_by_freq (bool, optional): if given, this will scale gradients by the frequency of
- the words in the dictionary. Note: this option is not supported when
- using max mode.
- mode (string, optional): 'sum' | 'mean' | 'max'. Specifies the way to reduce the bag. Default: 'mean'
- sparse (bool, optional): if ``True``, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for
- more details regarding sparse gradients. Note: this option is not supported when
- using max mode.
+ max_norm (float, optional): If given, will renormalize the embedding vectors to have a norm lesser than
+ this before extracting.
+ norm_type (float, optional): The p of the p-norm to compute for the max_norm option. Default ``2``.
+ scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the inverse of frequency of
+ the words in the mini-batch. Default ``False``.
+ Note: this option is not supported when ``mode="max"``.
+ mode (string, optional): ``"sum"``, ``"mean"`` or ``"max"``. Specifies the way to reduce the bag.
+ Default: ``"mean"``
+ sparse (bool, optional): if ``True``, gradient w.r.t. :attr:`weight` matrix will be a sparse tensor. See
+ Notes for more details regarding sparse gradients. Note: this option is not
+ supported when ``mode="max"``.
Attributes:
- weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
+ weight (Tensor): the learnable weights of the module of shape ``(num_embeddings x embedding_dim)``
- Inputs: input, offsets
- - **input** (``N`` or ``B x N``): LongTensor containing the indices of the embeddings
- to extract. When `input` is 1D Tensor of shape `N`,
- an `offsets` Tensor is given, that contains the
- starting position of each new sequence in the
- mini-batch.
- - **offsets** (``B`` or ``None``): LongTensor containing the starting positions of
- each sample in a mini-batch of variable length
- sequences. If `input` is 2D (``B x N``), then offsets
- does not need to be given, as the `input` is
- treated as a mini-batch of fixed length sequences
- of length `N` each.
+ Inputs: :attr:`input` (LongTensor) and :attr:`offsets` (LongTensor, optional)
+ - If :attr:`input` is 2D of shape ``B x N``,
- Shape:
- - Input: LongTensor `N`, N = number of embeddings to extract
- (or) LongTensor ``B x N``, B = number of sequences in mini-batch,
- N = number of embeddings per sequence
- - Offsets: LongTensor `B`, B = number of bags. The values are the
- offsets in `input` for each bag, i.e. the cumsum of lengths.
- Offsets is not given if Input is 2D ``B x N`` Tensor,
- the input is considered to be of fixed-length sequences
- - Output: `(B, embedding_dim)`
+ it will be treated as ``B`` bags (sequences) each of fixed length ``N``, and
+ this will return ``B`` values aggregated in a way depending on the :attr:`mode`.
+ :attr:`offsets` is ignored and required to be ``None`` in this case.
+
+ - If :attr:`input` is 1D of shape ``N``,
+
+ it will be treated as a concatenation of multiple bags (sequences).
+ :attr:`offsets` is required to be a 1D tensor containing the
+ starting index positions of each bag in :attr:`input`. Therefore,
+ for :attr:`offsets` of shape ``B``, :attr:`input` will be viewed as
+ having ``B`` bags. Empty bags (i.e., having 0-length) will have
+ returned vectors filled by zeros.
+
+ Output shape: ``B x embedding_dim``
Examples::
@@ -239,7 +239,7 @@ def reset_parameters(self):
self.weight.data.normal_(0, 1)
def forward(self, input, offsets=None):
- return F.embedding_bag(self.weight, input, offsets,
+ return F.embedding_bag(input, self.weight, offsets,
self.max_norm, self.norm_type,
self.scale_grad_by_freq, self.mode, self.sparse)
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -1626,9 +1626,9 @@ def _test_EmbeddingBag(self, cuda, mode, sparse, dtype=torch.double):
self.assertEqual(es_weight_grad, expected_grad_weight, dtype2prec[dtype])
# now compare EmbeddingBag vs Embedding + Sum/Mean, for constant bag length
- def _test_vs_Embedding(N, D, B, L):
- es = nn.EmbeddingBag(N, D, mode=mode, sparse=sparse).to(device, dtype)
- e = nn.Embedding(N, D).to(device, dtype)
+ def _test_vs_Embedding(N, D, B, L, max_norm=None):
+ es = nn.EmbeddingBag(N, D, mode=mode, sparse=sparse, max_norm=max_norm).to(device, dtype)
+ e = nn.Embedding(N, D, max_norm=max_norm).to(device, dtype)
e.weight.data.copy_(es.weight.data)
input = torch.randint(N, (B, L), device=device, dtype=torch.long)
offsets = torch.arange(0, B, device=device, dtype=torch.long).mul_(L)
@@ -1656,8 +1656,9 @@ def _test_vs_Embedding(N, D, B, L):
N, D, B, L = random.randint(1, 100), random.randint(1, 100), random.randint(1, 50), random.randint(1, 50)
_test_vs_Embedding(N, D, B, L)
- for p in itertools.product([1, 2], repeat=4):
- _test_vs_Embedding(*p)
+ for max_norm in (None, 3):
+ for p in itertools.product([1, 2], repeat=4):
+ _test_vs_Embedding(*p, max_norm=max_norm)
# check that giving illegal input combos raises error
es = nn.EmbeddingBag(10, 20, mode=mode, sparse=sparse)
@@ -6758,27 +6759,27 @@ def multimarginloss_weights_no_reduce_test():
dict(
module_name='Embedding',
constructor_args=(4, 3),
- input_fn=lambda: Variable(torch.randperm(2).repeat(1, 2)),
+ input_fn=lambda: torch.randperm(2).repeat(1, 2),
jacobian_input=False,
check_gradgrad=False,
),
dict(
module_name='EmbeddingBag',
constructor_args=(4, 3),
- input_fn=lambda: Variable(torch.randperm(2).repeat(1, 2)),
+ input_fn=lambda:torch.randperm(2).repeat(1, 2),
jacobian_input=False,
check_gradgrad=False,
),
dict(
fullname='EmbeddingBag_sparse',
constructor=lambda: nn.EmbeddingBag(4, 3, sparse=True),
- input_fn=lambda: Variable(torch.randperm(2).repeat(1, 2)),
+ input_fn=lambda: torch.randperm(2).repeat(1, 2),
jacobian_input=False,
check_gradgrad=False,
),
dict(
constructor=lambda: nn.Embedding(4, 3, sparse=True),
- input_fn=lambda: Variable(torch.randperm(2).repeat(1, 2)),
+ input_fn=lambda: torch.randperm(2).repeat(1, 2),
jacobian_input=False,
fullname='Embedding_sparse',
check_gradgrad=False,
| EmbeddingBag max_norm parameter does not work
When using the `max_norm` parameter, the following error occurs:
```
/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in embedding_bag(embedding_matrix, indices, offsets, max_norm, norm_type, scale_grad_by_freq, mode, sparse)
1167 if max_norm is not None:
1168 with torch.no_grad():
-> 1169 torch.embedding_renorm_(weight, input, max_norm, norm_type)
1170
1171 ret, _, _ = torch.embedding_bag(
NameError: name 'weight' is not defined
```
Pytorch version 0.4.0
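A minimal repro sketch (sizes are arbitrary; on 0.4.0 the last line hits the `NameError` above because the functional wrapper references `weight`, which is not defined in its scope):
```python
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(10, 3, mode='sum', max_norm=1.0)
indices = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])
out = bag(indices, offsets)  # NameError on 0.4.0 whenever max_norm is set
```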
| @nehz what PyTorch version are you using? Can you please use the issue template to help us?
I can reproduce this on master, indeed looks like a copy-paste bug. | 2018-05-30T19:46:05 |
pytorch/pytorch | 7,973 | pytorch__pytorch-7973 | [
"7956"
] | 215abffe609d0f151359101b24377a803b5b396e | diff --git a/torch/nn/parallel/_functions.py b/torch/nn/parallel/_functions.py
--- a/torch/nn/parallel/_functions.py
+++ b/torch/nn/parallel/_functions.py
@@ -1,3 +1,5 @@
+import warnings
+
import torch
import torch.cuda.comm as comm
from torch.autograd import Function
@@ -51,12 +53,23 @@ def forward(ctx, target_device, dim, *inputs):
ctx.target_device = target_device
ctx.dim = dim
ctx.input_gpus = tuple(map(lambda i: i.get_device(), inputs))
+ if all(t.dim() == 0 for t in inputs) and dim == 0:
+ inputs = tuple(t.view(1) for t in inputs)
+ warnings.warn('Was asked to gather along dimension 0, but all '
+ 'input tensors were scalars; will instead unsqueeze '
+ 'and return a vector.')
+ ctx.unsqueezed_scalar = True
+ else:
+ ctx.unsqueezed_scalar = False
ctx.input_sizes = tuple(map(lambda i: i.size(ctx.dim), inputs))
return comm.gather(inputs, ctx.dim, ctx.target_device)
@staticmethod
def backward(ctx, grad_output):
- return (None, None) + Scatter.apply(ctx.input_gpus, ctx.input_sizes, ctx.dim, grad_output)
+ scattered_grads = Scatter.apply(ctx.input_gpus, ctx.input_sizes, ctx.dim, grad_output)
+ if ctx.unsqueezed_scalar:
+ scattered_grads = tuple(g[0] for g in scattered_grads)
+ return (None, None) + scattered_grads
class Scatter(Function):
diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py
--- a/torch/nn/parallel/data_parallel.py
+++ b/torch/nn/parallel/data_parallel.py
@@ -61,6 +61,12 @@ class DataParallel(Module):
that each such hook be executed before the corresponding
:meth:`~torch.nn.Module.forward` call of that device.
+ .. warning::
+ When :attr:`module` returns a scalar (i.e., 0-dimensional tensor) in
+ :func:`forward`, this wrapper will return a vector of length equal to
+ number of devices used in data parallelism, containing the result from
+ each device.
+
.. note::
There is a subtlety in using the
``pack sequence -> recurrent network -> unpack sequence`` pattern in a
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -2287,8 +2287,8 @@ def test_scatter_gpu(self):
def _test_gather(self, output_device):
inputs = (
- Variable(torch.randn(2, 4).cuda(0), requires_grad=True),
- Variable(torch.randn(2, 4).cuda(1), requires_grad=True)
+ torch.randn(2, 4, device='cuda:0', requires_grad=True),
+ torch.randn(2, 4, device='cuda:1', requires_grad=True),
)
result = dp.gather(inputs, output_device)
self.assertEqual(result.size(), torch.Size([4, 4]))
@@ -2306,6 +2306,27 @@ def _test_gather(self, output_device):
self.assertEqual(inputs[1].grad.data, grad[2:])
_assertGradAndGradgradChecks(self, lambda x, y: dp.gather((x, y), output_device), inputs)
+ # test scalar inputs, should stack into a vector in this case
+ inputs = (
+ torch.randn((), device='cuda:0', requires_grad=True),
+ torch.randn((), device='cuda:1', requires_grad=True),
+ )
+ result = dp.gather(inputs, output_device)
+ self.assertEqual(result.size(), torch.Size([2]))
+ self.assertEqual(result[0], inputs[0])
+ self.assertEqual(result[1], inputs[1])
+ if output_device != -1:
+ self.assertEqual(result.get_device(), output_device)
+ else:
+ self.assertFalse(result.is_cuda)
+ grad = torch.randn(2)
+ if output_device != -1:
+ grad = grad.cuda(output_device)
+ result.backward(grad)
+ self.assertEqual(inputs[0].grad, grad[0])
+ self.assertEqual(inputs[1].grad, grad[1])
+ _assertGradAndGradgradChecks(self, lambda x, y: dp.gather((x, y), output_device), inputs)
+
@unittest.skipIf(not TEST_MULTIGPU, "multi-GPU not supported")
def test_gather_cpu(self):
self._test_gather(-1)
| [Bug Report] DataParallel can't handle scalar output (PyTorch 0.4.0)
Seems like `torch/nn/parallel/scatter_gather.py > Gather.apply(...)` is broken by dim=0 outputs.
``` python
>>> import torch
>>> torch.__version__
'0.4.0'
>>> class Foo(torch.nn.Module):
... def forward(self, x):
... return x.mean() # this gives a scalar output
... # return x.mean().view(1) # this is a quick fix
...
>>> foo = torch.nn.DataParallel(Foo(),[0,1]).cuda()
>>> x = torch.zeros(2,2)
>>> foo(x)
```
```
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/?????/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/?????/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 115, in forward
return self.gather(outputs, self.output_device)
File "/home/?????/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 127, in gather
return gather(outputs, output_device, dim=self.dim)
File "/home/?????/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
return gather_map(outputs)
File "/home/?????/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/home/?????/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 54, in forward
ctx.input_sizes = tuple(map(lambda i: i.size(ctx.dim), inputs))
File "/home/?????/miniconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 54, in <lambda>
ctx.input_sizes = tuple(map(lambda i: i.size(ctx.dim), inputs))
RuntimeError: dimension specified as 0 but tensor has no dimensions
```
Related: https://github.com/pytorch/pytorch/issues/7568
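For context, a sketch of what the patched gather is documented to return for a scalar output: a 1-D tensor with one entry per replica (see the warning added to `data_parallel.py` above), which you then reduce yourself.
```python
import torch
import torch.nn as nn

class Foo(nn.Module):
    def forward(self, x):
        return x.mean()  # scalar (0-dim) output

foo = nn.DataParallel(Foo(), device_ids=[0, 1]).cuda()
out = foo(torch.zeros(2, 2))  # with this patch: shape (2,), one value per GPU,
                              # plus a warning that scalars were unsqueezed
loss = out.mean()             # reduce the per-device results yourself
```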
| Might also be fixed by #7934, I'll check later.
Not fixed; working on a fix. | 2018-05-30T23:27:09 |
pytorch/pytorch | 8,116 | pytorch__pytorch-8116 | [
"7993"
] | e5b997223ccbc50373e3c53f6bfe58fe9d4efc06 | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -1558,6 +1558,13 @@ def callable(a, b) -> number
Unlike :meth:`~Tensor.expand`, this function copies the tensor's data.
+.. warning::
+
+ :func:`torch.repeat` behaves differently from
+ `numpy.repeat <https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html>`_,
+ but is more similar to
+ `numpy.tile <https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html>`_.
+
Args:
sizes (torch.Size or int...): The number of times to repeat this tensor along each
dimension
| np.repeat vs torch.repeat
## Issue description
Numpy repeat and torch repeat have fundamentally different default behaviors. This was unexpected to me. This may be unexpected to other people. For me, this failed silently in my model, welp.
numpy.repeat [1, 2, 3] → [1, 1, 2, 2, 3, 3]
torch.repeat [1, 2, 3] → [1, 2, 3, 1, 2, 3]
I think it'd be nice to either change the default behavior or add a warning to the documentation.
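A small sketch of the difference (including `numpy.tile`, the closer analogue that the new doc warning points to):
```python
import numpy as np
import torch

x = np.array([1, 2, 3])
print(np.repeat(x, 2))   # [1 1 2 2 3 3] -- repeats each element
print(np.tile(x, 2))     # [1 2 3 1 2 3] -- tiles the whole array

t = torch.tensor([1, 2, 3])
print(t.repeat(2))       # tensor([1, 2, 3, 1, 2, 3]) -- behaves like np.tile
```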
| Easiest thing to do for now is to add a warning to the docs. One day we'll probably revisit all the differences between the NumPy API and the PyTorch API, but for now, just changing repeat might be disruptive to our users.
Feel free to submit a pull request @PetrochukM (otherwise, I can submit one). | 2018-06-04T16:07:00 |
|
pytorch/pytorch | 8,117 | pytorch__pytorch-8117 | [
"4154"
] | 96a77b5aa80a3d7512e7916c28af469eae353962 | diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -777,13 +777,16 @@ def rrelu(input, lower=1. / 8, upper=1. / 3, training=False, inplace=False):
See :class:`~torch.nn.LogSigmoid` for more details.
""")
-hardshrink = _add_docstr(torch._C._nn.hardshrink, r"""
-hardshrink(input, lambd=0.5) -> Tensor
-Applies the hard shrinkage function element-wise
+def hardshrink(input, lambd=0.5):
+ r"""
+ hardshrink(input, lambd=0.5) -> Tensor
-See :class:`~torch.nn.Hardshrink` for more details.
-""")
+ Applies the hard shrinkage function element-wise
+
+ See :class:`~torch.nn.Hardshrink` for more details.
+ """
+ return torch.hardshrink(input, lambd)
def tanhshrink(input):
| diff --git a/test/test_legacy_nn.py b/test/test_legacy_nn.py
--- a/test/test_legacy_nn.py
+++ b/test/test_legacy_nn.py
@@ -618,6 +618,10 @@ def add_test(test):
test_params = deepcopy(test_params)
name = test_params.pop('module_name')
name = name_remap.get(name, name)
+ # hardshrink is deprecated in nn
+ if name == "HardShrink":
+ continue
+
test_params['constructor'] = getattr(nn, name)
test = OldModuleTest(**test_params)
add_test(test)
@@ -625,6 +629,9 @@ def add_test(test):
test_params = deepcopy(test_params)
name = test_params.pop('module_name')
name = name_remap.get(name, name.replace('Loss', 'Criterion'))
+ # hardshrink is deprecated in nn
+ if name == "HardShrink":
+ continue
# nn.NLLLoss2d is deprecated, but there is a NLLLoss test for 2d
if name == 'ClassNLLCriterion' and 'desc' in test_params.keys() and '2d' in test_params['desc']:
diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -304,8 +304,7 @@ def _do_test(self, test_case, module, input):
for p in module.parameters():
test_case.assertIsInstance(p, torch.DoubleTensor)
- # TODO: Hardshrink is lacking a CUDA implementation
- if TEST_CUDA and self.should_test_cuda and type(module) != nn.Hardshrink:
+ if TEST_CUDA and self.should_test_cuda:
# check that cuda() moves module parameters to correct GPU device,
# and that float() casts parameters correctly
@@ -363,7 +362,7 @@ def _do_test(self, test_case, module, input):
input = input.half().cuda()
module.half().cuda()
module(input)
- for o in module.parameters():
+ for p in module.parameters():
test_case.assertIsInstance(p, torch.cuda.HalfTensor)
test_case.assertEqual(p.get_device(), 0)
@@ -5523,19 +5522,17 @@ def add(test_name, fn):
test_name = test.get_name()
add(test_name, lambda self, test=test: test(self))
- # Hardshrink is not implemented in CUDA, so we must not test it.
- if not test_name.startswith("test_Hardshrink"):
- cuda_test_name = test_name + '_cuda'
- # With dtype enable, it's good enough to test against three floating types
- if 'dtype' in get_function_arglist(test.test_cuda):
- add(cuda_test_name + '_float', lambda self,
- test=test: test.test_cuda(self, dtype=torch.float))
- add(cuda_test_name + '_double', lambda self,
- test=test: test.test_cuda(self, dtype=torch.double))
- add(cuda_test_name + '_half', lambda self,
- test=test: test.test_cuda(self, dtype=torch.half))
- else:
- add(cuda_test_name, lambda self, test=test: test.test_cuda(self))
+ cuda_test_name = test_name + '_cuda'
+ # With dtype enable, it's good enough to test against three floating types
+ if 'dtype' in get_function_arglist(test.test_cuda):
+ add(cuda_test_name + '_float', lambda self,
+ test=test: test.test_cuda(self, dtype=torch.float))
+ add(cuda_test_name + '_double', lambda self,
+ test=test: test.test_cuda(self, dtype=torch.double))
+ add(cuda_test_name + '_half', lambda self,
+ test=test: test.test_cuda(self, dtype=torch.half))
+ else:
+ add(cuda_test_name, lambda self, test=test: test.test_cuda(self))
def wrap_functional(fn, **kwargs):
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -5547,6 +5547,24 @@ def test_abs(self):
res = torch.LongTensor((-bignumber,))
self.assertGreater(res.abs()[0], 0)
+ def test_hardshrink(self):
+ data_original = torch.tensor([1, 0.5, 0.3, 0.6]).view(2, 2)
+ float_types = [
+ 'torch.DoubleTensor',
+ 'torch.FloatTensor'
+ ]
+ for t in float_types:
+ data = data_original.type(t)
+ self.assertEqual(torch.tensor([1, 0.5, 0, 0.6]).view(2, 2), torch.nn.Hardshrink(0.3)(data))
+ self.assertEqual(torch.tensor([1, 0, 0, 0.6]).view(2, 2), torch.nn.Hardshrink(0.5)(data))
+ self.assertEqual(torch.tensor([1, 0, 0, 0.6]).view(2, 2), torch.nn.Hardshrink()(data))
+
+ # test non-contiguous case
+ self.assertEqual(torch.tensor([1, 0.3, 0.5, 0.6]).view(2, 2), torch.nn.Hardshrink(0.1)(data.t()))
+
+ # not supporting default lambd value for torch.hardshrink() due to a Scalar bug
+ self.assertRaises(TypeError, lambda: data.hardshrink())
+
def test_unbiased(self):
tensor = torch.randn(100)
self.assertEqual(tensor.var(0), tensor.var(0, unbiased=True))
diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -491,9 +491,13 @@ def init(cls):
long_size = 8 if sys.platform == 'win32' else None
tests = load_lua(path, long_size=long_size)
for name, test in tests['modules'].items():
+ if name == "HardShrink":
+ continue
test_name = 'test_' + name.replace('nn.', '')
setattr(cls, test_name, cls._module_test(name, test))
for name, test in tests['criterions'].items():
+ if name == "HardShrink":
+ continue
test_name = 'test_' + name.replace('nn.', '')
setattr(cls, test_name, cls._criterion_test(name, test))
| Implement CUDA Hardshrink
Request from https://discuss.pytorch.org/t/hardshrink-doesnt-support-cuda-floattensor/11088.
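A minimal sketch of the gap being requested (assuming PyTorch 0.4.0, where only a CPU kernel exists; the exact error text may vary):
```python
import torch
import torch.nn.functional as F

x = torch.randn(4)
print(F.hardshrink(x, 0.5))  # works on CPU

x_cuda = x.cuda()
F.hardshrink(x_cuda, 0.5)    # raises on 0.4.0: no CUDA implementation yet
```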
| 2018-06-04T16:33:52 |
|
pytorch/pytorch | 8,155 | pytorch__pytorch-8155 | [
"7222"
] | c4462695682a03deb2fe53457ae1110e4539793d | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -115,6 +115,7 @@
# documentation.
#
html_theme_options = {
+ 'canonical_url': 'https://pytorch.org/docs/stable/',
'collapse_navigation': False,
'display_version': True,
'logo_only': True,
| [Docs] [Low Priority] Pytorch docs in Google search points to master/unstable
When I googled `pytorch nn functional`, this came up as the first link: `pytorch.org/docs/master/_modules/torch/nn/functional.html`
This is, as you know, docs for the master/unstable version. On clicking the link which says it'll take me to the docs for the stable release, I get sent to the home page of the docs, and now I have to search for the page again.
This can be solved by adding a canonical link to their non-stable documentation, i.e. something like this: `<link rel="canonical" href="https://pytorch.org/docs/stable/nn.html#torch-nn-functional">`.
| I've noticed this before but I didn't know there was a way to solve it. Thanks @sherjilozair for the idea; I'll try it out and see how it goes -- this would definitely be a great improvement. (You should also feel free to submit a PR if you'd like!)
This is what the Python documentation uses. See, e.g.: view-source:https://docs.python.org/3.8/library/functions.html
It would take time for Google's SEO to update though. | 2018-06-05T07:34:25 |
|
pytorch/pytorch | 8,166 | pytorch__pytorch-8166 | [
"6090"
] | bf58bb5e59fa64fb49d77467f3466c6bc0cc76c5 | diff --git a/torch/autograd/__init__.py b/torch/autograd/__init__.py
--- a/torch/autograd/__init__.py
+++ b/torch/autograd/__init__.py
@@ -9,7 +9,7 @@
from .variable import Variable
from .function import Function, NestedIOFunction
-from .gradcheck import gradcheck
+from .gradcheck import gradcheck, gradgradcheck
from .grad_mode import no_grad, enable_grad, set_grad_enabled
from . import profiler
diff --git a/torch/autograd/gradcheck.py b/torch/autograd/gradcheck.py
--- a/torch/autograd/gradcheck.py
+++ b/torch/autograd/gradcheck.py
@@ -125,25 +125,36 @@ def _differentiable_outputs(x):
def gradcheck(func, inputs, eps=1e-6, atol=1e-5, rtol=1e-3, raise_exception=True):
- """Check gradients computed via small finite differences
- against analytical gradients
+ r"""Check gradients computed via small finite differences against analytical
+ gradients
- The check between numerical and analytical has the same behaviour as
- numpy.allclose https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html
- meaning it check that
- absolute(a - n) <= (atol + rtol * absolute(n))
- is true for all elements of analytical jacobian a and numerical jacobian n.
+ The check between numerical and analytical gradients has the same behaviour as
+ `numpy.allclose <https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html>`_,
+ i.e., it checks that
+
+ .. math::
+
+ \lvert a - n \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert n \rvert
+
+ holds for all elements of analytical gradient :math:`a` and numerical
+ gradient :math:`n`.
+
+ .. note::
+ The default values are designed for :attr:`input` of double precision.
+ This check will likely fail if :attr:`input` is of single precision,
+ i.e., ``FloatTensor``.
Args:
- func: Python function that takes Tensor inputs and returns
+ func (function): a Python function that takes Tensor inputs and returns
a Tensor or a tuple of Tensors
- inputs: tuple of Tensors
- eps: perturbation for finite differences
- atol: absolute tolerance
- rtol: relative tolerance
- raise_exception: bool indicating whether to raise an exception if
- gradcheck fails. The exception gives more information about the
+ inputs (tuple of Tensor): inputs to the function
+ eps (float, optional): perturbation for finite differences
+ atol (float, optional): absolute tolerance
+ rtol (float, optional): relative tolerance
+ raise_exception (bool, optional): indicating whether to raise an exception if
+ the check fails. The exception gives more information about the
exact nature of the failure. This is helpful when debugging gradchecks.
+
Returns:
True if all differences satisfy allclose condition
"""
@@ -208,19 +219,30 @@ def fn(input):
def gradgradcheck(func, inputs, grad_outputs=None, eps=1e-6, atol=1e-5, rtol=1e-3,
gen_non_contig_grad_outputs=False, raise_exception=True):
- """Check gradients of gradients computed via small finite differences
- against analytical gradients
+ r"""Check gradients of gradients computed via small finite differences
+ against analytical gradients
+
This function checks that backpropagating through the gradients computed
- to the given grad_outputs are correct.
+ to the given :attr:`grad_outputs` are correct.
+
+ The check between numerical and analytical gradients has the same behaviour as
+ `numpy.allclose <https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html>`_,
+ i.e., it checks that
+
+ .. math::
- The check between numerical and analytical has the same behaviour as
- numpy.allclose https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html
- meaning it check that
- absolute(a - n) <= (atol + rtol * absolute(n))
- is true for all elements of analytical gradient a and numerical gradient n.
+ \lvert a - n \rvert \leq \texttt{atol} + \texttt{rtol} \times \lvert n \rvert
+
+ holds for all elements of analytical gradient :math:`a` and numerical
+ gradient :math:`n`.
+
+ .. note::
+ The default values are designed for :attr:`input` of double precision.
+ This check will likely fail if :attr:`input` is of single precision,
+ i.e., ``FloatTensor``.
Args:
- func (function): Python function that takes Tensor inputs and returns
+ func (function): a Python function that takes Tensor inputs and returns
a Tensor or a tuple of Tensors
inputs (tuple of Tensor): inputs to the function
grad_outputs (tuple of Tensor, optional): The gradients with respect to
@@ -231,13 +253,12 @@ def gradgradcheck(func, inputs, grad_outputs=None, eps=1e-6, atol=1e-5, rtol=1e-
gen_non_contig_grad_outputs (bool, optional): if :attr:`grad_outputs` is
``None`` and :attr:`gen_non_contig_grad_outputs` is ``True``, the
randomly generated gradient outputs are made to be noncontiguous
- raise_exception: bool indicating whether to raise an exception if
- gradcheck fails. The exception gives more information about the
+ raise_exception (bool, optional): indicating whether to raise an exception if
+ the check fails. The exception gives more information about the
exact nature of the failure. This is helpful when debugging gradchecks.
Returns:
- True if all differences satisfy allclose condition. Raises an exception
- otherwise.
+ True if all differences satisfy allclose condition
"""
if grad_outputs is None:
# If grad_outputs is not specified, create random Tensors of the same
| gradcheck and gradgradcheck are not well documented
Cannot find the docs on website
| well apparently gradgradcheck isn't exposed.
`gradcheck` is only mentionned in the extending section [here](https://pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd).
And `gradgradcheck` is only exposed via `torch.autograd.gradcheck.gradgradcheck`.
| 2018-06-05T18:13:39 |
|
pytorch/pytorch | 8,403 | pytorch__pytorch-8403 | [
"6477"
] | fcd9af8a257f83450ba82cccd227fe43bde7c879 | diff --git a/aten/src/ATen/function_wrapper.py b/aten/src/ATen/function_wrapper.py
--- a/aten/src/ATen/function_wrapper.py
+++ b/aten/src/ATen/function_wrapper.py
@@ -286,7 +286,7 @@ def __init__(self, reason):
CONSTANT_REPLACEMENTS = [
('AS_REAL', '${AS_REAL}'),
('__storage_size.get\\(\\)',
- 'THLongStorageView(static_cast<int64_t>(storage.size()), THLongStorageViewKind::LENGTH)'),
+ 'THLongStorageView(static_cast<int64_t>(source.size()), THLongStorageViewKind::LENGTH)'),
('__last_dim', 'self.ndimension()-1'),
]
| diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -5805,6 +5805,19 @@ def test_tensor_set(self):
self.assertEqual(t1.size(), size)
self.assertEqual(t1.stride(), stride)
+ # test argument names
+ t1 = torch.Tensor()
+ # 1. case when source is tensor
+ t1.set_(source=t2)
+ self.assertEqual(t1.storage()._cdata, t2.storage()._cdata)
+ # 2. case when source is storage
+ t1.set_(source=t2.storage())
+ self.assertEqual(t1.storage()._cdata, t2.storage()._cdata)
+ # 3. case when source is storage, and other args also specified
+ t1.set_(source=t2.storage(), storage_offset=0, size=size, stride=stride)
+ self.assertEqual(t1.size(), size)
+ self.assertEqual(t1.stride(), stride)
+
def test_equal(self):
# Contiguous, 1D
t1 = torch.Tensor((3, 4, 9, 10))
| [docs] Tensor.set_ (arguments)
Python documentation or the interface is lacking here:
`Tensor.set_(source=None, storage_offset=0, size=None, stride=None)`
I was only able to call it when the first argument is positional and is a `Storage`, and all the other arguments are also supplied (either positionally or as keywords), so none of the documented defaults actually apply.
pytorch version: 0.3.1
| `.set_()` works on master
```
>>> x.set_()
[torch.FloatTensor of size (0,)]
```
Hmm can you be clearer on what you expect to see? All of the following works for me on master:
```python
>>> x = torch.randn(0)
>>> x
[torch.FloatTensor of size (0,)]
>>> y = torch.randn(3)
>>> y
0.5596
-1.3084
0.0580
[torch.FloatTensor of size (3,)]
>>> x.set_(y.storage(), y.storage_offset(), y.size(), y.stride())
0.5596
-1.3084
0.0580
[torch.FloatTensor of size (3,)]
>>> x
0.5596
-1.3084
0.0580
[torch.FloatTensor of size (3,)]
>>> x.set_(y.storage(), y.storage_offset(), y.size(), stride=y.stride())
0.5596
-1.3084
0.0580
[torch.FloatTensor of size (3,)]
>>> x.set_(y.storage(), y.storage_offset(), y.size())
0.5596
-1.3084
0.0580
[torch.FloatTensor of size (3,)]
>>> x.set_(y.storage(), 1, [2])
-1.3084
0.0580
[torch.FloatTensor of size (2,)]
>>> x.set_(y.storage(), size=[2], storage_offset=1)
-1.3084
0.0580
[torch.FloatTensor of size (2,)]
```
I've checked that all the above cases work for me on 0.3.1.
What doesn't work on 0.3.1, or does not conform to the documented interface `set_(source=None, storage_offset=0, size=None, stride=None) → Tensor`:
`x.set_(y, size=[2], storage_offset=1)`
(the doc says source can be Tensor or Storage)
TypeError: set_ received an invalid combination of arguments - got (torch.FloatTensor, storage_offset=int, size=list)
`x.set_(source = y.storage(), size=[2], storage_offset=1)`
TypeError: set_ received an invalid combination of arguments - got (storage_offset=int, size=list, source=torch.FloatStorage, )
`x.set_(None, size=[2], storage_offset=0)`
TypeError: set_ received an invalid combination of arguments - got (NoneType, storage_offset=int, size=list)
`x.set_(y.storage(), size=[2]) `
(expecting the default storage_offset=0)
TypeError...
With a bit more digging I can see that the following works
`x.set_(storage=y.storage())`
`x.set_(source=y)`
`x.set_(sourceStorage=y.storage(), size=[2], storage_offset=0)`
but this does not match the documented Python interface. | 2018-06-12T22:16:20 |
pytorch/pytorch | 8,427 | pytorch__pytorch-8427 | [
"8420",
"8420"
] | 3cb45bafc8b9b023049e5f979a2bcb75e3f7009d | diff --git a/torch/nn/modules/rnn.py b/torch/nn/modules/rnn.py
--- a/torch/nn/modules/rnn.py
+++ b/torch/nn/modules/rnn.py
@@ -603,8 +603,10 @@ def reset_parameters(self):
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
- def forward(self, input, hx):
+ def forward(self, input, hx=None):
self.check_forward_input(input)
+ if hx is None:
+ hx = input.new_zeros(input.size(0), self.hidden_size, requires_grad=False)
self.check_forward_hidden(input, hx)
if self.nonlinearity == "tanh":
func = self._backend.RNNTanhCell
@@ -698,8 +700,11 @@ def reset_parameters(self):
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
- def forward(self, input, hx):
+ def forward(self, input, hx=None):
self.check_forward_input(input)
+ if hx is None:
+ hx = input.new_zeros(input.size(0), self.hidden_size, requires_grad=False)
+ hx = (hx, hx)
self.check_forward_hidden(input, hx[0], '[0]')
self.check_forward_hidden(input, hx[1], '[1]')
return self._backend.LSTMCell(
@@ -778,8 +783,10 @@ def reset_parameters(self):
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
- def forward(self, input, hx):
+ def forward(self, input, hx=None):
self.check_forward_input(input)
+ if hx is None:
+ hx = input.new_zeros(input.size(0), self.hidden_size, requires_grad=False)
self.check_forward_hidden(input, hx)
return self._backend.GRUCell(
input, hx,
| Error in forward() docs for RNNCell
I think there is a mistake in the docs for `RNNCell`. They say that the hidden argument of the `forward` function may be omitted and will then default to zeros, but doing so raises an error.
The same issue occurs for `LSTMCell` and `GRUCell`.
```
rnn = nn.RNNCell(10, 20)
input = torch.randn(3,10)
h = rnn(input)
```
leads to
`TypeError: forward() missing 1 required positional argument: 'hx'`
| Thanks for pointing that out, @valsworthen. We'll either fix the docs or add that functionality. It seems useful for the hidden state to default to 0 in my opinion so I'm leaning towards a solution involving adding the functionality, especially if it used to exist.
I agree with that second option, thanks for your answer! | 2018-06-13T15:00:57 |
|
pytorch/pytorch | 8,460 | pytorch__pytorch-8460 | [
"7186"
] | dc209ed963afa10950c9f7ab7d73934683043f92 | diff --git a/torch/nn/modules/loss.py b/torch/nn/modules/loss.py
--- a/torch/nn/modules/loss.py
+++ b/torch/nn/modules/loss.py
@@ -7,12 +7,6 @@
from .. import functional as F
-def _assert_no_grad(tensor):
- assert not tensor.requires_grad, \
- "nn criterions don't compute the gradient w.r.t. targets - please " \
- "mark these tensors as not requiring gradients"
-
-
class _Loss(Module):
def __init__(self, size_average=True, reduce=True):
super(_Loss, self).__init__()
@@ -81,7 +75,6 @@ def __init__(self, size_average=True, reduce=True):
super(L1Loss, self).__init__(size_average, reduce)
def forward(self, input, target):
- _assert_no_grad(target)
return F.l1_loss(input, target, size_average=self.size_average,
reduce=self.reduce)
@@ -189,7 +182,6 @@ def __init__(self, weight=None, size_average=True, ignore_index=-100, reduce=Tru
self.ignore_index = ignore_index
def forward(self, input, target):
- _assert_no_grad(target)
return F.nll_loss(input, target, self.weight, self.size_average,
self.ignore_index, self.reduce)
@@ -253,7 +245,6 @@ def __init__(self, log_input=True, full=False, size_average=True, eps=1e-8, redu
self.eps = eps
def forward(self, log_input, target):
- _assert_no_grad(target)
return F.poisson_nll_loss(log_input, target, self.log_input, self.full,
self.size_average, self.eps, self.reduce)
@@ -336,7 +327,6 @@ def __init__(self, size_average=True, reduce=True):
super(KLDivLoss, self).__init__(size_average, reduce)
def forward(self, input, target):
- _assert_no_grad(target)
return F.kl_div(input, target, size_average=self.size_average, reduce=self.reduce)
@@ -394,7 +384,6 @@ def __init__(self, size_average=True, reduce=True):
super(MSELoss, self).__init__(size_average, reduce)
def forward(self, input, target):
- _assert_no_grad(target)
return F.mse_loss(input, target, size_average=self.size_average, reduce=self.reduce)
@@ -454,7 +443,6 @@ def __init__(self, weight=None, size_average=True, reduce=True):
super(BCELoss, self).__init__(weight, size_average, reduce)
def forward(self, input, target):
- _assert_no_grad(target)
return F.binary_cross_entropy(input, target, weight=self.weight,
size_average=self.size_average,
reduce=self.reduce)
@@ -620,7 +608,6 @@ def __init__(self, size_average=True, reduce=True):
super(MultiLabelMarginLoss, self).__init__(size_average, reduce)
def forward(self, input, target):
- _assert_no_grad(target)
return F.multilabel_margin_loss(input, target, size_average=self.size_average,
reduce=self.reduce)
@@ -672,7 +659,6 @@ def __init__(self, size_average=True, reduce=True):
super(SmoothL1Loss, self).__init__(size_average, reduce)
def forward(self, input, target):
- _assert_no_grad(target)
return F.smooth_l1_loss(input, target, size_average=self.size_average,
reduce=self.reduce)
@@ -706,7 +692,6 @@ def __init__(self, size_average=True, reduce=True):
super(SoftMarginLoss, self).__init__(size_average, reduce)
def forward(self, input, target):
- _assert_no_grad(target)
return F.soft_margin_loss(input, target, size_average=self.size_average,
reduce=self.reduce)
@@ -789,7 +774,6 @@ def __init__(self, weight=None, size_average=True, ignore_index=-100, reduce=Tru
self.ignore_index = ignore_index
def forward(self, input, target):
- _assert_no_grad(target)
return F.cross_entropy(input, target, self.weight, self.size_average,
self.ignore_index, self.reduce)
| Inconsistency in _assert_no_grad between nn.functional and nn
In various losses, the nn.functional form does not call _assert_no_grad but the Module form does. I guess both have their advantages, so maybe we can add a flag and make them uniform, or document the fact that nn.functional can backpropagate through targets.
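For illustration, a minimal sketch of the asymmetry on 0.4.0 (behaviour inferred from the `_assert_no_grad` helper shown in the diff above; the exact error text may differ):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

inp = torch.randn(5, requires_grad=True)
tgt = torch.randn(5, requires_grad=True)

F.mse_loss(inp, tgt).backward()   # functional form: a gradient w.r.t. the target is computed
print(tgt.grad is not None)       # True

nn.MSELoss()(inp, tgt)            # module form: _assert_no_grad raises an AssertionError
```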
| A lot of those losses actually do return gradients for the targets now. I think the right fix here is to remove `_assert_no_grad `: autograd should warn if the gradients can't be computed for certain targets. | 2018-06-13T23:04:23 |
|
pytorch/pytorch | 8,462 | pytorch__pytorch-8462 | [
"7989"
] | dc209ed963afa10950c9f7ab7d73934683043f92 | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -516,6 +516,11 @@ def add_docstr_all(method, docstr):
Returns a copy of the :attr:`self` tensor. The copy has the same size and data
type as :attr:`self`.
+
+.. note::
+
+ Unlike `copy_()`, this function is recorded in the computation graph. Gradients
+ propagating to the cloned tensor will propagate to the original tensor.
""")
add_docstr_all('contiguous',
| [documentation request] 'clone' needs better documentation
[The current documentation for `Tensor.clone` is](https://pytorch.org/docs/stable/tensors.html?highlight=clone#torch.Tensor.clone):
> Returns a copy of the `self` tensor. The copy has the same size and data type as `self`.
This documentation fails to elucidate the fact that any gradient propagating through the cloned tensor will propagate to the original tensor. This is critical to the functionality of clone (and is why the method isn't called "copy"), and is just begging for newcomers to make hard-to-find mistakes.
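A small sketch of the behaviour being described (the printed values are the expected ones, not taken from the report):
```python
import torch

x = torch.ones(3, requires_grad=True)
y = x.clone()           # clone() is recorded in the autograd graph, unlike copy_()
y.sum().backward()
print(x.grad)           # tensor([1., 1., 1.]); the gradient reaches the original tensor
```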
| Thanks for the report, @rsokl. Please feel free to send a pr! | 2018-06-13T23:39:52 |
|
pytorch/pytorch | 8,543 | pytorch__pytorch-8543 | [
"8508"
] | b002aee0ff5eeea7172e4037b3d5936202cb6aef | diff --git a/torch/distributions/multivariate_normal.py b/torch/distributions/multivariate_normal.py
--- a/torch/distributions/multivariate_normal.py
+++ b/torch/distributions/multivariate_normal.py
@@ -119,6 +119,8 @@ class MultivariateNormal(Distribution):
has_rsample = True
def __init__(self, loc, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None):
+ if loc.dim() < 1:
+ loc = loc.unsqueeze(0)
event_shape = torch.Size(loc.shape[-1:])
if (covariance_matrix is not None) + (scale_tril is not None) + (precision_matrix is not None) != 1:
raise ValueError("Exactly one of covariance_matrix or precision_matrix or scale_tril may be specified.")
| Crash with SIGFPE due to unhandled cases in distributions.MultivariateNormal
## Issue description
With the scalar support in Tensor from PyTorch 0.4, `torch.distributions.MultivariateNormal` crashes if `loc` (the mean of the distribution) is a scalar (0-dimensional Tensor), even though such an input is currently accepted as valid. It neither raises a `ValueError` in `torch.distributions.MultivariateNormal.__init__` nor is caught by the `real_vector` constraint on the `loc` argument.
A minimal test script to reproduce the uninformative SIGFPE crash is given below.
## Code example
```python
#!/usr/bin/env python
"""
Script to test/reproduce crashes with SIGFPE due to unhandled cases(scalar loc) in distributions.MultivariateNormal
"""
import torch
def test_univariate_scalar_input(loc=0.5, variance=0.1):
mu = torch.tensor(loc)
sigma = torch.tensor(variance)
distribution = torch.distributions.MultivariateNormal(mu, torch.eye(1) * sigma)
sample = distribution.sample()
print(sample)
def test_univariate_scalar_input_with_args_validation(loc=0.5, variance=0.1):
mu = torch.tensor(loc)
sigma = torch.tensor(variance)
distribution = torch.distributions.MultivariateNormal(mu, torch.eye(1) * sigma, validate_args=True)
sample = distribution.sample()
print(sample)
def test_univariate_input(loc=([0.5]), variance=0.1):
mu = torch.tensor(loc)
sigma = torch.tensor(variance)
distribution = torch.distributions.MultivariateNormal(mu, torch.eye(1) * sigma)
sample = distribution.sample()
print(sample)
def test_univariate_input_with_args_validation(loc=([0.5]), variance=0.1):
mu = torch.tensor(loc)
sigma = torch.tensor(variance)
distribution = torch.distributions.MultivariateNormal(mu, torch.eye(1) * sigma, validate_args=True)
sample = distribution.sample()
print(sample)
if __name__ == "__main__":
test_univariate_scalar_input(loc=0.5, variance=0.1) # Crashes with Floating point exception (SIGFPE)
#test_univariate_scalar_input_with_args_validation(loc=0.5, variance=0.1) #Crashes with Floating point exception (SIGFPE)
#test_univariate_input(loc=([0.5]), variance=0.1) # Runs without errors. Haven't verified if samples are from the correct normal distribution
#test_univariate_input_with_args_validation(loc=([0.5]), variance=0.1) # Runs without errors. Haven't verified if samples are from the correct normal distribution
```
I will be happy to submit a PR if you think this needs a fix.
## System Info
- PyTorch or Caffe2: PyTorch
- How you installed PyTorch (conda, pip, source): conda
- Build command you used (if compiling from source): NA
- OS: Ubuntu 16.04
- PyTorch version: 0.4.0
- Python version: 3.5.5
- CUDA/cuDNN version: NA
- GPU models and configuration: NA
- GCC version (if compiling from source): NA
- CMake version: NA
- Versions of any other relevant libraries: NA
| The error occurs due to the `_batch_mv` in the `rsample` method. When `loc` is a scalar, the `eps` is `tensor([])`, which seems to be causing an issue.
https://github.com/pytorch/pytorch/blob/302408e6c225bdd0fe9c6af9108c95d10dfb6ce4/torch/distributions/multivariate_normal.py#L171-L174
cc: @fritzo @apaszke
Below is the gdb log with the backtrace:
```bash
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Program received signal SIGFPE, Arithmetic exception.
0x00007fffd1072cee in at::native::reshape(at::Tensor const&, at::ArrayRef<long>) ()
from /home/praveen/anaconda3/envs/DRL/lib/python3.5/site-packages/torch/lib/libATen.so
(gdb) bt
#0 0x00007fffd1072cee in at::native::reshape(at::Tensor const&, at::ArrayRef<long>) ()
from /home/praveen/anaconda3/envs/DRL/lib/python3.5/site-packages/torch/lib/libATen.so
#1 0x00007fffd12f5996 in at::Type::reshape(at::Tensor const&, at::ArrayRef<long>) const ()
from /home/praveen/anaconda3/envs/DRL/lib/python3.5/site-packages/torch/lib/libATen.so
#2 0x00007fffe791eef6 in torch::autograd::VariableType::reshape(at::Tensor const&, at::ArrayRef<long>) const ()
from /home/praveen/anaconda3/envs/DRL/lib/python3.5/site-packages/torch/_C.cpython-35m-x86_64-linux-gnu.so
#3 0x00007fffe7b68b5b in torch::autograd::THPVariable_reshape ()
from /home/praveen/anaconda3/envs/DRL/lib/python3.5/site-packages/torch/_C.cpython-35m-x86_64-linux-gnu.so
#4 0x00005555556a0718 in PyCFunction_Call ()
#5 0x00005555556f648c in PyEval_EvalFrameEx ()
#6 0x00005555556f6b40 in PyEval_EvalFrameEx ()
#7 0x00005555556fb2d0 in PyEval_EvalFrameEx ()
#8 0x00005555556fb2d0 in PyEval_EvalFrameEx ()
#9 0x00005555556fb2d0 in PyEval_EvalFrameEx ()
#10 0x0000555555700c3d in PyEval_EvalCodeEx ()
#11 0x0000555555701b6c in PyEval_EvalCode ()
#12 0x000055555575ed54 in run_mod ()
#13 0x00005555557603c1 in PyRun_FileExFlags ()
#14 0x00005555557605de in PyRun_SimpleFileExFlags ()
#15 0x0000555555760c8d in Py_Main ()
#16 0x000055555562c031 in main ()
```
I believe a scalar `loc` should not be allowed for `MultivariateNormal`; we should instead add a check
```py
def __init__(...):
if loc.dim() < 1:
raise ValueError
```
Alternatively, we could broadcast scalar `loc` up to a 1-dimensional tensor in `__init__()`.
Though one shouldn't try to use `MultivariateNormal` for a univariate, scalar `loc`, I would +1 the alternative way, as it does not add any overhead while making it user-friendly. I think this is better than raising an exception, which would most likely just push users to add a fake dimension to the scalar and retry, something that already works with `MultivariateNormal` on `master` anyway.
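For illustration, the fake-dimension workaround alluded to above, sketched on the user side (shapes only; this is not part of the proposed fix):
```python
import torch

mu = torch.tensor(0.5).unsqueeze(0)   # scalar loc lifted by hand to shape (1,)
d = torch.distributions.MultivariateNormal(mu, torch.eye(1) * 0.1)
print(d.sample().shape)               # torch.Size([1])
```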
What do you think? | 2018-06-15T01:39:25 |
|
pytorch/pytorch | 8,576 | pytorch__pytorch-8576 | [
"7435"
] | b10c94b5072f288ca915adb24fe1545ca64a773d | diff --git a/torch/nn/_functions/vision.py b/torch/nn/_functions/vision.py
--- a/torch/nn/_functions/vision.py
+++ b/torch/nn/_functions/vision.py
@@ -10,7 +10,10 @@
def grid_sampler(input, grid, padding_mode):
- if cudnn.is_acceptable(input.data) and padding_mode == 'zeros' and input.dim() == 4:
+ if (cudnn.is_acceptable(input.data) and
+ padding_mode == 'zeros' and
+ input.dim() == 4 and
+ input.size(1) <= 1024): # as of cudnn 7102, will not work for larger than 1024
return torch.cudnn_grid_sampler(input, grid)
else:
return GridSampler.apply(input, grid, padding_mode)
| CUDNN error when grid_sample() receives an input with more than 1024 channels
## Code example
```
import torch
x = torch.zeros(1, 1025, 7, 7).cuda() # 1024 works well
x.requires_grad = True
grid = torch.zeros(1, 3, 3, 2).cuda()
grid.requires_grad = True
# from torch.nn._functions.vision import GridSampler
# outputs = GridSampler.apply(x, grid, 'zeros') # CUDA version works well
outputs = torch.nn.functional.grid_sample(x, grid) # CUDNN version doesn't work
loss = torch.mean(outputs)
loss.backward()
```
## Error message
```
File "main.py", line 12,
loss.backward()
File "~/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "~/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 89, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
```
## System Info
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
CMake version: version 3.9.4
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: TITAN X (Pascal)
GPU 1: TITAN X (Pascal)
Nvidia driver version: 384.98
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.6.0.21
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.0.5
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
/usr/local/MATLAB/R2016b/bin/glnxa64/libcudnn.so.4.0.7
/usr/local/lib/python2.7/dist-packages/torch/lib/libcudnn.so.6
/usr/local/lib/python3.5/dist-packages/torch/lib/libcudnn.so.6
Versions of relevant libraries:
[pip3] numpy (1.14.1)
[pip3] numpydoc (0.6.0)
[pip3] torch (0.4.0)
[pip3] torchvision (0.2.0)
[conda] cuda90 1.0 h6433d27_0 pytorch
[conda] magma-cuda90 2.3.0 1 soumith
[conda] pytorch 0.4.0 py36_cuda9.0.176_cudnn7.1.2_1 [cuda90] pytorch
[conda] torchvision 0.2.1 py36_1 pytorch
| torch.backends.cudnn.version() returns 7102
Possibly related: #7258
I'm not sure why, but I can't reproduce this on master. I can reproduce on 0.4 though.
Never mind, this error happens when CUDNN is updated regardless of pytorch version. | 2018-06-15T23:38:19 |
|
pytorch/pytorch | 8,578 | pytorch__pytorch-8578 | [
"7743"
] | 0a5fe55c9f26ef4cadf4a9350bcdeaea82d3e69f | diff --git a/tools/autograd/gen_variable_type.py b/tools/autograd/gen_variable_type.py
--- a/tools/autograd/gen_variable_type.py
+++ b/tools/autograd/gen_variable_type.py
@@ -364,14 +364,14 @@ def reference_args(args):
def get_trace_outputs(declaration):
if declaration['return_type'] == 'std::vector<Tensor>':
- return 'flatten({})'.format(declaration['returns'][0]['name'])
+ return 'flatten_tensor_args({})'.format(declaration['returns'][0]['name'])
elif name.endswith('_out'):
output_args = [arg['name'] for arg in arguments
if arg.get('output', False)]
return '{' + ', '.join(output_args) + '}'
trace_outs = [r['name'] for r in declaration['returns']]
if any(ret['dynamic_type'] == 'TensorList' for ret in declaration['returns']):
- return CodeTemplate("flatten( ${outs} )").substitute(outs=trace_outs)
+ return CodeTemplate("flatten_tensor_args( ${outs} )").substitute(outs=trace_outs)
else:
return CodeTemplate("{ ${outs} }").substitute(outs=trace_outs)
@@ -408,7 +408,7 @@ def emit_record_trace(env):
local['tensor_args'] = [arg['name'] for arg in tensor_args]
if any(arg['simple_type'] == 'TensorList' for arg in tensor_args):
# Allocate a temporary vector with flatten and pass it in
- local['trace_inputs'] = CodeTemplate("flatten( $tensor_args )").substitute(local)
+ local['trace_inputs'] = CodeTemplate("flatten_tensor_args( $tensor_args )").substitute(local)
else:
local['trace_inputs'] = CodeTemplate("{ ${tensor_args} }").substitute(local)
@@ -496,7 +496,7 @@ def emit_history():
fn = 'rebase' if modifies_arguments and not is_view else 'set'
output_names = [r['name'] for r in differentiable_outputs]
# TODO: flatten allocates a std::vector, which could be expensive
- outs = CodeTemplate("flatten( ${outs} )").substitute(outs=output_names)
+ outs = CodeTemplate("flatten_tensor_args( ${outs} )").substitute(outs=output_names)
return SET_HISTORY.substitute(fn=fn, differentiable_outputs=outs)
def emit_save_outputs():
diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -810,6 +810,13 @@ def add_docstr_all(method, docstr):
In-place version of :meth:`~Tensor.frac`
""")
+add_docstr_all('flatten',
+ r"""
+flatten(input, start_dim=0, end_dim=-1) -> Tensor
+
+see :func:`torch.flatten`
+""")
+
add_docstr_all('gather',
r"""
gather(dim, index) -> Tensor
diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -1591,6 +1591,30 @@ def parse_kwargs(desc):
array([-1, 2, 3])
""")
+add_docstr(torch.flatten,
+ r"""
+flatten(input, start_dim=0, end_dim=-1) -> Tensor
+
+Flattens a contiguous range of dims in a tensor.
+
+Args:
+ input (Tensor): the input tensor
+ start_dim (int): the first dim to flatten
+ end_dim (int): the last dim to flatten
+
+Example::
+
+ >>> t = torch.tensor([[[1, 2],
+ [3, 4]],
+ [[5, 6],
+ [7, 8]]])
+ >>> torch.flatten(t)
+ tensor([1, 2, 3, 4, 5, 6, 7, 8])
+ >>> torch.flatten(t, start_dim=1)
+ tensor([[1, 2, 3, 4],
+ [5, 6, 7, 8]])
+""")
+
add_docstr(torch.gather,
r"""
gather(input, dim, index, out=None) -> Tensor
| diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -5429,6 +5429,43 @@ def _fill_indices(self, idx, dim, dim_size, elems_per_row, m, n, o):
ii[dim] = slice(0, idx.size(dim) + 1)
idx[tuple(ii)] = torch.randperm(dim_size)[0:elems_per_row]
+ def test_flatten(self):
+ src = torch.randn(5, 5, 5, 5)
+ flat = src.flatten(0, -1)
+ self.assertEqual(flat.shape, torch.Size([625]))
+ self.assertEqual(src.view(-1), flat.view(-1))
+
+ flat = src.flatten(0, 2)
+ self.assertEqual(flat.shape, torch.Size([125, 5]))
+ self.assertEqual(src.view(-1), flat.view(-1))
+
+ flat = src.flatten(0, 1)
+ self.assertEqual(flat.shape, torch.Size([25, 5, 5]))
+ self.assertEqual(src.view(-1), flat.view(-1))
+
+ flat = src.flatten(1, 2)
+ self.assertEqual(flat.shape, torch.Size([5, 25, 5]))
+ self.assertEqual(src.view(-1), flat.view(-1))
+
+ flat = src.flatten(2, 3)
+ self.assertEqual(flat.shape, torch.Size([5, 5, 25]))
+ self.assertEqual(src.view(-1), flat.view(-1))
+
+ flat = src.flatten(-2, -1)
+ self.assertEqual(flat.shape, torch.Size([5, 5, 25]))
+ self.assertEqual(src.view(-1), flat.view(-1))
+
+ flat = src.flatten(2, 2)
+ self.assertEqual(flat, src)
+
+ # out of bounds index
+ with self.assertRaisesRegex(RuntimeError, 'dimension out of range'):
+ src.flatten(5, 10)
+
+ # invalid start and end
+ with self.assertRaisesRegex(RuntimeError, 'start_dim cannot come after end_dim'):
+ src.flatten(2, 0)
+
@staticmethod
def _test_gather(self, cast, test_bounds=True):
m, n, o = random.randint(10, 20), random.randint(10, 20), random.randint(10, 20)
| [pytorch] [feature request] Flatten convenience method
Minor suggestion (trivial to implement in user code, but having it in the library would improve code brevity). The purpose is to flatten specific trailing dimensions by passing negative dimension index.
Can be useful for aggregating across multiple trailing dimensions, before mean/max etc get multiple dimensions support.
It exists in numpy/tensorflow/onnx, but the semantics there don't allow flattening only specific dimensions.
```python
def flatten(x, dim):
return x.view(x.size()[:dim] + (-1, ))
flatten(torch.rand(2,3,4,5,6), dim = -2).shape
# (2, 3, 4, 30)
```
| Seems nice, could it also flatten multiple dimensions in a contiguous range?
The snippet was just to flatten a few contiguous starting or trailing dimensions. If this is implemented in core, maybe it's better to rename `dim` to `multi_dims`, to align with the recent multiple-dimensions support in `sum`?
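For concreteness, a user-level sketch of such a contiguous-range flatten (mirroring the `start_dim`/`end_dim` form that the patch above adds as `torch.flatten`; the helper name here is made up):
```python
import torch

def flatten_range(x, start_dim=0, end_dim=-1):
    start_dim, end_dim = start_dim % x.dim(), end_dim % x.dim()  # allow negative indices
    sizes = list(x.shape)
    flat = 1
    for s in sizes[start_dim:end_dim + 1]:
        flat *= s
    return x.reshape(sizes[:start_dim] + [flat] + sizes[end_dim + 1:])

print(flatten_range(torch.rand(2, 3, 4, 5, 6), 1, 3).shape)  # torch.Size([2, 60, 6])
```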
Then maybe a naive support for `multi_dims` can be added to other ops like SoftMax by using `flatten` under the hood, at least if `multi_dims` contains contiguous dimensions (in actual memory layout). | 2018-06-16T00:32:48 |
pytorch/pytorch | 8,596 | pytorch__pytorch-8596 | [
"7884"
] | 372d1d67356f054db64bdfb4787871ecdbbcbe0b | diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -12,7 +12,7 @@
from ._functions.padding import ConstantPadNd
from ._functions import vision
from ._functions.thnn.fold import Col2Im, Im2Col
-from .modules.utils import _single, _pair, _triple
+from .modules.utils import _single, _pair, _triple, _list_with_default
from . import grad
@@ -490,6 +490,7 @@ def adaptive_max_pool2d(input, output_size, return_indices=False):
double-integer tuple)
return_indices: whether to return pooling indices. Default: ``False``
"""
+ output_size = _list_with_default(output_size, input.size())
ret = torch._C._nn.adaptive_max_pool2d(input, output_size)
return ret if return_indices else ret[0]
@@ -505,6 +506,7 @@ def adaptive_max_pool3d(input, output_size, return_indices=False):
triple-integer tuple)
return_indices: whether to return pooling indices. Default: ``False``
"""
+ output_size = _list_with_default(output_size, input.size())
ret = torch._C._nn.adaptive_max_pool3d(input, output_size)
return ret if return_indices else ret[0]
@@ -521,35 +523,38 @@ def adaptive_max_pool3d(input, output_size, return_indices=False):
output_size: the target output size (single integer)
""")
-adaptive_avg_pool2d = _add_docstr(torch._C._nn.adaptive_avg_pool2d, r"""
-adaptive_avg_pool2d(input, output_size) -> Tensor
-Applies a 2D adaptive average pooling over an input signal composed of
-several input planes.
+def adaptive_avg_pool2d(input, output_size):
+ r"""
+ Applies a 2D adaptive average pooling over an input signal composed of
+ several input planes.
-See :class:`~torch.nn.AdaptiveAvgPool2d` for details and output shape.
+ See :class:`~torch.nn.AdaptiveAvgPool2d` for details and output shape.
-Args:
- output_size: the target output size (single integer or
- double-integer tuple)
-""")
+ Args:
+ output_size: the target output size (single integer or
+ double-integer tuple)
+ """
+ output_size = _list_with_default(output_size, input.size())
+ return torch._C._nn.adaptive_avg_pool2d(input, output_size)
-adaptive_avg_pool3d = _add_docstr(torch._C._nn.adaptive_avg_pool3d, r"""
-adaptive_avg_pool3d(input, output_size) -> Tensor
-Applies a 3D adaptive average pooling over an input signal composed of
-several input planes.
+def adaptive_avg_pool3d(input, output_size):
+ r"""
+ Applies a 3D adaptive average pooling over an input signal composed of
+ several input planes.
-See :class:`~torch.nn.AdaptiveAvgPool3d` for details and output shape.
+ See :class:`~torch.nn.AdaptiveAvgPool3d` for details and output shape.
-Args:
- output_size: the target output size (single integer or
- triple-integer tuple)
-""")
+ Args:
+ output_size: the target output size (single integer or
+ triple-integer tuple)
+ """
+ output_size = _list_with_default(output_size, input.size())
+ return torch._C._nn.adaptive_avg_pool3d(input, output_size)
# Activation functions
-
def dropout(input, p=0.5, training=False, inplace=False):
return _functions.dropout.Dropout.apply(input, p, training, inplace)
diff --git a/torch/nn/modules/utils.py b/torch/nn/modules/utils.py
--- a/torch/nn/modules/utils.py
+++ b/torch/nn/modules/utils.py
@@ -13,3 +13,11 @@ def parse(x):
_pair = _ntuple(2)
_triple = _ntuple(3)
_quadruple = _ntuple(4)
+
+
+def _list_with_default(out_size, defaults):
+ if isinstance(out_size, int):
+ return out_size
+ if len(defaults) <= len(out_size):
+ raise ValueError('Input dimension should be at least {}'.format(len(out_size) + 1))
+ return [v if v is not None else d for v, d in zip(out_size, defaults[-len(out_size):])]
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -2194,6 +2194,29 @@ def expected_output(dim):
indices.add_(1)
self.assertRaises(RuntimeError, lambda: output.backward(grad_output))
+ def test_adaptive_pooling_input_size(self):
+ for numel in (2, 3):
+ for pool_type in ('Max', 'Avg'):
+ cls_name = 'Adaptive{}Pool{}d'.format(pool_type, numel)
+ module_cls = getattr(nn, cls_name)
+ output_size = (2,) * numel
+ module = module_cls(output_size)
+
+ input = torch.randn(output_size)
+ self.assertRaises(ValueError, lambda: module(input))
+
+ def test_adaptive_pooling_size_none(self):
+ for numel in (2, 3):
+ for pool_type in ('Max', 'Avg'):
+ cls_name = 'Adaptive{}Pool{}d'.format(pool_type, numel)
+ module_cls = getattr(nn, cls_name)
+ output_size = (2,) * (numel - 1) + (None,)
+ module = module_cls(output_size)
+
+ input = torch.randn((4,) * (numel + 1))
+ output = module(input)
+ self.assertEqual(output.size(), (4,) + (2,) * (numel - 1) + (4,))
+
def test_Conv2d_naive_groups(self):
self._test_Conv2d_naive_groups()
@@ -7206,6 +7229,12 @@ def multimarginloss_weights_no_reduce_test():
input_fn=lambda: _rand_tensor_non_equal(1, 3, 5, 6),
desc='tuple',
),
+ dict(
+ module_name='AdaptiveMaxPool2d',
+ constructor_args=((3, None),),
+ input_fn=lambda: _rand_tensor_non_equal(1, 3, 5, 6),
+ desc='tuple_none',
+ ),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=(3,),
@@ -7218,6 +7247,12 @@ def multimarginloss_weights_no_reduce_test():
input_fn=lambda: _rand_tensor_non_equal(2, 3, 5, 6, 7),
desc='tuple',
),
+ dict(
+ module_name='AdaptiveMaxPool3d',
+ constructor_args=((3, None, 5),),
+ input_fn=lambda: _rand_tensor_non_equal(2, 3, 5, 6, 7),
+ desc='tuple_none',
+ ),
dict(
module_name='AdaptiveMaxPool3d',
constructor_args=(3,),
@@ -7247,6 +7282,12 @@ def multimarginloss_weights_no_reduce_test():
input_fn=lambda: torch.rand(1, 3, 5, 6),
desc='tuple',
),
+ dict(
+ module_name='AdaptiveAvgPool2d',
+ constructor_args=((3, None),),
+ input_fn=lambda: torch.rand(1, 3, 5, 6),
+ desc='tuple_none',
+ ),
dict(
module_name='AdaptiveAvgPool3d',
constructor_args=(3,),
@@ -7259,6 +7300,12 @@ def multimarginloss_weights_no_reduce_test():
input_fn=lambda: torch.rand(2, 3, 5, 3, 7),
desc='tuple',
),
+ dict(
+ module_name='AdaptiveAvgPool3d',
+ constructor_args=((None, 4, 5),),
+ input_fn=lambda: torch.rand(2, 3, 5, 3, 7),
+ desc='tuple_none',
+ ),
dict(
module_name='SELU',
input_size=(3, 2, 5),
| [pytorch] A bug for torch.nn.AdaptiveMaxPool2d
## Issue description
I ran the example code from the PyTorch docs, but there was a bug.
```
>>> # target output size of 10x7
>>> m = nn.AdaptiveMaxPool2d((None, 7))
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
```
When I used "nn.AdaptiveMaxPool2d((10, 7))", it was OK.
## Code example
```
In [17]: input = torch.randn(1, 64, 10, 9)
In [18]: m = nn.AdaptiveMaxPool2d((None, 7))
In [19]: output = m(input)
```
## Error message
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-19-0b450ddad2f3> in <module>()
----> 1 output = m(input)
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/pooling.pyc in forward(self, input)
848
849 def forward(self, input):
--> 850 return F.adaptive_max_pool2d(input, self.output_size, self.return_indices)
851
852
/usr/local/lib/python2.7/dist-packages/torch/nn/functional.pyc in adaptive_max_pool2d(input, output_size, return_indices)
489 return_indices: whether to return pooling indices. Default: ``False``
490 """
--> 491 ret = torch._C._nn.adaptive_max_pool2d(input, output_size)
492 return ret if return_indices else ret[0]
493
TypeError: an integer is required
```
## Code example
```
In [20]: input = torch.randn(1, 64, 10, 9)
In [21]: m = nn.AdaptiveMaxPool2d((10, 7))
In [22]: output = m(input)
In [23]: output.size()
```
## Output
`Out[23]: torch.Size([1, 64, 10, 7])`
| I can repro this on master branch. Will work on a fix. | 2018-06-18T04:49:12 |
pytorch/pytorch | 8,619 | pytorch__pytorch-8619 | [
"8282"
] | 0a5fe55c9f26ef4cadf4a9350bcdeaea82d3e69f | diff --git a/torch/nn/modules/module.py b/torch/nn/modules/module.py
--- a/torch/nn/modules/module.py
+++ b/torch/nn/modules/module.py
@@ -629,6 +629,13 @@ def _load_from_state_dict(self, state_dict, prefix, strict, missing_keys, unexpe
key = prefix + name
if key in state_dict:
input_param = state_dict[key]
+
+ if input_param.shape != param.shape:
+ # local shape should match the one in checkpoint
+ error_msgs.append('Size mismatch: copying a param of {} from checkpoint, '
+ 'where the shape is {} in current model.'
+ .format(param.shape, input_param.shape))
+
if isinstance(input_param, Parameter):
# backwards compatibility for serialized parameters
input_param = input_param.data
| Loaded network with load_state_dict has different shape but works anyway
After it was verified on [discuss.pytorch](https://discuss.pytorch.org/t/loaded-network-has-different-shape-but-works-anyway/19398) that this is indeed unwanted behaviour, I am forwarding this to you:
## Issue description
I trained a model which, among other layers, had the following one:
`final_layer.append(nn.Conv2d(64, 1, kernel_size=1))`
and then saved it to a file with state_dict and torch.save.
Then, when I wanted to load that model using load_state_dict, by accident the same layer was set up as follows:
`final_layer.append(nn.Conv2d(64, 32, kernel_size=1))`
Nevertheless, the model was loaded without error. It seemed that the weights were just duplicated 32 times, but I have not verified this. So the question is how this is consistent with the API documentation. I have not found a statement saying that load_state_dict would somehow fix shape inconsistencies automatically. It seems you have a documentation vs. reality mismatch here. (Now you need to decide which one to fix.)
## Code example
```
path = './test_model.pth'
model = nn.Conv2d(64, 1, 3, 1, 1)
torch.save(model.state_dict(), path)
model = nn.Conv2d(64, 32, 3, 1, 1)
model.load_state_dict(torch.load(path))
for w in model.weight[:, :, 0, 0]:
print(w)
```
Provided by discuss.pytorch user ptrblck
## System Info
pytorch 0.4 release
| we should check for exact shape match at https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L636 | 2018-06-18T21:00:33 |
|
pytorch/pytorch | 8,633 | pytorch__pytorch-8633 | [
"8077"
] | 2289815fc33562f41e7e1cd3b0d634074f44b94c | diff --git a/torch/nn/parameter.py b/torch/nn/parameter.py
--- a/torch/nn/parameter.py
+++ b/torch/nn/parameter.py
@@ -25,3 +25,6 @@ def __new__(cls, data=None, requires_grad=True):
def __repr__(self):
return 'Parameter containing:\n' + super(Parameter, self).__repr__()
+
+ def __reduce_ex__(self, proto):
+ return Parameter, (super(Parameter, self), self.requires_grad)
| diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -6153,6 +6153,30 @@ def test_pickle(self):
b = pickle.loads(serialized)
self.assertEqual(a, b)
+ def test_pickle_parameter(self):
+ if sys.version_info[0] == 2:
+ import cPickle as pickle
+ else:
+ import pickle
+ a = torch.nn.Parameter(torch.randn(5, 5))
+ serialized = pickle.dumps(a)
+ b = pickle.loads(serialized)
+ self.assertTrue(isinstance(b, torch.nn.Parameter))
+ self.assertEqual(a.requires_grad, b.requires_grad)
+ self.assertEqual(a, b)
+
+ def test_pickle_parameter_no_requires_grad(self):
+ if sys.version_info[0] == 2:
+ import cPickle as pickle
+ else:
+ import pickle
+ a = torch.nn.Parameter(torch.randn(5, 5), requires_grad=False)
+ serialized = pickle.dumps(a)
+ b = pickle.loads(serialized)
+ self.assertTrue(isinstance(b, torch.nn.Parameter))
+ self.assertEqual(a.requires_grad, b.requires_grad)
+ self.assertEqual(a, b)
+
def test_norm_fastpaths(self):
x = torch.randn(3, 5)
| [Bug] Parameters turn into Tensors after load
## Issue description
After a model is saved and then loaded with torch.save / torch.load, an attribute that was a `Parameter` becomes a `Tensor`.
## Code example
```
import torch
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.B = torch.nn.Parameter(torch.Tensor())
class A(torch.nn.Module):
def __init__(self):
super().__init__()
self.m = M()
a = A()
print(type(a.m.B))
torch.save(a, 'test')
a = torch.load('test')
print(type(a.m.B))
```
The output is
```
<class 'torch.nn.parameter.Parameter'>
<class 'torch.Tensor'>
```
## System Info
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 8.0.61
OS: Debian GNU/Linux 9.3 (stretch)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
CMake version: version 3.7.2
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: GeForce GTX 1080
GPU 1: GeForce GTX 1080
Nvidia driver version: 384.98
cuDNN version: Probably one of the following:
/usr/local/cuda-7.5/lib64/libcudnn.so.7.0.64
/usr/local/cuda-7.5/lib64/libcudnn_static.a
/usr/local/cuda-9.0/lib64/libcudnn.so.7.0.3
/usr/local/cuda-9.0/lib64/libcudnn_static.a
/usr/local/lib/python3.5/dist-packages/torch/lib/libcudnn-900fef33.so.7.0.5
/usr/local/matlab90/bin/glnxa64/libcudnn.so.7.0.64
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
- How you installed PyTorch: pip
| Using **a.m.B.type()** instead of **type(a.m.B)** should give you the same output before saving and after loading, i.e.: torch.FloatTensor.
I think the former is the recommended way to check a Tensor's type.
this is a legit bug, we should fix this. | 2018-06-19T00:32:03 |
pytorch/pytorch | 8,636 | pytorch__pytorch-8636 | [
"7933"
] | 271406f276ef878b43ae5cdc1bf286c8d2f959d2 | diff --git a/torch/_torch_docs.py b/torch/_torch_docs.py
--- a/torch/_torch_docs.py
+++ b/torch/_torch_docs.py
@@ -1332,7 +1332,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the tensor to compare
other (Tensor or float): the tensor or value to compare
- out (Tensor, optional): the output tensor. Must be a `ByteTensor` or the same type as `input`.
+ out (Tensor, optional): the output tensor. Must be a `ByteTensor`
Returns:
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
@@ -1599,7 +1599,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the tensor to compare
other (Tensor or float): the tensor or value to compare
- out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`
+ out (Tensor, optional): the output tensor that must be a `ByteTensor`
Returns:
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
@@ -1825,7 +1825,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the tensor to compare
other (Tensor or float): the tensor or value to compare
- out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`
+ out (Tensor, optional): the output tensor that must be a `ByteTensor`
Returns:
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
@@ -1986,7 +1986,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the tensor to compare
other (Tensor or float): the tensor or value to compare
- out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`
+ out (Tensor, optional): the output tensor that must be a `ByteTensor`
Returns:
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true
@@ -2230,7 +2230,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the tensor to compare
other (Tensor or float): the tensor or value to compare
- out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as :attr:`input`
+ out (Tensor, optional): the output tensor that must be a `ByteTensor`
Returns:
Tensor: A `torch.ByteTensor` containing a 1 at each location where comparison is true
@@ -2794,7 +2794,7 @@ def parse_kwargs(desc):
Args:
input (Tensor): the tensor to compare
other (Tensor or float): the tensor or value to compare
- out (Tensor, optional): the output tensor that must be a `ByteTensor` or the same type as `input`
+ out (Tensor, optional): the output tensor that must be a `ByteTensor`
Returns:
Tensor: A ``torch.ByteTensor`` containing a 1 at each location where comparison is true.
| Comparison operators don't accept out typed as one of the inputs
[pytorch]
The docs say that for the tensor comparison operators (gt, lt, etc.) it should be possible to pass an `out` argument typed the same as the input (https://pytorch.org/docs/master/torch.html?highlight=torch%20gt#torch.gt), yet when I try to do it, I hit an error
```
import torch
a=torch.randn(5,5)
b=torch.randn(5,5)
mask = torch.ByteTensor(a.size())
torch.gt(a,b,out=mask)#works
mask = torch.empty_like(a)
torch.gt(a,b,out=mask) #does not work
```
```
RuntimeError: Expected object of type torch.ByteTensor but found type torch.FloatTensor for argument #0 'result'
```
Should the docs be fixed, or is it a bug?
| If it's really useful we can add it back; but we'll fix the docs for now | 2018-06-19T03:15:25 |
|
pytorch/pytorch | 8,663 | pytorch__pytorch-8663 | [
"8659"
] | 5ca4f5b43b63882794951aec8fcb81a7595414d0 | diff --git a/torch/autograd/gradcheck.py b/torch/autograd/gradcheck.py
--- a/torch/autograd/gradcheck.py
+++ b/torch/autograd/gradcheck.py
@@ -3,6 +3,7 @@
import torch.testing
import sys
from itertools import product
+import warnings
def zero_gradients(x):
@@ -163,6 +164,12 @@ def gradcheck(func, inputs, eps=1e-6, atol=1e-5, rtol=1e-3, raise_exception=True
# Make sure that gradients are saved for all inputs
for inp in tupled_inputs:
if isinstance(inp, torch.Tensor):
+ if inp.requires_grad and inp.dtype != torch.float64:
+ warnings.warn(
+ 'At least one of the inputs that requires gradient \
+ is not of double precision floating point. '
+ 'This check will likely fail if all the inputs are not of \
+ double precision floating point. ')
inp.retain_grad()
output = _differentiable_outputs(func(*inputs))
| warn/error when using gradcheck with < float64 precision
This should prevent erroneous usages like https://github.com/pytorch/pytorch/issues/8649
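A sketch of the kind of misuse this is meant to catch (whether the single-precision check actually fails depends on the tolerances, hence a warning rather than a hard error):
```python
import torch
from torch.autograd import gradcheck

x32 = torch.randn(4, requires_grad=True)   # float32: finite differences are too noisy for the defaults
# gradcheck(torch.sigmoid, (x32,))         # may fail; with this change a warning is emitted

x64 = x32.detach().to(torch.float64).requires_grad_()
print(gradcheck(torch.sigmoid, (x64,)))    # True
```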
| 2018-06-19T19:50:38 |
||
pytorch/pytorch | 8,699 | pytorch__pytorch-8699 | [
"8692",
"8692"
] | b492d103ee7611525c89e0c115a13ebe31fa0be4 | diff --git a/torch/nn/init.py b/torch/nn/init.py
--- a/torch/nn/init.py
+++ b/torch/nn/init.py
@@ -140,7 +140,7 @@ def eye_(tensor):
raise ValueError("Only tensors with 2 dimensions are supported")
with torch.no_grad():
- torch.eye(*tensor.shape, out=tensor)
+ torch.eye(*tensor.shape, out=tensor, requires_grad=tensor.requires_grad)
return tensor
| nn.init.eye_ sets requires_grad to False on tensors
repro:
```
import torch.nn as nn
L = nn.Linear(5,5)
nn.init.eye_(L.weight)
print(L.weight.requires_grad) # False
```
Initially reported from https://discuss.pytorch.org/t/since-0-4-using-nn-init-eye-disables-gradient/19999/2
| 2018-06-20T17:52:58 |
||
pytorch/pytorch | 8,721 | pytorch__pytorch-8721 | [
"8626"
] | 731273b8d61dfa2aa8b2909f27c8810ede103952 | diff --git a/torch/autograd/gradcheck.py b/torch/autograd/gradcheck.py
--- a/torch/autograd/gradcheck.py
+++ b/torch/autograd/gradcheck.py
@@ -127,7 +127,8 @@ def _differentiable_outputs(x):
def gradcheck(func, inputs, eps=1e-6, atol=1e-5, rtol=1e-3, raise_exception=True):
r"""Check gradients computed via small finite differences against analytical
- gradients
+ gradients w.r.t. tensors in :attr:`inputs` that are of floating point type
+ and with ``requires_grad=True``.
The check between numerical and analytical gradients has the same behaviour as
`numpy.allclose <https://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html>`_,
@@ -142,8 +143,15 @@ def gradcheck(func, inputs, eps=1e-6, atol=1e-5, rtol=1e-3, raise_exception=True
.. note::
The default values are designed for :attr:`input` of double precision.
- This check will likely fail if :attr:`input` is of single precision,
- i.e., ``FloatTensor``.
+ This check will likely fail if :attr:`input` is of less precision, e.g.,
+ ``FloatTensor``.
+
+ .. warning::
+ If any checked tensor in :attr:`input` has overlapping memory, i.e.,
+ different indices pointing to the same memory address (e.g., from
+ :func:`torch.expand`), this check will likely fail because the numerical
+ gradients computed by point perturbation at such indices will change
+ values at all other indices that share the same memory address.
Args:
func (function): a Python function that takes Tensor inputs and returns
@@ -227,7 +235,9 @@ def fn(input):
def gradgradcheck(func, inputs, grad_outputs=None, eps=1e-6, atol=1e-5, rtol=1e-3,
gen_non_contig_grad_outputs=False, raise_exception=True):
r"""Check gradients of gradients computed via small finite differences
- against analytical gradients
+ against analytical gradients w.r.t. tensors in :attr:`inputs` and
+ :attr:`grad_outputs` that are of floating point type and with
+ ``requires_grad=True``.
This function checks that backpropagating through the gradients computed
to the given :attr:`grad_outputs` are correct.
@@ -244,9 +254,17 @@ def gradgradcheck(func, inputs, grad_outputs=None, eps=1e-6, atol=1e-5, rtol=1e-
gradient :math:`n`.
.. note::
- The default values are designed for :attr:`input` of double precision.
- This check will likely fail if :attr:`input` is of single precision,
- i.e., ``FloatTensor``.
+ The default values are designed for :attr:`input` and
+ :attr:`grad_outputs` of double precision. This check will likely fail if
+ they are of less precision, e.g., ``FloatTensor``.
+
+ .. warning::
+ If any checked tensor in :attr:`input` and :attr:`grad_outputs` has
+ overlapping memory, i.e., different indices pointing to the same memory
+ address (e.g., from :func:`torch.expand`), this check will likely fail
+ because the numerical gradients computed by point perturbation at such
+ indices will change values at all other indices that share the same
+ memory address.
Args:
func (function): a Python function that takes Tensor inputs and returns
| diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -2053,13 +2053,42 @@ def test_dir(self):
self.assertTrue(hasattr(x, key))
def test_as_strided(self):
- x = Variable(torch.arange(0., 25).view(5, 5), requires_grad=True)
- def as_strided(x):
- return x.as_strided([3, 3], [6, 2], 2)
+ def test(x, repro_fn, *args):
+ def closure(x):
+ if repro_fn is not None:
+ x = repro_fn(x)
+ return x.as_strided(*args)
- gradcheck(as_strided, [x], raise_exception=True)
- gradgradcheck(as_strided, [x], [torch.randn(3, 3)])
+ x = x.to(torch.double).detach().requires_grad_()
+ gradcheck(closure, [x])
+ gradgradcheck(closure, [x])
+
+ # test
+ test(torch.arange(0, 25), lambda x: x.view(5, 5), [3, 3], [6, 2], 2)
+
+ # test crazy stride at dim with size 1 case
+ test(torch.randn(10), None, [1, 2, 1, 5], [0, 5, 100, 1], 2)
+
+ # test expand case
+ test(torch.randn(5), None, [3, 3, 3], [0, 1, 0], 2)
+ test(torch.randn(5), None, [3, 3, 3], [0, 0, 0], 4)
+ test(torch.randn(5), lambda x: x.expand(5, 5), [5, 5], [0, 1], 0)
+
+ # test non-expand overlapping case
+ test(torch.randn(35), None, [6, 6], [5, 1], 2)
+ test(torch.randn(15), None, [3, 2], [3, 6], 2)
+
+ # test transpose case
+ test(torch.randn(3, 4), None, [4, 3], [1, 4])
+
+ # test "getting things outside the input" case
+ x = torch.randn(6, 2)
+ test(x[3:], None, [3, 2], [2, 1], 0) # should be all zeros
+ self.assertEqual(x[3:].as_strided([3, 2], [2, 1], 0), x[:3])
+
+ # test select on expanded input case
+ test(torch.randn(2, 3), lambda x: x.expand(10, 2, 3), [2, 3], [3, 1], 0)
def _test_where_functional(self, t):
x = Variable(t(torch.randn(5, 5)), requires_grad=True)
@@ -2334,13 +2363,13 @@ def backward(ctx, gO):
inp = torch.rand(size, requires_grad=True)
out = MyFunc.apply(inp, inp, True)
- with self.assertRaisesRegexp(RuntimeError, "Function 'MyFuncBackward' returned nan values in its 0th output."):
+ with self.assertRaisesRegex(RuntimeError, "Function 'MyFuncBackward' returned nan values in its 0th output."):
with detect_anomaly():
out.backward()
inp = torch.rand(size, requires_grad=True)
out = MyFunc.apply(inp, inp, False)
- with self.assertRaisesRegexp(RuntimeError, "Function 'MyFuncBackward' returned nan values in its 1th output."):
+ with self.assertRaisesRegex(RuntimeError, "Function 'MyFuncBackward' returned nan values in its 1th output."):
with detect_anomaly():
out.backward()
| as_strided_backward in expanded case & dynamically created grad_fn for views
```py
>>> x = torch.zeros(2, requires_grad=True)
>>> xx = x.expand(3, 2)
>>> z = torch.randn(3, 2)
>>> torch.autograd.grad((xx * z).mean(), x)[0]
tensor([ 0.4419, -0.1242])
>>> torch.autograd.grad((xx.as_strided([3,2], xx.stride()) * z).mean(), x)[0] # reshape(3, 2) works too
tensor([ 0.5057, -0.2912])
```
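For reference, a self-contained check of what the expanded-view gradient should be (contributions summed over the broadcast dimension); the `as_strided`/`reshape` path above does not match it:
```python
import torch

x = torch.zeros(2, requires_grad=True)
xx = x.expand(3, 2)
z = torch.randn(3, 2)

g = torch.autograd.grad((xx * z).mean(), x)[0]
print(torch.allclose(g, z.sum(dim=0) / z.numel()))   # True: each entry sums its broadcast copies
```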
| This is also kinda hard to fix because of the potential one-to-many mapping :/ Still thinking about good solutions. | 2018-06-20T23:00:46 |
pytorch/pytorch | 8,743 | pytorch__pytorch-8743 | [
"8485"
] | 53c0de57d90130b059eede369b87016d6a440ff7 | diff --git a/tools/setup_helpers/cudnn.py b/tools/setup_helpers/cudnn.py
--- a/tools/setup_helpers/cudnn.py
+++ b/tools/setup_helpers/cudnn.py
@@ -1,7 +1,7 @@
import os
import glob
-from .env import IS_WINDOWS, IS_CONDA, CONDA_DIR, check_negative_env_flag, gather_paths
+from .env import IS_WINDOWS, IS_CONDA, CONDA_DIR, check_negative_env_flag, gather_paths, lib_paths_from_base
from .cuda import USE_CUDA, CUDA_HOME
@@ -13,10 +13,8 @@
if USE_CUDA and not check_negative_env_flag('USE_CUDNN'):
lib_paths = list(filter(bool, [
- os.getenv('CUDNN_LIB_DIR'),
- os.path.join(CUDA_HOME, 'lib/x64'),
- os.path.join(CUDA_HOME, 'lib'),
- os.path.join(CUDA_HOME, 'lib64'),
+ os.getenv('CUDNN_LIB_DIR')
+ ] + lib_paths_from_base(CUDA_HOME) + [
'/usr/lib/x86_64-linux-gnu/',
'/usr/lib/powerpc64le-linux-gnu/',
'/usr/lib/aarch64-linux-gnu/',
@@ -34,6 +32,7 @@
'C_INCLUDE_PATH',
'CPLUS_INCLUDE_PATH',
])))
+ # Add CUDA related dirs to candidate list
if IS_CONDA:
lib_paths.append(os.path.join(CONDA_DIR, 'lib'))
include_paths.append(os.path.join(CONDA_DIR, 'include'))
@@ -55,6 +54,13 @@
if CUDNN_INCLUDE_VERSION is None:
pass
+
+ # Check for standalone cuDNN libraries
+ if CUDNN_INCLUDE_DIR is not None:
+ cudnn_path = os.path.join(os.path.dirname(CUDNN_INCLUDE_DIR))
+ cudnn_lib_paths = lib_paths_from_base(cudnn_path)
+ lib_paths.extend(cudnn_lib_paths)
+
for path in lib_paths:
if path is None or not os.path.exists(path):
continue
diff --git a/tools/setup_helpers/env.py b/tools/setup_helpers/env.py
--- a/tools/setup_helpers/env.py
+++ b/tools/setup_helpers/env.py
@@ -22,4 +22,8 @@ def check_negative_env_flag(name, default=''):
def gather_paths(env_vars):
- return list(chain(*(os.getenv(v, '').split(':') for v in env_vars)))
+ return list(chain(*(os.getenv(v, '').split(os.pathsep) for v in env_vars)))
+
+
+def lib_paths_from_base(base_path):
+ return [os.path.join(base_path, s) for s in ['lib/x64', 'lib', 'lib64']]
| Build Error lib.obj not found
Dear community,
I encountered a build error that I cannot resolve. I'm trying to build PyTorch with the specs below. The error I'm encountering is this one:

## Issue description
Obviously it's lacking a dubious lib.obj for linking. The question is why it is searching for it in the standard NVIDIA folder. Shouldn't the compiled files be saved somewhere else? Furthermore, there is no other error before this one in the build process.
Can anyone help me figure out what has gone wrong here? I followed the instructions in the usual guide:
[pytorch from source](https://github.com/pytorch/pytorch#from-source)
Best regards
## System Info
- PyTorch
- How you installed PyTorch (conda, pip, source): Source
- Build command you used (if compiling from source): python setup.py install
- OS: Windows 10
- PyTorch version: 0.4
- Python version: 3.6
- CUDA/cuDNN version: 9.0
- GPU models and configuration: GTX 1070
- Visual Studio 17 (15.7.3)
- CMake version: 3.11.1
| cc @peterjc123 do you know what's up here?
@Vedaevolution Could you please send me the complete log?
@peterjc123 Sure i can. [Get it here](https://drive.google.com/open?id=1A5_l4TVpiVNlkdrjKh2yQxMfpeNV21ir)
@Vedaevolution And your `CMake` log files, please?
It seems the FindCUDNN is bringing some wrong flags in the linking phase of `caffe2`.
Sorry for the inconvenience, those should be the ones:
- [Error Log](https://drive.google.com/open?id=1ZkCmHDhtKe-nAVQEo-ChAMZUvThVWUQN)
- [Output Log](https://drive.google.com/open?id=12ZOdl-3sne6adDxyG7vpZdsbnec3crPv)
And what do I need to change?
It seems the `caffe2::cudnn IMPORTED_LOCATION` is not properly set by the `find_library` call in `cmake\Modules\FindCuDNN.cmake` (L33-39). @Vedaevolution, did you manually set the environment variable `CUDNN_LIBRARY`? If not, then it's something wrong with the CMake script there. | 2018-06-21T13:43:05 |
|
pytorch/pytorch | 8,841 | pytorch__pytorch-8841 | [
"8840",
"8840"
] | a5df8ec8413e482526de603231ee3e7d55c5ef8d | diff --git a/torch/nn/modules/loss.py b/torch/nn/modules/loss.py
--- a/torch/nn/modules/loss.py
+++ b/torch/nn/modules/loss.py
@@ -172,7 +172,7 @@ class NLLLoss(_WeightedLoss):
>>> data = torch.randn(N, 16, 10, 10)
>>> m = nn.Conv2d(16, C, (3, 3))
>>> # each element in target has to have 0 <= value < C
- >>> target = torch.tensor(N, 8, 8).random_(0, C)
+ >>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> output = loss(m(data), target)
>>> output.backward()
"""
| nn.NLLLoss example throws error
## `nn.NLLLoss` example does not work in current stable and master doc.
Reported [in this thread](https://discuss.pytorch.org/t/help-understanding-nn-nllloss-example/20153).
The second example of `nn.NLLLoss` from [the docs](https://pytorch.org/docs/master/nn.html?#torch.nn.NLLLoss) throws an error on creating the `target` `tensor`:
```python
# 2D loss example (used, for example, with image inputs)
N, C = 5, 4
loss = nn.NLLLoss()
# input is of size N x C x height x width
data = torch.randn(N, 16, 10, 10)
m = nn.Conv2d(16, C, (3, 3))
# each element in target has to have 0 <= value < C
target = torch.tensor(N, 8, 8).random_(0, C)
> TypeError: tensor() takes 1 positional argument but 3 were given
output = loss(m(data), target)
output.backward()
```
Tested under `0.4.0` and `0.5.0a0+e62c3a4`.
The fix could be:
```python
target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
```
I'm creating a pull request to fix this issue.
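For completeness, the corrected 2D example then runs end to end (this is just the snippet above with the fixed `target` line):

```python
import torch
import torch.nn as nn

N, C = 5, 4
loss = nn.NLLLoss()
data = torch.randn(N, 16, 10, 10)
m = nn.Conv2d(16, C, (3, 3))
# target must be a LongTensor of class indices in [0, C)
target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
output = loss(m(data), target)
output.backward()
```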
| 2018-06-25T05:16:46 |
||
pytorch/pytorch | 8,884 | pytorch__pytorch-8884 | [
"8693"
] | 7a614799f7289af4011a931dca2acec6dbacebbb | diff --git a/torch/nn/functional.py b/torch/nn/functional.py
--- a/torch/nn/functional.py
+++ b/torch/nn/functional.py
@@ -1902,6 +1902,8 @@ def grid_sample(input, grid, mode='bilinear', padding_mode='zeros'):
output (Tensor): output Tensor
"""
+ if mode != 'bilinear':
+ raise NotImplementedError("nn.functional.grid_sample got unsupported mode: '{}'".format(mode))
return vision.grid_sampler(input, grid, padding_mode)
| diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -4574,6 +4574,10 @@ def test_cosine_similarity(self):
input2 = torch.randn(input_size, requires_grad=True)
self.assertEqual(F.cosine_similarity(input1, input2, dim=1).size(), expected_size)
+ def test_grid_sample_unsupported_mode(self):
+ with self.assertRaisesRegex(NotImplementedError, "nn.functional.grid_sample got unsupported mode: 'garbage'"):
+ F.grid_sample(torch.tensor([]), torch.tensor([]), mode='garbage')
+
def test_grid_sample(self):
def test_cpu_against_cuda(N, C, H, W, padding_mode):
def test_shape(N, C, IH, IW, H, W, padding_mode):
| torch.nn.functional.grid_sample should throw error for invalid mode
The only valid mode is 'bilinear', but one can pass anything to it and it will not complain:
```
torch.nn.functional.grid_sample(array_5d, indices, mode='anything')
```
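With the check added in this PR, the same misuse fails loudly instead of being silently ignored; a minimal sketch mirroring the added test:

```python
import torch
import torch.nn.functional as F

# Raises NotImplementedError: nn.functional.grid_sample got unsupported mode: 'garbage'
F.grid_sample(torch.tensor([]), torch.tensor([]), mode='garbage')
```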
| Apparently, the `mode` is not passed to the backend function for `grid_sample`. I think it should be removed.
Yes. I think the idea is that we'd like to support other modes at some point. | 2018-06-26T01:00:30 |
pytorch/pytorch | 9,691 | pytorch__pytorch-9691 | [
"9404",
"4176"
] | f07e550b089e82a8dd40540aa6325f6dd2a086ca | diff --git a/torch/_tensor_docs.py b/torch/_tensor_docs.py
--- a/torch/_tensor_docs.py
+++ b/torch/_tensor_docs.py
@@ -1743,8 +1743,8 @@ def callable(a, b) -> number
Writes all values from the tensor :attr:`src` into :attr:`self` at the indices
specified in the :attr:`index` tensor. For each value in :attr:`src`, its output
-index is specified by its index in :attr:`src` for dimension != :attr:`dim` and
-by the corresponding value in :attr:`index` for dimension = :attr:`dim`.
+index is specified by its index in :attr:`src` for ``dimension != dim`` and by
+the corresponding value in :attr:`index` for ``dimension = dim``.
For a 3-D tensor, :attr:`self` is updated as::
@@ -1754,14 +1754,14 @@ def callable(a, b) -> number
This is the reverse operation of the manner described in :meth:`~Tensor.gather`.
-:attr:`self`, :attr:`index` and :attr:`src` should have same number of
-dimensions. It is also required that `index.size(d) <= src.size(d)` for all
-dimensions `d`, and that `index.size(d) <= self.size(d)` for all dimensions
-`d != dim`.
+:attr:`self`, :attr:`index` and :attr:`src` (if it is a Tensor) should have same
+number of dimensions. It is also required that ``index.size(d) <= src.size(d)``
+for all dimensions ``d``, and that ``index.size(d) <= self.size(d)`` for all
+dimensions ``d != dim``.
Moreover, as for :meth:`~Tensor.gather`, the values of :attr:`index` must be
-between `0` and `(self.size(dim) -1)` inclusive, and all values in a row along
-the specified dimension :attr:`dim` must be unique.
+between ``0`` and ``self.size(dim) - 1`` inclusive, and all values in a row
+along the specified dimension :attr:`dim` must be unique.
Args:
dim (int): the axis along which to index
@@ -1785,6 +1785,50 @@ def callable(a, b) -> number
[ 0.0000, 0.0000, 0.0000, 1.2300]])
""")
+add_docstr_all('scatter_add_',
+ r"""
+scatter_add_(dim, index, other) -> Tensor
+
+Adds all values from the tensor :attr:`other` into :attr:`self` at the indices
+specified in the :attr:`index` tensor in a similar fashion as
+:meth:`~torch.Tensor.scatter_`. For each value in :attr:`other`, it is added to
+an index in :attr:`self` which is specified by its index in :attr:`other`
+for ``dimension != dim`` and by the corresponding value in :attr:`index` for
+``dimension = dim``.
+
+For a 3-D tensor, :attr:`self` is updated as::
+
+ self[index[i][j][k]][j][k] += other[i][j][k] # if dim == 0
+ self[i][index[i][j][k]][k] += other[i][j][k] # if dim == 1
+ self[i][j][index[i][j][k]] += other[i][j][k] # if dim == 2
+
+:attr:`self`, :attr:`index` and :attr:`other` should have same number of
+dimensions. It is also required that ``index.size(d) <= other.size(d)`` for all
+dimensions ``d``, and that ``index.size(d) <= self.size(d)`` for all dimensions
+``d != dim``.
+
+Moreover, as for :meth:`~Tensor.gather`, the values of :attr:`index` must be
+between ``0`` and ``self.size(dim) - 1`` inclusive, and all values in a row along
+the specified dimension :attr:`dim` must be unique.
+
+Args:
+ dim (int): the axis along which to index
+ index (LongTensor): the indices of elements to scatter and add
+ other (Tensor): the source elements to scatter and add
+
+Example::
+
+ >>> x = torch.rand(2, 5)
+ >>> x
+ tensor([[0.7404, 0.0427, 0.6480, 0.3806, 0.8328],
+ [0.7953, 0.2009, 0.9154, 0.6782, 0.9620]])
+ >>> torch.ones(3, 5).scatter_add_(0, torch.tensor([[0, 1, 2, 0, 0], [2, 0, 0, 1, 2]]), x)
+ tensor([[1.7404, 1.2009, 1.9154, 1.3806, 1.8328],
+ [1.0000, 1.0427, 1.0000, 1.6782, 1.0000],
+ [1.7953, 1.0000, 1.6480, 1.0000, 1.9620]])
+
+""")
+
add_docstr_all('select',
r"""
select(dim, index) -> Tensor
diff --git a/torch/nn/modules/loss.py b/torch/nn/modules/loss.py
--- a/torch/nn/modules/loss.py
+++ b/torch/nn/modules/loss.py
@@ -523,7 +523,7 @@ class BCEWithLogitsLoss(_Loss):
:math:`p_n > 1` increases the recall, :math:`p_n < 1` increases the precision.
For example, if a dataset contains 100 positive and 300 negative examples of a single class,
- then `pos_weight` for the class should be equal to math:`\frac{300}{100}=3`.
+ then `pos_weight` for the class should be equal to :math:`\frac{300}{100}=3`.
The loss would act as if the dataset contains math:`3\times 100=300` positive examples.
Args:
diff --git a/torch/nn/modules/pooling.py b/torch/nn/modules/pooling.py
--- a/torch/nn/modules/pooling.py
+++ b/torch/nn/modules/pooling.py
@@ -691,7 +691,7 @@ def __init__(self, norm_type, kernel_size, stride=None, ceil_mode=False):
self.ceil_mode = ceil_mode
def extra_repr(self):
- return 'norm_type={norm_type}, kernel_size{kernel_size}, stride={stride}, ' \
+ return 'norm_type={norm_type}, kernel_size={kernel_size}, stride={stride}, ' \
'ceil_mode={ceil_mode}'.format(**self.__dict__)
| Segmentation fault with large dilated Conv2d
## Issue description
PyTorch 0.4.0 gives a segmentation fault with dilated 2D convolutions above a specific input size.
## Code example
import torch
import torch.nn as nn
model = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=5, dilation=2)
model(torch.zeros([1, 64, 1166, 1166])) # This runs
model(torch.zeros([1, 64, 1167, 1167])) # This crashes
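A rough back-of-the-envelope check is consistent with this boundary, assuming (as the comments below suggest) a 32-bit integer overflow, and further assuming the overflowing quantity is the im2col buffer element count; both assumptions are guesses here:

```python
c_in, k, dil = 64, 5, 2
k_eff = dil * (k - 1) + 1                  # effective kernel extent: 9
for h in (1166, 1167):
    h_out = h - k_eff + 1                  # no padding, stride 1
    elems = c_in * k * k * h_out * h_out   # im2col buffer element count
    print(h, elems, elems > 2**31 - 1)
# 1166 2145542400 False
# 1167 2149249600 True
```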
## System Info
PyTorch version: 0.4.0
Is debug build: No
CUDA used to build PyTorch: 9.1.85
OS: CentOS Linux 7 (Core)
GCC version: (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
CMake version: version 2.8.12.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 8.0.61
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: TITAN X (Pascal)
GPU 2: Quadro K2200
Nvidia driver version: 390.59
cuDNN version: Probably one of the following:
/usr/local/cuda-7.5/lib64/libcudnn.so.5.1.10
/usr/local/cuda-7.5/lib64/libcudnn.so.5.1.3
/usr/local/cuda-7.5/lib64/libcudnn_static.a
/usr/local/cuda-8.0/lib64/libcudnn.so.5.1.10
/usr/local/cuda-8.0/lib64/libcudnn_static.a
scatter_add_ function not in docs
This should be an easy one.
https://discuss.pytorch.org/t/how-to-implement-scatter-add/3876
The function is implemented, just no sign of it in the docs.
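For reference, the semantics that need documenting, as a small deterministic example:

```python
import torch

x = torch.zeros(3, 5)
src = torch.ones(2, 5)
index = torch.tensor([[0, 1, 2, 0, 0],
                      [2, 0, 0, 1, 2]])
# For dim=0: x[index[i][j]][j] += src[i][j]
x.scatter_add_(0, index, src)
print(x)
# tensor([[1., 1., 1., 1., 1.],
#         [0., 1., 0., 1., 0.],
#         [1., 0., 1., 0., 1.]])
```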
| @SsnL, do you think this is the same as the segfault you showed me a week ago on convolution?
@ezyang ah yes. int overflow. I only fixed conv3d because I didn't want to hurt perf. I'll fix all of them. | 2018-07-22T15:58:23 |
| 2018-07-22T15:58:23 |
|
pytorch/pytorch | 16,102 | pytorch__pytorch-16102 | [
"15330"
] | 72d27e38025c6be9c34a6a7307431635176e1ac6 | diff --git a/caffe2/python/__init__.py b/caffe2/python/__init__.py
--- a/caffe2/python/__init__.py
+++ b/caffe2/python/__init__.py
@@ -1,5 +1,8 @@
from __future__ import absolute_import, division, print_function, unicode_literals
from caffe2.proto import caffe2_pb2
+import os
+import sys
+import platform
# TODO: refactor & remove the following alias
caffe2_pb2.CPU = caffe2_pb2.PROTO_CPU
caffe2_pb2.CUDA = caffe2_pb2.PROTO_CUDA
@@ -10,3 +13,22 @@
caffe2_pb2.HIP = caffe2_pb2.PROTO_HIP
caffe2_pb2.COMPILE_TIME_MAX_DEVICE_TYPES = caffe2_pb2.PROTO_COMPILE_TIME_MAX_DEVICE_TYPES
caffe2_pb2.ONLY_FOR_TEST = caffe2_pb2.PROTO_ONLY_FOR_TEST
+
+if platform.system() == 'Windows':
+ # first get nvToolsExt PATH
+ def get_nvToolsExt_path():
+ NVTOOLEXT_HOME = os.getenv('NVTOOLSEXT_PATH', 'C:\\Program Files\\NVIDIA Corporation\\NvToolsExt')
+
+ if os.path.exists(NVTOOLEXT_HOME):
+ return NVTOOLEXT_HOME + '\\bin\\x64\\'
+ else:
+ return ''
+
+ py_dll_path = os.path.join(os.path.dirname(sys.executable), 'Library\\bin')
+ th_root = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), 'torch')
+ th_dll_path = th_root + '\\lib\\'
+
+ dll_paths = [th_dll_path, py_dll_path, get_nvToolsExt_path(), os.environ['PATH']]
+
+ # then add the path to env
+ os.environ['PATH'] = ';'.join(dll_paths)
| (VS2017) error C2397: conversion from 'long' to 'uint32_t' requires a narrowing conversion on Debug env
## 🐛 Bug
The same issue #14961 (windows libtorch1.0: cpp_export in Visual Studio 2015: error C2397: conversion from 'long' to 'uint32_t' requires a narrowing conversion) was closed, but the same error still occurs on the most recent Visual Studio 2017, version 15.9.4.
It only occurs in the Debug configuration and runs fine in Release.
## To Reproduce
```
#pragma once
#include "torch/nn.h"
```
Detailed code explanation below
## Environment
- PyTorch Version (e.g., 1.0): 1.0
- OS (e.g., Linux): Windows 10 x64
- How you installed PyTorch (`conda`, `pip`, source): Libtorch
- CUDA/cuDNN version: 10, 7.3
- GPU models and configuration: 1050
- Any other relevant information:
## Additional context
According to MSDN, error C2397 happens when a narrowing conversion occurs in uniform initialization.
So I dug into the code and found the part that causes this error.
```
// c10/util/Exception.h
// Line 49
Error(SourceLocation source_location, const std::string& msg);
// Line 120
#define AT_ERROR(...) \
throw ::c10::Error({__func__, __FILE__, __LINE__}, ::c10::str(__VA_ARGS__))
// c10/util/StringUtil.h
// Line 69
struct C10_API SourceLocation {
const char* function;
const char* file;
uint32_t line;
};
```
It seems that the \_\_LINE\_\_ macro value is treated as a long, so brace-initializing the 'uint32_t line' constructor parameter requires a narrowing conversion, which causes this error.
| 2019-01-17T05:43:53 |
||
pytorch/pytorch | 16,127 | pytorch__pytorch-16127 | [
"15749",
"11732",
"15464"
] | a5a34fb5b1381730a68fe87fe671b4992f1700b1 | diff --git a/torch/hub.py b/torch/hub.py
--- a/torch/hub.py
+++ b/torch/hub.py
@@ -124,7 +124,6 @@ def load(github, model, force_reload=False, *args, **kwargs):
# Download zipped code from github
url = _git_archive_link(repo_info, branch)
cached_file = os.path.join(hub_dir, branch + '.zip')
- extracted_repo = os.path.join(hub_dir, repo_name + '-' + branch)
repo_dir = os.path.join(hub_dir, repo_name + '_' + branch)
use_cache = (not force_reload) and os.path.exists(repo_dir)
@@ -136,13 +135,18 @@ def load(github, model, force_reload=False, *args, **kwargs):
sys.stderr.write('Using cache found in {}'.format(repo_dir))
else:
_remove_if_exists(cached_file)
- _remove_if_exists(extracted_repo)
- _remove_if_exists(repo_dir)
-
_download_url_to_file(url, cached_file)
- zipfile.ZipFile(cached_file).extractall(hub_dir)
+
+ cached_zipfile = zipfile.ZipFile(cached_file)
+
+ # Github renames folder repo-v1.x.x to repo-1.x.x
+ extraced_repo_name = cached_zipfile.infolist()[0].filename
+ extracted_repo = os.path.join(hub_dir, extraced_repo_name)
+ _remove_if_exists(extracted_repo)
+ cached_zipfile.extractall(hub_dir)
_remove_if_exists(cached_file)
+ _remove_if_exists(repo_dir)
shutil.move(extracted_repo, repo_dir) # rename the repo
sys.path.insert(0, repo_dir) # Make Python interpreter aware of the repo
diff --git a/torch/nn/modules/rnn.py b/torch/nn/modules/rnn.py
--- a/torch/nn/modules/rnn.py
+++ b/torch/nn/modules/rnn.py
@@ -222,7 +222,7 @@ def __setstate__(self, d):
@property
def _flat_weights(self):
- return list(self._parameters.values())
+ return [p for layerparams in self.all_weights for p in layerparams]
@property
def all_weights(self):
| diff --git a/aten/src/ATen/test/test_install/CMakeLists.txt b/aten/src/ATen/test/test_install/CMakeLists.txt
--- a/aten/src/ATen/test/test_install/CMakeLists.txt
+++ b/aten/src/ATen/test/test_install/CMakeLists.txt
@@ -3,6 +3,8 @@ find_package(ATen REQUIRED)
include_directories(${ATEN_INCLUDE_DIR})
# C++11
-set(CMAKE_CXX_FLAGS "--std=c++11 ${CMAKE_CXX_FLAGS}")
+if (not MSVC)
+ set(CMAKE_CXX_FLAGS "--std=c++11 ${CMAKE_CXX_FLAGS}")
+endif()
add_executable(main main.cpp)
target_link_libraries(main ${ATEN_LIBRARIES})
diff --git a/c10/test/util/intrusive_ptr_test.cpp b/c10/test/util/intrusive_ptr_test.cpp
--- a/c10/test/util/intrusive_ptr_test.cpp
+++ b/c10/test/util/intrusive_ptr_test.cpp
@@ -11,9 +11,11 @@ using c10::intrusive_ptr_target;
using c10::make_intrusive;
using c10::weak_intrusive_ptr;
+#ifndef _MSC_VER
#pragma GCC diagnostic ignored "-Wpragmas"
#pragma GCC diagnostic ignored "-Wunknown-warning-option"
#pragma GCC diagnostic ignored "-Wself-move"
+#endif
namespace {
class SomeClass0Parameters : public intrusive_ptr_target {};
diff --git a/test/cpp/api/serialize.cpp b/test/cpp/api/serialize.cpp
--- a/test/cpp/api/serialize.cpp
+++ b/test/cpp/api/serialize.cpp
@@ -215,7 +215,9 @@ TEST(SerializeTest, Optim) {
TEST(SerializeTest, XOR_CUDA) {
torch::manual_seed(0);
// We better be able to save and load a XOR model!
- auto getLoss = [](Sequential model, uint32_t batch_size, bool is_cuda=false) {
+ auto getLoss = [](Sequential model,
+ uint32_t batch_size,
+ bool is_cuda = false) {
auto inputs = torch::empty({batch_size, 2});
auto labels = torch::empty({batch_size});
if (is_cuda) {
@@ -269,3 +271,34 @@ TEST(SerializeTest, XOR_CUDA) {
loss = getLoss(model3, 100, true);
ASSERT_LT(loss.item<float>(), 0.1);
}
+
+TEST(
+ SerializeTest,
+ CanSerializeModulesWithIntermediateModulesWithoutParametersOrBuffers) {
+ struct C : torch::nn::Module {
+ C() {
+ register_buffer("foo", torch::ones(5, torch::kInt32));
+ }
+ };
+ struct B : torch::nn::Module {};
+ struct A : torch::nn::Module {
+ A() {
+ register_module("b", std::make_shared<B>());
+ register_module("c", std::make_shared<C>());
+ }
+ };
+ struct M : torch::nn::Module {
+ M() {
+ register_module("a", std::make_shared<A>());
+ }
+ };
+
+ auto out = std::make_shared<M>();
+ std::stringstream ss;
+ torch::save(out, ss);
+ auto in = std::make_shared<M>();
+ torch::load(in, ss);
+
+ const int output = in->named_buffers()["a.c.foo"].sum().item<int>();
+ ASSERT_EQ(output, 5);
+}
diff --git a/test/test_autograd.py b/test/test_autograd.py
--- a/test/test_autograd.py
+++ b/test/test_autograd.py
@@ -14,12 +14,14 @@
from torch.autograd.gradcheck import gradgradcheck, gradcheck
from torch.autograd.function import once_differentiable
from torch.autograd.profiler import profile
+from torch.utils.checkpoint import checkpoint
from common_utils import (TEST_MKL, TestCase, run_tests, skipIfNoLapack,
suppress_warnings, skipIfRocm,
prod_single_zero, random_square_matrix_of_rank,
random_symmetric_matrix, random_symmetric_psd_matrix,
random_symmetric_pd_matrix, make_nonzero_det,
random_fullrank_matrix_distinct_singular_value, load_tests)
+from common_cuda import TEST_CUDA
from torch.autograd import Variable, Function, detect_anomaly
from torch.autograd.function import InplaceFunction
from torch.testing import make_non_contiguous, randn_like
@@ -202,6 +204,16 @@ def compute_grad(create_graph):
x_grad, x_grad_clone = compute_grad(create_graph=True)
self.assertEqual(x_grad, x_grad_clone)
+ def test_sum_to_with_empty_dim_grad(self):
+ a = torch.rand(4, 0, requires_grad=True)
+ b = torch.rand(4, 1, requires_grad=True)
+ c = a + b
+ assert c.shape == (4, 0)
+ c.sum().backward()
+
+ self.assertEqual(b.grad, torch.zeros(4, 1))
+ self.assertEqual(a.grad, torch.zeros(4, 0))
+
def test_hessian_vector(self):
x = torch.randn(2, 2, requires_grad=True)
y = torch.randn(2, 2, requires_grad=True)
@@ -1464,19 +1476,24 @@ def test_ctc_loss(self):
gradcheck_input_size = 10
# device, input_length
- tests = [('cpu', 150)]
+ tests = [('cpu', 150, False),
+ ('cpu', 150, True)]
if torch.cuda.is_available():
- tests += [('cuda', 50),
- ('cuda', 150)]
+ tests += [('cuda', 50, False),
+ ('cuda', 150, False),
+ ('cuda', 50, True),
+ ('cuda', 150, True)]
- for device, input_length in tests:
+ for device, input_length, vary_lengths in tests:
targets = torch.randint(1, num_labels, (batch_size, target_length),
device=device, dtype=torch.long)
x = torch.randn(gradcheck_input_size, device=device, requires_grad=True)
tile_factors = torch.randn(input_length * batch_size * num_labels // gradcheck_input_size + 1,
device=device)
- input_lengths = [input_length for _ in range(batch_size)]
- target_lengths = [target_length for _ in range(batch_size)]
+ input_lengths = [(torch.randint(input_length // 2, input_length + 1, ()).item()
+ if vary_lengths or i == 0 else input_length) for i in range(batch_size)]
+ target_lengths = [(torch.randint(target_length // 2, target_length + 1, ()).item()
+ if vary_lengths or i == 0 else target_length) for i in range(batch_size)]
def ctc_after_softmax(x):
x_full = ((x[:, None] * tile_factors[None, :]).view(-1)[:input_length * batch_size * num_labels]
@@ -2704,6 +2721,36 @@ def f(inp):
gradcheck(f, torch.rand(10, dtype=torch.float64, requires_grad=True))
gradgradcheck(f, torch.rand(10, dtype=torch.float64, requires_grad=True))
+ @unittest.skipIf(not TEST_CUDA, "Requires cuda for multi device")
+ def test_multi_device_reentrant_autograd(self):
+ # Output on gpu so that this task will be associated with the gpu thread
+ def fn_on_gpu(inp):
+ # Artificially increase the priority of the next op to make sure it runs
+ # as soon as we reach it before the ops of branch1.
+ dummy = inp * 2 * 2 * 2 * 2
+ return inp.cuda()
+
+ def parent_on_cpu(inp):
+ # Slow branch of ops on gpu so that the work queue for the gpu thread
+ # won't empty too quickly. They also have smaller priorities than the
+ # ones created by fn_on_gpu
+ branch1 = inp.cuda()
+ branch1 = branch1 / branch1
+ branch1 = branch1 / branch1
+ branch1 = branch1 / branch1
+ # Perform checkpoint on cpu tensors. So the last op performed in the reentrant
+ # autograd is an AccumulateGrad that runs on the cpu thread for the gpu thread.
+ # So the cpu thread will notify the gpu thread with an empty FunctionTask.
+ branch2 = checkpoint(fn_on_gpu, inp)
+ out = branch2 + branch1
+ return out
+
+ inp = torch.rand(2, requires_grad=True)
+ out = parent_on_cpu(inp)
+ # This will segfault if the empty FunctionTask is not handled properly in the
+ # gpu thread ReadyQueue
+ out.sum().backward()
+
def index_variable(shape, max_indices):
if not isinstance(shape, tuple):
diff --git a/test/test_indexing.py b/test/test_indexing.py
--- a/test/test_indexing.py
+++ b/test/test_indexing.py
@@ -45,6 +45,12 @@ def test_byte_mask(self):
v = torch.tensor([1.])
self.assertEqual(v[v == 0], torch.tensor([]))
+ def test_byte_mask_accumulate(self):
+ mask = torch.zeros(size=(10, ), dtype=torch.uint8)
+ y = torch.ones(size=(10, 10))
+ y.index_put_((mask, ), y[mask], accumulate=True)
+ self.assertEqual(y, torch.ones(size=(10, 10)))
+
def test_multiple_byte_mask(self):
v = torch.randn(5, 7, 3)
# note: these broadcast together and are transposed to the first dim
diff --git a/test/test_nn.py b/test/test_nn.py
--- a/test/test_nn.py
+++ b/test/test_nn.py
@@ -4321,6 +4321,7 @@ def test_cudnn_weight_format(self):
self.assertEqual(len(w), 1)
self.assertIn('weights are not part of single contiguous chunk of memory', w[0].message.args[0])
first_warn = False
+ warnings.resetwarnings()
output_noncontig[0].sum().backward()
grads_noncontig = [v.grad.data.clone() for v in all_vars]
for v in all_vars:
@@ -4721,6 +4722,28 @@ def compare_cpu_gpu(outputs_cpu, outputs_gpu):
def test_RNN_cpu_vs_cudnn_no_dropout(self):
self._test_RNN_cpu_vs_cudnn(0)
+ @unittest.skipIf(not TEST_CUDNN, "needs cudnn")
+ @skipIfRocm
+ def test_RNN_cudnn_weight_norm(self):
+ input_size = 10
+ hidden_size = 6
+ num_layers = 2
+ seq_length = 7
+ batch = 6
+ m = nn.LSTM(input_size, hidden_size, num_layers).cuda()
+ input = torch.randn(seq_length, batch, input_size).cuda()
+ expected_output = m(input)
+ # add weight normalization
+ name = 'weight_hh_l0'
+ m = torch.nn.utils.weight_norm(m, name=name)
+ # otherwise, subsequent warnings will be hidden, and further tests rely on them
+ warnings.simplefilter("always")
+ self.assertEqual(m(input), expected_output)
+
+ # remove weight norm
+ m = torch.nn.utils.remove_weight_norm(m, name=name)
+ self.assertEqual(m(input), expected_output)
+
@unittest.skipIf(not (TEST_CUDNN and TEST_CUDNN_VERSION >= 5103), "needs cudnn >= 5.1")
@default_tensor_type(torch.FloatTensor) # FIXME: just until torch.cuda.DoubleTensor.sum() implemented
def test_RNN_cpu_vs_cudnn_with_dropout(self):
@@ -6283,6 +6306,12 @@ def func(*inputs):
dummy_out = func(*inputs)
grad_y = torch.randn_like(dummy_out, device=device, dtype=dtype, requires_grad=True)
+ # Issue #15353: test mkldnn double backward, don't run gradgradcheck due
+ # to imprecision issues
+ if dtype == torch.float:
+ g, = torch.autograd.grad(dummy_out.sum(), x, create_graph=True)
+ return g.requires_grad
+
return gradgradcheck(func, inputs, (grad_y,))
def test_conv_double_backward(self):
@@ -6291,20 +6320,22 @@ def test_conv_double_backward(self):
for stride, padding, chan_in, chan_out, dilation in \
product([1, 2], [0, 1, 2], [2], [3], dilations):
for no_weight in (True, False):
- result = self.run_conv_double_back_test(kern, stride,
- padding, chan_in, chan_out,
- batch_size, inp_size, dilation,
- no_weight)
- self.assertTrue(result,
- "Conv double backward test failed with parameters:" +
- "\nkern: " + str(kern) +
- "\nstride: " + str(stride) +
- "\npadding: " + str(padding) +
- "\nchan_in: " + str(chan_in) +
- "\nchan_out: " + str(chan_out) +
- "\nbatch_size: " + str(batch_size) +
- "\ninp_size: " + str(inp_size) +
- "\ndilation: " + str(dilation))
+ for dtype in (torch.float, torch.double):
+ result = self.run_conv_double_back_test(kern, stride,
+ padding, chan_in, chan_out,
+ batch_size, inp_size, dilation,
+ no_weight, dtype=dtype)
+ self.assertTrue(result,
+ "Conv double backward test failed with parameters:" +
+ "\nkern: " + str(kern) +
+ "\nstride: " + str(stride) +
+ "\npadding: " + str(padding) +
+ "\nchan_in: " + str(chan_in) +
+ "\nchan_out: " + str(chan_out) +
+ "\nbatch_size: " + str(batch_size) +
+ "\ninp_size: " + str(inp_size) +
+ "\ndilation: " + str(dilation) +
+ "\ndtype: " + str(dtype))
def test_conv_double_backward_no_bias(self):
kern = 3
diff --git a/test/test_torch.py b/test/test_torch.py
--- a/test/test_torch.py
+++ b/test/test_torch.py
@@ -843,7 +843,7 @@ def test_min_with_inf(self):
@staticmethod
def _test_norm(self, device):
# full reduction
- x = torch.randn(5, device=device)
+ x = torch.randn(25, device=device)
xn = x.cpu().numpy()
for p in [0, 1, 2, 3, 4, inf, -inf]:
res = x.norm(p).item()
@@ -851,7 +851,7 @@ def _test_norm(self, device):
self.assertEqual(res, expected, "full reduction failed for {}-norm".format(p))
# one dimension
- x = torch.randn(5, 5, device=device)
+ x = torch.randn(25, 25, device=device)
xn = x.cpu().numpy()
for p in [0, 1, 2, 3, 4, inf, -inf]:
res = x.norm(p, 1).cpu().numpy()
@@ -866,6 +866,9 @@ def _test_norm(self, device):
self.assertEqual(res.shape, expected.shape)
self.assertTrue(np.allclose(res, expected), "dim reduction failed for {}-norm".format(p))
+ # larger tensor sanity check
+ self.assertEqual(2 * torch.norm(torch.ones(10000)), torch.norm(torch.ones(40000)))
+
@unittest.skipIf(not TEST_NUMPY, "Numpy not found")
@skipIfNoLapack
def test_norm(self):
@@ -1396,6 +1399,14 @@ def _test_neg(self, cast):
def test_neg(self):
self._test_neg(self, lambda t: t)
+ def test_threshold(self):
+ for dtype in torch.testing.get_all_dtypes():
+ if dtype != torch.uint8 and dtype != torch.float16:
+ # 100 is wide enough to use AVX2 instructions for all types
+ x = torch.randn(100).sign().to(dtype=dtype)
+ y = torch.threshold(x, 0, 0)
+ self.assertTrue(y.le(0).any())
+
def test_reciprocal(self):
a = torch.randn(100, 89)
res_div = 1 / a
@@ -3484,10 +3495,14 @@ def assertIsOrdered(self, order, x, mxx, ixx, task):
SIZE = 4
if order == 'descending':
def check_order(a, b):
- return a >= b
+ # `a != a` because we put NaNs
+ # at the end of ascending sorted lists,
+ # and the beginning of descending ones.
+ return a != a or a >= b
elif order == 'ascending':
def check_order(a, b):
- return a <= b
+ # see above
+ return b != b or a <= b
else:
error('unknown order "{}", must be "ascending" or "descending"'.format(order))
@@ -3562,6 +3577,17 @@ def test_sort(self):
# Test that we still have proper sorting with duplicate keys
self.assertIsOrdered('descending', x, res2val, res2ind, 'random with duplicate keys')
+ # Test sorting with NaNs
+ x = torch.rand(SIZE, SIZE)
+ x[1][2] = float('NaN')
+ x[3][0] = float('NaN')
+ torch.sort(x, out=(res2val, res2ind))
+ self.assertIsOrdered('ascending', x, res2val, res2ind,
+ 'random with NaNs')
+ torch.sort(x, out=(res2val, res2ind), descending=True)
+ self.assertIsOrdered('descending', x, res2val, res2ind,
+ 'random with NaNs')
+
@unittest.skipIf(not TEST_NUMPY, 'Numpy not found')
def test_tensordot(self):
devices = ['cpu'] if not torch.cuda.is_available() else ['cpu', 'cuda']
| Dynamic GRU weight
## 🐛 Bug
For weight norm or pruning, this library needs to support dynamically updating the weight tensors. My attempt to do so brought up several interesting stack traces:
## To Reproduce
Run this script:
```python3
import torch
# Constants
size = 16
batch_size = 4
seq_len = 8
device = torch.device('cuda')
input_ = torch.randn(seq_len, batch_size, size).to(device)
hidden = torch.randn(1, batch_size, size).to(device)
gru = torch.nn.GRU(size, size).to(device)
# Update weight with a `torch.tensor`
# NOTE: Similar weight update as torch.nn.utils.weight_norm
data = gru.weight_hh_l0.data
del gru._parameters['weight_hh_l0']
setattr(gru, 'weight_hh_l0', torch.tensor(data))
# Optional call to resolve parameter shapes
gru.flatten_parameters()
# Run forward pass
_, output = gru(input_, hidden)
```
Without ``gru.flatten_parameters``:
```
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
setattr(gru, 'weight_hh_l0', torch.tensor(data))
Traceback (most recent call last):
File "ddd.py", line 15, in <module>
_, output = gru(input_, hidden)
File "/home/michaelp/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/michaelp/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 179, in forward
self.dropout, self.training, self.bidirectional, self.batch_first)
RuntimeError: num_ptrs == (num_parameters * (has_biases ? 1 : 2)) ASSERT FAILED at /pytorch/aten/src/ATen/native/cudnn/RNN.cpp:1190, please report a bug to PyTorch.
```
With ``gru.flatten_parameters``:
```
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
setattr(gru, 'weight_hh_l0', torch.tensor(data))
Traceback (most recent call last):
File "ddd.py", line 14, in <module>
gru.flatten_parameters()
File "/home/michaelp/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 113, in flatten_parameters
self.batch_first, bool(self.bidirectional))
RuntimeError: MatrixRef: ArrayRef size 3 not divisible by stride 4
```
## Expected behavior
That I can update the GRU weight with a new `torch.tensor`, without an issue.
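For what it's worth, the test added in this PR goes through `torch.nn.utils.weight_norm` instead of replacing the parameter by hand; a rough, self-contained sketch of that supported path (the added test uses an LSTM, a GRU is used here only to match the report):

```python
import torch

gru = torch.nn.GRU(16, 16)
gru = torch.nn.utils.weight_norm(gru, name='weight_hh_l0')
x = torch.randn(8, 4, 16)
h = torch.randn(1, 4, 16)
_, output = gru(x, h)  # forward pass with the reparametrized weight
gru = torch.nn.utils.remove_weight_norm(gru, name='weight_hh_l0')
```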
## Environment
```
Collecting environment information...
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-16ubuntu3) 7.3.0
CMake version: version 3.10.2
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
GPU 2: Tesla P100-PCIE-16GB
GPU 3: Tesla P100-PCIE-16GB
Nvidia driver version: 390.30
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.1.3
Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect
```
Segfault in dataparallel + checkpoint
Reproducible on master. Reported at https://discuss.pytorch.org/t/segmentation-fault-when-using-checkpoint-and-dataparallel/25247
```py
import torch
import torch.nn as nn
from torch import optim
import torch.utils.checkpoint as chk
import torch.nn.functional as F
class model(nn.Module):
def __init__(self):
super(model, self).__init__()
self.blocks = nn.ModuleDict()
self.conv0 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=True)
self.blocks['0'] = self.conv0
self.conv1 = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1, bias=True)
self.blocks['1'] = self.conv1
self.conv2 = nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1, bias=True)
self.blocks['2'] = self.conv2
self.conv3 = nn.Conv2d(64, 20, kernel_size=3, stride=1, padding=1, bias=True)
self.blocks['3'] = self.conv3
def forward(self, x):
x = self.blocks['0'](x)
#x1 = self.blocks['1'](x)
x1 = chk.checkpoint(self.conv1,x)
#x2 = self.blocks['2'](x)
x2 = chk.checkpoint(self.conv2,x)
x = torch.cat((x1,x2),1)
x = self.blocks['3'](x)
return x
test_model = model()
test_model = nn.DataParallel(test_model)
test_model = test_model.cuda()
loss = nn.MSELoss()
optimizer = optim.SGD(test_model.module.parameters(), lr = 0.01)
for i in range(100):
print(i)
data = torch.rand(4, 3, 15,15)
labels = torch.rand(4,20, 15,15).cuda()
test_preds = test_model(data)
optimizer.zero_grad()
test_loss = loss(test_preds, labels)
test_loss.backward()
optimizer.step()
print('Finished')
```
The author of the post also provided a GDB trace:
```
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff649ff700 (LWP 39737)]
std::__push_heap<__gnu_cxx::__normal_iterator<torch::autograd::FunctionTask*, std::vector<torch::autograd::FunctionTask> >, long, torch::autograd::FunctionTask, __gnu_cxx::__ops::_Iter_comp_val<torch::autograd::CompareFunctionTaskTime> > (__first=..., __holeIndex=1, __topIndex=__topIndex@entry=0, __value=..., __comp=...) at /opt/rh/devtoolset-3/root/usr/include/c++/4.9.2/bits/stl_heap.h:129
129 /opt/rh/devtoolset-3/root/usr/include/c++/4.9.2/bits/stl_heap.h: No such file or directory.
```
torch.eq_() gets wrong result in 1.0.0
## 🐛 Bug
<!-- tensor.eq_() behaves like "less than", but not "equal" -->
Steps to reproduce the behavior:
>>> a = torch.Tensor([1,2,3])
>>> b = torch.Tensor([2,2,2])
>>> c = a.eq(b) # get correct result in v1.0.0
>>> c
tensor([0, 1, 0], dtype=torch.uint8) # expected behavior
>>> a.eq_(b) # get wrong result in v1.0.0. correct in v0.4.0
tensor([0., 1., 1.])
- PyTorch Version (e.g., 1.0): stable 1.0 installed from "conda install pytorch torchvision -c pytorch"
- OS (e.g., Linux): Ubuntu 16.04
- Python version: python 3.7
|
Hi, do you guys have any workaround to solve this problem?
I have the same seg fault, both on 0.4.1 and the nightly build. Keep waiting...
+1
> Hi, do you guys have any workaround to solve this problem?
According to the link above, you can wrap the whole model to prevent it. I didn't test it; I split the model across different GPUs instead, though it is slower...
I think the problem is here:
https://github.com/pytorch/pytorch/blob/f52f68bcf9fa490578539fd42c3e050bd5a5f68a/aten/src/ATen/native/LegacyDefinitions.cpp#L114 | 2019-01-17T21:23:52 |
pytorch/pytorch | 35,405 | pytorch__pytorch-35405 | [
"35213"
] | 3e332778b46eae8d97305308030c4f334eb43c82 | diff --git a/tools/pyi/gen_pyi.py b/tools/pyi/gen_pyi.py
--- a/tools/pyi/gen_pyi.py
+++ b/tools/pyi/gen_pyi.py
@@ -176,7 +176,7 @@ def arg_to_type_hint(arg):
binary_ops = ('add', 'sub', 'mul', 'div', 'pow', 'lshift', 'rshift', 'mod', 'truediv',
- 'matmul', 'floordiv', 'floor_divide'
+ 'matmul', 'floordiv', 'floor_divide',
'radd', 'rsub', 'rmul', 'rtruediv', 'rfloordiv', 'rpow', # reverse arithmetic
'and', 'or', 'xor', # logic
'iadd', 'iand', 'idiv', 'ilshift', 'imul',
| Tensor __radd__ type hint issue
## 🐛 Bug
Type checking any code that adds a float or integer to a tensor (the `__radd__` method) results in a type checking error when using the nightly build (but not in the release). This is causing type checking failures for downstream applications, particularly in Captum.
## To Reproduce
Run mypy on any code applying radd, such as:
```
import torch
ten = torch.tensor([1.0, 2.0, 3.0])
print(7 + ten)
```
Mypy Error:
`test_file.py:4: error: Unsupported operand types for + ("int" and "Tensor")`
## Expected behavior
No mypy error should occur, as long as the `__radd__` method is appropriately type hinted.
## Additional context
It seems like the problem is caused by a missing comma in this PR: #34552
In tools/pyi/gen_pyi.py, there should be a comma after 'floor_divide'. Right now, without the comma, Python concatenates the adjacent string literals and generates a type hint for the invalid method name 'floor_divideradd' instead of separate 'floor_divide' and 'radd' entries.
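The underlying Python behavior, for reference: adjacent string literals are implicitly concatenated, so a missing comma silently merges two entries into one.

```python
ops = ('floor_divide'
       'radd',   # missing comma: the two literals fuse into 'floor_divideradd'
       'rsub')
print(ops)  # ('floor_divideradd', 'rsub')
```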
cc @ezyang
| I tried to reproduce on master:
```
mypy -c "import torch; ten = torch.tensor([1.0, 2.0, 3.0]); print(ten + 7)"
Success: no issues found in 1 source file
```
@vivekmig can you please specify exact steps to reproduce?
@pbelevich Whoops, sorry had a typo in the example, should be the reverse, `print(7 + ten)`. Fixed now.
@vivekmig I was able to reproduce, thanks! | 2020-03-25T19:48:39 |
|
pytorch/pytorch | 36,286 | pytorch__pytorch-36286 | [
"31468"
] | bad005d33197fc014e5889b6438ac1f3ecd73298 | diff --git a/torch/nn/quantized/dynamic/modules/rnn.py b/torch/nn/quantized/dynamic/modules/rnn.py
--- a/torch/nn/quantized/dynamic/modules/rnn.py
+++ b/torch/nn/quantized/dynamic/modules/rnn.py
@@ -1,12 +1,14 @@
from __future__ import absolute_import, division, print_function, unicode_literals
+from collections import OrderedDict
+import numbers
+
import torch
import torch.nn as nn
from torch import Tensor # noqa: F401
from torch import _VF
from torch._jit_internal import Tuple, Optional, List # noqa: F401
from torch.nn.utils.rnn import PackedSequence
-import numbers
def apply_permutation(tensor, permutation, dim=1):
@@ -27,6 +29,34 @@ def __setstate__(self, state):
self.param = torch.ops.quantized.linear_prepack(*state[0])
self.training = state[1]
+ def _save_to_state_dict(self, destination, prefix, keep_vars):
+ super(PackedParameter, self)._save_to_state_dict(destination, prefix,
+ keep_vars)
+ (w, b) = self.unpack()
+
+ destination[prefix + 'weight'] = w
+ destination[prefix + 'bias'] = b
+
+ def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
+ missing_keys, unexpected_keys, error_msgs):
+ weight = state_dict[prefix + 'weight']
+ bias = state_dict[prefix + 'bias']
+ self.param = torch.ops.quantized.linear_prepack(weight, bias)
+ state_dict.pop(prefix + 'weight')
+ state_dict.pop(prefix + 'bias')
+
+ super(PackedParameter, self)._load_from_state_dict(state_dict, prefix,
+ local_metadata,
+ False, missing_keys,
+ unexpected_keys,
+ error_msgs)
+
+ def __repr__(self):
+ return repr(self.unpack())
+
+ def unpack(self):
+ return torch.ops.quantized.linear_unpack(self.param)
+
# This only exists because there's a bug in recursive scripting
# that arises only in Python 2 where a recursively scripted
# module does not have a forward(). We can delete this once we
@@ -150,6 +180,36 @@ def extra_repr(self):
s += ', bidirectional={bidirectional}'
return s.format(**self.__dict__)
+ def __repr__(self):
+ # We don't want to show `ModuleList` children, hence custom
+ # `__repr__`. This is the same as nn.Module.__repr__, except the check
+ # for the `PackedParameter` and `nn.ModuleList`.
+ # You should still override `extra_repr` to add more info.
+ extra_lines = []
+ extra_repr = self.extra_repr()
+ # empty string will be split into list ['']
+ if extra_repr:
+ extra_lines = extra_repr.split('\n')
+ child_lines = []
+ for key, module in self._modules.items():
+ if isinstance(module, (PackedParameter, nn.ModuleList)):
+ continue
+ mod_str = repr(module)
+ mod_str = nn.modules.module._addindent(mod_str, 2)
+ child_lines.append('(' + key + '): ' + mod_str)
+ lines = extra_lines + child_lines
+
+ main_str = self._get_name() + '('
+ if lines:
+ # simple one-liner info, which most builtin Modules will use
+ if len(extra_lines) == 1 and not child_lines:
+ main_str += extra_lines[0]
+ else:
+ main_str += '\n ' + '\n '.join(lines) + '\n'
+
+ main_str += ')'
+ return main_str
+
def check_input(self, input, batch_sizes):
# type: (Tensor, Optional[Tensor]) -> None
expected_input_dim = 2 if batch_sizes is not None else 3
@@ -193,6 +253,13 @@ def permute_hidden(self, hx, permutation):
return hx
return apply_permutation(hx, permutation)
+ @property
+ def all_weights(self):
+ result = OrderedDict()
+ for idx, name in enumerate(self._all_weight_names):
+ result[name] = self._all_weight_values[idx].unpack()
+ return result
+
@classmethod
def from_float(cls, mod):
assert type(mod) == torch.nn.LSTM, 'nn.quantized.dynamic.RNNBase.from_float only works for nn.LSTM'
diff --git a/torch/nn/quantized/modules/linear.py b/torch/nn/quantized/modules/linear.py
--- a/torch/nn/quantized/modules/linear.py
+++ b/torch/nn/quantized/modules/linear.py
@@ -80,6 +80,10 @@ def __setstate__(self, state):
self.set_weight_bias(state[0], state[1])
self.training = state[2]
+ def __repr__(self):
+ return self._weight_bias().__repr__()
+
+
class Linear(torch.nn.Module):
r"""
A quantized linear module with quantized tensor as inputs and outputs.
@@ -142,6 +146,36 @@ def extra_repr(self):
self.in_features, self.out_features, self.scale, self.zero_point, self.weight().qscheme()
)
+ def __repr__(self):
+ # We don't want to show `LinearPackedParams` children, hence custom
+ # `__repr__`. This is the same as nn.Module.__repr__, except the check
+ # for the `LinearPackedParams`.
+ # You should still override `extra_repr` to add more info.
+ extra_lines = []
+ extra_repr = self.extra_repr()
+ # empty string will be split into list ['']
+ if extra_repr:
+ extra_lines = extra_repr.split('\n')
+ child_lines = []
+ for key, module in self._modules.items():
+ if isinstance(module, LinearPackedParams):
+ continue
+ mod_str = repr(module)
+ mod_str = _addindent(mod_str, 2)
+ child_lines.append('(' + key + '): ' + mod_str)
+ lines = extra_lines + child_lines
+
+ main_str = self._get_name() + '('
+ if lines:
+ # simple one-liner info, which most builtin Modules will use
+ if len(extra_lines) == 1 and not child_lines:
+ main_str += extra_lines[0]
+ else:
+ main_str += '\n ' + '\n '.join(lines) + '\n'
+
+ main_str += ')'
+ return main_str
+
def forward(self, x):
return torch.ops.quantized.linear(
x, self._packed_params._packed_params, self.scale, self.zero_point)
| Can't get DynamicQuantizedLSTM weight and bias
## 🐛 Bug
When I run https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html
I can't get the DynamicQuantizedLSTM weight, bias, scale, or zero_point:
```
>>print(quantized_model.rnn.state_dict())
OrderedDict()
>>print(quantized_model.state_dict())
OrderedDict([('encoder.weight', tensor([[-0.2349, 0.4934, -0.3151, ..., 0.4456, 0.4912, 0.1553],
[-0.0710, 0.5101, -0.2940, ..., 0.1747, 0.5764, 0.3247],
[-0.0229, 0.0302, 0.0874, ..., 0.0390, 0.0736, -0.0558],
...,
[-0.0049, -0.0769, -0.0859, ..., -0.0870, 0.0463, 0.0657],
[ 0.0656, -0.0467, 0.0178, ..., 0.0973, 0.0566, -0.0561],
[ 0.0098, 0.0591, -0.0863, ..., -0.0715, -0.0329, 0.0221]])), ('decoder.weight', tensor([[-0.2003, 0.2629, 0.3254, ..., 0.4631, -0.3630, -0.0125],
[-0.1502, 0.0501, 0.1627, ..., 0.5132, -0.0626, -0.3380],
[ 0.0000, 0.0125, 0.0626, ..., -0.1001, 0.0626, -0.0501],
...,
[-0.0751, 0.0501, 0.0626, ..., 0.0000, 0.0751, -0.1252],
[ 0.0501, -0.0876, -0.0501, ..., 0.0751, -0.0626, -0.0751],
[ 0.0376, 0.0876, 0.0626, ..., -0.0250, 0.0876, -0.0626]],
size=(33278, 256), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.012516804970800877,
zero_point=0)), ('decoder.scale', tensor(1.)), ('decoder.zero_point', tensor(0)), ('decoder.bias', tensor([ 5.6618, 4.8450, 0.1365, ..., -0.3657, -0.3338, -0.3476],
requires_grad=True))])
```
- PyTorch Version : 1.3.1
- OS (e.g., Linux): Ubuntu
- How you installed PyTorch : conda
- Build command you used (if compiling from source): conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
- Python version: 3.7.5
cc @jerryzh168 @jianyuh @dzhulgakov @raghuramank100 @jamesr66a
| Looking into it. Meanwhile, you can get the parameters by running
```
for param in quantized_model.rnn._all_weight_values:
print(torch.ops.quantized.linear_unpack(param.param))
```
After #31540 lands you can see the weight values after running `state_dict()` on it. You can also access all the weights using `quantized_model.rnn.all_weights` | 2020-04-09T00:14:36 |
|
pytorch/pytorch | 46,422 | pytorch__pytorch-46422 | [
"45860"
] | 03ed8cbf586bd99181d5a3fe9b61760c83dbdf62 | diff --git a/torch/jit/_state.py b/torch/jit/_state.py
--- a/torch/jit/_state.py
+++ b/torch/jit/_state.py
@@ -61,7 +61,6 @@ def enable():
_script_classes = {}
def _add_script_class(cls, name):
- cls.__torch_script_class__ = True
global _script_classes
_script_classes[name] = cls
diff --git a/torch/jit/annotations.py b/torch/jit/annotations.py
--- a/torch/jit/annotations.py
+++ b/torch/jit/annotations.py
@@ -6,6 +6,7 @@
from .._jit_internal import List, Tuple, is_tuple, is_list, Dict, is_dict, Optional, \
is_optional, _qualified_name, Any, Future, is_future, is_ignored_fn
from .._jit_internal import BroadcastingList1, BroadcastingList2, BroadcastingList3 # type: ignore
+from ._state import _get_script_class
from torch._C import TensorType, TupleType, FloatType, IntType, \
ListType, StringType, DictType, BoolType, OptionalType, ClassType, InterfaceType, AnyType, NoneType, \
@@ -316,16 +317,18 @@ def try_ann_to_type(ann, loc):
if ann is torch.dtype:
return IntType.get() # dtype not yet bound in as its own type
if inspect.isclass(ann) and issubclass(ann, enum.Enum):
- if not hasattr(ann, "__torch_script_class__"):
+ qualified_name = _qualified_name(ann)
+ if _get_script_class(qualified_name) is None:
torch.jit._script._recursive_compile_class(ann, loc)
return EnumType(_qualified_name(ann), get_enum_value_type(ann, loc), list(ann))
if inspect.isclass(ann):
- if hasattr(ann, "__torch_script_class__"):
- return ClassType(_qualified_name(ann))
+ qualified_name = _qualified_name(ann)
+ if _get_script_class(qualified_name) is not None:
+ return ClassType(qualified_name)
ignored_builtin_classes = (torch.nn.Module, tuple, list, Exception)
if torch._jit_internal.can_compile_class(ann) and not issubclass(ann, ignored_builtin_classes):
torch.jit._script._recursive_compile_class(ann, loc)
- return ClassType(_qualified_name(ann))
+ return ClassType(qualified_name)
# Maybe resolve a NamedTuple to a Tuple Type
def fake_rcb(key):
| diff --git a/test/jit/test_class_type.py b/test/jit/test_class_type.py
--- a/test/jit/test_class_type.py
+++ b/test/jit/test_class_type.py
@@ -6,6 +6,7 @@
import torch
import torch.nn as nn
from torch.testing import FileCheck
+from typing import Any
# Make the helper files in test/ importable
pytorch_test_dir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
@@ -445,6 +446,39 @@ class Derived(Base):
def two(self, x):
return x + self.b + 2
+
+ def test_class_inheritance_implicit(self):
+ """
+ Test that inheritance is detected in
+ implicit scripting codepaths (e.g. try_ann_to_type).
+ """
+ class A:
+ def __init__(self, t):
+ self.t = t
+
+ @staticmethod
+ def f(a: torch.Tensor):
+ return A(a + 1)
+
+ class B(A):
+ def __init__(self, t):
+ self.t = t + 10
+
+ @staticmethod
+ def f(a: torch.Tensor):
+ return A(a + 1)
+
+ x = A(torch.tensor([3]))
+
+ def fun(x: Any):
+ if isinstance(x, A):
+ return A.f(x.t)
+ else:
+ return B.f(x.t)
+
+ with self.assertRaisesRegex(RuntimeError, "Tried to access nonexistent attribute or method"):
+ sc = torch.jit.script(fun)
+
@unittest.skipIf(IS_SANDCASTLE, "Importing like this doesn't work in fbcode")
def test_imported_classes(self):
import jit._imported_class_test.foo
diff --git a/torch/testing/_internal/jit_utils.py b/torch/testing/_internal/jit_utils.py
--- a/torch/testing/_internal/jit_utils.py
+++ b/torch/testing/_internal/jit_utils.py
@@ -55,6 +55,7 @@ def do_input_map(fn, input):
def clear_class_registry():
torch._C._jit_clear_class_registry()
torch.jit._recursive.concrete_type_store = torch.jit._recursive.ConcreteTypeStore()
+ torch.jit._state._script_classes.clear()
def get_execution_plan(graph_executor_state):
execution_plans = list(graph_executor_state.execution_plans.values())
| torch.jit.script segfault
## 🐛 Bug
Segfault on master (built Oct 4) with the following code (which is perhaps not expected to be scriptable):
```python
from typing import Any
import torch
class A:
def __init__(self, t):
self.t = t
@staticmethod
def f(a: torch.Tensor):
return A(a + 1)
class B(A):
def __init__(self, t):
self.t = t + 10
@staticmethod
def f(a: torch.Tensor):
return A(a + 1)
x = A(torch.tensor([3]))
def fun(x: Any):
if isinstance(x, A):
return A.f(x.t)
else:
return B.f(x.t)
print(torch.__version__)
sc = torch.jit.script(fun)
```
cc @ezyang @gchanan @zou3519 @gmagogsfm
| I can repro, taking a look at it now.
Took an initial look; the error is coming from a `try_ann_to_type` call, deep in pybind. Assigning to @SplitInfinity, who knows more about scripting classes.
The issue here is that `B` extends `A`, and so in `try_ann_to_type`, TorchScript sees a `__torch_script_class__` attribute on `B` that was attached to `A`, thinks that the class has already been scripted, and tries to look up the JIT type by name in the compilation unit type registry.
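A small illustration of why the old `hasattr` check misfires: ordinary attribute lookup follows the class hierarchy, so a marker set on `A` is visible on `B` even though `B` was never scripted.

```python
class A:
    pass

A.__torch_script_class__ = True  # roughly what scripting A used to do

class B(A):
    pass

print(hasattr(B, "__torch_script_class__"))  # True, inherited from A
```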
is this fixed?
#45940 is waiting for review with all comments addressed and all tests passing. I forgot to re-request review last week after pushing new changes, oops. | 2020-10-15T21:26:51 |
pytorch/pytorch | 53,690 | pytorch__pytorch-53690 | [
"53507"
] | ad8d1b2aaaf2ba28c51b1cb38f86311749eff755 | diff --git a/torch/onnx/symbolic_opset11.py b/torch/onnx/symbolic_opset11.py
--- a/torch/onnx/symbolic_opset11.py
+++ b/torch/onnx/symbolic_opset11.py
@@ -136,6 +136,10 @@ def index_put(g, self, indices_list_value, values, accumulate=False):
sub_data_shape = sym_help._slice_helper(
g, g.op("Shape", self), axes=[0], starts=[len(indices_list)], ends=[maxsize])
values_shape = g.op("Concat", broadcast_index_shape, sub_data_shape, axis_i=0)
+ # Check if values is a singular value and expand accordingly
+ rank = sym_help._get_tensor_rank(values)
+ if rank is not None and rank == 0:
+ values = expand(g, values, values_shape, None)
values = g.op("Reshape", values, values_shape)
if accumulate:
| diff --git a/test/onnx/test_pytorch_onnx_onnxruntime.py b/test/onnx/test_pytorch_onnx_onnxruntime.py
--- a/test/onnx/test_pytorch_onnx_onnxruntime.py
+++ b/test/onnx/test_pytorch_onnx_onnxruntime.py
@@ -1754,6 +1754,26 @@ def forward(self, x, ind, update):
update = torch.ones(4)
self.run_test(IndexPutModel(), (x, ind, update))
+ @skipIfUnsupportedMinOpsetVersion(11)
+ def test_index_put_singular(self):
+ class IndexPutBoolModel(torch.nn.Module):
+ def forward(self, mask, indices):
+ mask[indices] = True
+ return mask
+
+ mask = torch.zeros(100, dtype=torch.bool)
+ indices = (torch.rand(25) * mask.shape[0]).to(torch.int64)
+ self.run_test(IndexPutBoolModel(), (mask, indices))
+
+ class IndexPutFloatModel(torch.nn.Module):
+ def forward(self, mask, indices):
+ mask[indices] = torch.tensor(5.5)
+ return mask
+
+ mask = torch.rand(100, dtype=torch.float)
+ indices = (torch.rand(50) * mask.shape[0]).to(torch.int64)
+ self.run_test(IndexPutFloatModel(), (mask, indices))
+
@skipIfUnsupportedMinOpsetVersion(11)
def test_index_put_accumulate(self):
class IndexPutModel(torch.nn.Module):
| Tensor indexing issue in onnx
## 🐛 Bug
This is a sibling issue for https://github.com/microsoft/onnxruntime/issues/6910, as they suggested reporting it here too.
It seems that Tensor indexing is not fully supported once exported to ONNX:
## To Reproduce
```py
import io
import torch
from torch import Tensor
import onnxruntime
def f() -> Tensor:
mask = torch.zeros(100, dtype=torch.bool)
indices = (torch.rand(25) * mask.shape[0]).to(torch.int64)
mask[indices] = True # offending line
return mask
class Module(torch.nn.Module):
def forward(self, *args, **kwargs):
return f()
model = Module()
model.eval()
model() # works fine
onnx_io = io.BytesIO()
torch.onnx.export(model, [], onnx_io, opset_version=11)
ort_session = onnxruntime.InferenceSession(onnx_io.getvalue())
ort_outs = ort_session.run(None, {}) # errors
```
```
/Users/nicolashug/opt/miniconda3/envs/pt/lib/python3.8/site-packages/torch/onnx/utils.py:347: UserWarning: No input args
warnings.warn("No input args")
2021-03-05 14:58:13.338019 [E:onnxruntime:, inference_session.cc:1293 operator()] Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{}, requested shape:{25}
Traceback (most recent call last):
File "lol.py", line 24, in <module>
ort_session = onnxruntime.InferenceSession(onnx_io.getvalue())
File "/Users/nicolashug/opt/miniconda3/envs/pt/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 280, in __init__
self._create_inference_session(providers, provider_options)
File "/Users/nicolashug/opt/miniconda3/envs/pt/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 312, in _create_inference_session
sess.initialize_session(providers, provider_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{}, requested shape:{25}
```
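Until the exporter expands a scalar right-hand side itself (which is what the `expand` added in the diff above does), a possible user-side workaround is to assign a value whose shape already matches the indices; an untested sketch, reusing the imports from the repro:

```python
def f() -> Tensor:
    mask = torch.zeros(100, dtype=torch.bool)
    indices = (torch.rand(25) * mask.shape[0]).to(torch.int64)
    # expanded value instead of a bare True, so no empty-to-{25} reshape is needed
    mask[indices] = torch.ones(indices.shape[0], dtype=torch.bool)
    return mask
```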
## Environment
Collecting environment information...
PyTorch version: 1.8.0a0+ad7d208
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 10.15.7 (x86_64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.29)
CMake version: version 3.18.2
Python version: 3.8 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] torch==1.8.0a0
[pip3] torchaudio==0.8.0a0+2c8aad9
[pip3] torchtext==0.9.0a0+651c1f7
[pip3] torchvision==0.9.0a0+1438b0c
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-include 2020.2 260
[conda] mkl-service 2.3.0 py38h9ed2024_0
[conda] mkl_fft 1.3.0 py38ha059aab_0
[conda] mkl_random 1.1.1 py38h959d312_0
[conda] numpy 1.19.2 py38h456fd55_0
[conda] numpy-base 1.19.2 py38hcfb5961_0
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] torch 1.8.0a0 pypi_0 pypi
[conda] torchaudio 0.8.0a0+f2da586 pypi_0 pypi
[conda] torchtext 0.9.0a0+651c1f7 dev_0 <develop>
[conda] torchvision 0.9.0a0+1438b0c dev_0 <develop>
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity
| 2021-03-10T07:14:15 |
|
pytorch/pytorch | 53,822 | pytorch__pytorch-53822 | [
"53501"
] | 30712fca7e45de33f47b64c6b664c380018743d6 | diff --git a/torch/distributed/rpc/options.py b/torch/distributed/rpc/options.py
--- a/torch/distributed/rpc/options.py
+++ b/torch/distributed/rpc/options.py
@@ -73,16 +73,16 @@ def set_device_map(self, to: str, device_map: Dict):
>>> # on worker 0
>>> options = TensorPipeRpcBackendOptions(
>>> num_worker_threads=8,
- >>> device_maps={"worker1": {0, 1}}
+ >>> device_maps={"worker1": {0: 1}}
>>> # maps worker0's cuda:0 to worker1's cuda:1
>>> )
- >>> options.set_device_map("worker1", {1, 2})
+ >>> options.set_device_map("worker1", {1: 2})
>>> # maps worker0's cuda:1 to worker1's cuda:2
>>>
>>> rpc.init_rpc(
>>> "worker0",
>>> rank=0,
- >>> world_size=2
+ >>> world_size=2,
>>> backend=rpc.BackendType.TENSORPIPE,
>>> rpc_backend_options=options
>>> )
@@ -94,7 +94,7 @@ def set_device_map(self, to: str, device_map: Dict):
>>> # the device map, and hence will be moved back to cuda:0 and
>>> # cuda:1 on worker0
>>> print(rets[0]) # tensor([2., 2.], device='cuda:0')
- >>> print(rets[0]) # tensor([2., 2.], device='cuda:1')
+ >>> print(rets[1]) # tensor([2., 2.], device='cuda:1')
"""
device_index_map = {}
curr_device_maps = super().device_maps
| Problems in TensorPipeRpcBackendOptions device mapping documentation?
## 📚 Documentation
The new PyTorch 1.8 release introduces CUDA support in RPC.
I've referred to the RPC documentation, and the only reference to the CUDA support I could find is under [`TensorPipeRpcBackendOptions`](https://pytorch.org/docs/1.8.0/rpc.html#torch.distributed.rpc.TensorPipeRpcBackendOptions) and [`set_device_map`](https://pytorch.org/docs/1.8.0/rpc.html#torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map).
It seems like setting up CUDA support is simply done by supplying a device mapping in the `TensorPipeRpcBackendOptions`, which is pretty cool.
However, I find the documentation for `device_maps`/`device_map` unclear. It seems that `TensorPipeRpcBackendOptions`'s `device_maps` is a dictionary whose keys are worker names, but I'm not exactly sure what the structure of the dictionary's values should be. Supposedly each value should be some sort of dictionary (as indicated by the parameter's type - `Dict[str, Dict]`), yet the example code provides a set: `device_maps={"worker1": {0, 1}}`. I don't really understand how this "maps worker0's cuda:0 to worker1's cuda:1".
Same for `set_device_map`'s `device_map`: the parameter's type also indicates it's a dictionary (`(Dict of python:int, str, or torch.device)`), but it doesn't quite explain its structure. And again, the example code provides a set: `options.set_device_map("worker1", {1, 2})`.
It is also not explained how to define a GPU->CPU mapping (or vice versa).
Apart from this, there are two obvious errors in the example code provided in that documentation:
1. There is a missing comma in the following part:
```python
>>> rpc.init_rpc(
>>> "worker0",
>>> rank=0,
>>> world_size=2 # <-- missing comma
>>> backend=rpc.BackendType.TENSORPIPE,
>>> rpc_backend_options=options
>>> )
```
2. I don't see how it is possible that those two `print`s will give different results. I'm guessing that the second line should read `print(rets[1])`?
```python
>>> print(rets[0]) # tensor([2., 2.], device='cuda:0')
>>> print(rets[0]) # tensor([2., 2.], device='cuda:1')
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @jjlilley @osalpekar @jiayisuse @mrzzd @agolynski @SciPioneer @H-Huang @cbalioglu
| Hey @rafi-cohen, thanks a lot for spotting the error in the doc, we will fix that asap.
Currently, the CUDA RPC support is a prototype feature and at an early stage. Please use nightly builds to get the most recent implementation. The tests below can serve as a reference of how it can be used:
https://github.com/pytorch/pytorch/blob/d54be1a9467db5075256d229ea1b01f1a4bcba8d/torch/testing/_internal/distributed/rpc/rpc_test.py#L4625-L5359
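For quick orientation, here is a minimal sketch of the corrected usage, consistent with the docstring fix in the patch above (the worker names, ranks, and device indices are illustrative only):
```python
import torch.distributed.rpc as rpc

options = rpc.TensorPipeRpcBackendOptions(
    num_worker_threads=8,
    # each value is a dict mapping a local device to the callee's device
    device_maps={"worker1": {0: 1}},   # worker0's cuda:0 -> worker1's cuda:1
)
options.set_device_map("worker1", {1: 2})  # worker0's cuda:1 -> worker1's cuda:2

rpc.init_rpc(
    "worker0",
    rank=0,
    world_size=2,
    backend=rpc.BackendType.TENSORPIPE,
    rpc_backend_options=options,
)
```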
> It is also not explained how to define a GPU->CPU mapping (or vice versa).
It does not support direct GPU-to-CPU communication yet. We would like to hear your use case. Is GPU-to-CPU/CPU-to-GPU a requirement for your application? Is today's default mapping API sufficient for your use case or do you need per-RPC device map?
@rafi-cohen please let me know if you would like to see any other information added to #53508.
We are testing this feature on some large models internally. When we are confident there are no major gaps/flaws in the implementation, we will add a tutorial/recipe for it, likely together with when we graduate this to a beta feature in the next release. | 2021-03-11T16:31:13 |
|
pytorch/pytorch | 54,587 | pytorch__pytorch-54587 | [
"53507"
] | 0eba63ec9309898c4d62d3c3a5d17568447f16ec | diff --git a/torch/onnx/symbolic_helper.py b/torch/onnx/symbolic_helper.py
--- a/torch/onnx/symbolic_helper.py
+++ b/torch/onnx/symbolic_helper.py
@@ -296,6 +296,11 @@ def _is_fp(value):
return (type == 'Float') or (type == 'Double') or (type == 'Half')
return False
+def _dtype_is_fp(type_value):
+ if type_value:
+ return (type_value == torch.float16) or (type_value == torch.float32) or (type_value == torch.float64)
+ return False
+
def _generate_wrapped_number(g, scalar):
"""
Create a wrapped number based on https://github.com/pytorch/pytorch/issues/9515
diff --git a/torch/onnx/symbolic_opset9.py b/torch/onnx/symbolic_opset9.py
--- a/torch/onnx/symbolic_opset9.py
+++ b/torch/onnx/symbolic_opset9.py
@@ -3014,3 +3014,22 @@ def linear(g, input, weight, bias):
output = add(g, bias, output)
return output
+
+@parse_args('v', 'b', 'i', 'v', 'v', 'v', 'v')
+def hann_window(g, window_length, periodic=True, dtype=None, layout=None, device=None, pin_memory=None, requires_grad=False):
+ if dtype is None:
+ dtype = torch.get_default_dtype()
+ if sym_help._dtype_is_fp(dtype) is False:
+ dtype = torch.float
+ dtype = sym_help.scalar_type_to_pytorch_type.index(dtype)
+
+ n_array = arange(g, window_length, 4, None, None, None)
+ output = g.op('Cast', n_array, to_i=sym_help.cast_pytorch_to_onnx['Float'])
+ output = mul(g, g.op('Constant', value_t=torch.tensor(math.pi, dtype=torch.float)), output)
+
+ if periodic is False:
+ window_length = sub(g, window_length, g.op("Constant", value_t=torch.tensor(1, dtype=torch.int)))
+ output = div(g, output, window_length)
+ output = g.op("Cast", square(g, sin(g, output)), to_i=sym_help.scalar_type_to_onnx[dtype])
+
+ return output
| diff --git a/test/onnx/test_pytorch_onnx_onnxruntime.py b/test/onnx/test_pytorch_onnx_onnxruntime.py
--- a/test/onnx/test_pytorch_onnx_onnxruntime.py
+++ b/test/onnx/test_pytorch_onnx_onnxruntime.py
@@ -7777,6 +7777,60 @@ def forward(self, input_ids):
self.run_test(M(), (x,), input_names=['input_ids'],
dynamic_axes={'input_ids': {0: 'batch', 1: 'sequence'}})
+ @skipIfUnsupportedMinOpsetVersion(9)
+ def test_hann_window_periodic(self):
+ class HannWindowModule_Periodic(torch.nn.Module):
+ def __init__(self):
+ super(HannWindowModule_Periodic, self).__init__()
+ self.window_length = 0
+
+ def forward(self, x, window_length: int):
+ self.window_length = window_length
+ return torch.add(x, torch.hann_window(self.window_length, periodic=True, dtype=torch.float))
+
+ win_length = 100
+ x = torch.randn(win_length)
+
+ module = HannWindowModule_Periodic()
+ self.run_test(module, (x, win_length))
+
+ @skipIfUnsupportedMinOpsetVersion(9)
+ def test_hann_window_not_periodic(self):
+ class HannWindowModule_NotPeriodic(torch.nn.Module):
+ def __init__(self):
+ super(HannWindowModule_NotPeriodic, self).__init__()
+ self.window_length = 0
+
+ def forward(self, x, window_length: int):
+ self.window_length = window_length
+ return torch.add(x, torch.hann_window(self.window_length, periodic=False, dtype=torch.float))
+
+ win_length = 100
+ x = torch.randn(win_length)
+
+ module = HannWindowModule_NotPeriodic()
+ self.run_test(module, (x, win_length))
+
+ @skipIfUnsupportedMinOpsetVersion(9)
+ @disableScriptTest()
+ def test_hann_window_default_values(self):
+ class HannWindowModule(torch.nn.Module):
+ def __init__(self):
+ super(HannWindowModule, self).__init__()
+ self.window_length = 0
+
+ def forward(self, x, window_length: int):
+ import torch.nn.functional as F
+ self.window_length = window_length
+ return torch.add(x, F.relu(torch.hann_window(self.window_length)))
+
+ win_length = 100
+ x = torch.randn(win_length, dtype=torch.float)
+ module = HannWindowModule()
+
+ output = module(x, win_length)
+ self.run_test(module, (x, win_length))
+
def make_test(name, base, layer, bidirectional, initial_state,
variable_length, dropout,
**extra_kwargs):
| Tensor indexing issue in onnx
## 🐛 Bug
This is a sibling issue for https://github.com/microsoft/onnxruntime/issues/6910 as they suggested to report here too.
It seems that Tensor indexing is not fully supported once exported to ONNX:
## To Reproduce
```py
import io
import torch
from torch import Tensor
import onnxruntime
def f() -> Tensor:
mask = torch.zeros(100, dtype=torch.bool)
indices = (torch.rand(25) * mask.shape[0]).to(torch.int64)
mask[indices] = True # offending line
return mask
class Module(torch.nn.Module):
def forward(self, *args, **kwargs):
return f()
model = Module()
model.eval()
model() # works fine
onnx_io = io.BytesIO()
torch.onnx.export(model, [], onnx_io, opset_version=11)
ort_session = onnxruntime.InferenceSession(onnx_io.getvalue())
ort_outs = ort_session.run(None, {}) # errors
```
```
/Users/nicolashug/opt/miniconda3/envs/pt/lib/python3.8/site-packages/torch/onnx/utils.py:347: UserWarning: No input args
warnings.warn("No input args")
2021-03-05 14:58:13.338019 [E:onnxruntime:, inference_session.cc:1293 operator()] Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{}, requested shape:{25}
Traceback (most recent call last):
File "lol.py", line 24, in <module>
ort_session = onnxruntime.InferenceSession(onnx_io.getvalue())
File "/Users/nicolashug/opt/miniconda3/envs/pt/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 280, in __init__
self._create_inference_session(providers, provider_options)
File "/Users/nicolashug/opt/miniconda3/envs/pt/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 312, in _create_inference_session
sess.initialize_session(providers, provider_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{}, requested shape:{25}
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Environment
Collecting environment information...
PyTorch version: 1.8.0a0+ad7d208
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 10.15.7 (x86_64)
GCC version: Could not collect
Clang version: 12.0.0 (clang-1200.0.32.29)
CMake version: version 3.18.2
Python version: 3.8 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.2
[pip3] pytorch-sphinx-theme==0.0.24
[pip3] torch==1.8.0a0
[pip3] torchaudio==0.8.0a0+2c8aad9
[pip3] torchtext==0.9.0a0+651c1f7
[pip3] torchvision==0.9.0a0+1438b0c
[conda] blas 1.0 mkl
[conda] mkl 2019.4 233
[conda] mkl-include 2020.2 260
[conda] mkl-service 2.3.0 py38h9ed2024_0
[conda] mkl_fft 1.3.0 py38ha059aab_0
[conda] mkl_random 1.1.1 py38h959d312_0
[conda] numpy 1.19.2 py38h456fd55_0
[conda] numpy-base 1.19.2 py38hcfb5961_0
[conda] pytorch-sphinx-theme 0.0.24 dev_0 <develop>
[conda] torch 1.8.0a0 pypi_0 pypi
[conda] torchaudio 0.8.0a0+f2da586 pypi_0 pypi
[conda] torchtext 0.9.0a0+651c1f7 dev_0 <develop>
[conda] torchvision 0.9.0a0+1438b0c dev_0 <develop>
cc @houseroad @spandantiwari @lara-hdr @BowenBao @neginraoof @SplitInfinity
| @NicolasHug PR #53690 should resolve this issue, could you please confirm whether these changes resolved your issue | 2021-03-24T12:16:19 |
pytorch/pytorch | 60,743 | pytorch__pytorch-60743 | [
"60741"
] | 7c2938bf672627dd61642bebbdc6db217cdfc61b | diff --git a/tools/linter/clang_tidy.py b/tools/linter/clang_tidy.py
--- a/tools/linter/clang_tidy.py
+++ b/tools/linter/clang_tidy.py
@@ -377,6 +377,8 @@ def main() -> None:
shutil.copyfile(fname, mapped_fname)
pwd = os.getcwd() + "/"
+ if options.dry_run:
+ print(clang_tidy_output)
for line in clang_tidy_output.splitlines():
if line.startswith(pwd):
print(line[len(pwd):])
| `--dry-run` flag doesn't print anything (`tools/linter/clang_tidy.py`)
## Bug
Running `tools/linter/clang_tidy.py` with the `--dry-run` option doesn't print anything.
## To Reproduce
Run the following command
```
python3 tools/linter/clang_tidy.py --paths torch/csrc/fx --dry-run
```
Output:
```
```
Expected Output:
```
clang-tidy -p build -config '{"InheritParentConfig": true, "Checks": " bugprone-*, -bugprone-forward-declaration-namespace, -bugprone-macro-parentheses, -bugprone-lambda-function-name, -bugprone-reserved-identifier, cppcoreguidelines-*, -cppcoreguidelines-avoid-magic-numbers, -cppcoreguidelines-interfaces-global-init, -cppcoreguidelines-macro-usage, -cppcoreguidelines-owning-memory, -cppcoreguidelines-pro-bounds-array-to-pointer-decay, -cppcoreguidelines-pro-bounds-constant-array-index, -cppcoreguidelines-pro-bounds-pointer-arithmetic, -cppcoreguidelines-pro-type-cstyle-cast, -cppcoreguidelines-pro-type-reinterpret-cast, -cppcoreguidelines-pro-type-static-cast-downcast, -cppcoreguidelines-pro-type-union-access, -cppcoreguidelines-pro-type-vararg, -cppcoreguidelines-special-member-functions, -facebook-hte-RelativeInclude, hicpp-exception-baseclass, hicpp-avoid-goto, modernize-*, -modernize-concat-nested-namespaces, -modernize-return-braced-init-list, -modernize-use-auto, -modernize-use-default-member-init, -modernize-use-using, -modernize-use-trailing-return-type, performance-*, -performance-noexcept-move-constructor, -performance-unnecessary-value-param, ", "HeaderFilterRegex": "torch/csrc/.*", "AnalyzeTemporaryDtors": false, "CheckOptions": null}' torch/csrc/fx/fx_init.cpp
```
cc @pytorch/pytorch-dev-infra
| 2021-06-25T15:06:31 |
||
pytorch/pytorch | 64,271 | pytorch__pytorch-64271 | [
"60417"
] | 82174330d0bae4e2356295e16e261052f1d0ff8c | diff --git a/torch/fx/graph.py b/torch/fx/graph.py
--- a/torch/fx/graph.py
+++ b/torch/fx/graph.py
@@ -923,11 +923,13 @@ def emit_node(node : Node):
return
qualified_name = _get_qualified_name(node.target)
global_name = add_global(qualified_name, node.target)
+ # special case for getattr: node.args could be 2-argument or 3-argument
+ # 2-argument: attribute access; 3-argument: fall through to attrib function call with default value
if global_name == 'getattr' and \
isinstance(node.args, tuple) and \
isinstance(node.args[1], str) and \
- node.args[1].isidentifier():
- # pretty print attribute access
+ node.args[1].isidentifier() and \
+ len(node.args) == 2:
body.append(f'{repr(node)}{maybe_type_annotation} = {_format_target(repr(node.args[0]), node.args[1])}')
return
body.append(f'{repr(node)}{maybe_type_annotation} = {global_name}({_format_args(node.args, node.kwargs)})')
| diff --git a/test/test_fx.py b/test/test_fx.py
--- a/test/test_fx.py
+++ b/test/test_fx.py
@@ -95,6 +95,8 @@ def a_lifted_leaf2(a, b):
wrap('len')
+wrap('getattr')
+
@wrap
def wrapped_via_decorator(a):
return a + 1
@@ -926,6 +928,14 @@ def forward(self, x):
self.assertEqual(traced2(inp), inp + 3.0)
self.assertIs(len, builtins.len)
+ def test_torch_fx_getattr(self):
+ class FXGetattrTest(torch.nn.Module):
+ def forward(self, x):
+ return getattr(x, 'nonexistent_attr', torch.Tensor([2, 3]))
+
+ traced = symbolic_trace(FXGetattrTest())
+ self.assertEqual(traced(torch.rand(3, 4)), torch.Tensor([2, 3]))
+
def test_sqrt(self):
class Sqrt1(torch.nn.Module):
def forward(self, x):
| [FX] getattr default not respected after symbolic_trace
## 🐛 Bug
If, during symbolic tracing, a builtin `getattr` call that uses a default value is traced, that default value is lost.
## To Reproduce
Steps to reproduce the behavior:
```
import torch
import torch.nn as nn
from torch.fx import symbolic_trace
class TestModule(nn.Module):
def __init__(self):
super().__init__()
def forward(self, a: torch.Tensor) -> torch.Size:
return getattr(a, "nonexistent_attr", torch.Size([1,2]))
m = TestModule()
traced = symbolic_trace(m)
# WRONG: AttributeError: 'Tensor' object has no attribute 'nonexistent_attr'
# traced(torch.rand(3, 4))
print(traced.graph)
"""
graph():
%a : torch.Tensor [#users=1] = placeholder[target=a]
%getattr_1 : [#users=1] = call_function[target=builtins.getattr](args = (%a, nonexistent_attr), kwargs = {})
return getattr_1
"""
```
## Expected behavior
If you inspect `traced.graph` you'll notice that no default value for getattr is provided, although it should be `torch.Size([1, 2])`. When you run the traced module you will encounter an exception about `a` not having `nonexistent_attr`, which should not be thrown because the default value should be returned instead, preventing the exception.
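For reference, the new test added above exercises exactly this pattern; a sketch of it (the `wrap("getattr")` registration mirrors the test file and keeps the 3-argument `getattr` call as a `call_function` node so the codegen fix in the patch can emit the default):
```python
import torch
from torch.fx import symbolic_trace, wrap

wrap("getattr")  # treat getattr as a traceable leaf call, default argument included

class FXGetattrTest(torch.nn.Module):
    def forward(self, x):
        return getattr(x, "nonexistent_attr", torch.Tensor([2, 3]))

traced = symbolic_trace(FXGetattrTest())
print(traced(torch.rand(3, 4)))  # tensor([2., 3.]) once the fix is applied
```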
| Hm I'm not actually sure if it's possible to intercept the default behavior here. `getattr` is implemented as this C builtin function: https://github.com/python/cpython/blob/9af34c935185eca497617a216d141c72ffaeae9c/Python/bltinmodule.c#L1108 and the path that handles default values is entirely contained within that implementation; the default value is not visible in the dispatch implementation here https://github.com/python/cpython/blob/0fd27375cabd12e68a2f12cfeca11a2d5043429e/Objects/object.c#L944 which eventually delegates down into the object's `__getattr__` method
Potentially we could fix this externally by patching `builtins.getattr` during trace time and installing the logic that way. | 2021-08-31T15:21:23 |
pytorch/pytorch | 64,679 | pytorch__pytorch-64679 | [
"61138"
] | 088ca37c5c10a35fd52dd5b1cce7d112f576b157 | diff --git a/torch/distributed/run.py b/torch/distributed/run.py
--- a/torch/distributed/run.py
+++ b/torch/distributed/run.py
@@ -520,7 +520,7 @@ def config_from_args(args) -> Tuple[LaunchConfig, Union[Callable, str], List[str
nproc_per_node = determine_local_world_size(args.nproc_per_node)
if "OMP_NUM_THREADS" not in os.environ and nproc_per_node > 1:
omp_num_threads = 1
- print(
+ log.warning(
f"*****************************************\n"
f"Setting OMP_NUM_THREADS environment variable for each process to be "
f"{omp_num_threads} in default, to avoid your system being overloaded, "
| torch.distributed.launch/run - make OMP warning message a log.warning rather than a stdout print
## 🚀 Feature
Make the warning message about the default `OMP_NUM_THREADS` set by `torch.distributed.launch` and `torch.distributed.run` use `logger.warning(msg)` rather than a print statement.
## Motivation
Users can set the `LOGLEVEL` environment variable when using the `torch.distributed.launch` and `torch.distributed.run` CLI utils to control the Python log level of the launcher program. However, regardless of the overridden `LOGLEVEL`, the following message prints:
```
$ python -m torch.distributed.run ~/test.py
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
```
This is because in https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py#L523-L530 the message
is printed using `print(msg)` rather than `log.warning(msg)`.
## Pitch
Simply change the `print()` to `log.warning()`.
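(A minimal sketch of what that change looks like, mirroring the patch at the top of this entry; `omp_warning_msg` stands in for the real multi-line f-string, and `log` is the module-level logger already defined in `torch/distributed/run.py`.)
```python
# before
print(omp_warning_msg)

# after: goes through the logging machinery, so a user-supplied LOGLEVEL applies
log.warning(omp_warning_msg)
```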
## Alternatives
N/A - simple change.
## Additional context
Came up as part of the discussion https://github.com/pytorch/pytorch/issues/60716#issuecomment-872469733. The OMP message was left untouched for backwards compatibility and in torch<1.9.0 `torch.distributed.launch` used to not support user-overrides for `LOGLEVEL` (in torch-1.9.0 it does). Now that users do have an option to pass a custom LOGLEVEL it makes sense to make the OMP printout a `log.warning` so that users only interested in `LOGLEVEL=ERROR` don't have to see this message every time.
| IMPORTANT! - #60925 needs to land before working on this as #60925 fixes a few other issues with the distributed.launch CLI that needs to be present for one to properly test and validate the proposed change. | 2021-09-08T18:53:01 |
|
pytorch/pytorch | 65,924 | pytorch__pytorch-65924 | [
"65221"
] | 1fa17a20fce8ffb3bb8dd3615dee3e2e83c4932b | diff --git a/torch/utils/data/datapipes/iter/callable.py b/torch/utils/data/datapipes/iter/callable.py
--- a/torch/utils/data/datapipes/iter/callable.py
+++ b/torch/utils/data/datapipes/iter/callable.py
@@ -1,4 +1,3 @@
-import copy
import warnings
from torch.utils.data import IterDataPipe, _utils, functional_datapipe, DataChunk
from typing import Callable, Dict, Iterator, Optional, Sized, Tuple, TypeVar
@@ -99,8 +98,6 @@ def _apply_fn(self, data):
data = list(data)
else:
t_flag = False
- # Deepcopy data to prevent the original data modified. E.g. list, dict
- data = copy.deepcopy(data)
if self.output_col is None:
if isinstance(self.input_col, (list, tuple)):
diff --git a/torch/utils/data/datapipes/iter/utils.py b/torch/utils/data/datapipes/iter/utils.py
--- a/torch/utils/data/datapipes/iter/utils.py
+++ b/torch/utils/data/datapipes/iter/utils.py
@@ -1,3 +1,5 @@
+import copy
+import warnings
from torch.utils.data import IterDataPipe
@@ -8,12 +10,34 @@ class IterableWrapperIterDataPipe(IterDataPipe):
Args:
iterable: Iterable object to be wrapped into an IterDataPipe
+ deepcopy: Option to deepcopy input iterable object for each
+ iteration.
+
+ .. note::
+ If `deepcopy` is set to False explicitly, users should ensure
+ that data pipeline doesn't contain any in-place operations over
+ the iterable instance, in order to prevent data inconsistency
+ across iterations.
"""
- def __init__(self, iterable):
+ def __init__(self, iterable, deepcopy=True):
self.iterable = iterable
+ self.deepcopy = deepcopy
def __iter__(self):
- for data in self.iterable:
+ source_data = self.iterable
+ if self.deepcopy:
+ try:
+ source_data = copy.deepcopy(self.iterable)
+ # For the case that data cannot be deep-copied,
+ # all in-place operations will affect iterable variable.
+ # When this DataPipe is iterated second time, it will
+ # yield modified items.
+ except TypeError:
+ warnings.warn(
+ "The input iterable can not be deepcopied, "
+ "please be aware of in-place modification would affect source data"
+ )
+ for data in source_data:
yield data
def __len__(self):
| diff --git a/test/test_datapipe.py b/test/test_datapipe.py
--- a/test/test_datapipe.py
+++ b/test/test_datapipe.py
@@ -1,3 +1,4 @@
+import copy
import http.server
import itertools
import os
@@ -414,13 +415,30 @@ def test_demux_mux_datapipe(self):
# Test Case: Uneven DataPipes
source_numbers = list(range(0, 10)) + [10, 12]
- numbers_dp = IDP(source_numbers)
+ numbers_dp = dp.iter.IterableWrapper(source_numbers)
n1, n2 = numbers_dp.demux(2, lambda x: x % 2)
self.assertEqual([0, 2, 4, 6, 8, 10, 12], list(n1))
self.assertEqual([1, 3, 5, 7, 9], list(n2))
n = n1.mux(n2)
self.assertEqual(source_numbers, list(n))
+ @suppress_warnings # Suppress warning for lambda fn
+ def test_map_with_col_file_handle_datapipe(self):
+ temp_dir = self.temp_dir.name
+ datapipe1 = dp.iter.FileLister(temp_dir, '')
+ datapipe2 = dp.iter.FileLoader(datapipe1)
+
+ def _helper(datapipe):
+ dp1 = datapipe.map(lambda x: x.read(), input_col=1)
+ dp2 = datapipe.map(lambda x: (x[0], x[1].read()))
+ self.assertEqual(list(dp1), list(dp2))
+
+ # tuple
+ _helper(datapipe2)
+ # list
+ datapipe3 = datapipe2.map(lambda x: list(x))
+ _helper(datapipe3)
+
class TestDataFramesPipes(TestCase):
"""
@@ -619,24 +637,12 @@ def __init__(self, input_dp):
super().__init__()
self.input_dp = input_dp
+ # Prevent in-place modification
def __iter__(self):
- for i in self.input_dp:
- yield i
-
-
-class IDP(IterDataPipe):
- def __init__(self, input_dp):
- super().__init__()
- self.input_dp = input_dp
- self.length = len(input_dp)
-
- def __iter__(self):
- for i in self.input_dp:
+ input_dp = self.input_dp if isinstance(self.input_dp, IterDataPipe) else copy.deepcopy(self.input_dp)
+ for i in input_dp:
yield i
- def __len__(self):
- return self.length
-
class MDP(MapDataPipe):
def __init__(self, input_dp):
@@ -669,19 +675,19 @@ class TestFunctionalIterDataPipe(TestCase):
def _test_picklable(self):
arr = range(10)
picklable_datapipes: List[Tuple[Type[IterDataPipe], IterDataPipe, Tuple, Dict[str, Any]]] = [
- (dp.iter.Mapper, IDP(arr), (), {}),
- (dp.iter.Mapper, IDP(arr), (_fake_fn, (0, ), {'test': True}), {}),
- (dp.iter.Collator, IDP(arr), (), {}),
- (dp.iter.Collator, IDP(arr), (_fake_fn, (0, ), {'test': True}), {}),
- (dp.iter.Filter, IDP(arr), (_fake_filter_fn, (0, ), {'test': True}), {}),
+ (dp.iter.Mapper, dp.iter.IterableWrapper(arr), (), {}),
+ (dp.iter.Mapper, dp.iter.IterableWrapper(arr), (_fake_fn, (0, ), {'test': True}), {}),
+ (dp.iter.Collator, dp.iter.IterableWrapper(arr), (), {}),
+ (dp.iter.Collator, dp.iter.IterableWrapper(arr), (_fake_fn, (0, ), {'test': True}), {}),
+ (dp.iter.Filter, dp.iter.IterableWrapper(arr), (_fake_filter_fn, (0, ), {'test': True}), {}),
]
for dpipe, input_dp, dp_args, dp_kwargs in picklable_datapipes:
p = pickle.dumps(dpipe(input_dp, *dp_args, **dp_kwargs)) # type: ignore[call-arg]
unpicklable_datapipes: List[Tuple[Type[IterDataPipe], IterDataPipe, Tuple, Dict[str, Any]]] = [
- (dp.iter.Mapper, IDP(arr), (lambda x: x, ), {}),
- (dp.iter.Collator, IDP(arr), (lambda x: x, ), {}),
- (dp.iter.Filter, IDP(arr), (lambda x: x >= 5, ), {}),
+ (dp.iter.Mapper, dp.iter.IterableWrapper(arr), (lambda x: x, ), {}),
+ (dp.iter.Collator, dp.iter.IterableWrapper(arr), (lambda x: x, ), {}),
+ (dp.iter.Filter, dp.iter.IterableWrapper(arr), (lambda x: x >= 5, ), {}),
]
for dpipe, input_dp, dp_args, dp_kwargs in unpicklable_datapipes:
with warnings.catch_warnings(record=True) as wa:
@@ -692,8 +698,8 @@ def _test_picklable(self):
p = pickle.dumps(datapipe)
def test_concat_datapipe(self):
- input_dp1 = IDP(range(10))
- input_dp2 = IDP(range(5))
+ input_dp1 = dp.iter.IterableWrapper(range(10))
+ input_dp2 = dp.iter.IterableWrapper(range(5))
with self.assertRaisesRegex(ValueError, r"Expected at least one DataPipe"):
dp.iter.Concater()
@@ -718,7 +724,7 @@ def test_concat_datapipe(self):
def test_fork_datapipe(self):
- input_dp = IDP(range(10))
+ input_dp = dp.iter.IterableWrapper(range(10))
with self.assertRaises(ValueError):
input_dp.fork(num_instances=0)
@@ -836,7 +842,7 @@ def test_fork_datapipe(self):
self.assertEqual(len(input_dp), len(dp3))
def test_demux_datapipe(self):
- input_dp = IDP(range(10))
+ input_dp = dp.iter.IterableWrapper(range(10))
with self.assertRaises(ValueError):
input_dp.demux(num_instances=0, classifier_fn=lambda x: 0)
@@ -882,8 +888,8 @@ def test_demux_datapipe(self):
self.assertEqual(list(range(0, 5)), output2)
# Test Case: classifer returns a value outside of [0, num_instance - 1]
- dp = input_dp.demux(num_instances=1, classifier_fn=lambda x: x % 2)
- it = iter(dp[0])
+ dp0 = input_dp.demux(num_instances=1, classifier_fn=lambda x: x % 2)
+ it = iter(dp0[0])
with self.assertRaises(ValueError):
next(it)
next(it)
@@ -960,7 +966,7 @@ def test_demux_datapipe(self):
@suppress_warnings # Suppress warning for lambda fn
def test_map_datapipe(self):
- input_dp = IDP(range(10))
+ input_dp = dp.iter.IterableWrapper(range(10))
def fn(item, dtype=torch.float, *, sum=False):
data = torch.tensor(item, dtype=dtype)
@@ -1005,7 +1011,7 @@ def fn_nn(d0, d1):
def _helper(ref_fn, fn, input_col=None, output_col=None):
for constr in (list, tuple):
- datapipe = IDP([constr((0, 1, 2)), constr((3, 4, 5)), constr((6, 7, 8))])
+ datapipe = dp.iter.IterableWrapper([constr((0, 1, 2)), constr((3, 4, 5)), constr((6, 7, 8))])
res_dp = datapipe.map(fn, input_col, output_col)
ref_dp = datapipe.map(ref_fn)
self.assertEqual(list(res_dp), list(ref_dp))
@@ -1072,9 +1078,11 @@ def _dict_update(data, newdata, remove_idx=None):
return _data
def _helper(ref_fn, fn, input_col=None, output_col=None):
- datapipe = IDP([{"x": 0, "y": 1, "z": 2},
- {"x": 3, "y": 4, "z": 5},
- {"x": 6, "y": 7, "z": 8}])
+ datapipe = dp.iter.IterableWrapper(
+ [{"x": 0, "y": 1, "z": 2},
+ {"x": 3, "y": 4, "z": 5},
+ {"x": 6, "y": 7, "z": 8}]
+ )
res_dp = datapipe.map(fn, input_col, output_col)
ref_dp = datapipe.map(ref_fn)
self.assertEqual(list(res_dp), list(ref_dp))
@@ -1117,7 +1125,7 @@ def _helper(ref_fn, fn, input_col=None, output_col=None):
# TODO(VitalyFedyunin): If dill installed this test fails
def _test_map_datapipe_nested_level(self):
- input_dp = IDP([list(range(10)) for _ in range(3)])
+ input_dp = dp.iter.IterableWrapper([list(range(10)) for _ in range(3)])
def fn(item, *, dtype=torch.float):
return torch.tensor(item, dtype=dtype)
@@ -1153,7 +1161,7 @@ def fn(item, *, dtype=torch.float):
def test_collate_datapipe(self):
arrs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
- input_dp = IDP(arrs)
+ input_dp = dp.iter.IterableWrapper(arrs)
def _collate_fn(batch):
return torch.tensor(sum(batch), dtype=torch.float)
@@ -1172,7 +1180,7 @@ def _collate_fn(batch):
def test_batch_datapipe(self):
arrs = list(range(10))
- input_dp = IDP(arrs)
+ input_dp = dp.iter.IterableWrapper(arrs)
with self.assertRaises(AssertionError):
input_dp.batch(batch_size=0)
@@ -1200,7 +1208,7 @@ def test_batch_datapipe(self):
def test_unbatch_datapipe(self):
target_length = 6
- prebatch_dp = IDP(range(target_length))
+ prebatch_dp = dp.iter.IterableWrapper(range(target_length))
input_dp = prebatch_dp.batch(3)
unbatch_dp = input_dp.unbatch()
@@ -1208,13 +1216,13 @@ def test_unbatch_datapipe(self):
for i, res in zip(prebatch_dp, unbatch_dp):
self.assertEqual(i, res)
- input_dp = IDP([[0, 1, 2], [3, 4, 5]])
+ input_dp = dp.iter.IterableWrapper([[0, 1, 2], [3, 4, 5]])
unbatch_dp = input_dp.unbatch()
self.assertEqual(len(list(unbatch_dp)), target_length)
for i, res in zip(prebatch_dp, unbatch_dp):
self.assertEqual(i, res)
- input_dp = IDP([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
+ input_dp = dp.iter.IterableWrapper([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
unbatch_dp = input_dp.unbatch()
expected_dp = [[0, 1], [2, 3], [4, 5], [6, 7]]
@@ -1233,7 +1241,7 @@ def test_unbatch_datapipe(self):
for i, res in zip(expected_dp2, unbatch_dp):
self.assertEqual(i, res)
- input_dp = IDP([[0, 1, 2], [3, 4, 5]])
+ input_dp = dp.iter.IterableWrapper([[0, 1, 2], [3, 4, 5]])
with self.assertRaises(ValueError):
unbatch_dp = input_dp.unbatch(unbatch_level=-2)
for i in unbatch_dp:
@@ -1245,7 +1253,7 @@ def test_unbatch_datapipe(self):
print(i)
def test_bucket_batch_datapipe(self):
- input_dp = IDP(range(20))
+ input_dp = dp.iter.IterableWrapper(range(20))
with self.assertRaises(AssertionError):
dp.iter.BucketBatcher(input_dp, batch_size=0)
@@ -1258,7 +1266,7 @@ def _helper(**kwargs):
data_len = 100
arrs = list(range(data_len))
random.shuffle(arrs)
- input_dp = IDP(arrs)
+ input_dp = dp.iter.IterableWrapper(arrs)
bucket_dp = dp.iter.BucketBatcher(input_dp, **kwargs)
self.assertEqual(len(bucket_dp), data_len // 3 if kwargs['drop_last'] else data_len // 3 + 1)
@@ -1291,7 +1299,7 @@ def _sort_fn(data):
def test_filter_datapipe(self):
- input_ds = IDP(range(10))
+ input_ds = dp.iter.IterableWrapper(range(10))
def _filter_fn(data, val, clip=False):
if clip:
@@ -1318,7 +1326,7 @@ def _non_bool_fn(data):
def test_filter_datapipe_nested_list(self):
- input_ds = IDP(range(10)).batch(5)
+ input_ds = dp.iter.IterableWrapper(range(10)).batch(5)
def _filter_fn(data, val):
return data >= val
@@ -1340,7 +1348,7 @@ def _filter_fn(data, val):
filter_dp = input_ds.filter(nesting_level=5, filter_fn=_filter_fn, fn_kwargs={'val': 5})
temp = list(filter_dp)
- input_ds = IDP(range(10)).batch(3)
+ input_ds = dp.iter.IterableWrapper(range(10)).batch(3)
filter_dp = input_ds.filter(lambda ls: len(ls) >= 3)
expected_dp3: List[List[int]] = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
@@ -1348,21 +1356,21 @@ def _filter_fn(data, val):
for data, exp in zip(filter_dp, expected_dp3):
self.assertEqual(data, exp)
- input_ds = IDP([[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [1, 2, 3]]])
+ input_ds = dp.iter.IterableWrapper([[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [1, 2, 3]]])
filter_dp = input_ds.filter(lambda x: x > 3, nesting_level=-1)
expected_dp4 = [[[4, 5]], [[6, 7, 8]]]
self.assertEqual(len(list(filter_dp)), len(expected_dp4))
for data2, exp2 in zip(filter_dp, expected_dp4):
self.assertEqual(data2, exp2)
- input_ds = IDP([[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [1, 2, 3]]])
+ input_ds = dp.iter.IterableWrapper([[[0, 1, 2], [3, 4, 5]], [[6, 7, 8], [1, 2, 3]]])
filter_dp = input_ds.filter(lambda x: x > 7, nesting_level=-1)
expected_dp5 = [[[8]]]
self.assertEqual(len(list(filter_dp)), len(expected_dp5))
for data2, exp2 in zip(filter_dp, expected_dp5):
self.assertEqual(data2, exp2)
- input_ds = IDP([[[0, 1], [3, 4]], [[6, 7, 8], [1, 2, 3]]])
+ input_ds = dp.iter.IterableWrapper([[[0, 1], [3, 4]], [[6, 7, 8], [1, 2, 3]]])
filter_dp = input_ds.filter(lambda ls: len(ls) >= 3, nesting_level=1)
expected_dp6 = [[[6, 7, 8], [1, 2, 3]]]
self.assertEqual(len(list(filter_dp)), len(expected_dp6))
@@ -1370,7 +1378,7 @@ def _filter_fn(data, val):
self.assertEqual(data2, exp2)
def test_sampler_datapipe(self):
- input_dp = IDP(range(10))
+ input_dp = dp.iter.IterableWrapper(range(10))
# Default SequentialSampler
sampled_dp = dp.iter.Sampler(input_dp) # type: ignore[var-annotated]
self.assertEqual(len(sampled_dp), 10)
@@ -1387,7 +1395,7 @@ def test_sampler_datapipe(self):
def test_shuffle_datapipe(self):
exp = list(range(20))
- input_ds = IDP(exp)
+ input_ds = dp.iter.IterableWrapper(exp)
with self.assertRaises(AssertionError):
shuffle_dp = input_ds.shuffle(buffer_size=0)
@@ -1413,15 +1421,15 @@ def test_shuffle_datapipe(self):
def test_zip_datapipe(self):
with self.assertRaises(TypeError):
- dp.iter.Zipper(IDP(range(10)), list(range(10))) # type: ignore[arg-type]
+ dp.iter.Zipper(dp.iter.IterableWrapper(range(10)), list(range(10))) # type: ignore[arg-type]
- zipped_dp = dp.iter.Zipper(IDP(range(10)), IDP_NoLen(range(5))) # type: ignore[var-annotated]
+ zipped_dp = dp.iter.Zipper(dp.iter.IterableWrapper(range(10)), IDP_NoLen(range(5))) # type: ignore[var-annotated]
with self.assertRaisesRegex(TypeError, r"instance doesn't have valid length$"):
len(zipped_dp)
exp = list((i, i) for i in range(5))
self.assertEqual(list(zipped_dp), exp)
- zipped_dp = dp.iter.Zipper(IDP(range(10)), IDP(range(5)))
+ zipped_dp = dp.iter.Zipper(dp.iter.IterableWrapper(range(10)), dp.iter.IterableWrapper(range(5)))
self.assertEqual(len(zipped_dp), 5)
self.assertEqual(list(zipped_dp), exp)
# Reset
@@ -1506,32 +1514,32 @@ def fn(item, dtype=torch.float, *, sum=False):
def test_mux_datapipe(self):
# Test Case: Elements are yielded one at a time from each DataPipe, until they are all exhausted
- input_dp1 = IDP(range(4))
- input_dp2 = IDP(range(4, 8))
- input_dp3 = IDP(range(8, 12))
+ input_dp1 = dp.iter.IterableWrapper(range(4))
+ input_dp2 = dp.iter.IterableWrapper(range(4, 8))
+ input_dp3 = dp.iter.IterableWrapper(range(8, 12))
output_dp = input_dp1.mux(input_dp2, input_dp3)
expected_output = [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
self.assertEqual(len(expected_output), len(output_dp))
self.assertEqual(expected_output, list(output_dp))
# Test Case: Uneven input Data Pipes
- input_dp1 = IDP([1, 2, 3, 4])
- input_dp2 = IDP([10])
- input_dp3 = IDP([100, 200, 300])
+ input_dp1 = dp.iter.IterableWrapper([1, 2, 3, 4])
+ input_dp2 = dp.iter.IterableWrapper([10])
+ input_dp3 = dp.iter.IterableWrapper([100, 200, 300])
output_dp = input_dp1.mux(input_dp2, input_dp3)
expected_output = [1, 10, 100, 2, 200, 3, 300, 4]
self.assertEqual(len(expected_output), len(output_dp))
self.assertEqual(expected_output, list(output_dp))
# Test Case: Empty Data Pipe
- input_dp1 = IDP([0, 1, 2, 3])
- input_dp2 = IDP([])
+ input_dp1 = dp.iter.IterableWrapper([0, 1, 2, 3])
+ input_dp2 = dp.iter.IterableWrapper([])
output_dp = input_dp1.mux(input_dp2)
self.assertEqual(len(input_dp1), len(output_dp))
self.assertEqual(list(input_dp1), list(output_dp))
# Test Case: raises TypeError when __len__ is called and an input doesn't have __len__
- input_dp1 = IDP(range(10))
+ input_dp1 = dp.iter.IterableWrapper(range(10))
input_dp_no_len = IDP_NoLen(range(10))
output_dp = input_dp1.mux(input_dp_no_len)
with self.assertRaises(TypeError):
@@ -1665,8 +1673,8 @@ def __iter__(self) -> Iterator[Tuple[int, str]]:
self.assertTrue(issubclass(DP1, IterDataPipe))
dp1 = DP1(10)
self.assertTrue(DP1.type.issubtype(dp1.type) and dp1.type.issubtype(DP1.type))
- dp2 = DP1(5)
- self.assertEqual(dp1.type, dp2.type)
+ dp1_ = DP1(5)
+ self.assertEqual(dp1.type, dp1_.type)
with self.assertRaisesRegex(TypeError, r"is not a generic class"):
class InvalidDP5(DP1[tuple]): # type: ignore[type-arg]
@@ -1679,10 +1687,10 @@ def __iter__(self) -> Iterator[T_co]:
yield d # type: ignore[misc]
self.assertTrue(issubclass(DP2, IterDataPipe))
- dp1 = DP2() # type: ignore[assignment]
- self.assertTrue(DP2.type.issubtype(dp1.type) and dp1.type.issubtype(DP2.type))
- dp2 = DP2() # type: ignore[assignment]
- self.assertEqual(dp1.type, dp2.type)
+ dp2 = DP2() # type: ignore[var-annotated]
+ self.assertTrue(DP2.type.issubtype(dp2.type) and dp2.type.issubtype(DP2.type))
+ dp2_ = DP2() # type: ignore[var-annotated]
+ self.assertEqual(dp2.type, dp2_.type)
class DP3(IterDataPipe[Tuple[T_co, str]]):
r""" DataPipe without fixed type with __init__ function"""
@@ -1695,10 +1703,10 @@ def __iter__(self) -> Iterator[Tuple[T_co, str]]:
yield d, str(d)
self.assertTrue(issubclass(DP3, IterDataPipe))
- dp1 = DP3(range(10)) # type: ignore[assignment]
- self.assertTrue(DP3.type.issubtype(dp1.type) and dp1.type.issubtype(DP3.type))
- dp2 = DP3(5) # type: ignore[assignment]
- self.assertEqual(dp1.type, dp2.type)
+ dp3 = DP3(range(10)) # type: ignore[var-annotated]
+ self.assertTrue(DP3.type.issubtype(dp3.type) and dp3.type.issubtype(DP3.type))
+ dp3_ = DP3(5) # type: ignore[var-annotated]
+ self.assertEqual(dp3.type, dp3_.type)
class DP4(IterDataPipe[tuple]):
r""" DataPipe without __iter__ annotation"""
@@ -1707,8 +1715,8 @@ def __iter__(self):
raise NotImplementedError
self.assertTrue(issubclass(DP4, IterDataPipe))
- dp = DP4()
- self.assertTrue(dp.type.param == tuple)
+ dp4 = DP4()
+ self.assertTrue(dp4.type.param == tuple)
class DP5(IterDataPipe):
r""" DataPipe without type annotation"""
@@ -1717,9 +1725,9 @@ def __iter__(self) -> Iterator[str]:
raise NotImplementedError
self.assertTrue(issubclass(DP5, IterDataPipe))
- dp = DP5() # type: ignore[assignment]
+ dp5 = DP5()
from torch.utils.data._typing import issubtype
- self.assertTrue(issubtype(dp.type.param, Any) and issubtype(Any, dp.type.param))
+ self.assertTrue(issubtype(dp5.type.param, Any) and issubtype(Any, dp5.type.param))
class DP6(IterDataPipe[int]):
r""" DataPipe with plain Iterator"""
@@ -1728,13 +1736,13 @@ def __iter__(self) -> Iterator:
raise NotImplementedError
self.assertTrue(issubclass(DP6, IterDataPipe))
- dp = DP6() # type: ignore[assignment]
- self.assertTrue(dp.type.param == int)
+ dp6 = DP6()
+ self.assertTrue(dp6.type.param == int)
class DP7(IterDataPipe[Awaitable[T_co]]):
r""" DataPipe with abstract base class"""
- self.assertTrue(issubclass(DP6, IterDataPipe))
+ self.assertTrue(issubclass(DP7, IterDataPipe))
self.assertTrue(DP7.type.param == Awaitable[T_co])
class DP8(DP7[str]):
@@ -1765,11 +1773,11 @@ def __iter__(self) -> Iterator[int]:
# Non-DataPipe input with DataPipe hint
datasource = [(1, '1'), (2, '2'), (3, '3')]
with self.assertRaisesRegex(TypeError, r"Expected argument 'dp' as a IterDataPipe"):
- dp = DP0(datasource)
+ dp0 = DP0(datasource)
- dp = DP0(IDP(range(10)))
+ dp0 = DP0(dp.iter.IterableWrapper(range(10)))
with self.assertRaisesRegex(TypeError, r"Expected type of argument 'dp' as a subtype"):
- dp = DP1(dp)
+ dp1 = DP1(dp0)
def test_runtime(self):
class DP(IterDataPipe[Tuple[int, T_co]]):
@@ -1784,26 +1792,26 @@ def __iter__(self) -> Iterator[Tuple[int, T_co]]:
dss = ([(1, '1'), (2, '2')],
[(1, 1), (2, '2')])
for ds in dss:
- dp = DP(ds) # type: ignore[var-annotated]
- self.assertEqual(list(dp), ds)
+ dp0 = DP(ds) # type: ignore[var-annotated]
+ self.assertEqual(list(dp0), ds)
# Reset __iter__
- self.assertEqual(list(dp), ds)
+ self.assertEqual(list(dp0), ds)
dss = ([(1, 1), ('2', 2)], # type: ignore[assignment, list-item]
[[1, '1'], [2, '2']], # type: ignore[list-item]
[1, '1', 2, '2'])
for ds in dss:
- dp = DP(ds)
+ dp0 = DP(ds)
with self.assertRaisesRegex(RuntimeError, r"Expected an instance as subtype"):
- list(dp)
+ list(dp0)
with runtime_validation_disabled():
- self.assertEqual(list(dp), ds)
+ self.assertEqual(list(dp0), ds)
with runtime_validation_disabled():
- self.assertEqual(list(dp), ds)
+ self.assertEqual(list(dp0), ds)
with self.assertRaisesRegex(RuntimeError, r"Expected an instance as subtype"):
- list(dp)
+ list(dp0)
def test_reinforce(self):
T = TypeVar('T', int, str)
@@ -1819,26 +1827,26 @@ def __iter__(self) -> Iterator[T]:
ds = list(range(10))
# Valid type reinforcement
- dp = DP(ds).reinforce_type(int)
- self.assertTrue(dp.type, int)
- self.assertEqual(list(dp), ds)
+ dp0 = DP(ds).reinforce_type(int)
+ self.assertTrue(dp0.type, int)
+ self.assertEqual(list(dp0), ds)
# Invalid type
with self.assertRaisesRegex(TypeError, r"'expected_type' must be a type"):
- dp = DP(ds).reinforce_type(1)
+ dp1 = DP(ds).reinforce_type(1)
# Type is not subtype
with self.assertRaisesRegex(TypeError, r"Expected 'expected_type' as subtype of"):
- dp = DP(ds).reinforce_type(float)
+ dp2 = DP(ds).reinforce_type(float)
# Invalid data at runtime
- dp = DP(ds).reinforce_type(str)
+ dp3 = DP(ds).reinforce_type(str)
with self.assertRaisesRegex(RuntimeError, r"Expected an instance as subtype"):
- list(dp)
+ list(dp3)
# Context Manager to disable the runtime validation
with runtime_validation_disabled():
- self.assertEqual(list(d for d in dp), ds)
+ self.assertEqual(list(d for d in dp3), ds)
class NumbersDataset(IterDataPipe):
@@ -1900,7 +1908,7 @@ def test_simple_sharding(self):
self.assertEqual(sorted(all_items), sorted(items))
def test_sharding_length(self):
- numbers_dp = IDP(range(13))
+ numbers_dp = dp.iter.IterableWrapper(range(13))
sharded_dp0 = numbers_dp.sharding_filter()
torch.utils.data.sharding.apply_sharding(sharded_dp0, 3, 0)
sharded_dp1 = numbers_dp.sharding_filter()
@@ -1912,7 +1920,7 @@ def test_sharding_length(self):
self.assertEqual(4, len(sharded_dp1))
self.assertEqual(4, len(sharded_dp2))
- numbers_dp = IDP(range(1))
+ numbers_dp = dp.iter.IterableWrapper(range(1))
sharded_dp0 = numbers_dp.sharding_filter()
torch.utils.data.sharding.apply_sharding(sharded_dp0, 2, 0)
sharded_dp1 = numbers_dp.sharding_filter()
@@ -1922,11 +1930,11 @@ def test_sharding_length(self):
@skipIfNoDill
def test_old_dataloader(self):
- dp = self._get_pipeline()
- expected = list(dp)
+ dp0 = self._get_pipeline()
+ expected = list(dp0)
- dp = self._get_pipeline().sharding_filter()
- dl = DataLoader(dp, batch_size=1, shuffle=False, num_workers=2,
+ dp0 = self._get_pipeline().sharding_filter()
+ dl = DataLoader(dp0, batch_size=1, shuffle=False, num_workers=2,
worker_init_fn=torch.utils.data.backward_compatibility.worker_init_fn)
items = []
for i in dl:
| [DataPipe] Mapper DataPipe should not deepcopy when index specified
## 🐛 Bug
I was adding the following line to prevent in-place modification of the data from source DataPipe.
https://github.com/pytorch/pytorch/blob/d37c02be08dfc022daf2ee1ddeda2a37b4551cac/torch/utils/data/datapipes/iter/callable.py#L102-L103
But, in fact, this would break when the input includes a file handle, because file handles cannot be serialized.
So, in order to support file handles, we need to remove the deepcopy. But, for the sake of preventing in-place modification, we need to add documentation to the wiki about removing data attached to the DataPipe instance; we prefer using an iterator to generate data.
Then, we need to also change the `IterableWrapper` to do a `deepcopy` if possible. https://github.com/pytorch/pytorch/blob/a49907f984670781a718ef6aa0046709886eae5a/torch/utils/data/datapipes/iter/utils.py#L12-L17
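As a sketch of how the `deepcopy` flag added to `IterableWrapper` in the patch above would be used (the wrapped lists are just placeholders):
```python
from torch.utils.data.datapipes.iter import IterableWrapper

# Default: the wrapped iterable is deep-copied per iteration, so in-place edits
# by downstream DataPipes cannot leak into later epochs.
safe_dp = IterableWrapper([{"x": 0}, {"x": 1}])

# For non-copyable inputs (e.g. file handles), opt out explicitly and make sure
# the pipeline never modifies the yielded items in place.
raw_dp = IterableWrapper([{"x": 0}, {"x": 1}], deepcopy=False)
```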
cc @VitalyFedyunin @ejguan
| 2021-09-30T16:05:24 |
|
pytorch/pytorch | 65,926 | pytorch__pytorch-65926 | [
"63609"
] | 1fa17a20fce8ffb3bb8dd3615dee3e2e83c4932b | diff --git a/torch/utils/data/sampler.py b/torch/utils/data/sampler.py
--- a/torch/utils/data/sampler.py
+++ b/torch/utils/data/sampler.py
@@ -112,15 +112,18 @@ def num_samples(self) -> int:
def __iter__(self) -> Iterator[int]:
n = len(self.data_source)
if self.generator is None:
- self.generator = torch.Generator()
- self.generator.manual_seed(int(torch.empty((), dtype=torch.int64).random_().item()))
+ seed = int(torch.empty((), dtype=torch.int64).random_().item())
+ generator = torch.Generator()
+ generator.manual_seed(seed)
+ else:
+ generator = self.generator
if self.replacement:
for _ in range(self.num_samples // 32):
- yield from torch.randint(high=n, size=(32,), dtype=torch.int64, generator=self.generator).tolist()
- yield from torch.randint(high=n, size=(self.num_samples % 32,), dtype=torch.int64, generator=self.generator).tolist()
+ yield from torch.randint(high=n, size=(32,), dtype=torch.int64, generator=generator).tolist()
+ yield from torch.randint(high=n, size=(self.num_samples % 32,), dtype=torch.int64, generator=generator).tolist()
else:
- yield from torch.randperm(n, generator=self.generator).tolist()
+ yield from torch.randperm(n, generator=generator).tolist()
def __len__(self) -> int:
return self.num_samples
@@ -140,7 +143,8 @@ def __init__(self, indices: Sequence[int], generator=None) -> None:
self.generator = generator
def __iter__(self) -> Iterator[int]:
- return (self.indices[i] for i in torch.randperm(len(self.indices), generator=self.generator))
+ for i in torch.randperm(len(self.indices), generator=self.generator):
+ yield self.indices[i]
def __len__(self) -> int:
return len(self.indices)
@@ -183,7 +187,7 @@ def __init__(self, weights: Sequence[float], num_samples: int,
def __iter__(self) -> Iterator[int]:
rand_tensor = torch.multinomial(self.weights, self.num_samples, self.replacement, generator=self.generator)
- return iter(rand_tensor.tolist())
+ yield from iter(rand_tensor.tolist())
def __len__(self) -> int:
return self.num_samples
| diff --git a/test/test_dataloader.py b/test/test_dataloader.py
--- a/test/test_dataloader.py
+++ b/test/test_dataloader.py
@@ -1524,6 +1524,28 @@ def test_sampler_reproducibility(self):
):
self.assertEqual(list(fn()), list(fn()))
+ for sampler in (
+ RandomSampler(self.dataset, num_samples=5, replacement=True),
+ RandomSampler(self.dataset, replacement=False),
+ WeightedRandomSampler(weights, num_samples=5, replacement=True),
+ WeightedRandomSampler(weights, num_samples=5, replacement=False),
+ SubsetRandomSampler(range(10)),
+ ):
+ torch.manual_seed(0)
+ l1 = list(sampler) + list(sampler)
+
+ torch.manual_seed(0)
+ l2 = list(sampler) + list(sampler)
+ self.assertEqual(l1, l2)
+
+ its = (iter(sampler), iter(sampler))
+ ls = ([], [])
+ for idx in range(len(sampler)):
+ for i in range(2):
+ if idx == 0:
+ torch.manual_seed(0)
+ ls[i].append(next(its[i]))
+ self.assertEqual(ls[0], ls[1])
def _test_sampler(self, **kwargs):
indices = range(2, 12) # using a regular iterable
| Sampler should be seeded lazily
## 🐛 Bug
After #63026 landed, the generator for Sampler is attached to the instance, which helps to serialize the state of the Sampler. But it brings a problem: it prevents the Sampler's generator from being seeded before each epoch.
## To Reproduce
Check https://github.com/pytorch/pytorch/pull/63026#issuecomment-902234490
Users would expect the same result from the sampler, without specifying a generator input, when the seed is set before each epoch.
```py
sampler = RandomSampler(ds)
torch.manual_seed(0)
l1 = list(sampler)
torch.manual_seed(0)
l2 = list(sampler)
# Expect same
assert l1 == l2
```
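The patch above addresses this by deriving a fresh generator inside `__iter__` whenever none was supplied; roughly, a simplified sketch of the patched `RandomSampler.__iter__` (without-replacement branch only):
```python
def __iter__(self):
    if self.generator is None:
        # The seed is drawn from the global RNG at iteration time, so a
        # torch.manual_seed(...) call before each epoch takes effect.
        seed = int(torch.empty((), dtype=torch.int64).random_().item())
        generator = torch.Generator()
        generator.manual_seed(seed)
    else:
        generator = self.generator
    yield from torch.randperm(len(self.data_source), generator=generator).tolist()
```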
cc @SsnL @VitalyFedyunin @ejguan @NivekT
| 2021-09-30T16:36:39 |
|
pytorch/pytorch | 72,938 | pytorch__pytorch-72938 | [
"72693"
] | 89ee69e17354227e61fb12aec4bd8febad678bfc | diff --git a/.github/scripts/generate_ci_workflows.py b/.github/scripts/generate_ci_workflows.py
--- a/.github/scripts/generate_ci_workflows.py
+++ b/.github/scripts/generate_ci_workflows.py
@@ -629,9 +629,8 @@ def generate_workflow_file(self, workflow_template: jinja2.Template) -> None:
num_test_shards=2,
distributed_test=False,
enable_noarch_test=1,
- enable_xla_test=1,
ciflow_config=CIFlowConfig(
- labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_LINUX, LABEL_CIFLOW_CPU, LABEL_CIFLOW_XLA, LABEL_CIFLOW_NOARCH},
+ labels={LABEL_CIFLOW_DEFAULT, LABEL_CIFLOW_LINUX, LABEL_CIFLOW_CPU, LABEL_CIFLOW_NOARCH},
),
),
CIWorkflow(
@@ -661,6 +660,22 @@ def generate_workflow_file(self, workflow_template: jinja2.Template) -> None:
),
]
+XLA_WORKFLOWS = [
+ CIWorkflow(
+ arch="linux",
+ build_environment="pytorch-xla-linux-bionic-py3.7-clang8",
+ docker_image_base=f"{DOCKER_REGISTRY}/pytorch/xla_base",
+ test_runner_type=LINUX_CPU_TEST_RUNNER,
+ num_test_shards=2,
+ distributed_test=False,
+ enable_xla_test=1,
+ ciflow_config=CIFlowConfig(
+ labels={LABEL_CIFLOW_LINUX, LABEL_CIFLOW_CPU, LABEL_CIFLOW_XLA},
+ ),
+ ),
+
+]
+
ANDROID_SHORT_WORKFLOWS = [
CIWorkflow(
arch="linux",
@@ -836,7 +851,6 @@ def generate_workflow_file(self, workflow_template: jinja2.Template) -> None:
]
DOCKER_IMAGES = {
- f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-cuda10.2-cudnn7-py3.7-clang9", # for pytorch/xla
f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-rocm4.3.1-py3.7", # for rocm
f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-rocm4.5-py3.7", # for rocm
}
@@ -960,6 +974,7 @@ def main() -> None:
)
template_and_workflows = [
(jinja_env.get_template("linux_ci_workflow.yml.j2"), LINUX_WORKFLOWS),
+ (jinja_env.get_template("linux_ci_workflow.yml.j2"), XLA_WORKFLOWS),
(jinja_env.get_template("windows_ci_workflow.yml.j2"), WINDOWS_WORKFLOWS),
(jinja_env.get_template("bazel_ci_workflow.yml.j2"), BAZEL_WORKFLOWS),
(jinja_env.get_template("ios_ci_workflow.yml.j2"), IOS_WORKFLOWS),
| diff --git a/.github/scripts/generate_pytorch_test_matrix.py b/.github/scripts/generate_pytorch_test_matrix.py
--- a/.github/scripts/generate_pytorch_test_matrix.py
+++ b/.github/scripts/generate_pytorch_test_matrix.py
@@ -61,6 +61,7 @@ def run_as_if_on_trunk() -> bool:
return current_workflow_triggered_by_label
def main() -> None:
+ INCLUDE_DEFAULT_TEST = True
TEST_RUNNER_TYPE = os.getenv('TEST_RUNNER_TYPE')
assert TEST_RUNNER_TYPE is not None
RUN_SMOKE_TESTS_ONLY_ON_PR = os.getenv('RUN_SMOKE_TESTS_ONLY_ON_PR')
@@ -99,6 +100,7 @@ def main() -> None:
configs['backwards_compat'] = {'num_shards': 1, 'runner': TEST_RUNNER_TYPE}
if os.getenv('ENABLE_XLA_TEST'):
configs['xla'] = {'num_shards': 1, 'runner': TEST_RUNNER_TYPE}
+ INCLUDE_DEFAULT_TEST = False
if os.getenv('ENABLE_NOARCH_TEST'):
configs['noarch'] = {'num_shards': 1, 'runner': TEST_RUNNER_TYPE}
if RUN_SMOKE_TESTS:
@@ -112,6 +114,7 @@ def main() -> None:
'runner': TEST_RUNNER_TYPE,
}
for shard in range(1, NUM_TEST_SHARDS + 1)
+ if INCLUDE_DEFAULT_TEST
] + [
{
'config': name,
| Create a CI workflow for XLA testing using the XLA test image
### 🚀 The feature, motivation and pitch
We have onboarded XLA testing as part of one of the existing Linux workflows. This has created some issues due to diverging test environments, where the same set of tests pass in the downstream, but fails in the upstream due to dependency conflicts and environment setup, etc. This happens because XLA testing was piggybacking on the existing ciflow (config) and image that were mainly built for PyTorch tests. To avoid this and do so unobtrusively, it would be good to create a dedicated CI workflow (customize settings without affecting other tests) for XLA testing and use the XLA test image (customize dependencies and settings without affecting other tests) that's used in the downstream tests.
Ideally, any failures in the upstream should have been caught in the downstream while testing; we want to avoid XLA upstream test failures due to test environment setup/divergence, if possible.
### Alternatives
This proposes to use the same test image we use in the downstream (XLA) CircleCI tests. We maintain and publish the image to serve XLA test needs; test requirement changes will not affect or require the upstream testing to provision/modify a new image.
Alternatively, we can provision a new upstream testing image and modify/re-provision when the XLA testing requirements change. But testing a newly provisioned image that is similar, but not identical to the one used in the downstream might run into subtle issues.
### Additional context
https://github.com/pytorch/xla/issues/3159
https://github.com/pytorch/xla/issues/3351
cc @bdhirsh
| cc @malfet @seemethere for visibility.
@yeounoh thank you for your suggestions. If you have PR in mind that would accomplish that, please do not hesitate to add me or @seemethere as the reviewers.
Hi @malfet , here is the [PR](https://github.com/pytorch/pytorch/pull/72496) that addresses/tests the idea. I can't seem to link the PR directly to the issue, nor add the reviewers myself. It would be really great if I could get the write permission. Thank you! | 2022-02-16T20:38:26 |
pytorch/pytorch | 78,810 | pytorch__pytorch-78810 | [
"78549"
] | aa8911885b9665bd028e0784af46805dc42e2f3d | diff --git a/torch/nn/modules/_functions.py b/torch/nn/modules/_functions.py
--- a/torch/nn/modules/_functions.py
+++ b/torch/nn/modules/_functions.py
@@ -67,11 +67,19 @@ def forward(self, input, weight, bias, running_mean, running_var, eps, momentum,
# world_size * (2C + 1) -> world_size * C, world_size * C, world_size * 1
mean_all, invstd_all, count_all = torch.split(combined, num_channels, dim=1)
- # remove stats from empty inputs
- mask = count_all.squeeze(-1) >= 1
- count_all = count_all[mask]
- mean_all = mean_all[mask]
- invstd_all = invstd_all[mask]
+ if not torch.cuda.is_current_stream_capturing():
+ # The lines below force a synchronization between CUDA and CPU, because
+ # the shape of the result count_all depends on the values in mask tensor.
+ # Such synchronizations break CUDA Graph capturing.
+ # See https://github.com/pytorch/pytorch/issues/78549
+ # FIXME: https://github.com/pytorch/pytorch/issues/78656 describes
+ # a better longer-term solution.
+
+ # remove stats from empty inputs
+ mask = count_all.squeeze(-1) >= 1
+ count_all = count_all[mask]
+ mean_all = mean_all[mask]
+ invstd_all = invstd_all[mask]
# calculate global mean & invstd
mean, invstd = torch.batch_norm_gather_stats_with_counts(
| https://github.com/pytorch/pytorch/pull/74944 breaks CUDA graph capture
### 🐛 Describe the bug
Prior to https://github.com/pytorch/pytorch/pull/74944, pytorch's native sync batch norm was safe to use with CUDA graph capture.
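(For context, a minimal capture pattern of the kind that regressed looks roughly like the sketch below; it assumes an initialized process group with world_size > 1 so the SyncBatchNorm sync path is actually taken, and a warm-up forward pass before capture, as recommended for CUDA Graphs.)
```python
import torch

bn = torch.nn.SyncBatchNorm(64).cuda()           # process-group setup omitted
x = torch.randn(8, 64, 32, 32, device="cuda")
bn(x)                                             # warm-up outside the graph

g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):                         # any CPU sync during capture raises
    y = bn(x)                                     # "operation not permitted when stream is capturing"
g.replay()
```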
After https://github.com/pytorch/pytorch/pull/74944, Thorsten Kurth (@azrael417, one of our engineers using Pytorch for mlperf models) found that graph-capturing syncbn fails:
```
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/batchnorm.py", line 748, in forward
return sync_batch_norm.apply(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/_functions.py", line 72, in forward
count_all = count_all[mask]
RuntimeError: CUDA error: operation not permitted when stream is capturing
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from memcpy_and_sync at /opt/pytorch/pytorch/c10/cuda/CUDAFunctions.h:76 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7fbcd6293cec in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: void at::native::nonzero_cuda_out_impl<bool>(at::Tensor const&, at::Tensor&) + 0x1136 (0x7fbcd8239286 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #2: at::native::nonzero_out_cuda(at::Tensor const&, at::Tensor&) + 0x607 (0x7fbcd821ec97 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #3: at::native::nonzero_cuda(at::Tensor const&) + 0x252 (0x7fbcd821f102 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #4: <unknown function> + 0x2d2ebfd (0x7fbcd9053bfd in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #5: <unknown function> + 0x2d2ec85 (0x7fbcd9053c85 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #6: at::_ops::nonzero::call(at::Tensor const&) + 0x139 (0x7fbd029d4a09 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x1421904 (0x7fbd02651904 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #8: at::native::index(at::Tensor const&, c10::List<c10::optional<at::Tensor> > const&) + 0x8c (0x7fbd02652d1c in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #9: <unknown function> + 0x2d1ec29 (0x7fbcd9043c29 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #10: <unknown function> + 0x2d1ecb8 (0x7fbcd9043cb8 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #11: at::_ops::index_Tensor::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::List<c10::optional<at::Tensor> > const&) + 0x88 (0x7fbd02ac4b08 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x2cac7e0 (0x7fbd03edc7e0 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #13: <unknown function> + 0x2cacf5b (0x7fbd03edcf5b in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #14: at::_ops::index_Tensor::call(at::Tensor const&, c10::List<c10::optional<at::Tensor> > const&) + 0x27e (0x7fbd02b1b0ae in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #15: torch::autograd::THPVariable_getitem(_object*, _object*) + 0x51d (0x7fbd0af8b7ed in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #16: _PyEval_EvalFrameDefault + 0xacd (0x5622e7e261ed in /opt/conda/bin/python)
...python frames
```
We believe the failure is a CPU sync incurred by [`count_all = count_all[mask]`](https://github.com/pytorch/pytorch/pull/74944/files#diff-6083dbd5d169b44f41c2fd5e9638c88d246aff4cd86d2a839c72972ed198546bR72).
How hard would it be to remove the sync, or at least reenable the old syncfree code if the user wants graph capture?
cc @ezyang @gchanan @zou3519 @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @rohan-varma @gqchen @aazzolini @osalpekar @jiayisuse @SciPioneer @H-Huang @kwen2501 @mcarilli
| Synced with @mcarilli offline, for v1.12, we are going to use [is_current_stream_capturing](https://pytorch.org/docs/master/generated/torch.cuda.is_current_stream_capturing.html?highlight=is_current_stream_capturing#torch.cuda.is_current_stream_capturing) to guard the code below as a temporary fix.
https://github.com/pytorch/pytorch/blob/cd4ffc865b337f3e0f51d6e757ac2a2ab83e9f20/torch/nn/modules/_functions.py#L70-L74
For longer term, we might need to update the CUDA kernel for `batch_norm_gather_stats_with_counts` to deal with 0 counts.
cc @datumbox
Created #78656 to track longer-term solution | 2022-06-03T14:47:56 |
|
pytorch/pytorch | 78,948 | pytorch__pytorch-78948 | [
"78263"
] | 2652da29ab6c0d690bfb543bee958f50c0b86451 | diff --git a/torch/utils/data/datapipes/iter/filelister.py b/torch/utils/data/datapipes/iter/filelister.py
--- a/torch/utils/data/datapipes/iter/filelister.py
+++ b/torch/utils/data/datapipes/iter/filelister.py
@@ -1,5 +1,8 @@
from typing import Iterator, List, Sequence, Union
+
+from torch.utils.data.datapipes._decorator import functional_datapipe
+
from torch.utils.data.datapipes.datapipe import IterDataPipe
from torch.utils.data.datapipes.iter import IterableWrapper
from torch.utils.data.datapipes.utils.common import get_file_pathnames_from_root
@@ -7,6 +10,7 @@
__all__ = ["FileListerIterDataPipe", ]
+@functional_datapipe("list_files")
class FileListerIterDataPipe(IterDataPipe[str]):
r"""
Given path(s) to the root directory, yields file pathname(s) (path + filename) of files within the root directory.
| diff --git a/test/test_datapipe.py b/test/test_datapipe.py
--- a/test/test_datapipe.py
+++ b/test/test_datapipe.py
@@ -300,6 +300,14 @@ def test_listdirfiles_iterable_datapipe(self):
self.assertTrue(pathname in self.temp_files)
self.assertEqual(count, 2 * len(self.temp_files))
+ # test functional API
+ datapipe = datapipe.list_files()
+ count = 0
+ for pathname in datapipe:
+ count += 1
+ self.assertTrue(pathname in self.temp_files)
+ self.assertEqual(count, 2 * len(self.temp_files))
+
def test_listdirfilesdeterministic_iterable_datapipe(self):
temp_dir = self.temp_dir.name
| Functional API for FileLister
### 🚀 The feature, motivation and pitch
Similar to https://github.com/pytorch/data/issues/387
This allows for
```python
IterableWrapper([...]).list_file()
```
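For reference, the functional form registered by the patch above uses the plural name `list_files`; a sketch (directory names are placeholders):
```python
from torch.utils.data.datapipes.iter import IterableWrapper

# yields pathnames of the files found under each wrapped root directory
datapipe = IterableWrapper(["dir1", "dir2"]).list_files()
```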
### Alternatives
_No response_
### Additional context
_No response_
cc @VitalyFedyunin @ejguan @NivekT
| 2022-06-06T17:33:11 |