text | source | category
---|---|---|
torch.Tensor.nansum
Tensor.nansum(dim=None, keepdim=False, dtype=None) -> Tensor
See "torch.nansum()" | https://pytorch.org/docs/stable/generated/torch.Tensor.nansum.html | pytorch docs |
torch.Tensor.unbind
Tensor.unbind(dim=0) -> seq
See "torch.unbind()" | https://pytorch.org/docs/stable/generated/torch.Tensor.unbind.html | pytorch docs |
torch.isclose
torch.isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor
Returns a new tensor with boolean elements representing if each
element of "input" is "close" to the corresponding element of
"other". Closeness is defined as:
\lvert \text{input} - \text{other} \rvert \leq \texttt{atol} +
\texttt{rtol} \times \lvert \text{other} \rvert
where "input" and "other" are finite. Where "input" and/or "other"
are nonfinite they are close if and only if they are equal, with
NaNs being considered equal to each other when "equal_nan" is True.
Parameters:
* input (Tensor) -- first tensor to compare
* **other** (*Tensor*) -- second tensor to compare
* **atol** (*float**, **optional*) -- absolute tolerance.
Default: 1e-08
* **rtol** (*float**, **optional*) -- relative tolerance.
Default: 1e-05
* **equal_nan** (*bool**, **optional*) -- if "True", then two
| https://pytorch.org/docs/stable/generated/torch.isclose.html | pytorch docs |
"NaN" s will be considered equal. Default: "False"
Examples:
>>> torch.isclose(torch.tensor((1., 2, 3)), torch.tensor((1 + 1e-10, 3, 4)))
tensor([ True, False, False])
>>> torch.isclose(torch.tensor((float('inf'), 4)), torch.tensor((float('inf'), 6)), rtol=.5)
tensor([True, True])
| https://pytorch.org/docs/stable/generated/torch.isclose.html | pytorch docs |
torch.nn.functional.kl_div
torch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False)
The Kullback-Leibler divergence Loss
See "KLDivLoss" for details.
Parameters:
* input (Tensor) -- Tensor of arbitrary shape in log-
probabilities.
* **target** (*Tensor*) -- Tensor of the same shape as input.
See "log_target" for the target's interpretation.
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there are
multiple elements per sample. If the field "size_average" is
set to "False", the losses are instead summed for each
minibatch. Ignored when reduce is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
| https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html | pytorch docs |
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'batchmean'" | "'sum'" |
"'mean'". "'none'": no reduction will be applied
"'batchmean'": the sum of the output will be divided by the
batchsize "'sum'": the output will be summed "'mean'": the
output will be divided by the number of elements in the output
Default: "'mean'"
* **log_target** (*bool*) -- A flag indicating whether "target"
is passed in the log space. It is recommended to pass certain
distributions (like "softmax") in the log space to avoid
numerical issues caused by explicit "log". Default: "False"
Return type:
Tensor
Note:
"size_average" and "reduce" are in the process of being
| https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html | pytorch docs |
deprecated, and in the meantime, specifying either of those two
args will override "reduction".
Note:
"reduction" = "'mean'" doesn't return the true kl divergence
value, please use "reduction" = "'batchmean'" which aligns with
KL math definition. In the next major release, "'mean'" will be
changed to be the same as 'batchmean'.
| https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html | pytorch docs |
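A minimal usage sketch for "torch.nn.functional.kl_div" above; the shapes
are illustrative, and "'batchmean'" follows the note on the KL definition:
    import torch
    import torch.nn.functional as F

    # input carries log-probabilities; target carries probabilities (log_target=False)
    input = F.log_softmax(torch.randn(3, 5), dim=1)
    target = F.softmax(torch.randn(3, 5), dim=1)
    loss = F.kl_div(input, target, reduction='batchmean', log_target=False)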
torch.ravel
torch.ravel(input) -> Tensor
Return a contiguous flattened tensor. A copy is made only if
needed.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> t = torch.tensor([[[1, 2],
... [3, 4]],
... [[5, 6],
... [7, 8]]])
>>> torch.ravel(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8])
| https://pytorch.org/docs/stable/generated/torch.ravel.html | pytorch docs |
torch.get_default_dtype
torch.get_default_dtype() -> torch.dtype
Get the current default floating point "torch.dtype".
Example:
>>> torch.get_default_dtype() # initial default for floating point is torch.float32
torch.float32
>>> torch.set_default_dtype(torch.float64)
>>> torch.get_default_dtype() # default is now changed to torch.float64
torch.float64
>>> torch.set_default_tensor_type(torch.FloatTensor) # setting tensor type also affects this
>>> torch.get_default_dtype() # changed to torch.float32, the dtype for torch.FloatTensor
torch.float32
| https://pytorch.org/docs/stable/generated/torch.get_default_dtype.html | pytorch docs |
torch.autograd.backward
torch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None, inputs=None)
Computes the sum of gradients of given tensors with respect to
graph leaves.
The graph is differentiated using the chain rule. If any of
"tensors" are non-scalar (i.e. their data has more than one
element) and require gradient, then the Jacobian-vector product
would be computed, in this case the function additionally requires
specifying "grad_tensors". It should be a sequence of matching
length, that contains the "vector" in the Jacobian-vector product,
usually the gradient of the differentiated function w.r.t.
corresponding tensors ("None" is an acceptable value for all
tensors that don't need gradient tensors).
This function accumulates gradients in the leaves - you might need
to zero ".grad" attributes or set them to "None" before calling it. | https://pytorch.org/docs/stable/generated/torch.autograd.backward.html | pytorch docs |
See Default gradient layouts for details on the memory layout of
accumulated gradients.
Note:
Using this method with "create_graph=True" will create a
reference cycle between the parameter and its gradient which can
cause a memory leak. We recommend using "autograd.grad" when
creating the graph to avoid this. If you have to use this
function, make sure to reset the ".grad" fields of your
parameters to "None" after use to break the cycle and avoid the
leak.
Note:
If you run any forward ops, create "grad_tensors", and/or call
"backward" in a user-specified CUDA stream context, see Stream
semantics of backward passes.
Note:
When "inputs" are provided and a given input is not a leaf, the
current implementation will call its grad_fn (even though it is
not strictly needed to get these gradients). It is an
implementation detail on which the user should not rely. See
| https://pytorch.org/docs/stable/generated/torch.autograd.backward.html | pytorch docs |
https://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780
for more details.
Parameters:
* tensors (Sequence[Tensor] or Tensor) -- Tensors
of which the derivative will be computed.
* **grad_tensors** (*Sequence**[**Tensor** or **None**] or
**Tensor**, **optional*) -- The "vector" in the Jacobian-
vector product, usually gradients w.r.t. each element of
corresponding tensors. None values can be specified for scalar
Tensors or ones that don't require grad. If a None value would
be acceptable for all grad_tensors, then this argument is
optional.
* **retain_graph** (*bool**, **optional*) -- If "False", the
graph used to compute the grad will be freed. Note that in
nearly all cases setting this option to "True" is not needed
and often can be worked around in a much more efficient way.
Defaults to the value of "create_graph".
| https://pytorch.org/docs/stable/generated/torch.autograd.backward.html | pytorch docs |
* **create_graph** (*bool**, **optional*) -- If "True", graph of
the derivative will be constructed, allowing to compute higher
order derivative products. Defaults to "False".
* **inputs** (*Sequence**[**Tensor**] or **Tensor**,
**optional*) -- Inputs w.r.t. which the gradient will be
accumulated into ".grad". All other Tensors will be ignored.
If not provided, the gradient is accumulated into all the leaf
Tensors that were used to compute the "tensors".
| https://pytorch.org/docs/stable/generated/torch.autograd.backward.html | pytorch docs |
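A short sketch of calling "torch.autograd.backward" on a non-scalar
tensor with an explicit "grad_tensors" vector; shapes are illustrative:
    import torch

    x = torch.randn(3, requires_grad=True)
    y = x * 2                      # non-scalar, so a grad_tensors "vector" is needed
    v = torch.ones_like(y)         # the "vector" in the Jacobian-vector product
    torch.autograd.backward([y], grad_tensors=[v])
    print(x.grad)                  # tensor([2., 2., 2.])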
torch.geqrf
torch.geqrf(input, *, out=None)
This is a low-level function for calling LAPACK's geqrf directly.
This function returns a namedtuple (a, tau) as defined in LAPACK
documentation for geqrf .
Computes a QR decomposition of "input". Both Q and R matrices
are stored in the same output tensor a. The elements of R are
stored on and above the diagonal. Elementary reflectors (or
Householder vectors) implicitly defining matrix Q are stored
below the diagonal. The results of this function can be used
together with "torch.linalg.householder_product()" to obtain the
Q matrix or with "torch.ormqr()", which uses an implicit
representation of the Q matrix, for an efficient matrix-matrix
multiplication.
See LAPACK documentation for geqrf for further details.
Note:
See also "torch.linalg.qr()", which computes Q and R matrices,
and "torch.linalg.lstsq()" with the "driver="gels"" option for a
| https://pytorch.org/docs/stable/generated/torch.geqrf.html | pytorch docs |
function that can solve matrix equations using a QR
decomposition.
Parameters:
input (Tensor) -- the input matrix
Keyword Arguments:
out (tuple, optional) -- the output tuple of (Tensor,
Tensor). Ignored if None. Default: None. | https://pytorch.org/docs/stable/generated/torch.geqrf.html | pytorch docs |
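A small sketch showing how the "(a, tau)" output of "torch.geqrf" combines
with "torch.linalg.householder_product()" to recover Q and R as described
above; the square matrix is illustrative:
    import torch

    A = torch.randn(3, 3)
    a, tau = torch.geqrf(A)
    Q = torch.linalg.householder_product(a, tau)  # explicit Q from Householder vectors
    R = a.triu()                                  # R is stored on and above the diagonal
    torch.testing.assert_close(Q @ R, A)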
torch.autograd.Function.backward
static Function.backward(ctx, *grad_outputs)
Defines a formula for differentiating the operation with backward
mode automatic differentiation (alias to the vjp function).
This function is to be overridden by all subclasses.
It must accept a context "ctx" as the first argument, followed by
as many outputs as the "forward()" returned (None will be passed in
for non-tensor outputs of the forward function), and it should
return as many tensors, as there were inputs to "forward()". Each
argument is the gradient w.r.t the given output, and each returned
value should be the gradient w.r.t. the corresponding input. If an
input is not a Tensor or is a Tensor not requiring grads, you can
just pass None as a gradient for that input.
The context can be used to retrieve tensors saved during the
forward pass. It also has an attribute "ctx.needs_input_grad" as a | https://pytorch.org/docs/stable/generated/torch.autograd.Function.backward.html | pytorch docs |
tuple of booleans representing whether each input needs gradient.
E.g., "backward()" will have "ctx.needs_input_grad[0] = True" if
the first input to "forward()" needs gradient computated w.r.t. the
output.
Return type:
Any | https://pytorch.org/docs/stable/generated/torch.autograd.Function.backward.html | pytorch docs |
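A minimal custom "torch.autograd.Function" sketch illustrating the
contract described above (one returned gradient per "forward()" input);
the Exp example is illustrative:
    import torch

    class Exp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            y = x.exp()
            ctx.save_for_backward(y)   # stash what backward() needs
            return y

        @staticmethod
        def backward(ctx, grad_output):
            (y,) = ctx.saved_tensors
            return grad_output * y     # d/dx exp(x) = exp(x)

    x = torch.randn(3, requires_grad=True)
    Exp.apply(x).sum().backward()
    torch.testing.assert_close(x.grad, x.exp())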
torch.Tensor.isneginf
Tensor.isneginf() -> Tensor
See "torch.isneginf()" | https://pytorch.org/docs/stable/generated/torch.Tensor.isneginf.html | pytorch docs |
torch.cumprod
torch.cumprod(input, dim, *, dtype=None, out=None) -> Tensor
Returns the cumulative product of elements of "input" in the
dimension "dim".
For example, if "input" is a vector of size N, the result will also
be a vector of size N, with elements:
y_i = x_1 \times x_2\times x_3\times \dots \times x_i
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to do the operation over
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is casted
to "dtype" before the operation is performed. This is useful
for preventing data type overflows. Default: None.
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> a = torch.randn(10)
>>> a
tensor([ 0.6001, 0.2069, -0.1919, 0.9792, 0.6727, 1.0062, 0.4126,
-0.2129, -0.4206, 0.1968])
| https://pytorch.org/docs/stable/generated/torch.cumprod.html | pytorch docs |
>>> torch.cumprod(a, dim=0)
tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0158, -0.0065,
0.0014, -0.0006, -0.0001])
>>> a[5] = 0.0
>>> torch.cumprod(a, dim=0)
tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000,
0.0000, -0.0000, -0.0000])
| https://pytorch.org/docs/stable/generated/torch.cumprod.html | pytorch docs |
torch.Tensor.diag
Tensor.diag(diagonal=0) -> Tensor
See "torch.diag()" | https://pytorch.org/docs/stable/generated/torch.Tensor.diag.html | pytorch docs |
torch.rsqrt
torch.rsqrt(input, *, out=None) -> Tensor
Returns a new tensor with the reciprocal of the square-root of each
of the elements of "input".
\text{out}_{i} = \frac{1}{\sqrt{\text{input}_{i}}}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.0370, 0.2970, 1.5420, -0.9105])
>>> torch.rsqrt(a)
tensor([ nan, 1.8351, 0.8053, nan])
| https://pytorch.org/docs/stable/generated/torch.rsqrt.html | pytorch docs |
torch.dstack
torch.dstack(tensors, *, out=None) -> Tensor
Stack tensors in sequence depthwise (along third axis).
This is equivalent to concatenation along the third axis after 1-D
and 2-D tensors have been reshaped by "torch.atleast_3d()".
Parameters:
tensors (sequence of Tensors) -- sequence of tensors to
concatenate
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([4, 5, 6])
>>> torch.dstack((a,b))
tensor([[[1, 4],
[2, 5],
[3, 6]]])
>>> a = torch.tensor([[1],[2],[3]])
>>> b = torch.tensor([[4],[5],[6]])
>>> torch.dstack((a,b))
tensor([[[1, 4]],
[[2, 5]],
[[3, 6]]])
| https://pytorch.org/docs/stable/generated/torch.dstack.html | pytorch docs |
torch.Tensor.tan_
Tensor.tan_() -> Tensor
In-place version of "tan()" | https://pytorch.org/docs/stable/generated/torch.Tensor.tan_.html | pytorch docs |
torch.Tensor.sub
Tensor.sub(other, *, alpha=1) -> Tensor
See "torch.sub()". | https://pytorch.org/docs/stable/generated/torch.Tensor.sub.html | pytorch docs |
torch._foreach_tan
torch._foreach_tan(self: List[Tensor]) -> List[Tensor]
Apply "torch.tan()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_tan.html | pytorch docs |
torch.dist
torch.dist(input, other, p=2) -> Tensor
Returns the p-norm of ("input" - "other")
The shapes of "input" and "other" must be broadcastable.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the Right-hand-side input tensor
* **p** (*float**, **optional*) -- the norm to be computed
Example:
>>> x = torch.randn(4)
>>> x
tensor([-1.5393, -0.8675, 0.5916, 1.6321])
>>> y = torch.randn(4)
>>> y
tensor([ 0.0967, -1.0511, 0.6295, 0.8360])
>>> torch.dist(x, y, 3.5)
tensor(1.6727)
>>> torch.dist(x, y, 3)
tensor(1.6973)
>>> torch.dist(x, y, 0)
tensor(4.)
>>> torch.dist(x, y, 1)
tensor(2.6537)
| https://pytorch.org/docs/stable/generated/torch.dist.html | pytorch docs |
torch.func.vjp
torch.func.vjp(func, *primals, has_aux=False)
Standing for the vector-Jacobian product, returns a tuple
containing the results of "func" applied to "primals" and a
function that, when given "cotangents", computes the reverse-mode
Jacobian of "func" with respect to "primals" times "cotangents".
Parameters:
* func (Callable) -- A Python function that takes one or
more arguments. Must return one or more Tensors.
* **primals** (*Tensors*) -- Positional arguments to "func" that
must all be Tensors. The returned function will also be
computing the derivative with respect to these arguments
* **has_aux** (*bool*) -- Flag indicating that "func" returns a
"(output, aux)" tuple where the first element is the output of
the function to be differentiated and the second element is
other auxiliary objects that will not be differentiated.
Default: False.
Returns: | https://pytorch.org/docs/stable/generated/torch.func.vjp.html | pytorch docs |
Returns a "(output, vjp_fn)" tuple containing the output of
"func" applied to "primals" and a function that computes the vjp
of "func" with respect to all "primals" using the cotangents
passed to the returned function. If "has_aux is True", then
instead returns a "(output, vjp_fn, aux)" tuple. The returned
"vjp_fn" function will return a tuple of each VJP.
When used in simple cases, "vjp()" behaves the same as "grad()"
x = torch.randn([5])
f = lambda x: x.sin().sum()
(_, vjpfunc) = torch.func.vjp(f, x)
grad = vjpfunc(torch.tensor(1.))[0]
assert torch.allclose(grad, torch.func.grad(f)(x))
However, "vjp()" can support functions with multiple outputs by
passing in the cotangents for each of the outputs
x = torch.randn([5])
f = lambda x: (x.sin(), x.cos())
(_, vjpfunc) = torch.func.vjp(f, x)
vjps = vjpfunc((torch.ones([5]), torch.ones([5])))
| https://pytorch.org/docs/stable/generated/torch.func.vjp.html | pytorch docs |
assert torch.allclose(vjps[0], x.cos() + -x.sin())
"vjp()" can even support outputs being Python structs
x = torch.randn([5])
f = lambda x: {'first': x.sin(), 'second': x.cos()}
(_, vjpfunc) = torch.func.vjp(f, x)
cotangents = {'first': torch.ones([5]), 'second': torch.ones([5])}
vjps = vjpfunc(cotangents)
assert torch.allclose(vjps[0], x.cos() + -x.sin())
The function returned by "vjp()" will compute the partials with
respect to each of the "primals"
x, y = torch.randn([5, 4]), torch.randn([4, 5])
(_, vjpfunc) = torch.func.vjp(torch.matmul, x, y)
cotangents = torch.randn([5, 5])
vjps = vjpfunc(cotangents)
assert len(vjps) == 2
assert torch.allclose(vjps[0], torch.matmul(cotangents, y.transpose(0, 1)))
assert torch.allclose(vjps[1], torch.matmul(x.transpose(0, 1), cotangents))
"primals" are the positional arguments for "f". All kwargs use
their default value
x = torch.randn([5])
| https://pytorch.org/docs/stable/generated/torch.func.vjp.html | pytorch docs |
def f(x, scale=4.):
return x * scale
(_, vjpfunc) = torch.func.vjp(f, x)
vjps = vjpfunc(torch.ones_like(x))
assert torch.allclose(vjps[0], torch.full(x.shape, 4.))
Note:
Using PyTorch "torch.no_grad" together with "vjp". Case 1: Using
"torch.no_grad" inside a function:
>>> def f(x):
>>> with torch.no_grad():
>>> c = x ** 2
>>> return x - c
In this case, "vjp(f)(x)" will respect the inner
"torch.no_grad".Case 2: Using "vjp" inside "torch.no_grad"
context manager:
>>> with torch.no_grad():
>>> vjp(f)(x)
In this case, "vjp" will respect the inner "torch.no_grad", but
not the outer one. This is because "vjp" is a "function
transform": its result should not depend on the result of a
context manager outside of "f".
| https://pytorch.org/docs/stable/generated/torch.func.vjp.html | pytorch docs |
Tanhshrink
class torch.nn.Tanhshrink
Applies the element-wise function:
\text{Tanhshrink}(x) = x - \tanh(x)
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.Tanhshrink()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Tanhshrink.html | pytorch docs |
torch.Tensor.arccos
Tensor.arccos() -> Tensor
See "torch.arccos()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arccos.html | pytorch docs |
torch.Tensor.row_indices
Tensor.row_indices() | https://pytorch.org/docs/stable/generated/torch.Tensor.row_indices.html | pytorch docs |
Linear
class torch.ao.nn.qat.dynamic.Linear(in_features, out_features, bias=True, qconfig=None, device=None, dtype=None)
A linear module attached with FakeQuantize modules for weight, used
for dynamic quantization aware training.
We adopt the same interface as torch.nn.Linear, please see
https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for
documentation.
Similar to torch.nn.Linear, with FakeQuantize modules initialized
to default. | https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.dynamic.Linear.html | pytorch docs |
MaxUnpool2d
class torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0)
Computes a partial inverse of "MaxPool2d".
"MaxPool2d" is not fully invertible, since the non-maximal values
are lost.
"MaxUnpool2d" takes in as input the output of "MaxPool2d" including
the indices of the maximal values and computes a partial inverse in
which all non-maximal values are set to zero.
Note:
"MaxPool2d" can map several input sizes to the same output sizes.
Hence, the inversion process can get ambiguous. To accommodate
this, you can provide the needed output size as an additional
argument "output_size" in the forward call. See the Inputs and
Example below.
Parameters:
* kernel_size (int or tuple) -- Size of the max
pooling window.
* **stride** (*int** or **tuple*) -- Stride of the max pooling
window. It is set to "kernel_size" by default.
* **padding** (*int** or **tuple*) -- Padding that was added to
| https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html | pytorch docs |
the input
Inputs:
* input: the input Tensor to invert
* *indices*: the indices given out by "MaxPool2d"
* *output_size* (optional): the targeted output size
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where
H_{out} = (H_{in} - 1) \times \text{stride[0]} - 2 \times
\text{padding[0]} + \text{kernel\_size[0]}
W_{out} = (W_{in} - 1) \times \text{stride[1]} - 2 \times
\text{padding[1]} + \text{kernel\_size[1]}
or as given by "output_size" in the call operator
Example:
>>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool2d(2, stride=2)
>>> input = torch.tensor([[[[ 1., 2., 3., 4.],
[ 5., 6., 7., 8.],
[ 9., 10., 11., 12.],
[13., 14., 15., 16.]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html | pytorch docs |
>>> output, indices = pool(input)
>>> unpool(output, indices)
tensor([[[[ 0., 0., 0., 0.],
[ 0., 6., 0., 8.],
[ 0., 0., 0., 0.],
[ 0., 14., 0., 16.]]]])
>>> # Now using output_size to resolve an ambiguous size for the inverse
>>> input = torch.tensor([[[[ 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10.],
[11., 12., 13., 14., 15.],
[16., 17., 18., 19., 20.]]]])
>>> output, indices = pool(input)
>>> # This call will not work without specifying output_size
>>> unpool(output, indices, output_size=input.size())
tensor([[[[ 0., 0., 0., 0., 0.],
[ 0., 7., 0., 9., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 17., 0., 19., 0.]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html | pytorch docs |
LSTMCell
class torch.ao.nn.quantized.dynamic.LSTMCell(*args, **kwargs)
A long short-term memory (LSTM) cell.
A dynamic quantized LSTMCell module with floating point tensor as
inputs and outputs. Weights are quantized to 8 bits. We adopt the
same interface as torch.nn.LSTMCell, please see
https://pytorch.org/docs/stable/nn.html#torch.nn.LSTMCell for
documentation.
Examples:
>>> rnn = nn.LSTMCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
... hx, cx = rnn(input[i], (hx, cx))
... output.append(hx)
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.LSTMCell.html | pytorch docs |
torch.sparse.sum
torch.sparse.sum(input, dim=None, dtype=None)
Returns the sum of each row of the sparse tensor "input" in the
given dimensions "dim". If "dim" is a list of dimensions, reduce
over all of them. When sum over all "sparse_dim", this method
returns a dense tensor instead of a sparse tensor.
All summed "dim" are squeezed (see "torch.squeeze()"), resulting an
output tensor having "dim" fewer dimensions than "input".
During backward, only gradients at "nnz" locations of "input" will
propagate back. Note that the gradients of "input" are coalesced.
Parameters:
* input (Tensor) -- the input sparse tensor
* **dim** (*int** or **tuple of ints*) -- a dimension or a list
of dimensions to reduce. Default: reduce over all dims.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned Tensor. Default: dtype of "input".
Return type:
Tensor
Example:
>>> nnz = 3
| https://pytorch.org/docs/stable/generated/torch.sparse.sum.html | pytorch docs |
>>> dims = [5, 5, 2, 3]
>>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),
torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)
>>> V = torch.randn(nnz, dims[2], dims[3])
>>> size = torch.Size(dims)
>>> S = torch.sparse_coo_tensor(I, V, size)
>>> S
tensor(indices=tensor([[2, 0, 3],
[2, 4, 1]]),
values=tensor([[[-0.6438, -1.6467, 1.4004],
[ 0.3411, 0.0918, -0.2312]],
[[ 0.5348, 0.0634, -2.0494],
[-0.7125, -1.0646, 2.1844]],
[[ 0.1276, 0.1874, -0.6334],
[-1.9682, -0.5340, 0.7483]]]),
size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo)
# when sum over only part of sparse_dims, return a sparse tensor
>>> torch.sparse.sum(S, [1, 3])
tensor(indices=tensor([[0, 2, 3]]),
| https://pytorch.org/docs/stable/generated/torch.sparse.sum.html | pytorch docs |
values=tensor([[-1.4512, 0.4073],
[-0.8901, 0.2017],
[-0.3183, -1.7539]]),
size=(5, 2), nnz=3, layout=torch.sparse_coo)
# when sum over all sparse dim, return a dense tensor
# with summed dims squeezed
>>> torch.sparse.sum(S, [0, 1, 3])
tensor([-2.6596, -1.1450])
| https://pytorch.org/docs/stable/generated/torch.sparse.sum.html | pytorch docs |
torch.remainder
torch.remainder(input, other, *, out=None) -> Tensor
Computes Python's modulus operation entrywise. The result has the
same sign as the divisor "other" and its absolute value is less
than that of "other".
It may also be defined in terms of "torch.div()" as
torch.remainder(a, b) == a - a.div(b, rounding_mode="floor") * b
Supports broadcasting to a common shape, type promotion, and
integer and float inputs.
Note:
Complex inputs are not supported. In some cases, it is not
mathematically possible to satisfy the definition of a modulo
operation with complex numbers. See "torch.fmod()" for how
division by zero is handled.
See also:
"torch.fmod()" which implements C++'s std::fmod. This one is
defined in terms of division rounding towards zero.
Parameters:
* input (Tensor or Scalar) -- the dividend
* **other** (*Tensor** or **Scalar*) -- the divisor
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.remainder.html | pytorch docs |
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
tensor([ 1., 0., 1., 1., 0., 1.])
>>> torch.remainder(torch.tensor([1, 2, 3, 4, 5]), -1.5)
tensor([ -0.5000, -1.0000, 0.0000, -0.5000, -1.0000 ])
| https://pytorch.org/docs/stable/generated/torch.remainder.html | pytorch docs |
torch.moveaxis
torch.moveaxis(input, source, destination) -> Tensor
Alias for "torch.movedim()".
This function is equivalent to NumPy's moveaxis function.
Examples:
>>> t = torch.randn(3,2,1)
>>> t
tensor([[[-0.3362],
[-0.8437]],
[[-0.9627],
[ 0.1727]],
[[ 0.5173],
[-0.1398]]])
>>> torch.moveaxis(t, 1, 0).shape
torch.Size([2, 3, 1])
>>> torch.moveaxis(t, 1, 0)
tensor([[[-0.3362],
[-0.9627],
[ 0.5173]],
[[-0.8437],
[ 0.1727],
[-0.1398]]])
>>> torch.moveaxis(t, (1, 2), (0, 1)).shape
torch.Size([2, 1, 3])
>>> torch.moveaxis(t, (1, 2), (0, 1))
tensor([[[-0.3362, -0.9627, 0.5173]],
[[-0.8437, 0.1727, -0.1398]]])
| https://pytorch.org/docs/stable/generated/torch.moveaxis.html | pytorch docs |
torch.ormqr
torch.ormqr(input, tau, other, left=True, transpose=False, *, out=None) -> Tensor
Computes the matrix-matrix multiplication of a product of
Householder matrices with a general matrix.
Multiplies a m \times n matrix C (given by "other") with a matrix
Q, where Q is represented using Householder reflectors (input,
tau). See Representation of Orthogonal or Unitary Matrices for
further details.
If "left" is True then op(Q) times C is computed, otherwise
the result is C times op(Q). When "left" is True, the
implicit matrix Q has size m \times m. It has size n \times n
otherwise. If "transpose" is True then op is the conjugate
transpose operation, otherwise it's a no-op.
Supports inputs of float, double, cfloat and cdouble dtypes. Also
supports batched inputs, and, if the input is batched, the output
is batched with the same dimensions.
See also:
"torch.geqrf()" can be used to form the Householder
| https://pytorch.org/docs/stable/generated/torch.ormqr.html | pytorch docs |
representation (input, tau) of matrix Q from the QR
decomposition.
Note:
This function supports backward but it is only fast when "(input,
tau)" do not require gradients and/or "tau.size(-1)" is very
small.
Parameters:
* input (Tensor) -- tensor of shape (*, mn, k) where * is
zero or more batch dimensions and mn equals m or n depending
on "left".
* **tau** (*Tensor*) -- tensor of shape (*, min(mn, k)) where *
is zero or more batch dimensions.
* **other** (*Tensor*) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
* **left** (*bool*) -- controls the order of multiplication.
* **transpose** (*bool*) -- controls whether the matrix *Q* is
conjugate transposed or not.
Keyword Arguments:
out (Tensor, optional) -- the output Tensor. Ignored
if None. Default: None. | https://pytorch.org/docs/stable/generated/torch.ormqr.html | pytorch docs |
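A sketch combining "torch.geqrf()" and "torch.ormqr()" to compute Q @ C
without materializing Q, then checking it against the explicit Q; shapes
are illustrative:
    import torch

    A = torch.randn(4, 4)
    C = torch.randn(4, 3)
    a, tau = torch.geqrf(A)                   # implicit Householder representation of Q
    result = torch.ormqr(a, tau, C)           # op(Q) @ C with left=True, transpose=False
    Q = torch.linalg.householder_product(a, tau)
    torch.testing.assert_close(result, Q @ C)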
torch.cuda.set_sync_debug_mode
torch.cuda.set_sync_debug_mode(debug_mode)
Sets the debug mode for cuda synchronizing operations.
Parameters:
debug_mode (str or int) -- if "default" or 0, don't
error or warn on synchronizing operations, if "warn" or 1, warn
on synchronizing operations, if "error" or 2, error out
synchronizing operations.
Warning:
This is an experimental feature, and not all synchronizing
operations will trigger warning or error. In particular,
operations in torch.distributed and torch.sparse namespaces are
not covered yet.
| https://pytorch.org/docs/stable/generated/torch.cuda.set_sync_debug_mode.html | pytorch docs |
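A guarded sketch of enabling the warning mode; whether a given op warns
depends on the build, so the "nonzero()" call is only illustrative:
    import torch

    if torch.cuda.is_available():
        torch.cuda.set_sync_debug_mode("warn")       # or 1
        x = torch.randn(4, device="cuda")
        x.nonzero()                                  # forces a device sync, may warn
        torch.cuda.set_sync_debug_mode("default")    # restore the default behavior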
torch.log10
torch.log10(input, *, out=None) -> Tensor
Returns a new tensor with the logarithm to the base 10 of the
elements of "input".
y_{i} = \log_{10} (x_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.rand(5)
>>> a
tensor([ 0.5224, 0.9354, 0.7257, 0.1301, 0.2251])
>>> torch.log10(a)
tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476])
| https://pytorch.org/docs/stable/generated/torch.log10.html | pytorch docs |
torch.flatten
torch.flatten(input, start_dim=0, end_dim=-1) -> Tensor
Flattens "input" by reshaping it into a one-dimensional tensor. If
"start_dim" or "end_dim" are passed, only dimensions starting with
"start_dim" and ending with "end_dim" are flattened. The order of
elements in "input" is unchanged.
Unlike NumPy's flatten, which always copies input's data, this
function may return the original object, a view, or copy. If no
dimensions are flattened, then the original object "input" is
returned. Otherwise, if input can be viewed as the flattened shape,
then that view is returned. Finally, only if the input cannot be
viewed as the flattened shape is input's data copied. See
"torch.Tensor.view()" for details on when a view will be returned.
Note:
Flattening a zero-dimensional tensor will return a one-
dimensional view.
Parameters:
* input (Tensor) -- the input tensor.
* **start_dim** (*int*) -- the first dim to flatten
| https://pytorch.org/docs/stable/generated/torch.flatten.html | pytorch docs |
* **end_dim** (*int*) -- the last dim to flatten
Example:
>>> t = torch.tensor([[[1, 2],
... [3, 4]],
... [[5, 6],
... [7, 8]]])
>>> torch.flatten(t)
tensor([1, 2, 3, 4, 5, 6, 7, 8])
>>> torch.flatten(t, start_dim=1)
tensor([[1, 2, 3, 4],
[5, 6, 7, 8]])
| https://pytorch.org/docs/stable/generated/torch.flatten.html | pytorch docs |
torch.Tensor.indices
Tensor.indices() -> Tensor
Return the indices tensor of a sparse COO tensor.
Warning:
Throws an error if "self" is not a sparse COO tensor.
See also "Tensor.values()".
Note:
This method can only be called on a coalesced sparse tensor. See
"Tensor.coalesce()" for details.
| https://pytorch.org/docs/stable/generated/torch.Tensor.indices.html | pytorch docs |
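A short sketch, using an illustrative COO tensor that is coalesced before
calling "indices()" as the note requires:
    import torch

    i = torch.tensor([[0, 1, 1],
                      [2, 0, 2]])
    v = torch.tensor([3., 4., 5.])
    s = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()
    print(s.indices())   # the 2 x nnz index tensor of the coalesced COO tensor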
torch.Tensor.erfc_
Tensor.erfc_() -> Tensor
In-place version of "erfc()" | https://pytorch.org/docs/stable/generated/torch.Tensor.erfc_.html | pytorch docs |
torch.autograd.profiler.profile.self_cpu_time_total
property profile.self_cpu_time_total
Returns total time spent on CPU obtained as a sum of all self times
across all the events. | https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.self_cpu_time_total.html | pytorch docs |
torch.nn.utils.clip_grad_norm_
torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2.0, error_if_nonfinite=False, foreach=None)
Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were
concatenated into a single vector. Gradients are modified in-place.
Parameters:
* parameters (Iterable[Tensor] or Tensor) -- an
iterable of Tensors or a single Tensor that will have
gradients normalized
* **max_norm** (*float*) -- max norm of the gradients
* **norm_type** (*float*) -- type of the used p-norm. Can be
"'inf'" for infinity norm.
* **error_if_nonfinite** (*bool*) -- if True, an error is thrown
if the total norm of the gradients from "parameters" is "nan",
"inf", or "-inf". Default: False (will switch to True in the
future)
* **foreach** (*bool*) -- use the faster foreach-based
| https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html | pytorch docs |
implementation. If "None", use the foreach implementation for
CUDA and CPU tensors and silently fall back to the slow
implementation for other device types. Default: "None"
Returns:
Total norm of the parameter gradients (viewed as a single
vector).
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html | pytorch docs |
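A minimal sketch, assuming a small linear model; the max_norm value is
illustrative:
    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    # Rescales gradients in place so their combined norm is at most 1.0;
    # the returned value is the total norm measured before clipping.
    total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)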
torch.Tensor.to_mkldnn
Tensor.to_mkldnn() -> Tensor
Returns a copy of the tensor in "torch.mkldnn" layout. | https://pytorch.org/docs/stable/generated/torch.Tensor.to_mkldnn.html | pytorch docs |
torch.Tensor.erf_
Tensor.erf_() -> Tensor
In-place version of "erf()" | https://pytorch.org/docs/stable/generated/torch.Tensor.erf_.html | pytorch docs |
torch.Tensor.bool
Tensor.bool(memory_format=torch.preserve_format) -> Tensor
"self.bool()" is equivalent to "self.to(torch.bool)". See "to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.bool.html | pytorch docs |
torch.view_as_complex
torch.view_as_complex(input) -> Tensor
Returns a view of "input" as a complex tensor. For an input complex
tensor of "size" m1, m2, \dots, mi, 2, this function returns a new
complex tensor of "size" m1, m2, \dots, mi where the last dimension
of the input tensor is expected to represent the real and imaginary
components of complex numbers.
Warning:
"view_as_complex()" is only supported for tensors with
"torch.dtype" "torch.float64" and "torch.float32". The input is
expected to have the last dimension of "size" 2. In addition, the
tensor must have a *stride* of 1 for its last dimension. The
strides of all other dimensions must be even numbers.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> x=torch.randn(4, 2)
>>> x
tensor([[ 1.6116, -0.5772],
[-1.4606, -0.9120],
[ 0.0786, -1.7497],
[-0.6561, -1.6623]])
| https://pytorch.org/docs/stable/generated/torch.view_as_complex.html | pytorch docs |
>>> torch.view_as_complex(x)
tensor([(1.6116-0.5772j), (-1.4606-0.9120j), (0.0786-1.7497j), (-0.6561-1.6623j)]) | https://pytorch.org/docs/stable/generated/torch.view_as_complex.html | pytorch docs |
torch.nn.functional.pixel_shuffle
torch.nn.functional.pixel_shuffle(input, upscale_factor) -> Tensor
Rearranges elements in a tensor of shape (*, C \times r^2, H, W) to
a tensor of shape (*, C, H \times r, W \times r), where r is the
"upscale_factor".
See "PixelShuffle" for details.
Parameters:
* input (Tensor) -- the input tensor
* **upscale_factor** (*int*) -- factor to increase spatial
resolution by
Examples:
>>> input = torch.randn(1, 9, 4, 4)
>>> output = torch.nn.functional.pixel_shuffle(input, 3)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
| https://pytorch.org/docs/stable/generated/torch.nn.functional.pixel_shuffle.html | pytorch docs |
torch.Tensor.arcsinh_
Tensor.arcsinh_() -> Tensor
In-place version of "arcsinh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arcsinh_.html | pytorch docs |
torch.Tensor.less
Tensor.less(other) -> Tensor
Alias for "Tensor.lt()".
See "torch.less()". | https://pytorch.org/docs/stable/generated/torch.Tensor.less.html | pytorch docs |
ConvBnReLU3d
class torch.ao.nn.intrinsic.qat.ConvBnReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)
A ConvBnReLU3d module is a module fused from Conv3d, BatchNorm3d
and ReLU, attached with FakeQuantize modules for weight, used in
quantization aware training.
We combined the interface of "torch.nn.Conv3d" and
"torch.nn.BatchNorm3d" and "torch.nn.ReLU".
Similar to torch.nn.Conv3d, with FakeQuantize modules initialized
to default.
Variables:
weight_fake_quant -- fake quant module for weight | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU3d.html | pytorch docs |
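A hedged construction sketch; the channel sizes are illustrative and
"get_default_qat_qconfig('fbgemm')" is one possible qconfig choice, not
the only one:
    import torch
    from torch.ao.nn.intrinsic.qat import ConvBnReLU3d
    from torch.ao.quantization import get_default_qat_qconfig

    qconfig = get_default_qat_qconfig("fbgemm")          # an example qconfig choice
    m = ConvBnReLU3d(3, 16, kernel_size=3, qconfig=qconfig)
    x = torch.randn(1, 3, 8, 8, 8)
    y = m(x)   # fused Conv3d + BatchNorm3d + ReLU with fake-quantized weights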
torch.logical_xor
torch.logical_xor(input, other, *, out=None) -> Tensor
Computes the element-wise logical XOR of the given input tensors.
Zeros are treated as "False" and nonzeros are treated as "True".
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the tensor to compute XOR with
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.logical_xor(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([False, False, True])
>>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)
>>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)
>>> torch.logical_xor(a, b)
tensor([ True, True, False, False])
>>> torch.logical_xor(a.double(), b.double())
tensor([ True, True, False, False])
>>> torch.logical_xor(a.double(), b)
tensor([ True, True, False, False])
| https://pytorch.org/docs/stable/generated/torch.logical_xor.html | pytorch docs |
>>> torch.logical_xor(a, b, out=torch.empty(4, dtype=torch.bool))
tensor([ True, True, False, False]) | https://pytorch.org/docs/stable/generated/torch.logical_xor.html | pytorch docs |
SobolEngine
class torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None)
The "torch.quasirandom.SobolEngine" is an engine for generating
(scrambled) Sobol sequences. Sobol sequences are an example of low
discrepancy quasi-random sequences.
This implementation of an engine for Sobol sequences is capable of
sampling sequences up to a maximum dimension of 21201. It uses
direction numbers from https://web.maths.unsw.edu.au/~fkuo/sobol/
obtained using the search criterion D(6) up to the dimension 21201.
This is the recommended choice by the authors.
-[ References ]-
Art B. Owen. Scrambling Sobol and Niederreiter-Xing points.
Journal of Complexity, 14(4):466-489, December 1998.
I. M. Sobol. The distribution of points in a cube and the
accurate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Phys.,
7:784-802, 1967.
Parameters:
* dimension (Int) -- The dimensionality of the sequence to
be drawn | https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html | pytorch docs |
* **scramble** (*bool**, **optional*) -- Setting this to "True"
will produce scrambled Sobol sequences. Scrambling is capable
of producing better Sobol sequences. Default: "False".
* **seed** (*Int**, **optional*) -- This is the seed for the
scrambling. The seed of the random number generator is set to
this, if specified. Otherwise, it uses a random seed. Default:
"None"
Examples:
>>> soboleng = torch.quasirandom.SobolEngine(dimension=5)
>>> soboleng.draw(3)
tensor([[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.7500, 0.2500, 0.2500, 0.2500, 0.7500]])
draw(n=1, out=None, dtype=torch.float32)
Function to draw a sequence of "n" points from a Sobol sequence.
Note that the samples are dependent on the previous samples. The
size of the result is (n, dimension).
Parameters:
| https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html | pytorch docs |
* n (Int, optional) -- The length of sequence of
points to draw. Default: 1
* **out** (*Tensor**, **optional*) -- The output tensor
* **dtype** ("torch.dtype", optional) -- the desired data
type of the returned tensor. Default: "torch.float32"
Return type:
*Tensor*
draw_base2(m, out=None, dtype=torch.float32)
Function to draw a sequence of "2**m" points from a Sobol
sequence. Note that the samples are dependent on the previous
samples. The size of the result is (2**m, dimension).
Parameters:
* **m** (*Int*) -- The (base2) exponent of the number of
points to draw.
* **out** (*Tensor**, **optional*) -- The output tensor
* **dtype** ("torch.dtype", optional) -- the desired data
type of the returned tensor. Default: "torch.float32"
Return type:
*Tensor*
fast_forward(n) | https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html | pytorch docs |
Function to fast-forward the state of the "SobolEngine" by "n"
steps. This is equivalent to drawing "n" samples without using
the samples.
Parameters:
**n** (*Int*) -- The number of steps to fast-forward by.
reset()
Function to reset the "SobolEngine" to base state.
| https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html | pytorch docs |
PackedSequence
class torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)
Holds the data and list of "batch_sizes" of a packed sequence.
All RNN modules accept packed sequences as inputs.
Note:
Instances of this class should never be created manually. They
are meant to be instantiated by functions like
"pack_padded_sequence()".Batch sizes represent the number
elements at each sequence step in the batch, not the varying
sequence lengths passed to "pack_padded_sequence()". For
instance, given data "abc" and "x" the "PackedSequence" would
contain data "axbc" with "batch_sizes=[2,1,1]".
Variables:
* data (Tensor) -- Tensor containing packed sequence
* **batch_sizes** (*Tensor*) -- Tensor of integers holding
information about the batch size at each sequence step
* **sorted_indices** (*Tensor**, **optional*) -- Tensor of
| https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html | pytorch docs |
integers holding how this "PackedSequence" is constructed from
sequences.
* **unsorted_indices** (*Tensor**, **optional*) -- Tensor of
integers holding how to recover the original sequences
with correct order.
Note:
"data" can be on arbitrary device and of arbitrary dtype.
"sorted_indices" and "unsorted_indices" must be "torch.int64"
tensors on the same device as "data".However, "batch_sizes"
should always be a CPU "torch.int64" tensor.This invariant is
maintained throughout "PackedSequence" class, and all functions
that construct a *:class:PackedSequence* in PyTorch (i.e., they
only pass in tensors conforming to this constraint).
batch_sizes: Tensor
Alias for field number 1
count(value, /)
Return number of occurrences of value.
data: Tensor
Alias for field number 0
index(value, start=0, stop=9223372036854775807, /)
Return first index of value.
| https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html | pytorch docs |
Raises ValueError if the value is not present.
property is_cuda
Returns true if *self.data* is stored on a GPU
is_pinned()
Returns true if *self.data* is stored in pinned memory
sorted_indices: Optional[Tensor]
Alias for field number 2
to(*args, **kwargs)
Performs dtype and/or device conversion on *self.data*.
It has similar signature as "torch.Tensor.to()", except optional
arguments like *non_blocking* and *copy* should be passed as
kwargs, not args, or they will not apply to the index tensors.
Note:
If the "self.data" Tensor already has the correct
"torch.dtype" and "torch.device", then "self" is returned.
Otherwise, returns a copy with the desired configuration.
unsorted_indices: Optional[Tensor]
Alias for field number 3
| https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html | pytorch docs |
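A small sketch reproducing the "abc"/"x" example from the note above via
"pack_padded_sequence()"; the integer values are illustrative stand-ins
for the tokens:
    import torch
    from torch.nn.utils.rnn import pack_padded_sequence

    padded = torch.tensor([[1, 2, 3],    # "abc"
                           [4, 0, 0]])   # "x" padded to length 3
    packed = pack_padded_sequence(padded, lengths=[3, 1], batch_first=True)
    print(packed.data)         # tensor([1, 4, 2, 3])
    print(packed.batch_sizes)  # tensor([2, 1, 1])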
Softplus
class torch.nn.Softplus(beta=1, threshold=20)
Applies the Softplus function \text{Softplus}(x) = \frac{1}{\beta}
* \log(1 + \exp(\beta * x)) element-wise.
SoftPlus is a smooth approximation to the ReLU function and can be
used to constrain the output of a machine to always be positive.
For numerical stability the implementation reverts to the linear
function when input \times \beta > threshold.
Parameters:
* beta (int) -- the \beta value for the Softplus
formulation. Default: 1
* **threshold** (*int*) -- values above this revert to a linear
function. Default: 20
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.Softplus()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Softplus.html | pytorch docs |
torch.logical_or
torch.logical_or(input, other, *, out=None) -> Tensor
Computes the element-wise logical OR of the given input tensors.
Zeros are treated as "False" and nonzeros are treated as "True".
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the tensor to compute OR with
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.logical_or(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([ True, False, True])
>>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)
>>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)
>>> torch.logical_or(a, b)
tensor([ True, True, True, False])
>>> torch.logical_or(a.double(), b.double())
tensor([ True, True, True, False])
>>> torch.logical_or(a.double(), b)
tensor([ True, True, True, False])
| https://pytorch.org/docs/stable/generated/torch.logical_or.html | pytorch docs |
>>> torch.logical_or(a, b, out=torch.empty(4, dtype=torch.bool))
tensor([ True, True, True, False]) | https://pytorch.org/docs/stable/generated/torch.logical_or.html | pytorch docs |
threshold
class torch.ao.nn.quantized.functional.threshold(input, threshold, value)
Applies the quantized version of the threshold function element-
wise:
x = \begin{cases} x & \text{if~} x > \text{threshold} \\
\text{value} & \text{otherwise} \end{cases}
See "Threshold" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.threshold.html | pytorch docs |
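A hedged sketch; the quantization parameters are illustrative, and the
input must already be a quantized tensor:
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.randn(4)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
    out = qF.threshold(qx, 0.5, 0.0)   # threshold=0.5, value=0.0; output stays quantized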
torch.Tensor.diagonal
Tensor.diagonal(offset=0, dim1=0, dim2=1) -> Tensor
See "torch.diagonal()" | https://pytorch.org/docs/stable/generated/torch.Tensor.diagonal.html | pytorch docs |
MarginRankingLoss
class torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
Creates a criterion that measures the loss given inputs x1, x2, two
1D mini-batch or 0D Tensors, and a label 1D mini-batch or 0D
Tensor y (containing 1 or -1).
If y = 1 then it is assumed the first input should be ranked higher
(have a larger value) than the second input, and vice-versa for y =
-1.
The loss function for each pair of samples in the mini-batch is:
\text{loss}(x1, x2, y) = \max(0, -y * (x1 - x2) + \text{margin})
Parameters:
* margin (float, optional) -- Has a default value of
0.
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
| https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html | pytorch docs |
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape: | https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html | pytorch docs |
* Input1: (N) or () where N is the batch size.
* Input2: (N) or (), same shape as the Input1.
* Target: (N) or (), same shape as the inputs.
* Output: scalar. If "reduction" is "'none'" and Input size is
not (), then (N).
Examples:
>>> loss = nn.MarginRankingLoss()
>>> input1 = torch.randn(3, requires_grad=True)
>>> input2 = torch.randn(3, requires_grad=True)
>>> target = torch.randn(3).sign()
>>> output = loss(input1, input2, target)
>>> output.backward()
| https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html | pytorch docs |
torch.Tensor.cumprod
Tensor.cumprod(dim, dtype=None) -> Tensor
See "torch.cumprod()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cumprod.html | pytorch docs |
LocalResponseNorm
class torch.nn.LocalResponseNorm(size, alpha=0.0001, beta=0.75, k=1.0)
Applies local response normalization over an input signal composed
of several input planes, where channels occupy the second
dimension. Applies normalization across channels.
b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0,
c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}
Parameters:
* size (int) -- amount of neighbouring channels used for
normalization
* **alpha** (*float*) -- multiplicative factor. Default: 0.0001
* **beta** (*float*) -- exponent. Default: 0.75
* **k** (*float*) -- additive factor. Default: 1
Shape:
* Input: (N, C, *)
* Output: (N, C, *) (same shape as input)
Examples:
>>> lrn = nn.LocalResponseNorm(2)
>>> signal_2d = torch.randn(32, 5, 24, 24)
>>> signal_4d = torch.randn(16, 5, 7, 7, 7, 7)
>>> output_2d = lrn(signal_2d)
>>> output_4d = lrn(signal_4d)
| https://pytorch.org/docs/stable/generated/torch.nn.LocalResponseNorm.html | pytorch docs |
torch.jit.trace_module
torch.jit.trace_module(mod, inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=, example_inputs_is_kwarg=False)
Trace a module and return an executable "ScriptModule" that will be
optimized using just-in-time compilation. When a module is passed
to "torch.jit.trace", only the "forward" method is run and traced.
With "trace_module", you can specify a dictionary of method names
to example inputs to trace (see the "inputs" argument below).
See "torch.jit.trace" for more information on tracing.
Parameters:
* mod (torch.nn.Module) -- A "torch.nn.Module" containing
methods whose names are specified in "inputs". The given
methods will be compiled as a part of a single ScriptModule.
* **inputs** (*dict*) -- A dict containing sample inputs indexed
| https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html | pytorch docs |
by method names in "mod". The inputs will be passed to methods
whose names correspond to inputs' keys while tracing. "{
'forward' : example_forward_input, 'method2':
example_method2_input}"
Keyword Arguments:
* check_trace ("bool", optional) -- Check if the same inputs
run through traced code produce the same outputs. Default:
"True". You might want to disable this if, for example, your
network contains non- deterministic ops or if you are sure
that the network is correct despite a checker failure.
* **check_inputs** (*list of dicts**, **optional*) -- A list of
dicts of input arguments that should be used to check the
trace against what is expected. Each tuple is equivalent to a
set of input arguments that would be specified in "inputs".
For best results, pass in a set of checking inputs
representative of the space of shapes and types of inputs you
| https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html | pytorch docs |
expect the network to see. If not specified, the original
"inputs" are used for checking
* **check_tolerance** (*float**, **optional*) -- Floating-point
comparison tolerance to use in the checker procedure. This can
be used to relax the checker strictness in the event that
results diverge numerically for a known reason, such as
operator fusion.
* **example_inputs_is_kwarg** ("bool", optional) -- This
parameter indicates whether the example inputs are a pack
of keyword arguments. Default: "False".
Returns:
A "ScriptModule" object with a single "forward" method
containing the traced code. When "func" is a "torch.nn.Module",
the returned "ScriptModule" will have the same set of sub-
modules and parameters as "func".
Example (tracing a module with multiple methods):
import torch
import torch.nn as nn
class Net(nn.Module):
    def __init__(self):
| https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html | pytorch docs |
        super(Net, self).__init__()
        self.conv = nn.Conv2d(1, 1, 3)

    def forward(self, x):
        return self.conv(x)

    def weighted_kernel_sum(self, weight):
        return weight * self.conv.weight
n = Net()
example_weight = torch.rand(1, 1, 3, 3)
example_forward_input = torch.rand(1, 1, 3, 3)
# Trace a specific method and construct `ScriptModule` with
# a single `forward` method
module = torch.jit.trace(n.forward, example_forward_input)
# Trace a module (implicitly traces `forward`) and construct a
# `ScriptModule` with a single `forward` method
module = torch.jit.trace(n, example_forward_input)
# Trace specific methods on a module (specified in `inputs`), constructs
# a `ScriptModule` with `forward` and `weighted_kernel_sum` methods
inputs = {'forward' : example_forward_input, 'weighted_kernel_sum' : example_weight}
| https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html | pytorch docs |
module = torch.jit.trace_module(n, inputs) | https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html | pytorch docs |
ReplicationPad2d
class torch.nn.ReplicationPad2d(padding)
Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use "torch.nn.functional.pad()".
Parameters:
padding (int, tuple) -- the size of the padding. If it is an
int, uses the same padding in all boundaries. If a 4-tuple,
uses (\text{padding_left}, \text{padding_right},
\text{padding_top}, \text{padding_bottom})
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where
H_{out} = H_{in} + \text{padding\_top} +
\text{padding\_bottom}
W_{out} = W_{in} + \text{padding\_left} +
\text{padding\_right}
Examples:
>>> m = nn.ReplicationPad2d(2)
>>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
>>> input
tensor([[[[0., 1., 2.],
[3., 4., 5.],
[6., 7., 8.]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad2d.html | pytorch docs |
>>> m(input)
tensor([[[[0., 0., 0., 1., 2., 2., 2.],
[0., 0., 0., 1., 2., 2., 2.],
[0., 0., 0., 1., 2., 2., 2.],
[3., 3., 3., 4., 5., 5., 5.],
[6., 6., 6., 7., 8., 8., 8.],
[6., 6., 6., 7., 8., 8., 8.],
[6., 6., 6., 7., 8., 8., 8.]]]])
>>> # using different paddings for different sides
>>> m = nn.ReplicationPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[0., 0., 1., 2., 2.],
[0., 0., 1., 2., 2.],
[0., 0., 1., 2., 2.],
[3., 3., 4., 5., 5.],
[6., 6., 7., 8., 8.]]]]) | https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad2d.html | pytorch docs |
torch.Tensor.to_dense
Tensor.to_dense() -> Tensor
Creates a strided copy of "self" if "self" is not a strided tensor,
otherwise returns "self".
Example:
>>> s = torch.sparse_coo_tensor(
... torch.tensor([[1, 1],
... [0, 2]]),
... torch.tensor([9, 10]),
... size=(3, 3))
>>> s.to_dense()
tensor([[ 0, 0, 0],
[ 9, 0, 10],
[ 0, 0, 0]])
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_dense.html | pytorch docs |
Dropout
class torch.nn.Dropout(p=0.5, inplace=False)
During training, randomly zeroes some of the elements of the input
tensor with probability "p" using samples from a Bernoulli
distribution. Each channel will be zeroed out independently on
every forward call.
This has proven to be an effective technique for regularization and
preventing the co-adaptation of neurons as described in the paper
Improving neural networks by preventing co-adaptation of feature
detectors .
Furthermore, the outputs are scaled by a factor of \frac{1}{1-p}
during training. This means that during evaluation the module
simply computes an identity function.
Parameters:
* p (float) -- probability of an element to be zeroed.
Default: 0.5
* **inplace** (*bool*) -- If set to "True", will do this
operation in-place. Default: "False"
Shape:
* Input: (*). Input can be of any shape
* Output: (*). Output is of the same shape as input
| https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html | pytorch docs |
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html | pytorch docs |
avg_pool2d
class torch.ao.nn.quantized.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)
Applies 2D average-pooling operation in kH \times kW regions by
step size sH \times sW steps. The number of output features is
equal to the number of input planes.
Note:
The input quantization parameters propagate to the output.
See "AvgPool2d" for details and output shape.
Parameters:
* input -- quantized input tensor (\text{minibatch} ,
\text{in_channels} , iH , iW)
* **kernel_size** -- size of the pooling region. Can be a single
number or a tuple *(kH, kW)*
* **stride** -- stride of the pooling operation. Can be a single
number or a tuple *(sH, sW)*. Default: "kernel_size"
* **padding** -- implicit zero paddings on both sides of the
input. Can be a single number or a tuple *(padH, padW)*.
Default: 0
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool2d.html | pytorch docs |
* **ceil_mode** -- when True, will use *ceil* instead of *floor*
in the formula to compute the output shape. Default: "False"
* **count_include_pad** -- when True, will include the zero-
padding in the averaging calculation. Default: "True"
* **divisor_override** -- if specified, it will be used as
divisor, otherwise size of the pooling region will be used.
Default: None
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool2d.html | pytorch docs |
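A hedged sketch with illustrative quantization parameters; the input must
be a quantized tensor, and its quantization parameters propagate to the
output as noted above:
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.randn(1, 3, 8, 8)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=128, dtype=torch.quint8)
    out = qF.avg_pool2d(qx, kernel_size=2, stride=2)   # shape (1, 3, 4, 4)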
torch.Tensor.ne
Tensor.ne(other) -> Tensor
See "torch.ne()". | https://pytorch.org/docs/stable/generated/torch.Tensor.ne.html | pytorch docs |
torch._foreach_asin_
torch._foreach_asin_(self: List[Tensor]) -> None
Apply "torch.asin()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_asin_.html | pytorch docs |
Linear
class torch.ao.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)
A dynamic quantized linear module with floating point tensor as
inputs and outputs. We adopt the same interface as
torch.nn.Linear, please see
https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for
documentation.
Similar to "torch.nn.Linear", attributes will be randomly
initialized at module creation time and will be overwritten later
Variables:
* weight (Tensor) -- the non-learnable quantized weights
of the module which are of shape (\text{out_features},
\text{in_features}).
* **bias** (*Tensor*) -- the non-learnable floating point bias
of the module of shape (\text{out\_features}). If "bias" is
"True", the values are initialized to zero.
Examples:
>>> m = nn.quantized.dynamic.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.Linear.html | pytorch docs |
torch.Size([128, 30])
classmethod from_float(mod)
Create a dynamic quantized module from a float module or
qparams_dict
Parameters:
**mod** (*Module*) -- a float module, either produced by
torch.ao.quantization utilities or provided by the user
classmethod from_reference(ref_qlinear)
Create a (fbgemm/qnnpack) dynamic quantized module from a
reference quantized module
Parameters:
**ref_qlinear** (*Module*) -- a reference quantized module,
either produced by torch.ao.quantization functions or
provided by the user
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.Linear.html | pytorch docs |