text | source | category
---|---|---|
torch.Tensor.topk
Tensor.topk(k, dim=None, largest=True, sorted=True)
See "torch.topk()" | https://pytorch.org/docs/stable/generated/torch.Tensor.topk.html | pytorch docs |
torch.nn.utils.prune.remove
torch.nn.utils.prune.remove(module, name)
Removes the pruning reparameterization from a module and the
pruning method from the forward hook. The pruned parameter named
"name" remains permanently pruned, and the parameter named
"name+'_orig'" is removed from the parameter list. Similarly, the
buffer named "name+'_mask'" is removed from the buffers.
Note:
Pruning itself is NOT undone or reversed!
Parameters:
* module (nn.Module) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
-[ Examples ]-
>>> m = random_unstructured(nn.Linear(5, 7), name='weight', amount=0.2)
>>> m = remove(m, name='weight')
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.remove.html | pytorch docs |
torch.Tensor.q_per_channel_scales
Tensor.q_per_channel_scales() -> Tensor
Given a Tensor quantized by linear (affine) per-channel
quantization, returns a Tensor of scales of the underlying
quantizer. It has the number of elements that matches the
corresponding dimensions (from q_per_channel_axis) of the tensor. | https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_scales.html | pytorch docs |
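A short illustrative sketch (not from the linked entry; the scales, zero points, and shapes are assumptions):
>>> x = torch.randn(2, 3)
>>> scales = torch.tensor([0.1, 0.2])
>>> zero_points = torch.tensor([0, 0])
>>> q = torch.quantize_per_channel(x, scales, zero_points, axis=0, dtype=torch.quint8)
>>> q.q_per_channel_scales()  # one scale per element along the quantization axis
tensor([0.1000, 0.2000], dtype=torch.float64)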
torch.take_along_dim
torch.take_along_dim(input, indices, dim, *, out=None) -> Tensor
Selects values from "input" at the 1-dimensional indices from
"indices" along the given "dim".
Functions that return indices along a dimension, like
"torch.argmax()" and "torch.argsort()", are designed to work with
this function. See the examples below.
Note:
This function is similar to NumPy's *take_along_axis*. See also
"torch.gather()".
Parameters:
* input (Tensor) -- the input tensor.
* **indices** (*tensor*) -- the indices into "input". Must have
long dtype.
* **dim** (*int*) -- dimension to select along.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> t = torch.tensor([[10, 30, 20], [60, 40, 50]])
>>> max_idx = torch.argmax(t)
>>> torch.take_along_dim(t, max_idx)
tensor([60])
>>> sorted_idx = torch.argsort(t, dim=1)
>>> torch.take_along_dim(t, sorted_idx, dim=1)
tensor([[10, 20, 30],
[40, 50, 60]])
| https://pytorch.org/docs/stable/generated/torch.take_along_dim.html | pytorch docs |
torch.get_rng_state
torch.get_rng_state()
Returns the random number generator state as a torch.ByteTensor.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.get_rng_state.html | pytorch docs |
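An illustrative sketch of saving and restoring the CPU RNG state (not from the linked entry):
>>> state = torch.get_rng_state()   # snapshot the current generator state
>>> a = torch.rand(3)
>>> torch.set_rng_state(state)      # roll the generator back
>>> b = torch.rand(3)
>>> torch.equal(a, b)               # the same numbers are produced again
True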
torch.nn.modules.module.register_module_forward_hook
torch.nn.modules.module.register_module_forward_hook(hook)
Registers a global forward hook for all the modules
Warning:
This adds global state to the *nn.module* module and it is only
intended for debugging/profiling purposes.
The hook will be called every time after "forward()" has computed
an output. It should have the following signature:
hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the
module. Keyword arguments won't be passed to the hooks and only to
the "forward". The hook can modify the output. It can modify the
input inplace but it will not have effect on forward since this is
called after "forward()" is called.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle" | https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html | pytorch docs |
"torch.utils.hooks.RemovableHandle"
This hook will be executed before specific module hooks registered
with "register_forward_hook". | https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html | pytorch docs |
torch.lu
torch.lu(*args, **kwargs)
Computes the LU factorization of a matrix or batches of matrices
"A". Returns a tuple containing the LU factorization and pivots of
"A". Pivoting is done if "pivot" is set to "True".
Warning:
"torch.lu()" is deprecated in favor of "torch.linalg.lu_factor()"
and "torch.linalg.lu_factor_ex()". "torch.lu()" will be removed
in a future PyTorch release. "LU, pivots, info = torch.lu(A,
compute_pivots)" should be replaced with
LU, pivots = torch.linalg.lu_factor(A, compute_pivots)
"LU, pivots, info = torch.lu(A, compute_pivots, get_infos=True)"
should be replaced with
LU, pivots, info = torch.linalg.lu_factor_ex(A, compute_pivots)
Note:
* The returned permutation matrix for every matrix in the batch
is represented by a 1-indexed vector of size "min(A.shape[-2],
A.shape[-1])". "pivots[i] == j" represents that in the "i"-th
step of the algorithm, the "i"-th row was permuted with the
"j-1"-th row.
* LU factorization with "pivot" = "False" is not available for
CPU, and attempting to do so will throw an error. However, LU
factorization with "pivot" = "False" is available for CUDA.
* This function does not check if the factorization was
successful or not if "get_infos" is "True" since the status of
the factorization is present in the third element of the return
tuple.
* In the case of batches of square matrices with size less or
equal to 32 on a CUDA device, the LU factorization is repeated
for singular matrices due to the bug in the MAGMA library (see
magma issue 13).
* "L", "U", and "P" can be derived using "torch.lu_unpack()".
Warning:
The gradients of this function will only be finite when "A" is
full rank. This is because the LU decomposition is just
differentiable at full rank matrices. Furthermore, if "A" is
close to not being full rank, the gradient will be numerically
unstable as it depends on the computation of L^{-1} and U^{-1}.
Parameters:
* A (Tensor) -- the tensor to factor of size (*, m, n)
* **pivot** (*bool**, **optional*) -- controls whether pivoting
is done. Default: "True"
* **get_infos** (*bool**, **optional*) -- if set to "True",
returns an info IntTensor. Default: "False"
* **out** (*tuple**, **optional*) -- optional output tuple. If
"get_infos" is "True", then the elements in the tuple are
Tensor, IntTensor, and IntTensor. If "get_infos" is "False",
then the elements in the tuple are Tensor, IntTensor. Default:
"None"
Returns:
A tuple of tensors containing
* **factorization** (*Tensor*): the factorization of size (*,
m, n)
* **pivots** (*IntTensor*): the pivots of size (*,
\text{min}(m, n)). "pivots" stores all the intermediate
transpositions of rows. The final permutation "perm" could
be reconstructed by applying "swap(perm[i], perm[pivots[i]
- 1])" for "i = 0, ..., pivots.size(-1) - 1", where "perm"
is initially the identity permutation of m elements
(essentially this is what "torch.lu_unpack()" is doing).
* **infos** (*IntTensor*, *optional*): if "get_infos" is
"True", this is a tensor of size (*) where non-zero values
indicate whether factorization for the matrix or each
minibatch has succeeded or failed
Return type:
(Tensor, IntTensor, IntTensor (optional))
Example:
>>> A = torch.randn(2, 3, 3)
>>> A_LU, pivots = torch.lu(A)
>>> A_LU
tensor([[[ 1.3506, 2.5558, -0.0816],
[ 0.1684, 1.1551, 0.1940],
[ 0.1193, 0.6189, -0.5497]],
[[ 0.4526, 1.2526, -0.3285],
[-0.7988, 0.7175, -0.9701],
[ 0.2634, -0.9255, -0.3459]]])
>>> pivots
tensor([[ 3, 3, 3],
[ 3, 3, 3]], dtype=torch.int32)
>>> A_LU, pivots, info = torch.lu(A, get_infos=True)
>>> if info.nonzero().size(0) == 0:
... print('LU factorization succeeded for all samples!')
LU factorization succeeded for all samples! | https://pytorch.org/docs/stable/generated/torch.lu.html | pytorch docs |
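As the notes above mention, "L", "U", and "P" can be recovered with "torch.lu_unpack()"; a short sketch using the recommended "torch.linalg.lu_factor()" replacement (input values assumed):
>>> A = torch.randn(2, 3, 3)
>>> LU, pivots = torch.linalg.lu_factor(A)   # replacement for torch.lu
>>> P, L, U = torch.lu_unpack(LU, pivots)
>>> torch.allclose(P @ L @ U, A)
True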
torch.Tensor.addbmm_
Tensor.addbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor
In-place version of "addbmm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.addbmm_.html | pytorch docs |
torch.absolute
torch.absolute(input, *, out=None) -> Tensor
Alias for "torch.abs()" | https://pytorch.org/docs/stable/generated/torch.absolute.html | pytorch docs |
torch.Tensor.requires_grad
Tensor.requires_grad
Is "True" if gradients need to be computed for this Tensor, "False"
otherwise.
Note:
The fact that gradients need to be computed for a Tensor does not
mean that the "grad" attribute will be populated, see "is_leaf"
for more details.
| https://pytorch.org/docs/stable/generated/torch.Tensor.requires_grad.html | pytorch docs |
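An illustrative sketch of the distinction with "is_leaf" (not from the linked entry):
>>> a = torch.randn(3, requires_grad=True)   # leaf tensor created by the user
>>> b = a * 2                                # non-leaf result of an operation
>>> b.requires_grad, b.is_leaf
(True, False)
>>> b.sum().backward()
>>> a.grad                                   # only the leaf gets .grad populated
tensor([2., 2., 2.])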
torch.trunc
torch.trunc(input, *, out=None) -> Tensor
Returns a new tensor with the truncated integer values of the
elements of "input".
For integer inputs, follows the array-api convention of returning a
copy of the input tensor.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 3.4742, 0.5466, -0.8008, -0.9079])
>>> torch.trunc(a)
tensor([ 3., 0., -0., -0.])
| https://pytorch.org/docs/stable/generated/torch.trunc.html | pytorch docs |
torch.linalg.cholesky
torch.linalg.cholesky(A, *, upper=False, out=None) -> Tensor
Computes the Cholesky decomposition of a complex Hermitian or real
symmetric positive-definite matrix.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the Cholesky
decomposition of a complex Hermitian or real symmetric positive-
definite matrix A \in \mathbb{K}^{n \times n} is defined as
A = LL^{\text{H}}\mathrlap{\qquad L \in \mathbb{K}^{n \times n}}
where L is a lower triangular matrix with real positive diagonal
(even in the complex case) and L^{\text{H}} is the conjugate
transpose when L is complex, and the transpose when L is real-
valued.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
Note:
When inputs are on a CUDA device, this function synchronizes that
device with the CPU.
See also:
"torch.linalg.cholesky_ex()" for a version of this operation that
skips the (slow) error checking by default and instead returns
the debug information. This makes it a faster way to check if a
matrix is positive-definite.
"torch.linalg.eigh()" for a different decomposition of a
Hermitian matrix. The eigenvalue decomposition gives more
information about the matrix but it slower to compute than the
Cholesky decomposition.
Parameters:
A (Tensor) -- tensor of shape (*, n, n) where * is
zero or more batch dimensions consisting of symmetric or
Hermitian positive-definite matrices.
Keyword Arguments:
* upper (bool, optional) -- whether to return an upper
triangular matrix. The tensor returned with upper=True is the
conjugate transpose of the tensor returned with upper=False.
* **out** (*Tensor**, **optional*) -- output tensor. Ignored if
None. Default: None.
Raises:
RuntimeError -- if the "A" matrix or any matrix in a batched
"A" is not Hermitian (resp. symmetric) positive-definite. If
"A" is a batch of matrices, the error message will include
the batch index of the first matrix that fails to meet this
condition.
Examples:
>>> A = torch.randn(2, 2, dtype=torch.complex128)
>>> A = A @ A.T.conj() + torch.eye(2) # creates a Hermitian positive-definite matrix
>>> A
tensor([[2.5266+0.0000j, 1.9586-2.0626j],
[1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)
>>> L = torch.linalg.cholesky(A)
>>> L
tensor([[1.5895+0.0000j, 0.0000+0.0000j],
[1.2322+1.2976j, 2.4928+0.0000j]], dtype=torch.complex128)
>>> torch.dist(L @ L.T.conj(), A)
tensor(4.4692e-16, dtype=torch.float64)
>>> A = torch.randn(3, 2, 2, dtype=torch.float64)
>>> A = A @ A.mT + torch.eye(2)  # batch of symmetric positive-definite matrices
>>> L = torch.linalg.cholesky(A)
>>> torch.dist(L @ L.mT, A)
tensor(5.8747e-16, dtype=torch.float64)
| https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html | pytorch docs |
torch.nn.functional.conv_transpose3d
torch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor
Applies a 3D transposed convolution operator over an input image
composed of several input planes, sometimes also called
"deconvolution"
This operator supports TensorFloat32.
See "ConvTranspose3d" for details and output shape.
Note:
In some circumstances when given tensors on a CUDA device and
using CuDNN, this operator may select a nondeterministic
algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a
performance cost) by setting "torch.backends.cudnn.deterministic
= True". See Reproducibility for more information.
Parameters:
* input -- input tensor of shape (\text{minibatch} ,
\text{in_channels} , iT , iH , iW)
* **weight** -- filters of shape (\text{in\_channels} ,
\frac{\text{out\_channels}}{\text{groups}} , kT , kH , kW)
* **bias** -- optional bias of shape (\text{out\_channels}).
Default: None
* **stride** -- the stride of the convolving kernel. Can be a
single number or a tuple "(sT, sH, sW)". Default: 1
* **padding** -- "dilation * (kernel_size - 1) - padding" zero-
padding will be added to both sides of each dimension in the
input. Can be a single number or a tuple "(padT, padH, padW)".
Default: 0
* **output_padding** -- additional size added to one side of
each dimension in the output shape. Can be a single number or
a tuple "(out_padT, out_padH, out_padW)". Default: 0
* **groups** -- split input into groups, \text{in\_channels}
should be divisible by the number of groups. Default: 1
* **dilation** -- the spacing between kernel elements. Can be a
single number or a tuple (dT, dH, dW). Default: 1
Examples:
>>> inputs = torch.randn(20, 16, 50, 10, 20)
>>> weights = torch.randn(16, 33, 3, 3, 3)
>>> F.conv_transpose3d(inputs, weights)
| https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose3d.html | pytorch docs |
torch.cuda.memory_usage
torch.cuda.memory_usage(device=None)
Returns the percent of time over the past sample period during
which global (device) memory was being read or written, as given by
nvidia-smi.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
int
Warning: Each sample period may be between 1 second and 1/6 second,
depending on the product being queried. | https://pytorch.org/docs/stable/generated/torch.cuda.memory_usage.html | pytorch docs |
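A brief sketch (assuming a CUDA device and the NVML Python bindings are available; not from the linked entry):
>>> if torch.cuda.is_available():
...     util = torch.cuda.memory_usage()                          # current device
...     util0 = torch.cuda.memory_usage(torch.device('cuda:0'))   # explicit device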
ConvTranspose1d
class torch.ao.nn.quantized.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
Applies a 1D transposed convolution operator over an input image
composed of several input planes. For details on input arguments,
parameters, and implementation see "ConvTranspose1d".
Note:
Currently only the QNNPACK engine is implemented. Please, set the
*torch.backends.quantized.engine = 'qnnpack'*
For special notes, please, see "Conv1d"
Variables:
* weight (Tensor) -- packed tensor derived from the
learnable weight parameter.
* **scale** (*Tensor*) -- scalar for the output scale
* **zero_point** (*Tensor*) -- scalar for the output zero point
See "ConvTranspose2d" for other attributes.
Examples:
>>> torch.backends.quantized.engine = 'qnnpack'
>>> from torch.nn import quantized as nnq
>>> # With square kernels and equal stride
>>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nnq.ConvTranspose1d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12])
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose1d.html | pytorch docs |
torch.linalg.solve_triangular
torch.linalg.solve_triangular(A, B, *, upper, left=True, unitriangular=False, out=None) -> Tensor
Computes the solution of a triangular system of linear equations
with a unique solution.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, this function
computes the solution X \in \mathbb{K}^{n \times k} of the linear
system associated to the triangular matrix A \in \mathbb{K}^{n
\times n} without zeros on the diagonal (that is, it is invertible)
and the rectangular matrix , B \in \mathbb{K}^{n \times k}, which
is defined as
AX = B
The argument "upper" signals whether A is upper or lower
triangular.
If "left"= False, this function returns the matrix X \in
\mathbb{K}^{n \times k} that solves the system
XA = B\mathrlap{\qquad A \in \mathbb{K}^{k \times k}, B \in
\mathbb{K}^{n \times k}.}
If "upper"= True (resp. False) just the upper (resp. lower) | https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html | pytorch docs |
triangular half of "A" will be accessed. The elements below the
main diagonal will be considered to be zero and will not be
accessed.
If "unitriangular"= True, the diagonal of "A" is assumed to be
ones and will not be accessed.
The result may contain NaN s if the diagonal of "A" contains
zeros or elements that are very close to zero and "unitriangular"=
False (default) or if the input matrix has very small eigenvalues.
Supports inputs of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if the inputs are batches of
matrices then the output has the same batch dimensions.
See also:
"torch.linalg.solve()" computes the solution of a general square
system of linear equations with a unique solution.
Parameters:
* A (Tensor) -- tensor of shape (*, n, n) (or (*, k, k) if
"left"= False) where * is zero or more batch dimensions.
* **B** (*Tensor*) -- right-hand side tensor of shape *(*, n,
k)*.
Keyword Arguments:
* upper (bool) -- whether "A" is an upper or lower
triangular matrix.
* **left** (*bool**, **optional*) -- whether to solve the system
AX=B or XA = B. Default: *True*.
* **unitriangular** (*bool**, **optional*) -- if *True*, the
diagonal elements of "A" are assumed to be all equal to *1*.
Default: *False*.
* **out** (*Tensor**, **optional*) -- output tensor. *B* may be
passed as *out* and the result is computed in-place on *B*.
Ignored if *None*. Default: *None*.
Examples:
>>> A = torch.randn(3, 3).triu_()
>>> B = torch.randn(3, 4)
>>> X = torch.linalg.solve_triangular(A, B, upper=True)
>>> torch.allclose(A @ X, B)
True
>>> A = torch.randn(2, 3, 3).tril_()
>>> B = torch.randn(2, 3, 4)
>>> X = torch.linalg.solve_triangular(A, B, upper=False)
>>> torch.allclose(A @ X, B)
True
>>> A = torch.randn(2, 4, 4).tril_()
>>> B = torch.randn(2, 3, 4)
>>> X = torch.linalg.solve_triangular(A, B, upper=False, left=False)
>>> torch.allclose(X @ A, B)
True
| https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html | pytorch docs |
torch.cov
torch.cov(input, *, correction=1, fweights=None, aweights=None) -> Tensor
Estimates the covariance matrix of the variables given by the
"input" matrix, where rows are the variables and columns are the
observations.
A covariance matrix is a square matrix giving the covariance of
each pair of variables. The diagonal contains the variance of each
variable (covariance of a variable with itself). By definition, if
"input" represents a single variable (Scalar or 1D) then its
variance is returned.
The unbiased sample covariance of the variables x and y is given
by:
\text{cov}_w(x,y) = \frac{\sum^{N}_{i = 1}(x_{i} -
\bar{x})(y_{i} - \bar{y})}{N~-~1}
where \bar{x} and \bar{y} are the simple means of the x and y
respectively.
If "fweights" and/or "aweights" are provided, the unbiased weighted
covariance is calculated, which is given by:
\text{cov}_w(x,y) = \frac{\sum^{N}_{i = 1}w_i(x_{i} -
\mu_x^*)(y_{i} - \mu_y^*)}{\sum^{N}_{i = 1}w_i~-~1}
where w denotes "fweights" or "aweights" based on whichever is
provided, or w = fweights \times aweights if both are provided, and
\mu_x^* = \frac{\sum^{N}_{i = 1}w_ix_{i}}{\sum^{N}_{i = 1}w_i} is
the weighted mean of the variable.
Parameters:
input (Tensor) -- A 2D matrix containing multiple
variables and observations, or a Scalar or 1D vector
representing a single variable.
Keyword Arguments:
* correction (int, optional) -- difference between the
sample size and sample degrees of freedom. Defaults to
Bessel's correction, "correction = 1" which returns the
unbiased estimate, even if both "fweights" and "aweights" are
specified. "correction = 0" will return the simple average.
Defaults to "1".
* **fweights** (*tensor**, **optional*) -- A Scalar or 1D tensor
of observation vector frequencies representing the number of
times each observation should be repeated. Its numel must
equal the number of columns of "input". Must have integral
dtype. Ignored if "None". Defaults to "None".
* **aweights** (*tensor**, **optional*) -- A Scalar or 1D array
of observation vector weights. These relative weights are
typically large for observations considered "important" and
smaller for observations considered less "important". Its
numel must equal the number of columns of "input". Must have
floating point dtype. Ignored if "None". Defaults to "None".
Returns:
(Tensor) The covariance matrix of the variables.
See also: "torch.corrcoef()" normalized covariance matrix.
Example::
>>> x = torch.tensor([[0, 2], [1, 1], [2, 0]]).T
>>> x
tensor([[0, 1, 2],
[2, 1, 0]])
>>> torch.cov(x)
tensor([[ 1., -1.],
[-1., 1.]])
>>> torch.cov(x, correction=0)
tensor([[ 0.6667, -0.6667],
[-0.6667, 0.6667]])
>>> fw = torch.randint(1, 10, (3,))
>>> fw
tensor([1, 6, 9])
>>> aw = torch.rand(3)
>>> aw
tensor([0.4282, 0.0255, 0.4144])
>>> torch.cov(x, fweights=fw, aweights=aw)
tensor([[ 0.4169, -0.4169],
[-0.4169, 0.4169]])
| https://pytorch.org/docs/stable/generated/torch.cov.html | pytorch docs |
torch.asarray
torch.asarray(obj, *, dtype=None, device=None, copy=None, requires_grad=False) -> Tensor
Converts "obj" to a tensor.
"obj" can be one of:
a tensor
a NumPy array
a DLPack capsule
an object that implements Python's buffer protocol
a scalar
a sequence of scalars
When "obj" is a tensor, NumPy array, or DLPack capsule the returned
tensor will, by default, not require a gradient, have the same
datatype as "obj", be on the same device, and share memory with it.
These properties can be controlled with the "dtype", "device",
"copy", and "requires_grad" keyword arguments. If the returned
tensor is of a different datatype, on a different device, or a copy
is requested then it will not share its memory with "obj". If
"requires_grad" is "True" then the returned tensor will require a
gradient, and if "obj" is also a tensor with an autograd history
then the returned tensor will have the same history.
When "obj" is not a tensor, NumPy Array, or DLPack capsule but
implements Python's buffer protocol then the buffer is interpreted
as an array of bytes grouped according to the size of the datatype
passed to the "dtype" keyword argument. (If no datatype is passed
then the default floating point datatype is used, instead.) The
returned tensor will have the specified datatype (or default
floating point datatype if none is specified) and, by default, be
on the CPU device and share memory with the buffer.
When "obj" is none of the above but a scalar or sequence of scalars
then the returned tensor will, by default, infer its datatype from
the scalar values, be on the CPU device, and not share its memory.
See also:
"torch.tensor()" creates a tensor that always copies the data
from the input object. "torch.from_numpy()" creates a tensor that
always shares memory from NumPy arrays. "torch.frombuffer()"
creates a tensor that always shares memory from objects that
implement the buffer protocol. "torch.from_dlpack()" creates a
tensor that always shares memory from DLPack capsules.
Parameters:
obj (object) -- a tensor, NumPy array, DLPack Capsule,
object that implements Python's buffer protocol, scalar, or
sequence of scalars.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the datatype of the
returned tensor. Default: "None", which causes the datatype of
the returned tensor to be inferred from "obj".
* **copy** (*bool**, **optional*) -- controls whether the
returned tensor shares memory with "obj". Default: "None",
which causes the returned tensor to share memory with "obj"
whenever possible. If "True" then the returned tensor does not
share its memory. If "False" then the returned tensor shares
its memory with "obj" and an error is thrown if it cannot.
* **device** ("torch.device", optional) -- the device of the
returned tensor. Default: "None", which causes the device of
"obj" to be used.
* **requires_grad** (*bool**, **optional*) -- whether the
returned tensor requires grad. Default: "False", which causes
the returned tensor not to require a gradient. If "True", then
the returned tensor will require a gradient, and if "obj" is
also a tensor with an autograd history then the returned
tensor will have the same history.
Example:
>>> a = torch.tensor([1, 2, 3])
>>> # Shares memory with tensor 'a'
>>> b = torch.asarray(a)
>>> a.data_ptr() == b.data_ptr()
True
>>> # Forces memory copy
>>> c = torch.asarray(a, copy=True)
>>> a.data_ptr() == c.data_ptr()
False
>>> a = torch.tensor([1, 2, 3], requires_grad=True).float()
>>> b = a + 2
>>> b
tensor([1., 2., 3.], grad_fn=<AddBackward0>)
>>> # Shares memory with tensor 'b', with no grad
>>> c = torch.asarray(b)
>>> c
tensor([1., 2., 3.])
>>> # Shares memory with tensor 'b', retaining autograd history
>>> d = torch.asarray(b, requires_grad=True)
>>> d
tensor([1., 2., 3.], grad_fn=)
>>> array = numpy.array([1, 2, 3])
>>> # Shares memory with array 'array'
>>> t1 = torch.asarray(array)
>>> array.__array_interface__['data'][0] == t1.data_ptr()
True
>>> # Copies memory due to dtype mismatch
>>> t2 = torch.asarray(array, dtype=torch.float32)
>>> array.__array_interface__['data'][0] == t2.data_ptr()
False
| https://pytorch.org/docs/stable/generated/torch.asarray.html | pytorch docs |
torch.concat
torch.concat(tensors, dim=0, *, out=None) -> Tensor
Alias of "torch.cat()". | https://pytorch.org/docs/stable/generated/torch.concat.html | pytorch docs |
torch.argwhere
torch.argwhere(input) -> Tensor
Returns a tensor containing the indices of all non-zero elements of
"input". Each row in the result contains the indices of a non-zero
element in "input". The result is sorted lexicographically, with
the last index changing the fastest (C-style).
If "input" has n dimensions, then the resulting indices tensor
"out" is of size (z \times n), where z is the total number of non-
zero elements in the "input" tensor.
Note:
This function is similar to NumPy's *argwhere*. When "input" is on
CUDA, this function causes host-device synchronization.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> t = torch.tensor([1, 0, 1])
>>> torch.argwhere(t)
tensor([[0],
[2]])
>>> t = torch.tensor([[1, 0, 1], [0, 1, 1]])
>>> torch.argwhere(t)
tensor([[0, 0],
[0, 2],
[1, 1],
[1, 2]])
| https://pytorch.org/docs/stable/generated/torch.argwhere.html | pytorch docs |
torch.Tensor.fill_
Tensor.fill_(value) -> Tensor
Fills "self" tensor with the specified value. | https://pytorch.org/docs/stable/generated/torch.Tensor.fill_.html | pytorch docs |
NoopObserver
class torch.quantization.observer.NoopObserver(dtype=torch.float16, custom_op_name='')
Observer that doesn't do anything and just passes its configuration
to the quantized module's ".from_float()".
Primarily used for quantization to float16 which doesn't require
determining ranges.
Parameters:
* dtype -- Quantized data type
* **custom_op_name** -- (temporary) specify this observer for an
operator that doesn't require any observation (Can be used in
Graph Mode Passes for special case ops).
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.NoopObserver.html | pytorch docs |
torch.nn.functional.hardtanh
torch.nn.functional.hardtanh(input, min_val=-1., max_val=1., inplace=False) -> Tensor
Applies the HardTanh function element-wise. See "Hardtanh" for more
details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh.html | pytorch docs |
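A short illustrative example (input values assumed, not from the linked entry):
>>> import torch.nn.functional as F
>>> x = torch.tensor([-2.0, -0.5, 0.5, 2.0])
>>> F.hardtanh(x)                            # clamp to the default [-1, 1] range
tensor([-1.0000, -0.5000,  0.5000,  1.0000])
>>> F.hardtanh(x, min_val=0., max_val=1.)
tensor([0.0000, 0.0000, 0.5000, 1.0000])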
torch._foreach_frac_
torch._foreach_frac_(self: List[Tensor]) -> None
Apply "torch.frac()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_frac_.html | pytorch docs |
torch.nn.functional.mish
torch.nn.functional.mish(input, inplace=False)
Applies the Mish function, element-wise. Mish: A Self Regularized
Non-Monotonic Neural Activation Function.
\text{Mish}(x) = x * \text{Tanh}(\text{Softplus}(x))
Note:
See Mish: A Self Regularized Non-Monotonic Neural Activation
Function
See "Mish" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.mish.html | pytorch docs |
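An illustrative check of the formula above (input values assumed, not from the linked entry):
>>> import torch.nn.functional as F
>>> x = torch.tensor([-1.0, 0.0, 1.0])
>>> torch.allclose(F.mish(x), x * torch.tanh(F.softplus(x)))
True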
torch.Tensor.square_
Tensor.square_() -> Tensor
In-place version of "square()" | https://pytorch.org/docs/stable/generated/torch.Tensor.square_.html | pytorch docs |
torch.Tensor.sgn_
Tensor.sgn_() -> Tensor
In-place version of "sgn()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sgn_.html | pytorch docs |
torch.fft.fftshift
torch.fft.fftshift(input, dim=None) -> Tensor
Reorders n-dimensional FFT data, as provided by "fftn()", to have
negative frequency terms first.
This performs a periodic shift of n-dimensional data such that the
origin "(0, ..., 0)" is moved to the center of the tensor.
Specifically, to "input.shape[dim] // 2" in each selected
dimension.
Note:
By convention, the FFT returns positive frequency terms first,
followed by the negative frequencies in reverse order, so that
"f[-i]" for all 0 < i \leq n/2 in Python gives the negative
frequency terms. "fftshift()" rearranges all frequencies into
ascending order from negative to positive with the zero-frequency
term in the center.
Note:
For even lengths, the Nyquist frequency at "f[n/2]" can be
thought of as either negative or positive. "fftshift()" always
puts the Nyquist term at the 0-index. This is the same convention
used by "fftfreq()".
Parameters:
* input (Tensor) -- the tensor in FFT order
* **dim** (*int**, **Tuple**[**int**]**, **optional*) -- The
dimensions to rearrange. Only dimensions specified here will
be rearranged, any other dimensions will be left in their
original order. Default: All dimensions of "input".
-[ Example ]-
>>> f = torch.fft.fftfreq(4)
>>> f
tensor([ 0.0000,  0.2500, -0.5000, -0.2500])
>>> torch.fft.fftshift(f)
tensor([-0.5000, -0.2500,  0.0000,  0.2500])
Also notice that the Nyquist frequency term at "f[2]" was moved to
the beginning of the tensor.
This also works for multi-dimensional transforms:
>>> x = torch.fft.fftfreq(5, d=1/5) + 0.1 * torch.fft.fftfreq(5, d=1/5).unsqueeze(1)
>>> x
tensor([[ 0.0000,  1.0000,  2.0000, -2.0000, -1.0000],
        [ 0.1000,  1.1000,  2.1000, -1.9000, -0.9000],
        [ 0.2000,  1.2000,  2.2000, -1.8000, -0.8000],
        [-0.2000,  0.8000,  1.8000, -2.2000, -1.2000],
        [-0.1000,  0.9000,  1.9000, -2.1000, -1.1000]])
>>> torch.fft.fftshift(x)
tensor([[-2.2000, -1.2000, -0.2000, 0.8000, 1.8000],
[-2.1000, -1.1000, -0.1000, 0.9000, 1.9000],
[-2.0000, -1.0000, 0.0000, 1.0000, 2.0000],
[-1.9000, -0.9000, 0.1000, 1.1000, 2.1000],
[-1.8000, -0.8000, 0.2000, 1.2000, 2.2000]])
"fftshift()" can also be useful for spatial data. If our data is
defined on a centered grid ("[-(N//2), (N-1)//2]") then we can use
the standard FFT defined on an uncentered grid ("[0, N)") by first
applying an "ifftshift()".
>>> x_centered = torch.arange(-5, 5)
>>> x_uncentered = torch.fft.ifftshift(x_centered)
>>> fft_uncentered = torch.fft.fft(x_uncentered)
Similarly, we can convert the frequency domain components to
centered convention by applying "fftshift()".
>>> fft_centered = torch.fft.fftshift(fft_uncentered)
The inverse transform, from centered Fourier space back to centered
spatial data, can be performed by applying the inverse shifts in
reverse order:
>>> x_centered_2 = torch.fft.fftshift(torch.fft.ifft(torch.fft.ifftshift(fft_centered)))
>>> torch.testing.assert_close(x_centered.to(torch.complex64), x_centered_2, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html | pytorch docs |
float_qparams_weight_only_qconfig
torch.quantization.qconfig.float_qparams_weight_only_qconfig
alias of QConfig(activation=,
weight=functools.partial(,
dtype=torch.quint8, qscheme=torch.per_channel_affine_float_qparams,
ch_axis=0){}) | https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float_qparams_weight_only_qconfig.html | pytorch docs |
torch.nn.functional.gumbel_softmax
torch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1)
Samples from the Gumbel-Softmax distribution (Link 1 Link 2) and
optionally discretizes.
Parameters:
* logits (Tensor) -- [..., num_features] unnormalized
log probabilities
* **tau** (*float*) -- non-negative scalar temperature
* **hard** (*bool*) -- if "True", the returned samples will be
discretized as one-hot vectors, but will be differentiated as
if it is the soft sample in autograd
* **dim** (*int*) -- A dimension along which softmax will be
computed. Default: -1.
Returns:
Sampled tensor of same shape as logits from the Gumbel-Softmax
distribution. If "hard=True", the returned samples will be one-
hot, otherwise they will be probability distributions that sum
to 1 across dim.
Return type:
Tensor
Note:
This function is here for legacy reasons, may be removed from
nn.Functional in the future.
Note:
The main trick for *hard* is to do *y_hard - y_soft.detach() +
y_soft*. It achieves two things: it makes the output value exactly
one-hot (since we add then subtract the y_soft value), and it makes
the gradient equal to the y_soft gradient (since we strip all other
gradients).
Examples::
>>> logits = torch.randn(20, 32)
>>> # Sample soft categorical using reparametrization trick:
>>> F.gumbel_softmax(logits, tau=1, hard=False)
>>> # Sample hard categorical using "Straight-through" trick:
>>> F.gumbel_softmax(logits, tau=1, hard=True) | https://pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html | pytorch docs |
torch._foreach_log10
torch._foreach_log10(self: List[Tensor]) -> List[Tensor]
Apply "torch.log10()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_log10.html | pytorch docs |
torch.nn.utils.parametrize.register_parametrization
torch.nn.utils.parametrize.register_parametrization(module, tensor_name, parametrization, *, unsafe=False)
Adds a parametrization to a tensor in a module.
Assume that "tensor_name="weight"" for simplicity. When accessing
"module.weight", the module will return the parametrized version
"parametrization(module.weight)". If the original tensor requires a
gradient, the backward pass will differentiate through
"parametrization", and the optimizer will update the tensor
accordingly.
The first time that a module registers a parametrization, this
function will add an attribute "parametrizations" to the module of
type "ParametrizationList".
The list of parametrizations on the tensor "weight" will be
accessible under "module.parametrizations.weight".
The original tensor will be accessible under
"module.parametrizations.weight.original". | https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html | pytorch docs |
"module.parametrizations.weight.original".
Parametrizations may be concatenated by registering several
parametrizations on the same attribute.
The training mode of a registered parametrization is updated on
registration to match the training mode of the host module
Parametrized parameters and buffers have an inbuilt caching system
that can be activated using the context manager "cached()".
A "parametrization" may optionally implement a method with
signature
def right_inverse(self, X: Tensor) -> Union[Tensor, Sequence[Tensor]]
This method is called on the unparametrized tensor when the first
parametrization is registered to compute the initial value of the
original tensor. If this method is not implemented, the original
tensor will be just the unparametrized tensor.
If all the parametrizations registered on a tensor implement
right_inverse it is possible to initialize a parametrized tensor
by assigning to it, as shown in the example below. | https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html | pytorch docs |
It is possible for the first parametrization to depend on several
inputs. This may be implemented returning a tuple of tensors from
"right_inverse" (see the example implementation of a "RankOne"
parametrization below).
In this case, the unconstrained tensors are also located under
"module.parametrizations.weight" with names "original0",
"original1",...
Note:
If unsafe=False (default) both the forward and right_inverse
methods will be called once to perform a number of consistency
checks. If unsafe=True, then right_inverse will be called if the
tensor is not parametrized, and nothing will be called otherwise.
Note:
In most situations, "right_inverse" will be a function such that
"forward(right_inverse(X)) == X" (see right inverse). Sometimes,
when the parametrization is not surjective, it may be reasonable
to relax this.
Warning:
If a parametrization depends on several inputs,
"register_parametrization()" will register a number of new
parameters. If such parametrization is registered after the
optimizer is created, these new parameters will need to be added
manually to the optimizer. See
"torch.Optimizer.add_param_group()".
Parameters:
* module (nn.Module) -- module on which to register the
parametrization
* **tensor_name** (*str*) -- name of the parameter or buffer on
which to register the parametrization
* **parametrization** (*nn.Module*) -- the parametrization to
register
Keyword Arguments:
unsafe (bool) -- a boolean flag that denotes whether the
parametrization may change the dtype and shape of the tensor.
Default: False Warning: the parametrization is not checked for
consistency upon registration. Enable this flag at your own
risk.
Raises:
ValueError -- if the module does not have a parameter or a
buffer named "tensor_name" | https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html | pytorch docs |
buffer named "tensor_name"
Return type:
Module
-[ Examples ]-
>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.utils.parametrize as P
>>>
>>> class Symmetric(nn.Module):
...     def forward(self, X):
...         return X.triu() + X.triu(1).T  # Return a symmetric matrix
...
...     def right_inverse(self, A):
...         return A.triu()
...
>>> m = nn.Linear(5, 5)
>>> P.register_parametrization(m, "weight", Symmetric())
>>> print(torch.allclose(m.weight, m.weight.T))  # m.weight is now symmetric
True
>>> A = torch.rand(5, 5)
>>> A = A + A.T  # A is now symmetric
>>> m.weight = A  # Initialize the weight to be the symmetric matrix A
>>> print(torch.allclose(m.weight, A))
True
>>> class RankOne(nn.Module):
...     def forward(self, x, y):
...         # Form a rank 1 matrix multiplying two vectors
...         return x.unsqueeze(-1) @ y.unsqueeze(-2)
...
...     def right_inverse(self, Z):
...         # Project Z onto the rank 1 matrices
...         U, S, Vh = torch.linalg.svd(Z, full_matrices=False)
...         # Return rescaled singular vectors
...         s0_sqrt = S[0].sqrt().unsqueeze(-1)
...         return U[..., :, 0] * s0_sqrt, Vh[..., 0, :] * s0_sqrt
...
>>> linear_rank_one = P.register_parametrization(nn.Linear(4, 4), "weight", RankOne())
>>> print(torch.linalg.matrix_rank(linear_rank_one.weight).item())
1
| https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html | pytorch docs |
default_qat_qconfig_v2
torch.quantization.qconfig.default_qat_qconfig_v2
alias of QConfig(activation=functools.partial(,
observer=,
quant_min=0, quant_max=255, dtype=torch.quint8){},
weight=functools.partial(, observer=,
quant_min=-128, quant_max=127, dtype=torch.qint8,
qscheme=torch.per_tensor_symmetric){}) | https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qat_qconfig_v2.html | pytorch docs |
torch.Tensor.trunc
Tensor.trunc() -> Tensor
See "torch.trunc()" | https://pytorch.org/docs/stable/generated/torch.Tensor.trunc.html | pytorch docs |
torch.fft.rfft2
torch.fft.rfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor
Computes the 2-dimensional discrete Fourier transform of real
"input". Equivalent to "rfftn()" but FFTs only the last two
dimensions by default.
The FFT of a real signal is Hermitian-symmetric, "X[i, j] =
conj(X[-i, -j])", so the full "fft2()" output contains redundant
information. "rfft2()" instead omits the negative frequencies in
the last dimension.
Note:
Supports torch.half on CUDA with GPU Architecture SM53 or
greater. However it only supports powers of 2 signal length in
every transformed dimensions.
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the real FFT. If a length "-1" is specified, no
padding is done in that dimension. Default: "s =
[input.size(d) for d in dim]"
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. Default: last two dimensions.
* **norm** (*str**, **optional*) --
Normalization mode. For the forward transform ("rfft2()"),
these correspond to:
* ""forward"" - normalize by "1/n"
* ""backward"" - no normalization
* ""ortho"" - normalize by "1/sqrt(n)" (making the real FFT
orthonormal)
Where "n = prod(s)" is the logical FFT size. Calling the
backward transform ("irfft2()") with the same normalization
mode will apply an overall normalization of "1/n" between the
two transforms. This is required to make "irfft2()" the exact
inverse.
Default is ""backward"" (no normalization).
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
>>> t = torch.rand(10, 10)
>>> rfft2 = torch.fft.rfft2(t)
>>> rfft2.size()
torch.Size([10, 6])
Compared against the full output from "fft2()", we have all
elements up to the Nyquist frequency.
>>> fft2 = torch.fft.fft2(t)
>>> torch.testing.assert_close(fft2[..., :6], rfft2, check_stride=False)
The discrete Fourier transform is separable, so "rfft2()" here is
equivalent to a combination of "fft()" and "rfft()":
>>> two_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)
>>> torch.testing.assert_close(rfft2, two_ffts, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html | pytorch docs |
GRU
class torch.ao.nn.quantized.dynamic.GRU(*args, **kwargs)
Applies a multi-layer gated recurrent unit (GRU) RNN to an input
sequence.
For each element in the input sequence, each layer computes the
following function:
\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr}
h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} +
W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t +
b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 -
z_t) * n_t + z_t * h_{(t-1)} \end{array}
where h_t is the hidden state at time t, x_t is the input at time
t, h_{(t-1)} is the hidden state of the layer at time t-1 or
the initial hidden state at time 0, and r_t, z_t, n_t are the
reset, update, and new gates, respectively. \sigma is the sigmoid
function, and * is the Hadamard product.
In a multilayer GRU, the input x^{(l)}_t of the l -th layer (l >=
2) is the hidden state h^{(l-1)}_t of the previous layer multiplied
by dropout \delta^{(l-1)}_t where each \delta^{(l-1)}_t is a
Bernoulli random variable which is 0 with probability "dropout".
Parameters:
* input_size -- The number of expected features in the input
x
* **hidden_size** -- The number of features in the hidden state
*h*
* **num_layers** -- Number of recurrent layers. E.g., setting
"num_layers=2" would mean stacking two GRUs together to form a
*stacked GRU*, with the second GRU taking in outputs of the
first GRU and computing the final results. Default: 1
* **bias** -- If "False", then the layer does not use bias
weights *b_ih* and *b_hh*. Default: "True"
* **batch_first** -- If "True", then the input and output
tensors are provided as (batch, seq, feature). Default:
"False"
* **dropout** -- If non-zero, introduces a *Dropout* layer on
the outputs of each GRU layer except the last layer, with
dropout probability equal to "dropout". Default: 0
* **bidirectional** -- If "True", becomes a bidirectional GRU.
Default: "False"
Inputs: input, h_0
* input of shape (seq_len, batch, input_size): tensor
containing the features of the input sequence. The input can
also be a packed variable length sequence. See
"torch.nn.utils.rnn.pack_padded_sequence()" for details.
* **h_0** of shape *(num_layers * num_directions, batch,
hidden_size)*: tensor containing the initial hidden state for
each element in the batch. Defaults to zero if not provided.
If the RNN is bidirectional, num_directions should be 2, else
it should be 1.
Outputs: output, h_n
* output of shape (seq_len, batch, num_directions *
hidden_size): tensor containing the output features h_t from
the last layer of the GRU, for each t. If a
"torch.nn.utils.rnn.PackedSequence" has been given as the | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html | pytorch docs |
input, the output will also be a packed sequence. For the
unpacked case, the directions can be separated using
"output.view(seq_len, batch, num_directions, hidden_size)",
with forward and backward being direction 0 and 1
respectively.
Similarly, the directions can be separated in the packed case.
* **h_n** of shape *(num_layers * num_directions, batch,
hidden_size)*: tensor containing the hidden state for *t =
seq_len*
Like *output*, the layers can be separated using
"h_n.view(num_layers, num_directions, batch, hidden_size)".
Shape:
* Input1: (L, N, H_{in}) tensor containing input features where
H_{in}=\text{input_size} and L represents a sequence
length.
* Input2: (S, N, H_{out}) tensor containing the initial hidden
state for each element in the batch.
H_{out}=\text{hidden\_size} Defaults to zero if not provided.
where S=\text{num_layers} * \text{num_directions} If the RNN
is bidirectional, num_directions should be 2, else it should
be 1.
* Output1: (L, N, H_{all}) where H_{all}=\text{num\_directions}
* \text{hidden\_size}
* Output2: (S, N, H_{out}) tensor containing the next hidden
state for each element in the batch
Variables:
* weight_ih_l[k] -- the learnable input-hidden weights of
the \text{k}^{th} layer (W_ir|W_iz|W_in), of shape
(3*hidden_size, input_size) for k = 0. Otherwise, the
shape is (3*hidden_size, num_directions * hidden_size)
* **weight_hh_l[k]** -- the learnable hidden-hidden weights of
the \text{k}^{th} layer (W_hr|W_hz|W_hn), of shape
*(3*hidden_size, hidden_size)*
* **bias_ih_l[k]** -- the learnable input-hidden bias of the
\text{k}^{th} layer (b_ir|b_iz|b_in), of shape
*(3*hidden_size)*
* **bias_hh_l[k]** -- the learnable hidden-hidden bias of the
\text{k}^{th} layer (b_hr|b_hz|b_hn), of shape
*(3*hidden_size)*
Note:
All the weights and biases are initialized from
\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{1}{\text{hidden\_size}}
Note:
If the following conditions are satisfied: 1) cudnn is enabled,
2) input data is on the GPU 3) input data has dtype
"torch.float16" 4) V100 GPU is used, 5) input data is not in
"PackedSequence" format persistent algorithm can be selected to
improve performance.
Examples:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html | pytorch docs |
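The example above builds a float "nn.GRU"; one common way to obtain this dynamically quantized module is via "torch.ao.quantization.quantize_dynamic" (a sketch, not from the linked entry):
>>> import torch
>>> import torch.nn as nn
>>> float_gru = nn.GRU(10, 20, 2)
>>> q_gru = torch.ao.quantization.quantize_dynamic(float_gru, {nn.GRU}, dtype=torch.qint8)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = q_gru(input, h0)   # same call signature as the float module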
update_bn_stats
class torch.ao.nn.intrinsic.qat.update_bn_stats(mod) | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.update_bn_stats.html | pytorch docs |
torch.Tensor.crow_indices
Tensor.crow_indices() -> IntTensor
Returns the tensor containing the compressed row indices of the
"self" tensor when "self" is a sparse CSR tensor of layout
"sparse_csr". The "crow_indices" tensor is strictly of shape
("self".size(0) + 1) and of type "int32" or "int64". When using MKL
routines such as sparse matrix multiplication, it is necessary to
use "int32" indexing in order to avoid downcasting and potentially
losing information.
Example::
>>> csr = torch.eye(5,5).to_sparse_csr()
>>> csr.crow_indices()
tensor([0, 1, 2, 3, 4, 5], dtype=torch.int32) | https://pytorch.org/docs/stable/generated/torch.Tensor.crow_indices.html | pytorch docs |
torch.Tensor.add
Tensor.add(other, *, alpha=1) -> Tensor
Add a scalar or tensor to "self" tensor. If both "alpha" and
"other" are specified, each element of "other" is scaled by "alpha"
before being used.
When "other" is a tensor, the shape of "other" must be
broadcastable with the shape of the underlying tensor.
See "torch.add()" | https://pytorch.org/docs/stable/generated/torch.Tensor.add.html | pytorch docs |
torch.cuda.get_device_name
torch.cuda.get_device_name(device=None)
Gets the name of a device.
Parameters:
device (torch.device or int, optional) -- device
for which to return the name. This function is a no-op if this
argument is a negative integer. It uses the current device,
given by "current_device()", if "device" is "None" (default).
Returns:
the name of the device
Return type:
str | https://pytorch.org/docs/stable/generated/torch.cuda.get_device_name.html | pytorch docs |
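A short sketch (assuming a CUDA build is available; the device index is illustrative):
>>> if torch.cuda.is_available():
...     name = torch.cuda.get_device_name()    # name of the current device
...     name0 = torch.cuda.get_device_name(0)  # name of device index 0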
torch.Tensor.reciprocal
Tensor.reciprocal() -> Tensor
See "torch.reciprocal()" | https://pytorch.org/docs/stable/generated/torch.Tensor.reciprocal.html | pytorch docs |
torch.autograd.functional.vjp
torch.autograd.functional.vjp(func, inputs, v=None, create_graph=False, strict=False)
Function that computes the dot product between a vector "v" and the
Jacobian of the given function at the point given by the inputs.
Parameters:
* func (function) -- a Python function that takes Tensor
inputs and returns a tuple of Tensors or a Tensor.
* **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the
function "func".
* **v** (*tuple of Tensors** or **Tensor*) -- The vector for
which the vector Jacobian product is computed. Must be the
same size as the output of "func". This argument is optional
when the output of "func" contains a single element and (if it
is not provided) will be set as a Tensor containing a single
"1".
* **create_graph** (*bool**, **optional*) -- If "True", both the
output and result will be computed in a differentiable way.
Note that when "strict" is "False", the result can not require
gradients or be disconnected from the inputs. Defaults to
"False".
* **strict** (*bool**, **optional*) -- If "True", an error will
be raised when we detect that there exists an input such that
all the outputs are independent of it. If "False", we return a
Tensor of zeros as the vjp for said inputs, which is the
expected mathematical value. Defaults to "False".
Returns:
tuple with:
func_output (tuple of Tensors or Tensor): output of
"func(inputs)"
vjp (tuple of Tensors or Tensor): result of the dot product
with the same shape as the inputs.
Return type:
output (tuple)
-[ Example ]-
>>> def exp_reducer(x):
...     return x.exp().sum(dim=1)
>>> inputs = torch.rand(4, 4)
>>> v = torch.ones(4)
>>> vjp(exp_reducer, inputs, v)
(tensor([5.7817, 7.2458, 5.7830, 6.7782]),
tensor([[1.4458, 1.3962, 1.3042, 1.6354],
[2.1288, 1.0652, 1.5483, 2.5035],
[2.2046, 1.1292, 1.1432, 1.3059],
[1.3225, 1.6652, 1.7753, 2.0152]]))
>>> vjp(exp_reducer, inputs, v, create_graph=True)
(tensor([5.7817, 7.2458, 5.7830, 6.7782], grad_fn=),
tensor([[1.4458, 1.3962, 1.3042, 1.6354],
[2.1288, 1.0652, 1.5483, 2.5035],
[2.2046, 1.1292, 1.1432, 1.3059],
[1.3225, 1.6652, 1.7753, 2.0152]], grad_fn=))
>>> def adder(x, y):
...     return 2 * x + 3 * y
>>> inputs = (torch.rand(2), torch.rand(2))
>>> v = torch.ones(2)
>>> vjp(adder, inputs, v)
(tensor([2.4225, 2.3340]),
(tensor([2., 2.]), tensor([3., 3.])))
| https://pytorch.org/docs/stable/generated/torch.autograd.functional.vjp.html | pytorch docs |
torch.nan_to_num
torch.nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None) -> Tensor
Replaces "NaN", positive infinity, and negative infinity values in
"input" with the values specified by "nan", "posinf", and "neginf",
respectively. By default, "NaN"s are replaced with zero, positive
infinity is replaced with the greatest finite value representable
by "input"'s dtype, and negative infinity is replaced with the
least finite value representable by "input"'s dtype.
Parameters:
* input (Tensor) -- the input tensor.
* **nan** (*Number**, **optional*) -- the value to replace
"NaN"s with. Default is zero.
* **posinf** (*Number**, **optional*) -- if a Number, the value
to replace positive infinity values with. If None, positive
infinity values are replaced with the greatest finite value
representable by "input"'s dtype. Default is None.
* **neginf** (*Number**, **optional*) -- if a Number, the value
to replace negative infinity values with. If None, negative
infinity values are replaced with the lowest finite value
representable by "input"'s dtype. Default is None.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.tensor([float('nan'), float('inf'), -float('inf'), 3.14])
>>> torch.nan_to_num(x)
tensor([ 0.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])
>>> torch.nan_to_num(x, nan=2.0)
tensor([ 2.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])
>>> torch.nan_to_num(x, nan=2.0, posinf=1.0)
tensor([ 2.0000e+00, 1.0000e+00, -3.4028e+38, 3.1400e+00])
| https://pytorch.org/docs/stable/generated/torch.nan_to_num.html | pytorch docs |
FractionalMaxPool3d
class torch.nn.FractionalMaxPool3d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)
Applies a 3D fractional max pooling over an input signal composed
of several input planes.
Fractional MaxPooling is described in detail in the paper
Fractional MaxPooling by Ben Graham
The max-pooling operation is applied in kT \times kH \times kW
regions by a stochastic step size determined by the target output
size. The number of output features is equal to the number of input
planes.
Parameters:
* kernel_size (Union[int, Tuple[int, int,
int]]) -- the size of the window to take a max over.
Can be a single number k (for a cubic kernel of k x k x k) or
a tuple (kT, kH, kW)
* **output_size** (*Union**[**int**, **Tuple**[**int**, **int**,
**int**]**]*) -- the target output size of the image of the
| https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html | pytorch docs |
form oT x oH x oW. Can be a tuple (oT, oH, oW) or a single
number oH for a cubic output oH x oH x oH
* **output_ratio** (*Union**[**float**, **Tuple**[**float**,
**float**, **float**]**]*) -- If one wants to have an output
size as a ratio of the input size, this option can be given.
This has to be a number or tuple in the range (0, 1)
* **return_indices** (*bool*) -- if "True", will return the
indices along with the outputs. Useful to pass to
"nn.MaxUnpool3d()". Default: "False"
Shape:
* Input: (N, C, T_{in}, H_{in}, W_{in}) or (C, T_{in}, H_{in},
W_{in}).
* Output: (N, C, T_{out}, H_{out}, W_{out}) or (C, T_{out},
H_{out}, W_{out}), where (T_{out}, H_{out},
W_{out})=\text{output\_size} or (T_{out}, H_{out},
W_{out})=\text{output\_ratio} \times (T_{in}, H_{in}, W_{in})
-[ Examples ]-
>>> # pool of cubic window of size=3, and target output size 13x12x11
| https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html | pytorch docs |
>>> m = nn.FractionalMaxPool3d(3, output_size=(13, 12, 11))
>>> # pool of cubic window and target output size being half of input size
>>> m = nn.FractionalMaxPool3d(3, output_ratio=(0.5, 0.5, 0.5))
>>> input = torch.randn(20, 16, 50, 32, 16)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html | pytorch docs |
torch.Tensor.ger
Tensor.ger(vec2) -> Tensor
See "torch.ger()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ger.html | pytorch docs |
torch.Tensor.col_indices
Tensor.col_indices() -> IntTensor
Returns the tensor containing the column indices of the "self"
tensor when "self" is a sparse CSR tensor of layout "sparse_csr".
The "col_indices" tensor is strictly of shape ("self".nnz()) and of
type "int32" or "int64". When using MKL routines such as sparse
matrix multiplication, it is necessary to use "int32" indexing in
order to avoid downcasting and potentially losing information.
Example::
>>> csr = torch.eye(5,5).to_sparse_csr()
>>> csr.col_indices()
tensor([0, 1, 2, 3, 4], dtype=torch.int32) | https://pytorch.org/docs/stable/generated/torch.Tensor.col_indices.html | pytorch docs |
torch.Tensor.uniform_
Tensor.uniform_(from=0, to=1) -> Tensor
Fills "self" tensor with numbers sampled from the continuous
uniform distribution:
P(x) = \dfrac{1}{\text{to} - \text{from}}
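A minimal usage sketch (added for illustration; the sampled values are
random, so no output is shown):
    >>> t = torch.empty(3)
    >>> t.uniform_(0, 1)    # in-place: each entry is drawn uniformly between 0 and 1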
| https://pytorch.org/docs/stable/generated/torch.Tensor.uniform_.html | pytorch docs |
torch.Tensor.device
Tensor.device
Is the "torch.device" where this Tensor is. | https://pytorch.org/docs/stable/generated/torch.Tensor.device.html | pytorch docs |
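For illustration (not part of the original entry; the second call
assumes a CUDA-enabled build with at least one GPU available):
    >>> t = torch.tensor([1, 2, 3])
    >>> t.device
    device(type='cpu')
    >>> t.cuda().device     # assumes a CUDA device is available
    device(type='cuda', index=0)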
torch.Tensor.unfold
Tensor.unfold(dimension, size, step) -> Tensor
Returns a view of the original tensor which contains all slices of
size "size" from "self" tensor in the dimension "dimension".
Step between two slices is given by "step".
If sizedim is the size of dimension "dimension" for "self", the
size of dimension "dimension" in the returned tensor will be
(sizedim - size) / step + 1.
An additional dimension of size "size" is appended in the returned
tensor.
Parameters:
* dimension (int) -- dimension in which unfolding happens
* **size** (*int*) -- the size of each slice that is unfolded
* **step** (*int*) -- the step between each slice
Example:
>>> x = torch.arange(1., 8)
>>> x
tensor([ 1., 2., 3., 4., 5., 6., 7.])
>>> x.unfold(0, 2, 1)
tensor([[ 1., 2.],
[ 2., 3.],
[ 3., 4.],
[ 4., 5.],
[ 5., 6.],
| https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html | pytorch docs |
[ 6., 7.]])
>>> x.unfold(0, 2, 2)
tensor([[ 1., 2.],
[ 3., 4.],
[ 5., 6.]]) | https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html | pytorch docs |
torch._foreach_log
torch._foreach_log(self: List[Tensor]) -> List[Tensor]
Apply "torch.log()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_log.html | pytorch docs |
torch.digamma
torch.digamma(input, *, out=None) -> Tensor
Alias for "torch.special.digamma()". | https://pytorch.org/docs/stable/generated/torch.digamma.html | pytorch docs |
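For illustration (added here): the digamma function evaluated at 1 and
0.5, which are the standard values -gamma and -gamma - 2 ln 2:
    >>> a = torch.tensor([1., 0.5])
    >>> torch.digamma(a)
    tensor([-0.5772, -1.9635])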
Tanh
class torch.nn.Tanh
Applies the Hyperbolic Tangent (Tanh) function element-wise.
Tanh is defined as:
\text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)} {\exp(x) +
\exp(-x)}
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.Tanh()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Tanh.html | pytorch docs |
torch.trapz
torch.trapz(y, x, *, dim=- 1) -> Tensor
Alias for "torch.trapezoid()". | https://pytorch.org/docs/stable/generated/torch.trapz.html | pytorch docs |
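A minimal sketch (added for illustration) of the trapezoidal rule on
three sample points; the result is 0.5*(1+2) + 0.5*(2+3) = 4:
    >>> y = torch.tensor([1., 2., 3.])
    >>> x = torch.tensor([0., 1., 2.])
    >>> torch.trapz(y, x)   # integrate y over the sample points x
    tensor(4.)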
torch.Tensor.geometric_
Tensor.geometric_(p, *, generator=None) -> Tensor
Fills "self" tensor with elements drawn from the geometric
distribution:
f(X=k) = (1 - p)^{k - 1} p
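A minimal usage sketch (added for illustration; the drawn values are
random, so no output is shown):
    >>> t = torch.empty(5)
    >>> t.geometric_(0.2)   # in-place: each entry is a trial count k = 1, 2, ... with success probability p = 0.2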
| https://pytorch.org/docs/stable/generated/torch.Tensor.geometric_.html | pytorch docs |
torch.nn.functional.layer_norm
torch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)
Applies Layer Normalization over the last dimensions of the input, as
specified by "normalized_shape".
See "LayerNorm" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.layer_norm.html | pytorch docs |
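A short sketch (added for illustration; the input shape is arbitrary)
showing normalization over the last dimension:
    >>> x = torch.randn(2, 5, 10)
    >>> out = torch.nn.functional.layer_norm(x, normalized_shape=(10,))  # normalizes each length-10 slice
    >>> out.shape
    torch.Size([2, 5, 10])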
torch.isinf
torch.isinf(input) -> Tensor
Tests if each element of "input" is infinite (positive or negative
infinity) or not.
Note:
Complex values are infinite when their real or imaginary part is
infinite.
Parameters:
input (Tensor) -- the input tensor.
Returns:
A boolean tensor that is True where "input" is infinite and
False elsewhere
Example:
>>> torch.isinf(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([False, True, False, True, False])
| https://pytorch.org/docs/stable/generated/torch.isinf.html | pytorch docs |