Attribute
class torch.jit.Attribute(value, type)
This method is a pass-through function that returns value, mostly
used to indicate to the TorchScript compiler that the left-hand
side expression is a class instance attribute with type of type.
Note that torch.jit.Attribute should only be used in the __init__
method of jit.ScriptModule subclasses.
Though TorchScript can infer correct type for most Python
expressions, there are some cases where type inference can be
wrong, including:
Empty containers like [] and {}, which TorchScript assumes to
be a container of Tensor
Optional types like Optional[T] that are assigned a valid value
of type T: TorchScript would infer the type as T rather than
Optional[T]
In eager mode, it is simply a pass-through function that returns
value without other implications.
Example:
import torch
from typing import Dict
class AttributeModule(torch.jit.ScriptModule):
| https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html | pytorch docs |
def __init__(self):
super(AttributeModule, self).__init__()
self.foo = torch.jit.Attribute(0.1, float)
# we should be able to use self.foo as a float here
assert 0.0 < self.foo
self.names_ages = torch.jit.Attribute({}, Dict[str, int])
self.names_ages["someone"] = 20
assert isinstance(self.names_ages["someone"], int)
m = AttributeModule()
# m will contain two attributes
# 1. foo of type float
# 2. names_ages of type Dict[str, int]
Note: it is now preferred to use type annotations instead of
torch.jit.Attribute:
import torch
from typing import Dict
class AttributeModule(torch.nn.Module):
names: Dict[str, int]
def __init__(self):
super(AttributeModule, self).__init__()
self.names = {}
m = AttributeModule()
Parameters:
* value -- An initial value to be assigned to attribute. | https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html | pytorch docs |
* type -- A Python type
Returns:
Returns value
count(value, /)
Return number of occurrences of value.
index(value, start=0, stop=9223372036854775807, /)
Return first index of value.
Raises ValueError if the value is not present.
type
Alias for field number 1
value
Alias for field number 0
| https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html | pytorch docs |
torch.Tensor.long
Tensor.long(memory_format=torch.preserve_format) -> Tensor
"self.long()" is equivalent to "self.to(torch.int64)". See "to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.long.html | pytorch docs |
torch.Tensor.mvlgamma
Tensor.mvlgamma(p) -> Tensor
See "torch.mvlgamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.mvlgamma.html | pytorch docs |
torch.Tensor.nan_to_num_
Tensor.nan_to_num_(nan=0.0, posinf=None, neginf=None) -> Tensor
In-place version of "nan_to_num()". | https://pytorch.org/docs/stable/generated/torch.Tensor.nan_to_num_.html | pytorch docs |
ConvBn3d
class torch.ao.nn.intrinsic.ConvBn3d(conv, bn)
This is a sequential container which calls the Conv 3d and Batch
Norm 3d modules. During quantization this will be replaced with the
corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn3d.html | pytorch docs |
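A minimal sketch of constructing the container directly from a Conv3d and BatchNorm3d pair (in a real quantization workflow this module is usually produced by module fusion rather than built by hand):
import torch
import torch.nn as nn
from torch.ao.nn.intrinsic import ConvBn3d

conv = nn.Conv3d(3, 8, kernel_size=3)
bn = nn.BatchNorm3d(8)
fused = ConvBn3d(conv, bn)          # sequential container: bn(conv(x))
out = fused(torch.randn(1, 3, 8, 8, 8))
print(out.shape)                    # torch.Size([1, 8, 6, 6, 6])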
torch.Tensor.argmin
Tensor.argmin(dim=None, keepdim=False) -> LongTensor
See "torch.argmin()" | https://pytorch.org/docs/stable/generated/torch.Tensor.argmin.html | pytorch docs |
torch.asinh
torch.asinh(input, *, out=None) -> Tensor
Returns a new tensor with the inverse hyperbolic sine of the
elements of "input".
\text{out}_{i} = \sinh^{-1}(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.1606, -1.4267, -1.0899, -1.0250 ])
>>> torch.asinh(a)
tensor([ 0.1599, -1.1534, -0.9435, -0.8990 ])
| https://pytorch.org/docs/stable/generated/torch.asinh.html | pytorch docs |
torch.signal.windows.kaiser
torch.signal.windows.kaiser(M, *, beta=12.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the Kaiser window.
The Kaiser window is defined as follows:
w_n = I_0 \left( \beta \sqrt{1 - \left( {\frac{n - N/2}{N/2}}
\right) ^2 } \right) / I_0( \beta )
where "I_0" is the zeroth order modified Bessel function of the
first kind (see "torch.special.i0()"), and "N = M - 1 if sym else
M".
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* beta (float, optional) -- shape parameter for the
window. Must be non-negative. Default: 12.0
* **sym** (*bool**, **optional*) -- If *False*, returns a
| https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html | pytorch docs |
periodic window suitable for use in spectral analysis. If
True, returns a symmetric window suitable for use in filter
design. Default: True.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples: | https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html | pytorch docs |
    >>> # Generates a symmetric Kaiser window with the default beta of 12.0.
    >>> torch.signal.windows.kaiser(5)
    >>> # Generates a periodic Kaiser window with beta equal to 8.0.
    >>> torch.signal.windows.kaiser(5, sym=False, beta=8.0)
| https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html | pytorch docs |
torch.linalg.cross
torch.linalg.cross(input, other, *, dim=-1, out=None) -> Tensor
Computes the cross product of two 3-dimensional vectors.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of vectors, for which it computes the product
along the dimension "dim". It broadcasts over the batch dimensions.
Parameters:
* input (Tensor) -- the first input tensor.
* **other** (*Tensor*) -- the second input tensor.
* **dim** (*int**, **optional*) -- the dimension along which to
take the cross-product. Default: *-1*.
Keyword Arguments:
out (Tensor, optional) -- the output tensor. Ignored
if None. Default: None.
-[ Example ]-
>>> a = torch.randn(4, 3)
>>> a
tensor([[-0.3956, 1.1455, 1.6895],
[-0.5849, 1.3672, 0.3599],
[-1.1626, 0.7180, -0.0521],
[-0.1339, 0.9902, -2.0225]])
>>> b = torch.randn(4, 3)
>>> b
| https://pytorch.org/docs/stable/generated/torch.linalg.cross.html | pytorch docs |
tensor([[-0.0257, -1.4725, -1.2251],
[-1.1479, -0.7005, -1.9757],
[-1.3904, 0.3726, -1.1836],
[-0.9688, -0.7153, 0.2159]])
>>> torch.linalg.cross(a, b)
tensor([[ 1.0844, -0.5281, 0.6120],
[-2.4490, -1.5687, 1.9792],
[-0.8304, -1.3037, 0.5650],
[-1.2329, 1.9883, 1.0551]])
>>> a = torch.randn(1, 3)  # a is broadcast to match shape of b
>>> a
tensor([[-0.9941, -0.5132, 0.5681]])
>>> torch.linalg.cross(a, b)
tensor([[ 1.4653, -1.2325, 1.4507],
[ 1.4119, -2.6163, 0.1073],
[ 0.3957, -1.9666, -1.0840],
[ 0.2956, -0.3357, 0.2139]])
| https://pytorch.org/docs/stable/generated/torch.linalg.cross.html | pytorch docs |
torch.combinations
torch.combinations(input, r=2, with_replacement=False) -> seq
Compute combinations of length r of the given tensor. The behavior
is similar to python's itertools.combinations when
with_replacement is set to False, and
itertools.combinations_with_replacement when with_replacement
is set to True.
Parameters:
* input (Tensor) -- 1D vector.
* **r** (*int**, **optional*) -- number of elements to combine
* **with_replacement** (*bool**, **optional*) -- whether to
allow duplication in combination
Returns:
A tensor equivalent to converting all the input tensors into
lists, doing itertools.combinations or
itertools.combinations_with_replacement on these lists, and
finally converting the resulting list into a tensor.
Return type:
Tensor
Example:
>>> a = [1, 2, 3]
>>> list(itertools.combinations(a, r=2))
[(1, 2), (1, 3), (2, 3)]
| https://pytorch.org/docs/stable/generated/torch.combinations.html | pytorch docs |
>>> list(itertools.combinations(a, r=3))
[(1, 2, 3)]
>>> list(itertools.combinations_with_replacement(a, r=2))
[(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
>>> tensor_a = torch.tensor(a)
>>> torch.combinations(tensor_a)
tensor([[1, 2],
[1, 3],
[2, 3]])
>>> torch.combinations(tensor_a, r=3)
tensor([[1, 2, 3]])
>>> torch.combinations(tensor_a, with_replacement=True)
tensor([[1, 1],
[1, 2],
[1, 3],
[2, 2],
[2, 3],
[3, 3]]) | https://pytorch.org/docs/stable/generated/torch.combinations.html | pytorch docs |
Unfold
class torch.nn.Unfold(kernel_size, dilation=1, padding=0, stride=1)
Extracts sliding local blocks from a batched input tensor.
Consider a batched "input" tensor of shape (N, C, *), where N is
the batch dimension, C is the channel dimension, and * represent
arbitrary spatial dimensions. This operation flattens each sliding
"kernel_size"-sized block within the spatial dimensions of "input"
into a column (i.e., last dimension) of a 3-D "output" tensor of
shape (N, C \times \prod(\text{kernel_size}), L), where C \times
\prod(\text{kernel_size}) is the total number of values within
each block (a block has \prod(\text{kernel_size}) spatial
locations each containing a C-channeled vector), and L is the total
number of such blocks:
L = \prod_d \left\lfloor\frac{\text{spatial\_size}[d] + 2 \times
\text{padding}[d] - \text{dilation}[d] \times
(\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} +
1\right\rfloor,
| https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html | pytorch docs |
where \text{spatial_size} is formed by the spatial dimensions of
"input" (* above), and d is over all spatial dimensions.
Therefore, indexing "output" at the last dimension (column
dimension) gives all values within a certain block.
The "padding", "stride" and "dilation" arguments specify how the
sliding blocks are retrieved.
"stride" controls the stride for the sliding blocks.
"padding" controls the amount of implicit zero-paddings on both
sides for "padding" number of points for each dimension before
reshaping.
"dilation" controls the spacing between the kernel points; also
known as the à trous algorithm. It is harder to describe, but
this link has a nice visualization of what "dilation" does.
Parameters:
* kernel_size (int or tuple) -- the size of the
sliding blocks
* **dilation** (*int** or **tuple**, **optional*) -- a parameter
| https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html | pytorch docs |
that controls the stride of elements within the neighborhood.
Default: 1
* **padding** (*int** or **tuple**, **optional*) -- implicit
zero padding to be added on both sides of input. Default: 0
* **stride** (*int** or **tuple**, **optional*) -- the stride of
the sliding blocks in the input spatial dimensions. Default: 1
If "kernel_size", "dilation", "padding" or "stride" is an int or
a tuple of length 1, their values will be replicated across all
spatial dimensions.
For the case of two input spatial dimensions this operation is
sometimes called "im2col".
Note:
"Fold" calculates each combined value in the resulting large
tensor by summing all values from all containing blocks. "Unfold"
extracts the values in the local blocks by copying from the large
tensor. So, if the blocks overlap, they are not inverses of each
other. In general, folding and unfolding operations are related as
| https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html | pytorch docs |
follows. Consider "Fold" and "Unfold" instances created with the
same parameters:
>>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)
>>> fold = nn.Fold(output_size=..., **fold_params)
>>> unfold = nn.Unfold(**fold_params)
Then for any (supported) "input" tensor the following equality
holds:
fold(unfold(input)) == divisor * input
where "divisor" is a tensor that depends only on the shape and
dtype of the "input":
>>> input_ones = torch.ones(input.shape, dtype=input.dtype)
>>> divisor = fold(unfold(input_ones))
When the "divisor" tensor contains no zero elements, then "fold"
and "unfold" operations are inverses of each other (up to
constant divisor).
Warning:
Currently, only 4-D input tensors (batched image-like tensors)
are supported.
Shape:
* Input: (N, C, *)
* Output: (N, C \times \prod(\text{kernel\_size}), L) as
described above
Examples: | https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html | pytorch docs |
>>> unfold = nn.Unfold(kernel_size=(2, 3))
>>> input = torch.randn(2, 5, 3, 4)
>>> output = unfold(input)
>>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels)
>>> # 4 blocks (2x3 kernels) in total in the 3x4 input
>>> output.size()
torch.Size([2, 30, 4])
>>> # Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape)
>>> inp = torch.randn(1, 3, 10, 12)
>>> w = torch.randn(2, 3, 4, 5)
>>> inp_unf = torch.nn.functional.unfold(inp, (4, 5))
>>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
>>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1))
>>> # or equivalently (and avoiding a copy),
>>> # out = out_unf.view(1, 2, 7, 8)
>>> (torch.nn.functional.conv2d(inp, w) - out).abs().max()
tensor(1.9073e-06)
| https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html | pytorch docs |
torch.unique_consecutive
torch.unique_consecutive(*args, **kwargs)
Eliminates all but the first element from every consecutive group
of equivalent elements.
Note:
This function is different from "torch.unique()" in the sense
that this function only eliminates consecutive duplicate values.
This semantics is similar to *std::unique* in C++.
Parameters:
* input (Tensor) -- the input tensor
* **return_inverse** (*bool*) -- Whether to also return the
indices for where elements in the original input ended up in
the returned unique list.
* **return_counts** (*bool*) -- Whether to also return the
counts for each unique element.
* **dim** (*int*) -- the dimension to apply unique. If "None",
the unique of the flattened input is returned. default: "None"
Returns:
A tensor or a tuple of tensors containing
* **output** (*Tensor*): the output list of unique scalar
| https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html | pytorch docs |
elements.
* **inverse_indices** (*Tensor*): (optional) if
"return_inverse" is True, there will be an additional
returned tensor (same shape as input) representing the
indices for where elements in the original input map to in
the output; otherwise, this function will only return a
single tensor.
* **counts** (*Tensor*): (optional) if "return_counts" is
True, there will be an additional returned tensor (same
shape as output or output.size(dim), if dim was specified)
representing the number of occurrences for each unique
value or tensor.
Return type:
(Tensor, Tensor (optional), Tensor (optional))
Example:
>>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])
>>> output = torch.unique_consecutive(x)
>>> output
tensor([1, 2, 3, 1, 2])
>>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True)
>>> output
| https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html | pytorch docs |
tensor([1, 2, 3, 1, 2])
>>> inverse_indices
tensor([0, 0, 1, 1, 2, 3, 3, 4])
>>> output, counts = torch.unique_consecutive(x, return_counts=True)
>>> output
tensor([1, 2, 3, 1, 2])
>>> counts
tensor([2, 2, 1, 2, 1])
| https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html | pytorch docs |
torch.trace
torch.trace(input) -> Tensor
Returns the sum of the elements of the diagonal of the input 2-D
matrix.
Example:
>>> x = torch.arange(1., 10.).view(3, 3)
>>> x
tensor([[ 1., 2., 3.],
[ 4., 5., 6.],
[ 7., 8., 9.]])
>>> torch.trace(x)
tensor(15.)
| https://pytorch.org/docs/stable/generated/torch.trace.html | pytorch docs |
SoftMarginLoss
class torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean')
Creates a criterion that optimizes a two-class classification
logistic loss between input tensor x and target tensor y
(containing 1 or -1).
\text{loss}(x, y) = \sum_i \frac{\log(1 +
\exp(-y[i]*x[i]))}{\text{x.nelement}()}
Parameters:
* size_average (bool, optional) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
| https://pytorch.org/docs/stable/generated/torch.nn.SoftMarginLoss.html | pytorch docs |
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input: (*), where * means any number of dimensions.
* Target: (*), same shape as the input.
* Output: scalar. If "reduction" is "'none'", then (*), same
shape as input.
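Example (a minimal usage sketch; the input and target shapes are placeholders, and targets must take values in {-1, +1}):
import torch
import torch.nn as nn

loss_fn = nn.SoftMarginLoss()
input = torch.randn(3, 5, requires_grad=True)
target = torch.randint(0, 2, (3, 5)).float() * 2 - 1   # values in {-1, +1}
loss = loss_fn(input, target)
loss.backward()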
| https://pytorch.org/docs/stable/generated/torch.nn.SoftMarginLoss.html | pytorch docs |
get_default_qat_qconfig_mapping
class torch.ao.quantization.qconfig_mapping.get_default_qat_qconfig_mapping(backend='x86', version=1)
Return the default QConfigMapping for quantization aware training.
Parameters:
* backend (str) -- the quantization backend for the default
qconfig mapping, should be one of ["x86" (default), "fbgemm",
"qnnpack", "onednn"]
* **version** (*int*) -- the version for the default qconfig
mapping
Return type:
QConfigMapping | https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.get_default_qat_qconfig_mapping.html | pytorch docs |
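A minimal usage sketch (the backend choice is an assumption; the returned mapping is typically passed to torch.ao.quantization.quantize_fx.prepare_qat_fx):
import torch
from torch.ao.quantization.qconfig_mapping import get_default_qat_qconfig_mapping

qconfig_mapping = get_default_qat_qconfig_mapping("x86")
print(qconfig_mapping.global_qconfig)   # the default QAT qconfig for that backend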
torch.Tensor.to_sparse_bsr
Tensor.to_sparse_bsr(blocksize, dense_dim) -> Tensor
Convert a tensor to a block sparse row (BSR) storage format of
given blocksize. If "self" is strided, then the number of
dense dimensions can be specified, and a hybrid BSR tensor will
be created, with dense_dim dense dimensions and self.dim() - 2 -
dense_dim batch dimensions.
Parameters:
* blocksize (list, tuple, "torch.Size", optional) -- Block
size of the resulting BSR tensor. A block size must be a tuple
of length two such that its items evenly divide the two sparse
dimensions.
* **dense_dim** (*int**, **optional*) -- Number of dense
dimensions of the resulting BSR tensor. This argument should
be used only if "self" is a strided tensor, and must be a
value between 0 and dimension of "self" tensor minus two.
Example:
>>> dense = torch.randn(10, 10)
>>> sparse = dense.to_sparse_csr()
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsr.html | pytorch docs |
>>> sparse_bsr = sparse.to_sparse_bsr((5, 5))
>>> sparse_bsr.col_indices()
tensor([0, 1, 0, 1])
>>> dense = torch.zeros(4, 3, 1)
>>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1
>>> dense.to_sparse_bsr((2, 1), 1)
tensor(crow_indices=tensor([0, 2, 3]),
col_indices=tensor([0, 2, 1]),
values=tensor([[[[1.]],
[[1.]]],
[[[1.]],
[[1.]]],
[[[1.]],
[[1.]]]]), size=(4, 3, 1), nnz=3,
layout=torch.sparse_bsr)
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsr.html | pytorch docs |
torch.Tensor.inner
Tensor.inner(other) -> Tensor
See "torch.inner()". | https://pytorch.org/docs/stable/generated/torch.Tensor.inner.html | pytorch docs |
torch.index_select
torch.index_select(input, dim, index, *, out=None) -> Tensor
Returns a new tensor which indexes the "input" tensor along
dimension "dim" using the entries in "index" which is a
LongTensor.
The returned tensor has the same number of dimensions as the
original tensor ("input"). The "dim"th dimension has the same size
as the length of "index"; other dimensions have the same size as in
the original tensor.
Note:
The returned tensor does **not** use the same storage as the
original tensor. If "out" has a different shape than expected,
we silently change it to the correct shape, reallocating the
underlying storage if necessary.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension in which we index
* **index** (*IntTensor** or **LongTensor*) -- the 1-D tensor
containing the indices to index
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.index_select.html | pytorch docs |
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
[-0.4664, 0.2647, -0.1228, -1.1068],
[-1.1734, -0.6571, 0.7230, -0.6004]])
>>> indices = torch.tensor([0, 2])
>>> torch.index_select(x, 0, indices)
tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],
[-1.1734, -0.6571, 0.7230, -0.6004]])
>>> torch.index_select(x, 1, indices)
tensor([[ 0.1427, -0.5414],
[-0.4664, -0.1228],
[-1.1734, 0.7230]])
| https://pytorch.org/docs/stable/generated/torch.index_select.html | pytorch docs |
torch.igammac
torch.igammac(input, other, *, out=None) -> Tensor
Alias for "torch.special.gammaincc()". | https://pytorch.org/docs/stable/generated/torch.igammac.html | pytorch docs |
Dropout2d
class torch.nn.Dropout2d(p=0.5, inplace=False)
Randomly zero out entire channels (a channel is a 2D feature map,
e.g., the j-th channel of the i-th sample in the batched input is a
2D tensor \text{input}[i, j]). Each channel will be zeroed out
independently on every forward call with probability "p" using
samples from a Bernoulli distribution.
Usually the input comes from "nn.Conv2d" modules.
As described in the paper Efficient Object Localization Using
Convolutional Networks , if adjacent pixels within feature maps are
strongly correlated (as is normally the case in early convolution
layers) then i.i.d. dropout will not regularize the activations and
will otherwise just result in an effective learning rate decrease.
In this case, "nn.Dropout2d()" will help promote independence
between feature maps and should be used instead.
Parameters:
* p (float, optional) -- probability of an element to
be zero-ed. | https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html | pytorch docs |
* **inplace** (*bool**, **optional*) -- If set to "True", will
do this operation in-place
Warning:
Due to historical reasons, this class will perform 1D channel-
wise dropout for 3D inputs (as done by "nn.Dropout1d"). Thus, it
currently does NOT support inputs without a batch dimension of
shape (C, H, W). This behavior will change in a future release to
interpret 3D inputs as no-batch-dim inputs. To maintain the old
behavior, switch to "nn.Dropout1d".
Shape:
* Input: (N, C, H, W) or (N, C, L).
* Output: (N, C, H, W) or (N, C, L) (same shape as input).
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html | pytorch docs |
torch.Tensor.logical_not_
Tensor.logical_not_() -> Tensor
In-place version of "logical_not()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logical_not_.html | pytorch docs |
torch.linalg.svd
torch.linalg.svd(A, full_matrices=True, *, driver=None, out=None)
Computes the singular value decomposition (SVD) of a matrix.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the full SVD of
a matrix A \in \mathbb{K}^{m \times n}, if k = min(m,n), is
defined as
A = U \operatorname{diag}(S) V^{\text{H}} \mathrlap{\qquad U \in
\mathbb{K}^{m \times m}, S \in \mathbb{R}^k, V \in \mathbb{K}^{n
\times n}}
where \operatorname{diag}(S) \in \mathbb{K}^{m \times n},
V^{\text{H}} is the conjugate transpose when V is complex, and the
transpose when V is real-valued. The matrices U, V (and thus
V^{\text{H}}) are orthogonal in the real case, and unitary in the
complex case.
When m > n (resp. m < n) we can drop the last m - n (resp. n
- m) columns of U (resp. V) to form the reduced SVD:
A = U \operatorname{diag}(S) V^{\text{H}} \mathrlap{\qquad U \in
| https://pytorch.org/docs/stable/generated/torch.linalg.svd.html | pytorch docs |
\mathbb{K}^{m \times k}, S \in \mathbb{R}^k, V \in \mathbb{K}^{k
\times n}}
where \operatorname{diag}(S) \in \mathbb{K}^{k \times k}. In this
case, U and V also have orthonormal columns.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
The returned decomposition is a named tuple (U, S, Vh) which
corresponds to U, S, V^{\text{H}} above.
The singular values are returned in descending order.
The parameter "full_matrices" chooses between the full (default)
and reduced SVD.
The "driver" kwarg may be used in CUDA with a cuSOLVER backend to
choose the algorithm used to compute the SVD. The choice of a
driver is a trade-off between accuracy and speed.
If "A" is well-conditioned (its condition number is not too
large), or you do not mind some precision loss.
For a general matrix: 'gesvdj' (Jacobi method)
| https://pytorch.org/docs/stable/generated/torch.linalg.svd.html | pytorch docs |
If "A" is tall or wide (m >> n or m << n): 'gesvda'
(Approximate method)
If "A" is not well-conditioned or precision is relevant:
'gesvd' (QR based)
By default ("driver"= None), we call 'gesvdj' and, if it fails,
we fallback to 'gesvd'.
Differences with numpy.linalg.svd:
Unlike numpy.linalg.svd, this function always returns a tuple
of three tensors and it doesn't support compute_uv argument.
Please use "torch.linalg.svdvals()", which computes only the
singular values, instead of compute_uv=False.
Note:
When "full_matrices"*= True*, the gradients with respect to
*U[..., :, min(m, n):]* and *Vh[..., min(m, n):, :]* will be
ignored, as those vectors can be arbitrary bases of the
corresponding subspaces.
Warning:
The returned tensors *U* and *V* are not unique, nor are they
continuous with respect to "A". Due to this lack of uniqueness,
| https://pytorch.org/docs/stable/generated/torch.linalg.svd.html | pytorch docs |
different hardware and software may compute different singular
vectors. This non-uniqueness is caused by the fact that
multiplying any pair of singular vectors u_k, v_k by -1 in the
real case or by e^{i \phi}, \phi \in \mathbb{R} in the complex
case produces another two valid singular vectors of the matrix.
For this reason, the loss function shall not depend on this e^{i
\phi} quantity, as it is not well-defined. This is checked for
complex inputs when computing the gradients of this function. As
such, when inputs are complex and are on a CUDA device, the
computation of the gradients of this function synchronizes that
device with the CPU.
Warning:
Gradients computed using *U* or *Vh* will only be finite when "A"
does not have repeated singular values. If "A" is rectangular,
additionally, zero must also not be one of its singular values.
Furthermore, if the distance between any two singular values is
| https://pytorch.org/docs/stable/generated/torch.linalg.svd.html | pytorch docs |
close to zero, the gradient will be numerically unstable, as it
depends on the singular values \sigma_i through the computation
of \frac{1}{\min_{i \neq j} \sigma_i^2 - \sigma_j^2}. In the
rectangular case, the gradient will also be numerically unstable
when "A" has small singular values, as it also depends on the
computation of \frac{1}{\sigma_i}.
See also:
"torch.linalg.svdvals()" computes only the singular values.
Unlike "torch.linalg.svd()", the gradients of "svdvals()" are
always numerically stable.
"torch.linalg.eig()" for a function that computes another type of
spectral decomposition of a matrix. The eigendecomposition works
just on square matrices.
"torch.linalg.eigh()" for a (faster) function that computes the
eigenvalue decomposition for Hermitian and symmetric matrices.
"torch.linalg.qr()" for another (much faster) decomposition that
works on general matrices.
Parameters: | https://pytorch.org/docs/stable/generated/torch.linalg.svd.html | pytorch docs |
* A (Tensor) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
* **full_matrices** (*bool**, **optional*) -- controls whether
to compute the full or reduced SVD, and consequently, the
shape of the returned tensors *U* and *Vh*. Default: *True*.
Keyword Arguments:
* driver (str, optional) -- name of the cuSOLVER
method to be used. This keyword argument only works on CUDA
inputs. Available options are: None, gesvd, gesvdj, and
gesvda. Default: None.
* **out** (*tuple**, **optional*) -- output tuple of three
tensors. Ignored if *None*.
Returns:
A named tuple (U, S, Vh) which corresponds to U, S,
V^{\text{H}} above.
*S* will always be real-valued, even when "A" is complex. It
will also be ordered in descending order.
*U* and *Vh* will have the same dtype as "A". The left / right
| https://pytorch.org/docs/stable/generated/torch.linalg.svd.html | pytorch docs |
singular vectors will be given by the columns of U and the
rows of Vh respectively.
Examples:
>>> A = torch.randn(5, 3)
>>> U, S, Vh = torch.linalg.svd(A, full_matrices=False)
>>> U.shape, S.shape, Vh.shape
(torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(A, U @ torch.diag(S) @ Vh)
tensor(1.0486e-06)
>>> U, S, Vh = torch.linalg.svd(A)
>>> U.shape, S.shape, Vh.shape
(torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))
>>> torch.dist(A, U[:, :3] @ torch.diag(S) @ Vh)
tensor(1.0486e-06)
>>> A = torch.randn(7, 5, 3)
>>> U, S, Vh = torch.linalg.svd(A, full_matrices=False)
>>> torch.dist(A, U @ torch.diag_embed(S) @ Vh)
tensor(3.0957e-06)
| https://pytorch.org/docs/stable/generated/torch.linalg.svd.html | pytorch docs |
torch.Tensor.record_stream
Tensor.record_stream(stream)
Ensures that the tensor memory is not reused for another tensor
until all current work queued on "stream" is complete.
Note:
The caching allocator is aware of only the stream where a tensor
was allocated. Due to the awareness, it already correctly manages
the life cycle of tensors on only one stream. But if a tensor is
used on a stream different from the stream of origin, the
allocator might reuse the memory unexpectedly. Calling this
method lets the allocator know which streams have used the
tensor.
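A minimal sketch (assumes a CUDA device is available): memory allocated on a side stream is later consumed on the current stream, and record_stream() informs the caching allocator of that extra use:
import torch

if torch.cuda.is_available():
    side = torch.cuda.Stream()
    with torch.cuda.stream(side):
        x = torch.full((1024,), 1.0, device="cuda")   # allocated on the side stream
    torch.cuda.current_stream().wait_stream(side)
    y = x * 2                                         # used on the current stream
    x.record_stream(torch.cuda.current_stream())      # tell the allocator about this use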
| https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html | pytorch docs |
torch.Tensor.squeeze_
Tensor.squeeze_(dim=None) -> Tensor
In-place version of "squeeze()" | https://pytorch.org/docs/stable/generated/torch.Tensor.squeeze_.html | pytorch docs |
LazyBatchNorm3d
class torch.nn.LazyBatchNorm3d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
A "torch.nn.BatchNorm3d" module with lazy initialization of the
"num_features" argument of the "BatchNorm3d" that is inferred from
the "input.size(1)". The attributes that will be lazily initialized
are weight, bias, running_mean and running_var.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* eps (float) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Can be set to "None" for
cumulative moving average (i.e. simple average). Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters. Default:
"True"
| https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm3d.html | pytorch docs |
"True"
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics, and initializes statistics buffers
"running_mean" and "running_var" as "None". When these buffers
are "None", this module always uses batch statistics. in both
training and eval modes. Default: "True"
cls_to_become
alias of "BatchNorm3d"
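Example (a minimal sketch; num_features is inferred from the channel dimension of the first input):
import torch
import torch.nn as nn

bn = nn.LazyBatchNorm3d()
x = torch.randn(2, 4, 5, 5, 5)   # (N, C, D, H, W)
out = bn(x)
print(bn.weight.shape)           # torch.Size([4]) after lazy initialization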
| https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm3d.html | pytorch docs |
torch._foreach_reciprocal_
torch._foreach_reciprocal_(self: List[Tensor]) -> None
Apply "torch.reciprocal()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_reciprocal_.html | pytorch docs |
torch.Tensor.tan
Tensor.tan() -> Tensor
See "torch.tan()" | https://pytorch.org/docs/stable/generated/torch.Tensor.tan.html | pytorch docs |
torch.Tensor.pinverse
Tensor.pinverse() -> Tensor
See "torch.pinverse()" | https://pytorch.org/docs/stable/generated/torch.Tensor.pinverse.html | pytorch docs |
torch.Tensor.is_contiguous
Tensor.is_contiguous(memory_format=torch.contiguous_format) -> bool
Returns True if "self" tensor is contiguous in memory in the order
specified by memory format.
Parameters:
memory_format ("torch.memory_format", optional) -- Specifies
memory allocation order. Default: "torch.contiguous_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.is_contiguous.html | pytorch docs |
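Example (a minimal sketch covering both the default and a channels-last check):
import torch

x = torch.randn(3, 4)
print(x.is_contiguous())         # True
print(x.t().is_contiguous())     # False: transposing only swaps strides
y = torch.randn(1, 3, 8, 8).contiguous(memory_format=torch.channels_last)
print(y.is_contiguous(memory_format=torch.channels_last))   # True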
torch.Tensor.q_scale
Tensor.q_scale() -> float
Given a Tensor quantized by linear(affine) quantization, returns
the scale of the underlying quantizer(). | https://pytorch.org/docs/stable/generated/torch.Tensor.q_scale.html | pytorch docs |
torch.get_deterministic_debug_mode
torch.get_deterministic_debug_mode()
Returns the current value of the debug mode for deterministic
operations. Refer to "torch.set_deterministic_debug_mode()"
documentation for more details.
Return type:
int | https://pytorch.org/docs/stable/generated/torch.get_deterministic_debug_mode.html | pytorch docs |
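Example (a minimal sketch; the returned integer encodes the mode: 0 = default, 1 = warn, 2 = error):
import torch

torch.set_deterministic_debug_mode("warn")
print(torch.get_deterministic_debug_mode())   # 1
torch.set_deterministic_debug_mode("default")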
MultiLabelSoftMarginLoss
class torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=None, reduce=None, reduction='mean')
Creates a criterion that optimizes a multi-label one-versus-all
loss based on max-entropy, between input x and target y of size (N,
C). For each sample in the minibatch:
loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 +
\exp(-x[i]))^{-1}) + (1-y[i]) *
\log\left(\frac{\exp(-x[i])}{(1 + \exp(-x[i]))}\right)
where i \in \left\{0, \; \cdots , \; \text{x.nElement}() -
1\right\}, y[i] \in \left\{0, \; 1\right\}.
Parameters:
* weight (Tensor, optional) -- a manual rescaling
weight given to each class. If given, it has to be a Tensor of
size C. Otherwise, it is treated as if having all ones.
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
| https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html | pytorch docs |
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
| https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html | pytorch docs |
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input: (N, C) where N is the batch size and C is the
number of classes.
* Target: (N, C), label targets padded by -1 ensuring same shape
as the input.
* Output: scalar. If "reduction" is "'none'", then (N).
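Example (a minimal usage sketch; the shapes are placeholders and each target entry is a 0/1 class label):
import torch
import torch.nn as nn

loss_fn = nn.MultiLabelSoftMarginLoss()
input = torch.randn(4, 3, requires_grad=True)    # (N, C)
target = torch.randint(0, 2, (4, 3)).float()     # (N, C) with values in {0, 1}
loss = loss_fn(input, target)
loss.backward()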
| https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html | pytorch docs |
torch.nn.functional.fold
torch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)
Combines an array of sliding local blocks into a large containing
tensor.
Warning:
Currently, only unbatched (3D) or batched (4D) image-like output
tensors are supported.
See "torch.nn.Fold" for details
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.fold.html | pytorch docs |
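A minimal sketch of fold reversing unfold over sliding blocks (the shapes and kernel size are arbitrary choices; overlapping positions are summed, so the result generally differs from the original by a position-dependent divisor, see torch.nn.Fold):
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 10, 12)
blocks = F.unfold(x, kernel_size=(4, 5))                      # (1, 3*4*5, L)
y = F.fold(blocks, output_size=(10, 12), kernel_size=(4, 5))  # back to (1, 3, 10, 12)
print(blocks.shape, y.shape)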
torch.autograd.graph.Node.register_hook
abstract Node.register_hook(fn)
Registers a backward hook.
The hook will be called every time a gradient with respect to the
Node is computed. The hook should have the following signature:
hook(grad_inputs: Tuple[Tensor], grad_outputs: Tuple[Tensor]) -> Tuple[Tensor] or None
The hook should not modify its argument, but it can optionally
return a new gradient which will be used in place of
"grad_outputs".
This function returns a handle with a method "handle.remove()" that
removes the hook from the Node.
Note:
See Backward Hooks execution for more information on how when
this hook is executed, and how its execution is ordered relative
to other hooks.
Example:
>>> import torch
>>> a = torch.tensor([0., 0., 0.], requires_grad=True)
>>> b = a.clone()
>>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)
| https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_hook.html | pytorch docs |
>>> handle = b.grad_fn.register_hook(lambda gI, gO: (gO[0] * 2,))
>>> b.sum().backward(retain_graph=True)
>>> print(a.grad)
tensor([2., 2., 2.])
>>> handle.remove() # Removes the hook
>>> a.grad = None
>>> b.sum().backward(retain_graph=True)
>>> print(a.grad)
tensor([1., 1., 1.])
Return type:
RemovableHandle | https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_hook.html | pytorch docs |
torch.arccos
torch.arccos(input, *, out=None) -> Tensor
Alias for "torch.acos()". | https://pytorch.org/docs/stable/generated/torch.arccos.html | pytorch docs |
torch.histc
torch.histc(input, bins=100, min=0, max=0, *, out=None) -> Tensor
Computes the histogram of a tensor.
The elements are sorted into equal width bins between "min" and
"max". If "min" and "max" are both zero, the minimum and maximum
values of the data are used.
Elements lower than min and higher than max and "NaN" elements are
ignored.
Parameters:
* input (Tensor) -- the input tensor.
* **bins** (*int*) -- number of histogram bins
* **min** (*Scalar*) -- lower end of the range (inclusive)
* **max** (*Scalar*) -- upper end of the range (inclusive)
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Returns:
Histogram represented as a tensor
Return type:
Tensor
Example:
>>> torch.histc(torch.tensor([1., 2, 1]), bins=4, min=0, max=3)
tensor([ 0., 2., 1., 0.])
| https://pytorch.org/docs/stable/generated/torch.histc.html | pytorch docs |
torch.Tensor.float_power
Tensor.float_power(exponent) -> Tensor
See "torch.float_power()" | https://pytorch.org/docs/stable/generated/torch.Tensor.float_power.html | pytorch docs |
torch.linalg.tensorsolve
torch.linalg.tensorsolve(A, B, dims=None, *, out=None) -> Tensor
Computes the solution X to the system torch.tensordot(A, X) =
B.
If m is the product of the first "B".ndim dimensions of "A"
and n is the product of the rest of the dimensions, this function
expects m and n to be equal.
The returned tensor x satisfies tensordot("A", x, dims=x.ndim)
== "B". x has shape "A"[B.ndim:].
If "dims" is specified, "A" will be reshaped as
A = movedim(A, dims, range(len(dims) - A.ndim + 1, 0))
Supports inputs of float, double, cfloat and cdouble dtypes.
See also:
"torch.linalg.tensorinv()" computes the multiplicative inverse of
"torch.tensordot()".
Parameters:
* A (Tensor) -- tensor to solve for. Its shape must
satisfy prod("A".shape[:"B".ndim]) ==
prod("A".shape["B".ndim:]).
* **B** (*Tensor*) -- tensor of shape "A"*.shape[:*"B"*.ndim]*.
| https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html | pytorch docs |
dims (Tuple[int], optional) -- dimensions of
"A" to be moved. If None, no dimensions are moved. Default:
None.
Keyword Arguments:
out (Tensor, optional) -- output tensor. Ignored if
None. Default: None.
Raises:
RuntimeError -- if the reshaped "A".view(m, m) with m as
above is not invertible or the product of the first "ind"
dimensions is not equal to the product of the rest of the
dimensions.
Examples:
>>> A = torch.eye(2 * 3 * 4).reshape((2 * 3, 4, 2, 3, 4))
>>> B = torch.randn(2 * 3, 4)
>>> X = torch.linalg.tensorsolve(A, B)
>>> X.shape
torch.Size([2, 3, 4])
>>> torch.allclose(torch.tensordot(A, X, dims=X.ndim), B)
True
>>> A = torch.randn(6, 4, 4, 3, 2)
>>> B = torch.randn(4, 3, 2)
>>> X = torch.linalg.tensorsolve(A, B, dims=(0, 2))
>>> X.shape
torch.Size([6, 4])
>>> A = A.permute(1, 3, 4, 0, 2)
| https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html | pytorch docs |
>>> A.shape[B.ndim:]
torch.Size([6, 4])
>>> torch.allclose(torch.tensordot(A, X, dims=X.ndim), B, atol=1e-6)
True
| https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html | pytorch docs |
torch.Tensor.new_full
Tensor.new_full(size, fill_value, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor
Returns a Tensor of size "size" filled with "fill_value". By
default, the returned Tensor has the same "torch.dtype" and
"torch.device" as this tensor.
Parameters:
fill_value (scalar) -- the number to fill the output
tensor with.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired type of
returned tensor. Default: if None, same "torch.dtype" as this
tensor.
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, same "torch.device" as this
tensor.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **layout** ("torch.layout", optional) -- the desired layout of
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_full.html | pytorch docs |
returned Tensor. Default: "torch.strided".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
Example:
>>> tensor = torch.ones((2,), dtype=torch.float64)
>>> tensor.new_full((3, 4), 3.141592)
tensor([[ 3.1416, 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416, 3.1416],
[ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_full.html | pytorch docs |
torch.Tensor.pow
Tensor.pow(exponent) -> Tensor
See "torch.pow()" | https://pytorch.org/docs/stable/generated/torch.Tensor.pow.html | pytorch docs |
torch.Tensor.int_repr
Tensor.int_repr() -> Tensor
Given a quantized Tensor, "self.int_repr()" returns a CPU Tensor
with uint8_t as data type that stores the underlying uint8_t values
of the given Tensor. | https://pytorch.org/docs/stable/generated/torch.Tensor.int_repr.html | pytorch docs |
torch.Tensor.addcmul_
Tensor.addcmul_(tensor1, tensor2, *, value=1) -> Tensor
In-place version of "addcmul()" | https://pytorch.org/docs/stable/generated/torch.Tensor.addcmul_.html | pytorch docs |
torch.sspaddmm
torch.sspaddmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) -> Tensor
Matrix multiplies a sparse tensor "mat1" with a dense tensor
"mat2", then adds the sparse tensor "input" to the result.
Note: This function is equivalent to "torch.addmm()", except
"input" and "mat1" are sparse.
Parameters:
* input (Tensor) -- a sparse matrix to be added
* **mat1** (*Tensor*) -- a sparse matrix to be matrix multiplied
* **mat2** (*Tensor*) -- a dense matrix to be matrix multiplied
Keyword Arguments:
* beta (Number, optional) -- multiplier for "mat"
(\beta)
* **alpha** (*Number**, **optional*) -- multiplier for mat1 @
mat2 (\alpha)
* **out** (*Tensor**, **optional*) -- the output tensor.
| https://pytorch.org/docs/stable/generated/torch.sspaddmm.html | pytorch docs |
torch.Tensor.arctan_
Tensor.arctan_() -> Tensor
In-place version of "arctan()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arctan_.html | pytorch docs |
torch.Tensor.digamma_
Tensor.digamma_() -> Tensor
In-place version of "digamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.digamma_.html | pytorch docs |
ParameterList
class torch.nn.ParameterList(values=None)
Holds parameters in a list.
"ParameterList" can be used like a regular Python list, but Tensors
that are "Parameter" are properly registered, and will be visible
by all "Module" methods.
Note that the constructor, assigning an element of the list, the
"append()" method and the "extend()" method will convert any
"Tensor" into "Parameter".
Parameters:
parameters (iterable, optional) -- an iterable of
elements to add to the list.
Example:
class MyModule(nn.Module):
def __init__(self):
super(MyModule, self).__init__()
self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])
def forward(self, x):
# ParameterList can act as an iterable, or be indexed using ints
for i, p in enumerate(self.params):
x = self.params[i // 2].mm(x) + p.mm(x)
return x
| https://pytorch.org/docs/stable/generated/torch.nn.ParameterList.html | pytorch docs |
append(value)
Appends a given value at the end of the list.
Parameters:
**value** (*Any*) -- value to append
Return type:
*ParameterList*
extend(values)
Appends values from a Python iterable to the end of the list.
Parameters:
**values** (*iterable*) -- iterable of values to append
Return type:
*ParameterList*
| https://pytorch.org/docs/stable/generated/torch.nn.ParameterList.html | pytorch docs |
torch.sinh
torch.sinh(input, *, out=None) -> Tensor
Returns a new tensor with the hyperbolic sine of the elements of
"input".
\text{out}_{i} = \sinh(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.5380, -0.8632, -0.1265, 0.9399])
>>> torch.sinh(a)
tensor([ 0.5644, -0.9744, -0.1268, 1.0845])
Note:
When "input" is on the CPU, the implementation of torch.sinh may
use the Sleef library, which rounds very large results to
infinity or negative infinity. See here for details.
| https://pytorch.org/docs/stable/generated/torch.sinh.html | pytorch docs |
inference_mode
class torch.inference_mode(mode=True)
Context-manager that enables or disables inference mode
InferenceMode is a new context manager analogous to "no_grad" to be
used when you are certain your operations will have no interactions
with autograd (e.g., model training). Code run under this mode gets
better performance by disabling view tracking and version counter
bumps. Note that unlike some other mechanisms that locally enable
or disable grad, entering inference_mode also disables forward-
mode AD.
This context manager is thread local; it will not affect
computation in other threads.
Also functions as a decorator. (Make sure to instantiate with
parenthesis.)
Note:
Inference mode is one of several mechanisms that can enable or
disable gradients locally see Locally disabling gradient
computation for more information on how they compare.
Parameters: | https://pytorch.org/docs/stable/generated/torch.inference_mode.html | pytorch docs |
Parameters:
mode (bool) -- Flag whether to enable or disable inference
mode
Example::
>>> import torch
>>> x = torch.ones(1, 2, 3, requires_grad=True)
>>> with torch.inference_mode():
... y = x * x
>>> y.requires_grad
False
>>> y._version
Traceback (most recent call last):
File "", line 1, in
RuntimeError: Inference tensors do not track version counter.
>>> @torch.inference_mode()
... def func(x):
... return x * x
>>> out = func(x)
>>> out.requires_grad
False | https://pytorch.org/docs/stable/generated/torch.inference_mode.html | pytorch docs |
torch.Tensor.arccos_
Tensor.arccos_() -> Tensor
In-place version of "arccos()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arccos_.html | pytorch docs |
torch.Tensor.addmv
Tensor.addmv(mat, vec, *, beta=1, alpha=1) -> Tensor
See "torch.addmv()" | https://pytorch.org/docs/stable/generated/torch.Tensor.addmv.html | pytorch docs |
torch.Tensor.less_
Tensor.less_(other) -> Tensor
In-place version of "less()". | https://pytorch.org/docs/stable/generated/torch.Tensor.less_.html | pytorch docs |
torch._foreach_ceil_
torch._foreach_ceil_(self: List[Tensor]) -> None
Apply "torch.ceil()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_ceil_.html | pytorch docs |
convert
class torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, is_reference=False, convert_custom_config_dict=None)
Converts submodules in input module to a different module according
to mapping by calling from_float method on the target module
class, and removes qconfig at the end if remove_qconfig is set to
True.
Parameters:
* module -- prepared and calibrated module
* **mapping** -- a dictionary that maps from source module type
to target module type, can be overwritten to allow swapping
user defined Modules
* **inplace** -- carry out model transformations in-place, the
original module is mutated
* **convert_custom_config_dict** -- custom configuration
dictionary for convert function
# Example of convert_custom_config_dict:
convert_custom_config_dict = {
# user will manually define the corresponding quantized
| https://pytorch.org/docs/stable/generated/torch.quantization.convert.html | pytorch docs |
# module class which has a from_observed class method that converts
# observed custom module to quantized custom module
"observed_to_quantized_custom_module_class": {
ObservedCustomModule: QuantizedCustomModule
}
}
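A minimal sketch of where convert sits in the eager-mode post-training flow (the toy model, backend string, and calibration data are placeholders, and QuantStub/DeQuantStub handling is omitted for brevity):
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig, prepare, convert

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU()).eval()
model.qconfig = get_default_qconfig("x86")
prepared = prepare(model)            # inserts observers
prepared(torch.randn(8, 4))          # calibration pass populates the observers
quantized = convert(prepared)        # swaps modules for their quantized counterparts
print(type(quantized[0]))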
| https://pytorch.org/docs/stable/generated/torch.quantization.convert.html | pytorch docs |
torch.Tensor.divide_
Tensor.divide_(value, *, rounding_mode=None) -> Tensor
In-place version of "divide()" | https://pytorch.org/docs/stable/generated/torch.Tensor.divide_.html | pytorch docs |
graph
class torch.cuda.graph(cuda_graph, pool=None, stream=None)
Context-manager that captures CUDA work into a
"torch.cuda.CUDAGraph" object for later replay.
See CUDA Graphs for a general introduction, detailed use, and
constraints.
Parameters:
* cuda_graph (torch.cuda.CUDAGraph) -- Graph object used
for capture.
* **pool** (*optional*) -- Opaque token (returned by a call to
"graph_pool_handle()" or "other_Graph_instance.pool()")
hinting this graph's capture may share memory from the
specified pool. See Graph memory management.
* **stream** (*torch.cuda.Stream**, **optional*) -- If supplied,
will be set as the current stream in the context. If not
supplied, "graph" sets its own internal side stream as the
current stream in the context.
Note:
For effective memory sharing, if you pass a "pool" used by a
previous capture and the previous capture used an explicit
| https://pytorch.org/docs/stable/generated/torch.cuda.graph.html | pytorch docs |
"stream" argument, you should pass the same "stream" argument to
this capture.
Warning:
This API is in beta and may change in future releases.
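A minimal capture-and-replay sketch (assumes a CUDA device; the warmup on a side stream follows the usual CUDA Graphs guidance, and the workload is a placeholder):
import torch

if torch.cuda.is_available():
    static_in = torch.randn(8, 8, device="cuda")
    static_out = torch.empty_like(static_in)

    # Warm up on a side stream before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_out.copy_(static_in.relu())
    torch.cuda.current_stream().wait_stream(s)

    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_out.copy_(static_in.relu())

    static_in.copy_(torch.randn(8, 8, device="cuda"))
    g.replay()   # reruns the captured kernels on the new contents of static_in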
| https://pytorch.org/docs/stable/generated/torch.cuda.graph.html | pytorch docs |
torch.jit.load
torch.jit.load(f, map_location=None, _extra_files=None)
Load a "ScriptModule" or "ScriptFunction" previously saved with
"torch.jit.save"
All previously saved modules, no matter their device, are first
loaded onto CPU, and then are moved to the devices they were saved
from. If this fails (e.g. because the run time system doesn't have
certain devices), an exception is raised.
Parameters:
* f -- a file-like object (has to implement read, readline,
tell, and seek), or a string containing a file name
* **map_location** (*string** or **torch.device*) -- A
simplified version of "map_location" in *torch.jit.save* used
to dynamically remap storages to an alternative set of
devices.
* **_extra_files** (*dictionary of filename to content*) -- The
extra filenames given in the map would be loaded and their
content would be stored in the provided map.
Returns:
A "ScriptModule" object. | https://pytorch.org/docs/stable/generated/torch.jit.load.html | pytorch docs |
Example:
import torch
import io
torch.jit.load('scriptmodule.pt')
# Load ScriptModule from io.BytesIO object
with open('scriptmodule.pt', 'rb') as f:
buffer = io.BytesIO(f.read())
# Load all tensors to the original device
torch.jit.load(buffer)
# Load all tensors onto CPU, using a device
buffer.seek(0)
torch.jit.load(buffer, map_location=torch.device('cpu'))
# Load all tensors onto CPU, using a string
buffer.seek(0)
torch.jit.load(buffer, map_location='cpu')
# Load with extra files.
extra_files = {'foo.txt': ''} # values will be replaced with data
torch.jit.load('scriptmodule.pt', _extra_files=extra_files)
print(extra_files['foo.txt'])
| https://pytorch.org/docs/stable/generated/torch.jit.load.html | pytorch docs |
torch.Tensor.quantile
Tensor.quantile(q, dim=None, keepdim=False, *, interpolation='linear') -> Tensor
See "torch.quantile()" | https://pytorch.org/docs/stable/generated/torch.Tensor.quantile.html | pytorch docs |
torch.complex
torch.complex(real, imag, *, out=None) -> Tensor
Constructs a complex tensor with its real part equal to "real" and
its imaginary part equal to "imag".
Parameters:
* real (Tensor) -- The real part of the complex tensor.
Must be float or double.
* **imag** (*Tensor*) -- The imaginary part of the complex
tensor. Must be same dtype as "real".
Keyword Arguments:
out (Tensor) -- If the inputs are "torch.float32", must be
"torch.complex64". If the inputs are "torch.float64", must be
"torch.complex128".
Example:
>>> real = torch.tensor([1, 2], dtype=torch.float32)
>>> imag = torch.tensor([3, 4], dtype=torch.float32)
>>> z = torch.complex(real, imag)
>>> z
tensor([(1.+3.j), (2.+4.j)])
>>> z.dtype
torch.complex64
| https://pytorch.org/docs/stable/generated/torch.complex.html | pytorch docs |
torch._foreach_neg
torch._foreach_neg(self: List[Tensor]) -> List[Tensor]
Apply "torch.neg()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_neg.html | pytorch docs |
torch.lcm
torch.lcm(input, other, *, out=None) -> Tensor
Computes the element-wise least common multiple (LCM) of "input"
and "other".
Both "input" and "other" must have integer types.
Note:
This defines lcm(0, 0) = 0 and lcm(0, a) = 0.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([5, 10, 15])
>>> b = torch.tensor([3, 4, 5])
>>> torch.lcm(a, b)
tensor([15, 20, 15])
>>> c = torch.tensor([3])
>>> torch.lcm(a, c)
tensor([15, 30, 15])
| https://pytorch.org/docs/stable/generated/torch.lcm.html | pytorch docs |
torch._foreach_asin
torch._foreach_asin(self: List[Tensor]) -> List[Tensor]
Apply "torch.asin()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_asin.html | pytorch docs |
torch.isposinf
torch.isposinf(input, *, out=None) -> Tensor
Tests if each element of "input" is positive infinity or not.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([-float('inf'), float('inf'), 1.2])
>>> torch.isposinf(a)
tensor([False, True, False])
| https://pytorch.org/docs/stable/generated/torch.isposinf.html | pytorch docs |
ConvBn1d
class torch.ao.nn.intrinsic.qat.ConvBn1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)
A ConvBn1d module is a module fused from Conv1d and BatchNorm1d,
attached with FakeQuantize modules for weight, used in quantization
aware training.
We combined the interface of "torch.nn.Conv1d" and
"torch.nn.BatchNorm1d".
Similar to "torch.nn.Conv1d", with FakeQuantize modules initialized
to default.
Variables:
* freeze_bn --
* **weight_fake_quant** -- fake quant module for weight
| https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn1d.html | pytorch docs |
enable_fake_quant
class torch.quantization.fake_quantize.enable_fake_quant(mod)
Enable fake quantization for this module, if applicable. Example
usage:
# model is any PyTorch model
model.apply(torch.ao.quantization.enable_fake_quant)
| https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.enable_fake_quant.html | pytorch docs |
RNN
class torch.nn.RNN(args, *kwargs)
Applies a multi-layer Elman RNN with \tanh or \text{ReLU} non-
linearity to an input sequence.
For each element in the input sequence, each layer computes the
following function:
h_t = \tanh(x_t W_{ih}^T + b_{ih} + h_{t-1}W_{hh}^T + b_{hh})
where h_t is the hidden state at time t, x_t is the input at time
t, and h_{(t-1)} is the hidden state of the previous layer at
time t-1 or the initial hidden state at time 0. If
"nonlinearity" is "'relu'", then \text{ReLU} is used instead of
\tanh.
Parameters:
* input_size -- The number of expected features in the input
x
* **hidden_size** -- The number of features in the hidden state
*h*
* **num_layers** -- Number of recurrent layers. E.g., setting
"num_layers=2" would mean stacking two RNNs together to form a
*stacked RNN*, with the second RNN taking in outputs of the
| https://pytorch.org/docs/stable/generated/torch.nn.RNN.html | pytorch docs |
first RNN and computing the final results. Default: 1
* **nonlinearity** -- The non-linearity to use. Can be either
"'tanh'" or "'relu'". Default: "'tanh'"
* **bias** -- If "False", then the layer does not use bias
weights *b_ih* and *b_hh*. Default: "True"
* **batch_first** -- If "True", then the input and output
tensors are provided as *(batch, seq, feature)* instead of
*(seq, batch, feature)*. Note that this does not apply to
hidden or cell states. See the Inputs/Outputs sections below
for details. Default: "False"
* **dropout** -- If non-zero, introduces a *Dropout* layer on
the outputs of each RNN layer except the last layer, with
dropout probability equal to "dropout". Default: 0
* **bidirectional** -- If "True", becomes a bidirectional RNN.
Default: "False"
Inputs: input, h_0
* input: tensor of shape (L, H_{in}) for unbatched input, | https://pytorch.org/docs/stable/generated/torch.nn.RNN.html | pytorch docs |