torch.linalg.lstsq
Choosing "driver":
* For a general matrix: 'gelsy' (QR with pivoting) (default)
* If "A" is full-rank: 'gels' (QR)
* If "A" is not well-conditioned:
  * 'gelsd' (tridiagonal reduction and SVD)
  * But if you run into memory issues: 'gelss' (full SVD)
For CUDA input, the only valid driver is 'gels', which assumes
that "A" is full-rank.
See also the full description of these drivers.
"rcond" is used to determine the effective rank of the matrices in
"A" when "driver" is one of ('gelsy', 'gelsd', 'gelss'). In
this case, if \sigma_i are the singular values of A in decreasing
order, \sigma_i will be rounded down to zero if \sigma_i \leq
\text{rcond} \cdot \sigma_1. If "rcond"= None (default), "rcond"
is set to the machine precision of the dtype of "A" times max(m,
n).
This function returns the solution to the problem and some extra
information in a named tuple of four tensors *(solution, residuals,
rank, singular_values)*. For inputs "A", "B" of shape *(*, m, n)*,
*(*, m, k)* respectively, it contains
solution: the least squares solution. It has shape *(*, n, k)*.
residuals: the squared residuals of the solutions, that is,
|AX - B|_F^2. It has shape equal to the batch dimensions of
"A". It is computed when m > n and every matrix in "A" is full-
rank, otherwise, it is an empty tensor. If "A" is a batch of
matrices and any matrix in the batch is not full rank, then an
empty tensor is returned. This behavior may change in a future
PyTorch release.
rank: tensor of ranks of the matrices in "A". It has shape
equal to the batch dimensions of "A". It is computed when
"driver" is one of ('gelsy', 'gelsd', 'gelss'), otherwise
it is an empty tensor.
singular_values: tensor of singular values of the matrices in
"A". It has shape *(*, min(m, n))*. It is computed when "driver"
is one of ('gelsd', 'gelss'), otherwise it is an empty
tensor.
Note:
This function computes *X = "A".pinverse() @ "B"* in a faster
and more numerically stable way than performing the computations
separately.
Warning:
The default value of "rcond" may change in a future PyTorch
release. It is therefore recommended to use a fixed value to
avoid potential breaking changes.
Parameters:
    * **A** (*Tensor*) -- lhs tensor of shape *(*, m, n)* where ***
      is zero or more batch dimensions.
* **B** (*Tensor*) -- rhs tensor of shape *(*, m, k)* where ***
is zero or more batch dimensions.
* **rcond** (*float**, **optional*) -- used to determine the
effective rank of "A". If "rcond"*= None*, "rcond" is set to
the machine precision of the dtype of "A" times *max(m, n)*.
Default: *None*.
Keyword Arguments:
driver (str, optional) -- name of the LAPACK/MAGMA | https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html | pytorch docs |
method to be used. If None, 'gelsy' is used for CPU inputs
and 'gels' for CUDA inputs. Default: None.
Returns:
A named tuple (solution, residuals, rank, singular_values).
Examples:
>>> A = torch.randn(1,3,3)
>>> A
tensor([[[-1.0838, 0.0225, 0.2275],
[ 0.2438, 0.3844, 0.5499],
[ 0.1175, -0.9102, 2.0870]]])
>>> B = torch.randn(2,3,3)
>>> B
tensor([[[-0.6772, 0.7758, 0.5109],
[-1.4382, 1.3769, 1.1818],
[-0.3450, 0.0806, 0.3967]],
[[-1.3994, -0.1521, -0.1473],
[ 1.9194, 1.0458, 0.6705],
[-1.1802, -0.9796, 1.4086]]])
>>> X = torch.linalg.lstsq(A, B).solution # A is broadcasted to shape (2, 3, 3)
>>> torch.dist(X, torch.linalg.pinv(A) @ B)
tensor(1.5152e-06)
>>> S = torch.linalg.lstsq(A, B, driver='gelsd').singular_values
>>> torch.dist(S, torch.linalg.svdvals(A))
tensor(2.3842e-07)
>>> A[:, 0].zero_() # Decrease the rank of A
>>> rank = torch.linalg.lstsq(A, B).rank
>>> rank
tensor([2])
LinearLR
class torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.3333333333333333, end_factor=1.0, total_iters=5, last_epoch=- 1, verbose=False)
Decays the learning rate of each parameter group by linearly
changing a small multiplicative factor until the number of epochs
reaches a pre-defined milestone: total_iters. Notice that such
decay can happen simultaneously with other changes to the learning
rate from outside this scheduler. When last_epoch=-1, sets initial
lr as lr.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
    * **start_factor** (*float*) -- The number we multiply the
      learning rate by in the first epoch. The multiplication factor
      changes towards end_factor in the following epochs. Default: 1./3.
    * **end_factor** (*float*) -- The number we multiply the learning
      rate by at the end of the linear changing process. Default: 1.0.
    * **total_iters** (*int*) -- The number of iterations over which
      the multiplicative factor reaches end_factor. Default: 5.
* **last_epoch** (*int*) -- The index of the last epoch.
Default: -1.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.025    if epoch == 0
>>> # lr = 0.03125  if epoch == 1
>>> # lr = 0.0375   if epoch == 2
>>> # lr = 0.04375  if epoch == 3
>>> # lr = 0.05     if epoch >= 4
>>> scheduler = LinearLR(self.opt, start_factor=0.5, total_iters=4)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
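A runnable sketch of the same schedule (assuming a throwaway parameter and
torch.optim.SGD, which are not part of the example above) that prints the
learning rate produced at each epoch:

    import torch
    from torch.optim.lr_scheduler import LinearLR

    param = torch.nn.Parameter(torch.zeros(1))     # hypothetical model parameter
    opt = torch.optim.SGD([param], lr=0.05)
    scheduler = LinearLR(opt, start_factor=0.5, total_iters=4)

    for epoch in range(6):
        opt.step()                                 # would follow loss.backward() in real training
        print(epoch, scheduler.get_last_lr())      # 0.025, 0.03125, 0.0375, 0.04375, 0.05, 0.05
        scheduler.step()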
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
torch.sigmoid
torch.sigmoid(input, *, out=None) -> Tensor
Alias for "torch.special.expit()".
LazyBatchNorm2d
class torch.nn.LazyBatchNorm2d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
A "torch.nn.BatchNorm2d" module with lazy initialization of the
"num_features" argument of the "BatchNorm2d" that is inferred from
the "input.size(1)". The attributes that will be lazily initialized
are weight, bias, running_mean and running_var.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* eps (float) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Can be set to "None" for
cumulative moving average (i.e. simple average). Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters. Default:
"True"
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics, and initializes statistics buffers
"running_mean" and "running_var" as "None". When these buffers
are "None", this module always uses batch statistics. in both
training and eval modes. Default: "True"
cls_to_become
alias of "BatchNorm2d"
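A small sketch (input shape chosen arbitrarily) showing how the lazy module
infers "num_features" on the first forward pass:

    import torch
    import torch.nn as nn

    bn = nn.LazyBatchNorm2d()        # num_features is not known yet
    x = torch.randn(2, 3, 8, 8)      # (N, C, H, W)
    y = bn(x)                        # first call materializes weight, bias, running stats
    print(type(bn).__name__)         # BatchNorm2d, via cls_to_become
    print(bn.num_features)           # 3, inferred from input.size(1)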
torch.copysign
torch.copysign(input, other, *, out=None) -> Tensor
Create a new floating-point tensor with the magnitude of "input"
and the sign of "other", elementwise.
\text{out}_{i} = \begin{cases} -|\text{input}_{i}| &
\text{if } \text{other}_{i} \leq -0.0 \\ |\text{input}_{i}|
& \text{if } \text{other}_{i} \geq 0.0 \\ \end{cases}
Supports broadcasting to a common shape, and integer and float
inputs.
Parameters:
* input (Tensor) -- magnitudes.
* **other** (*Tensor** or **Number*) -- contains value(s) whose
signbit(s) are applied to the magnitudes in "input".
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(5)
>>> a
tensor([-1.2557, -0.0026, -0.5387, 0.4740, -0.9244])
>>> torch.copysign(a, 1)
tensor([1.2557, 0.0026, 0.5387, 0.4740, 0.9244])
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.7079, 0.2778, -1.0249, 0.5719],
[-0.0059, -0.2600, -0.4475, -1.3948],
[ 0.3667, -0.9567, -2.5757, -0.1751],
[ 0.2046, -0.0742, 0.2998, -0.1054]])
>>> b = torch.randn(4)
>>> b
tensor([ 0.2373,  0.3120,  0.3190, -1.1128])
>>> torch.copysign(a, b)
tensor([[ 0.7079, 0.2778, 1.0249, -0.5719],
[ 0.0059, 0.2600, 0.4475, -1.3948],
[ 0.3667, 0.9567, 2.5757, -0.1751],
[ 0.2046, 0.0742, 0.2998, -0.1054]])
>>> a = torch.tensor([1.])
>>> b = torch.tensor([-0.])
>>> torch.copysign(a, b)
tensor([-1.])
Note:
copysign handles signed zeros. If the other argument has a
negative zero (-0), the corresponding output value will be
negative.
torch.Tensor.histc
Tensor.histc(bins=100, min=0, max=0) -> Tensor
See "torch.histc()" | https://pytorch.org/docs/stable/generated/torch.Tensor.histc.html | pytorch docs |
torch.pca_lowrank
torch.pca_lowrank(A, q=None, center=True, niter=2)
Performs linear Principal Component Analysis (PCA) on a low-rank
matrix, batches of such matrices, or sparse matrix.
This function returns a namedtuple "(U, S, V)" which is the nearly
optimal approximation of a singular value decomposition of a
centered matrix A such that A = U diag(S) V^T.
Note:
The relation of "(U, S, V)" to PCA is as follows:
* A is a data matrix with "m" samples and "n" features
* the V columns represent the principal directions
* S ** 2 / (m - 1) contains the eigenvalues of A^T A / (m - 1)
which is the covariance of "A" when "center=True" is provided.
* "matmul(A, V[:, :k])" projects data to the first k principal
components
Note:
Different from the standard SVD, the size of returned matrices
depend on the specified rank and q values as follows:
    * U is m x q matrix
    * S is q-vector
    * V is n x q matrix
Note:
To obtain repeatable results, reset the seed for the pseudorandom
number generator
Parameters:
* A (Tensor) -- the input tensor of size (*, m, n)
* **q** (*int**, **optional*) -- a slightly overestimated rank
of A. By default, "q = min(6, m, n)".
* **center** (*bool**, **optional*) -- if True, center the input
tensor, otherwise, assume that the input is centered.
* **niter** (*int**, **optional*) -- the number of subspace
iterations to conduct; niter must be a nonnegative integer,
and defaults to 2.
Return type:
Tuple[Tensor, Tensor, Tensor]
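A short usage sketch (random data, sizes chosen arbitrarily) that projects
the samples onto the first k principal directions returned by this function:

    import torch

    torch.manual_seed(0)                 # reset the seed for repeatable results
    A = torch.randn(100, 6)              # m = 100 samples, n = 6 features
    U, S, V = torch.pca_lowrank(A, q=4, center=True, niter=2)

    k = 2
    projected = A @ V[:, :k]             # project data onto the first k components
    explained_var = S**2 / (A.shape[0] - 1)
    print(projected.shape, explained_var[:k])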
References:
- Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
structure with randomness: probabilistic algorithms for
constructing approximate matrix decompositions,
arXiv:0909.4061 [math.NA; math.PR], 2009 (available at
      arXiv <http://arxiv.org/abs/0909.4061>_).
torch.unique
torch.unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None) -> Tuple[Tensor, Tensor, Tensor]
Returns the unique elements of the input tensor.
Note:
This function is different from "torch.unique_consecutive()" in
the sense that this function also eliminates non-consecutive
duplicate values.
Note:
Currently in the CUDA implementation and the CPU implementation
    when dim is specified, *torch.unique* always sorts the tensor at
the beginning regardless of the *sort* argument. Sorting could be
slow, so if your input tensor is already sorted, it is
recommended to use "torch.unique_consecutive()" which avoids the
sorting.
Parameters:
* input (Tensor) -- the input tensor
* **sorted** (*bool*) -- Whether to sort the unique elements in
ascending order before returning as output.
* **return_inverse** (*bool*) -- Whether to also return the
indices for where elements in the original input ended up in
the returned unique list.
* **return_counts** (*bool*) -- Whether to also return the
counts for each unique element.
* **dim** (*int*) -- the dimension to apply unique. If "None",
the unique of the flattened input is returned. default: "None"
Returns:
A tensor or a tuple of tensors containing
* **output** (*Tensor*): the output list of unique scalar
elements.
* **inverse_indices** (*Tensor*): (optional) if
"return_inverse" is True, there will be an additional
returned tensor (same shape as input) representing the
indices for where elements in the original input map to in
the output; otherwise, this function will only return a
single tensor.
* **counts** (*Tensor*): (optional) if "return_counts" is
True, there will be an additional returned tensor (same
shape as output or output.size(dim), if dim was specified)
representing the number of occurrences for each unique
value or tensor.
Return type:
(Tensor, Tensor (optional), Tensor (optional))
Example:
>>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))
>>> output
tensor([1, 2, 3])
>>> output, inverse_indices = torch.unique(
... torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([1, 2, 3])
>>> inverse_indices
tensor([0, 2, 1, 2])
>>> output, inverse_indices = torch.unique(
... torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True)
>>> output
tensor([1, 2, 3])
>>> inverse_indices
tensor([[0, 2],
[1, 2]])
AdaptiveLogSoftmaxWithLoss
class torch.nn.AdaptiveLogSoftmaxWithLoss(in_features, n_classes, cutoffs, div_value=4.0, head_bias=False, device=None, dtype=None)
Efficient softmax approximation as described in Efficient softmax
approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha
Cissé, David Grangier, and Hervé Jégou.
Adaptive softmax is an approximate strategy for training models
with large output spaces. It is most effective when the label
distribution is highly imbalanced, for example in natural language
modelling, where the word frequency distribution approximately
follows Zipf's law.
Adaptive softmax partitions the labels into several clusters,
according to their frequency. These clusters may contain different
numbers of targets each. Additionally, clusters containing less
frequent labels assign lower-dimensional embeddings to those
labels, which speeds up the computation. For each minibatch, only
clusters for which at least one target is present are evaluated.
The idea is that the clusters which are accessed frequently (like
the first one, containing most frequent labels), should also be
cheap to compute -- that is, contain a small number of assigned
labels.
We highly recommend taking a look at the original paper for more
details.
"cutoffs" should be an ordered Sequence of integers sorted in the
increasing order. It controls number of clusters and the
partitioning of targets into clusters. For example setting
"cutoffs = [10, 100, 1000]" means that first 10 targets will be
assigned to the 'head' of the adaptive softmax, targets 11, 12,
..., 100 will be assigned to the first cluster, and targets
101, 102, ..., 1000 will be assigned to the second cluster,
while targets 1001, 1002, ..., n_classes - 1 will be assigned
to the last, third cluster.
"div_value" is used to compute the size of each additional
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html | pytorch docs |
cluster, which is given as \left\lfloor\frac{\texttt{in_feature
s}}{\texttt{div_value}^{idx}}\right\rfloor, where idx is the
cluster index (with clusters for less frequent words having
larger indices, and indices starting from 1).
"head_bias" if set to True, adds a bias term to the 'head' of the
adaptive softmax. See paper for details. Set to False in the
official implementation.
Warning:
Labels passed as inputs to this module should be sorted according
to their frequency. This means that the most frequent label
should be represented by the index *0*, and the least frequent
label should be represented by the index *n_classes - 1*.
Note:
This module returns a "NamedTuple" with "output" and "loss"
fields. See further documentation for details.
Note:
To compute log-probabilities for all classes, the "log_prob"
method can be used.
Parameters:
    * in_features (int) -- Number of features in the input
tensor
* **n_classes** (*int*) -- Number of classes in the dataset
* **cutoffs** (*Sequence*) -- Cutoffs used to assign targets to
their buckets
* **div_value** (*float**, **optional*) -- value used as an
exponent to compute sizes of the clusters. Default: 4.0
* **head_bias** (*bool**, **optional*) -- If "True", adds a bias
term to the 'head' of the adaptive softmax. Default: "False"
Returns:
* output is a Tensor of size "N" containing computed target
log probabilities for each example
* **loss** is a Scalar representing the computed negative log
likelihood loss
Return type:
"NamedTuple" with "output" and "loss" fields
Shape:
* input: (N, \texttt{in_features}) or (\texttt{in_features})
* target: (N) or () where each value satisfies 0 <=
\texttt{target[i]} <= \texttt{n\_classes}
* output1: (N) or ()
* output2: "Scalar"
log_prob(input)
Computes log probabilities for all \texttt{n\_classes}
Parameters:
**input** (*Tensor*) -- a minibatch of examples
Returns:
        log-probabilities for each class c in range 0 <= c <=
\texttt{n\_classes}, where \texttt{n\_classes} is a parameter
passed to "AdaptiveLogSoftmaxWithLoss" constructor.
Return type:
*Tensor*
Shape:
* Input: (N, \texttt{in\_features})
* Output: (N, \texttt{n\_classes})
predict(input)
This is equivalent to *self.log_prob(input).argmax(dim=1)*, but
is more efficient in some cases.
Parameters:
**input** (*Tensor*) -- a minibatch of examples
Returns:
a class with the highest probability for each example
Return type:
output (Tensor)
Shape:
* Input: (N, \texttt{in\_features})
* Output: (N)
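A compact usage sketch (dimensions and cutoffs chosen arbitrarily) showing the
forward call, which returns the "(output, loss)" named tuple, alongside the
"log_prob" and "predict" helpers:

    import torch
    import torch.nn as nn

    asm = nn.AdaptiveLogSoftmaxWithLoss(
        in_features=64, n_classes=1000, cutoffs=[10, 100, 500])

    x = torch.randn(32, 64)                  # minibatch of hidden representations
    target = torch.randint(0, 1000, (32,))   # labels; sort by frequency in real use

    out, loss = asm(x, target)               # per-example target log-probs and NLL loss
    log_probs = asm.log_prob(x)              # (32, 1000) log-probabilities over all classes
    preds = asm.predict(x)                   # (32,) highest-probability class per example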
torch.atleast_1d
torch.atleast_1d(*tensors)
Returns a 1-dimensional view of each input tensor with zero
dimensions. Input tensors with one or more dimensions are returned
as-is.
Parameters:
input (Tensor or list of Tensors) --
Returns:
output (Tensor or tuple of Tensors)
Example:
>>> x = torch.arange(2)
>>> x
tensor([0, 1])
>>> torch.atleast_1d(x)
tensor([0, 1])
>>> x = torch.tensor(1.)
>>> x
tensor(1.)
>>> torch.atleast_1d(x)
tensor([1.])
>>> x = torch.tensor(0.5)
>>> y = torch.tensor(1.)
>>> torch.atleast_1d((x, y))
(tensor([0.5000]), tensor([1.]))
torch.set_flush_denormal
torch.set_flush_denormal(mode) -> bool
Disables denormal floating numbers on CPU.
Returns "True" if your system supports flushing denormal numbers
and it successfully configures flush denormal mode.
"set_flush_denormal()" is only supported on x86 architectures
supporting SSE3.
Parameters:
mode (bool) -- Controls whether to enable flush denormal
mode or not
Example:
>>> torch.set_flush_denormal(True)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor([ 0.], dtype=torch.float64)
>>> torch.set_flush_denormal(False)
True
>>> torch.tensor([1e-323], dtype=torch.float64)
tensor(9.88131e-324 *
[ 1.0000], dtype=torch.float64)
SiLU
class torch.nn.SiLU(inplace=False)
Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The
SiLU function is also known as the swish function.
\text{silu}(x) = x * \sigma(x), \text{where } \sigma(x) \text{
is the logistic sigmoid.}
Note:
See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid
Linear Unit) was originally coined, and see Sigmoid-Weighted
Linear Units for Neural Network Function Approximation in
Reinforcement Learning and Swish: a Self-Gated Activation
Function where the SiLU was experimented with later.
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.SiLU()
>>> input = torch.randn(2)
>>> output = m(input)
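A quick numerical check (a sketch, not part of the upstream examples) that the
module matches the definition above:

    import torch
    import torch.nn as nn

    x = torch.randn(5)
    m = nn.SiLU()
    # silu(x) = x * sigmoid(x), so the two should agree up to floating-point error
    print(torch.allclose(m(x), x * torch.sigmoid(x)))   # True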
torch.Tensor.flatten
Tensor.flatten(start_dim=0, end_dim=- 1) -> Tensor
See "torch.flatten()" | https://pytorch.org/docs/stable/generated/torch.Tensor.flatten.html | pytorch docs |
ELU
class torch.ao.nn.quantized.ELU(scale, zero_point, alpha=1.0)
This is the quantized equivalent of "ELU".
Parameters:
* scale -- quantization scale of the output tensor
* **zero_point** -- quantization zero point of the output tensor
* **alpha** (*float*) -- the alpha constant
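A minimal sketch (values and quantization parameters chosen arbitrarily,
assuming the active quantized engine supports ELU on quint8 tensors) of
applying the module to a quantized tensor:

    import torch

    x = torch.randn(4)
    qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=128, dtype=torch.quint8)

    m = torch.ao.nn.quantized.ELU(scale=0.05, zero_point=128, alpha=1.0)
    qy = m(qx)                    # quantized output tensor
    print(qy.dequantize())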
torch.nn.functional.torch.nn.parallel.data_parallel
torch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)
Evaluates module(input) in parallel across the GPUs given in
device_ids.
This is the functional version of the DataParallel module.
Parameters:
* module (Module) -- the module to evaluate in parallel
* **inputs** (*Tensor*) -- inputs to the module
* **device_ids** (*list of python:int** or **torch.device*) --
GPU ids on which to replicate module
    * **output_device** (*list of python:int** or **torch.device*)
      -- GPU location of the output. Use -1 to indicate the CPU.
      (default: device_ids[0])
Returns:
a Tensor containing the result of module(input) located on
    output_device
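A usage sketch (requires at least two visible CUDA devices; the module and
input here are placeholders):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 5).cuda(0)
    inputs = torch.randn(8, 10, device='cuda:0')

    # Replicates `model` on GPUs 0 and 1, scatters `inputs` along dim 0,
    # and gathers the results back on device 0.
    out = nn.parallel.data_parallel(model, inputs, device_ids=[0, 1])
    print(out.shape)   # torch.Size([8, 5])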
torch.cuda.init
torch.cuda.init()
Initialize PyTorch's CUDA state. You may need to call this
explicitly if you are interacting with PyTorch via its C API, as
Python bindings for CUDA functionality will not be available until
this initialization takes place. Ordinary users should not need
this, as all of PyTorch's CUDA methods automatically initialize
CUDA state on-demand.
Does nothing if the CUDA state is already initialized.
torch.Tensor.real
Tensor.real
Returns a new tensor containing real values of the "self" tensor
for a complex-valued input tensor. The returned tensor and "self"
share the same underlying storage.
Returns "self" if "self" is a real-valued tensor tensor.
Example::
>>> x=torch.randn(4, dtype=torch.cfloat)
>>> x
tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])
>>> x.real
    tensor([ 0.3100, -0.5445, -1.6492, -0.0638])
torch.signal.windows.bartlett
torch.signal.windows.bartlett(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the Bartlett window.
The Bartlett window is defined as follows:
w_n = 1 - \left| \frac{2n}{M - 1} - 1 \right| = \begin{cases}
\frac{2n}{M - 1} & \text{if } 0 \leq n \leq \frac{M - 1}{2} \\
2 - \frac{2n}{M - 1} & \text{if } \frac{M - 1}{2} < n < M \\
\end{cases}
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* sym (bool, optional) -- If False, returns a
periodic window suitable for use in spectral analysis. If
True, returns a symmetric window suitable for use in filter
      design. Default: True.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric Bartlett window.
>>> torch.signal.windows.bartlett(10)
tensor([0.0000, 0.2222, 0.4444, 0.6667, 0.8889, 0.8889, 0.6667, 0.4444, 0.2222, 0.0000])
>>> # Generates a periodic Bartlett window.
>>> torch.signal.windows.bartlett(10, sym=False)
tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
torch.poisson
torch.poisson(input, generator=None) -> Tensor
Returns a tensor of the same size as "input" with each element
sampled from a Poisson distribution with rate parameter given by
the corresponding element in "input" i.e.,
\text{out}_i \sim \text{Poisson}(\text{input}_i)
"input" must be non-negative.
Parameters:
input (Tensor) -- the input tensor containing the rates of
the Poisson distribution
Keyword Arguments:
generator ("torch.Generator", optional) -- a pseudorandom
number generator for sampling
Example:
>>> rates = torch.rand(4, 4) * 5 # rate parameter between 0 and 5
>>> torch.poisson(rates)
tensor([[9., 1., 3., 5.],
[8., 6., 6., 0.],
[0., 4., 5., 3.],
[2., 1., 4., 2.]])
torch.asin
torch.asin(input, *, out=None) -> Tensor
Returns a new tensor with the arcsine of the elements of "input".
\text{out}_{i} = \sin^{-1}(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-0.5962, 1.4985, -0.4396, 1.4525])
>>> torch.asin(a)
tensor([-0.6387, nan, -0.4552, nan])
torch.Tensor.arcsin_
Tensor.arcsin_() -> Tensor
In-place version of "arcsin()"
torch.Tensor.geqrf
Tensor.geqrf()
See "torch.geqrf()" | https://pytorch.org/docs/stable/generated/torch.Tensor.geqrf.html | pytorch docs |
torch.Tensor.where
Tensor.where(condition, y) -> Tensor
"self.where(condition, y)" is equivalent to "torch.where(condition,
self, y)". See "torch.where()" | https://pytorch.org/docs/stable/generated/torch.Tensor.where.html | pytorch docs |
torch.sym_min
torch.sym_min(a, b)
SymInt-aware utility for min().
torch.index_add
torch.index_add(input, dim, index, source, *, alpha=1, out=None) -> Tensor
See "index_add_()" for function description. | https://pytorch.org/docs/stable/generated/torch.index_add.html | pytorch docs |
HuberLoss
class torch.nn.HuberLoss(reduction='mean', delta=1.0)
Creates a criterion that uses a squared term if the absolute
element-wise error falls below delta and a delta-scaled L1 term
otherwise. This loss combines advantages of both "L1Loss" and
"MSELoss"; the delta-scaled L1 region makes the loss less sensitive
to outliers than "MSELoss", while the L2 region provides smoothness
over "L1Loss" near 0. See Huber loss for more information.
For a batch of size N, the unreduced loss can be described as:
\ell(x, y) = L = \{l_1, ..., l_N\}^T
with
l_n = \begin{cases} 0.5 (x_n - y_n)^2, & \text{if } |x_n - y_n|
< delta \\ delta * (|x_n - y_n| - 0.5 * delta), &
\text{otherwise } \end{cases}
If reduction is not none, then:
\ell(x, y) = \begin{cases} \operatorname{mean}(L), &
\text{if reduction} = \text{`mean';}\\
\operatorname{sum}(L), & \text{if reduction} = \text{`sum'.}
\end{cases}
Note:
When delta is set to 1, this loss is equivalent to
"SmoothL1Loss". In general, this loss differs from "SmoothL1Loss"
by a factor of delta (AKA beta in Smooth L1). See "SmoothL1Loss"
for additional discussion on the differences in behavior between
the two losses.
Parameters:
* reduction (str, optional) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Default: "'mean'"
* **delta** (*float**, **optional*) -- Specifies the threshold
at which to change between delta-scaled L1 and L2 loss. The
value must be positive. Default: 1.0
Shape:
* Input: (*) where * means any number of dimensions.
* Target: (*), same shape as the input.
    * Output: scalar. If "reduction" is "'none'", then (*), same
shape as the input.
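A short usage sketch (random inputs) of the criterion:

    import torch
    import torch.nn as nn

    loss_fn = nn.HuberLoss(reduction='mean', delta=1.0)
    input = torch.randn(3, 5, requires_grad=True)
    target = torch.randn(3, 5)

    loss = loss_fn(input, target)    # scalar, mean-reduced Huber loss
    loss.backward()                  # gradients flow back to `input`
    print(loss.item())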
Module
class torch.nn.Module
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested
in a tree structure. You can assign the submodules as regular
attributes:
    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))
Submodules assigned in this way will be registered, and will have
their parameters converted too when you call "to()", etc.
Note:
As per the example above, an "__init__()" call to the parent
class must be made before assignment on the child.
Variables:
    training (bool) -- Boolean representing whether this module
is in training or evaluation mode.
add_module(name, module)
Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
Parameters:
* **name** (*str*) -- name of the child module. The child
module can be accessed from this module using the given
name
* **module** (*Module*) -- child module to be added to the
module.
apply(fn)
Applies "fn" recursively to every submodule (as returned by
".children()") as well as self. Typical use includes
initializing the parameters of a model (see also torch.nn.init).
Parameters:
**fn** ("Module" -> None) -- function to be applied to each
submodule
Returns:
self
Return type:
Module
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>> print(m)
>>> if type(m) == nn.Linear:
>>> m.weight.fill_(1.0)
>>> print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
[1., 1.]], requires_grad=True)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[1., 1.],
[1., 1.]], requires_grad=True)
Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
bfloat16()
Casts all floating point parameters and buffers to "bfloat16"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
buffers(recurse=True)
Returns an iterator over module buffers.
Parameters:
recurse (bool) -- if True, then yields buffers of this
module and all submodules. Otherwise, yields only buffers
that are direct members of this module.
Yields:
*torch.Tensor* -- module buffer
Return type:
*Iterator*[*Tensor*]
Example:
>>> for buf in model.buffers():
>>> print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
children()
Returns an iterator over immediate children modules.
Yields:
*Module* -- a child module
Return type:
*Iterator*[*Module*]
cpu()
Moves all model parameters and buffers to the CPU.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
cuda(device=None)
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different
        objects. So it should be called before constructing the optimizer if
the module will live on GPU while being optimized.
Note:
This method modifies the module in-place.
Parameters:
**device** (*int**, **optional*) -- if specified, all
parameters will be copied to that device
Returns:
self
Return type:
Module
double()
Casts all floating point parameters and buffers to "double"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
eval()
Sets the module in evaluation mode.
        This has an effect only on certain modules. See the
        documentation of particular modules for details of their
        behaviors in training/evaluation mode, if they are affected,
        e.g. "Dropout", "BatchNorm", etc.
        This is equivalent to "self.train(False)".
See Locally disabling gradient computation for a comparison
between .eval() and several similar mechanisms that may be
confused with it.
Returns:
self
Return type:
Module
extra_repr()
Set the extra representation of the module
To print customized extra information, you should re-implement
this method in your own modules. Both single-line and multi-line
strings are acceptable.
Return type:
str
float()
Casts all floating point parameters and buffers to "float"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
forward(*input)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note:
Although the recipe for forward pass needs to be defined
within this function, one should call the "Module" instance
afterwards instead of this since the former takes care of
running the registered hooks while the latter silently ignores
them.
get_buffer(target)
Returns the buffer given by "target" if it exists, otherwise
throws an error.
See the docstring for "get_submodule" for a more detailed
explanation of this method's functionality as well as how to
correctly specify "target".
Parameters:
**target** (*str*) -- The fully-qualified string name of the
buffer to look for. (See "get_submodule" for how to specify a
fully-qualified string.)
Returns:
The buffer referenced by "target"
Return type:
torch.Tensor
Raises:
**AttributeError** -- If the target string references an
invalid path or resolves to something that is not a
buffer
get_extra_state()
Returns any extra state to include in the module's state_dict.
Implement this and a corresponding "set_extra_state()" for your
module if you need to store extra state. This function is called
when building the module's state_dict().
Note that extra state should be picklable to ensure working
        serialization of the state_dict. We only provide
backwards compatibility guarantees for serializing Tensors;
other objects may break backwards compatibility if their
serialized pickled form changes.
Returns:
Any extra state to store in the module's state_dict
Return type:
object
get_parameter(target)
Returns the parameter given by "target" if it exists, otherwise
throws an error.
See the docstring for "get_submodule" for a more detailed
explanation of this method's functionality as well as how to
correctly specify "target".
Parameters:
**target** (*str*) -- The fully-qualified string name of the
Parameter to look for. (See "get_submodule" for how to
specify a fully-qualified string.)
Returns:
The Parameter referenced by "target"
Return type:
torch.nn.Parameter
Raises:
**AttributeError** -- If the target string references an
invalid path or resolves to something that is not an
"nn.Parameter"
get_submodule(target)
Returns the submodule given by "target" if it exists, otherwise
throws an error.
For example, let's say you have an "nn.Module" "A" that looks
like this:
A(
(net_b): Module(
(net_c): Module(
(conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))
)
(linear): Linear(in_features=100, out_features=200, bias=True)
)
)
(The diagram shows an "nn.Module" "A". "A" has a nested
submodule "net_b", which itself has two submodules "net_c" and
"linear". "net_c" then has a submodule "conv".)
To check whether or not we have the "linear" submodule, we would
call "get_submodule("net_b.linear")". To check whether we have
the "conv" submodule, we would call
"get_submodule("net_b.net_c.conv")".
The runtime of "get_submodule" is bounded by the degree of
module nesting in "target". A query against "named_modules"
achieves the same result, but it is O(N) in the number of
transitive modules. So, for a simple check to see if some
submodule exists, "get_submodule" should always be used.
Parameters:
**target** (*str*) -- The fully-qualified string name of the
submodule to look for. (See above example for how to specify
a fully-qualified string.)
Returns:
The submodule referenced by "target"
Return type:
torch.nn.Module
Raises:
**AttributeError** -- If the target string references an
invalid path or resolves to something that is not an
"nn.Module"
half()
Casts all floating point parameters and buffers to "half"
datatype.
Note:
This method modifies the module in-place.
Returns:
self
Return type:
Module
ipu(device=None)
Moves all model parameters and buffers to the IPU.
This also makes associated parameters and buffers different
        objects. So it should be called before constructing the optimizer if
the module will live on IPU while being optimized.
Note:
This method modifies the module in-place.
Parameters:
**device** (*int**, **optional*) -- if specified, all
parameters will be copied to that device
Returns:
self
Return type:
Module
load_state_dict(state_dict, strict=True)
Copies parameters and buffers from "state_dict" into this module
and its descendants. If "strict" is "True", then the keys of
"state_dict" must exactly match the keys returned by this
module's "state_dict()" function.
Parameters:
* **state_dict** (*dict*) -- a dict containing parameters and
persistent buffers.
* **strict** (*bool**, **optional*) -- whether to strictly
enforce that the keys in "state_dict" match the keys
returned by this module's "state_dict()" function. Default:
"True"
Returns:
* **missing_keys** is a list of str containing the missing
keys
* **unexpected_keys** is a list of str containing the
unexpected keys
Return type:
"NamedTuple" with "missing_keys" and "unexpected_keys" fields
Note:
If a parameter or buffer is registered as "None" and its
corresponding key exists in "state_dict", "load_state_dict()"
will raise a "RuntimeError".
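        A minimal sketch of round-tripping a state dict between two modules
        with the same architecture:

            import torch
            import torch.nn as nn

            src = nn.Linear(4, 2)
            dst = nn.Linear(4, 2)

            result = dst.load_state_dict(src.state_dict(), strict=True)
            print(result.missing_keys, result.unexpected_keys)   # [] []
            print(torch.equal(dst.weight, src.weight))           # True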
modules()
Returns an iterator over all modules in the network.
Yields:
Module -- a module in the network
Return type:
*Iterator*[*Module*]
Note:
Duplicate modules are returned only once. In the following
example, "l" will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()):
... print(idx, '->', m)
0 -> Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
named_buffers(prefix='', recurse=True, remove_duplicate=True)
Returns an iterator over module buffers, yielding both the name
of the buffer as well as the buffer itself.
Parameters:
* **prefix** (*str*) -- prefix to prepend to all buffer
names.
* **recurse** (*bool**, **optional*) -- if True, then yields
buffers of this module and all submodules. Otherwise,
yields only buffers that are direct members of this module.
Defaults to True.
* **remove_duplicate** (*bool**, **optional*) -- whether to
remove the duplicated buffers in the result. Defaults to
True.
Yields:
*(str, torch.Tensor)* -- Tuple containing the name and buffer
Return type:
*Iterator*[*Tuple*[str, *Tensor*]]
Example:
>>> for name, buf in self.named_buffers():
>>> if name in ['running_var']:
>>> print(buf.size())
named_children()
Returns an iterator over immediate children modules, yielding
both the name of the module as well as the module itself.
Yields:
*(str, Module)* -- Tuple containing a name and child module
Return type:
*Iterator*[*Tuple*[str, *Module*]]
Example:
>>> for name, module in model.named_children():
        >>>     if name in ['conv4', 'conv5']:
>>> print(module)
named_modules(memo=None, prefix='', remove_duplicate=True)
Returns an iterator over all modules in the network, yielding
both the name of the module as well as the module itself.
Parameters:
* **memo** (*Optional**[**Set**[**Module**]**]*) -- a memo to
store the set of modules already added to the result
* **prefix** (*str*) -- a prefix that will be added to the
name of the module
* **remove_duplicate** (*bool*) -- whether to remove the
duplicated module instances in the result or not
Yields:
*(str, Module)* -- Tuple of name and module
Note:
Duplicate modules are returned only once. In the following
example, "l" will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()):
... print(idx, '->', m)
0 -> ('', Sequential(
(0): Linear(in_features=2, out_features=2, bias=True)
(1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
named_parameters(prefix='', recurse=True, remove_duplicate=True)
Returns an iterator over module parameters, yielding both the
name of the parameter as well as the parameter itself.
Parameters:
* **prefix** (*str*) -- prefix to prepend to all parameter
names.
* **recurse** (*bool*) -- if True, then yields parameters of
this module and all submodules. Otherwise, yields only
parameters that are direct members of this module.
* **remove_duplicate** (*bool**, **optional*) -- whether to
remove the duplicated parameters in the result. Defaults to
True.
Yields:
*(str, Parameter)* -- Tuple containing the name and parameter
Return type:
*Iterator*[*Tuple*[str, *Parameter*]]
Example:
>>> for name, param in self.named_parameters():
>>> if name in ['bias']:
>>> print(param.size())
parameters(recurse=True)
Returns an iterator over module parameters.
This is typically passed to an optimizer.
Parameters:
**recurse** (*bool*) -- if True, then yields parameters of
this module and all submodules. Otherwise, yields only
parameters that are direct members of this module.
Yields:
*Parameter* -- module parameter
Return type:
*Iterator*[*Parameter*]
Example:
>>> for param in model.parameters():
>>> print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
register_backward_hook(hook)
Registers a backward hook on the module.
This function is deprecated in favor of
"register_full_backward_hook()" and the behavior of this
function will change in future versions.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_buffer(name, tensor, persistent=True)
Adds a buffer to the module.
This is typically used to register a buffer that should not to
be considered a model parameter. For example, BatchNorm's
"running_mean" is not a parameter, but is part of the module's
state. Buffers, by default, are persistent and will be saved
alongside parameters. This behavior can be changed by setting
"persistent" to "False". The only difference between a
persistent buffer and a non-persistent buffer is that the latter
will not be a part of this module's "state_dict".
Buffers can be accessed as attributes using given names.
Parameters:
* **name** (*str*) -- name of the buffer. The buffer can be
accessed from this module using the given name
* **tensor** (*Tensor** or **None*) -- buffer to be
registered. If "None", then operations that run on buffers,
such as "cuda", are ignored. If "None", the buffer is
**not** included in the module's "state_dict".
* **persistent** (*bool*) -- whether the buffer is part of
this module's "state_dict".
Example:
>>> self.register_buffer('running_mean', torch.zeros(num_features))
register_forward_hook(hook, *, prepend=False, with_kwargs=False)
Registers a forward hook on the module.
The hook will be called every time after "forward()" has
computed an output.
If "with_kwargs" is "False" or not specified, the input contains
only the positional arguments given to the module. Keyword
arguments won't be passed to the hooks and only to the
"forward". The hook can modify the output. It can modify the
input inplace but it will not have effect on forward since this
is called after "forward()" is called. The hook should have the
following signature:
hook(module, args, output) -> None or modified output
If "with_kwargs" is "True", the forward hook will be passed the
"kwargs" given to the forward function and be expected to return
the output possibly modified. The hook should have the following
signature:
hook(module, args, kwargs, output) -> None or modified output
Parameters:
* **hook** (*Callable*) -- The user defined hook to be
registered.
* **prepend** (*bool*) -- If "True", the provided "hook" will
be fired before all existing "forward" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "forward" hooks on this
"torch.nn.modules.Module". Note that global "forward" hooks
registered with "register_module_forward_hook()" will fire
before all hooks registered by this method. Default:
"False"
* **with_kwargs** (*bool*) -- If "True", the "hook" will be
passed the kwargs given to the forward function. Default:
"False"
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_forward_pre_hook(hook, *, prepend=False, with_kwargs=False)
Registers a forward pre-hook on the module.
The hook will be called every time before "forward()" is
invoked.
If "with_kwargs" is false or not specified, the input contains
only the positional arguments given to the module. Keyword
arguments won't be passed to the hooks and only to the
"forward". The hook can modify the input. User can either return
a tuple or a single modified value in the hook. We will wrap the
value into a tuple if a single value is returned (unless that
value is already a tuple). The hook should have the following
signature:
hook(module, args) -> None or modified input
If "with_kwargs" is true, the forward pre-hook will be passed
the kwargs given to the forward function. And if the hook
modifies the input, both the args and kwargs should be returned.
The hook should have the following signature:
hook(module, args, kwargs) -> None or a tuple of modified input and kwargs
Parameters:
* **hook** (*Callable*) -- The user defined hook to be
registered.
* **prepend** (*bool*) -- If true, the provided "hook" will
be fired before all existing "forward_pre" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "forward_pre" hooks on
this "torch.nn.modules.Module". Note that global
"forward_pre" hooks registered with
"register_module_forward_pre_hook()" will fire before all
hooks registered by this method. Default: "False"
* **with_kwargs** (*bool*) -- If true, the "hook" will be
passed the kwargs given to the forward function. Default:
"False"
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_full_backward_hook(hook, prepend=False)
Registers a backward hook on the module.
The hook will be called every time the gradients with respect to
a module are computed, i.e. the hook will execute if and only if
the gradients with respect to module outputs are computed. The
hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The "grad_input" and "grad_output" are tuples that contain the
gradients with respect to the inputs and outputs respectively.
The hook should not modify its arguments, but it can optionally
return a new gradient with respect to the input that will be
used in place of "grad_input" in subsequent computations.
"grad_input" will only correspond to the inputs given as
positional arguments and all kwarg arguments are ignored.
Entries in "grad_input" and "grad_output" will be "None" for all
non-Tensor arguments.
For technical reasons, when this hook is applied to a Module,
its forward function will receive a view of each Tensor passed
to the Module. Similarly the caller will receive a view of each
Tensor returned by the Module's forward function.
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
Warning:
Modifying inputs or outputs inplace is not allowed when using
backward hooks and will raise an error.
Parameters:
* **hook** (*Callable*) -- The user-defined hook to be
registered.
* **prepend** (*bool*) -- If true, the provided "hook" will
be fired before all existing "backward" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "backward" hooks on this
"torch.nn.modules.Module". Note that global "backward"
hooks registered with
"register_module_full_backward_hook()" will fire before all
hooks registered by this method.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_full_backward_pre_hook(hook, prepend=False)
Registers a backward pre-hook on the module.
The hook will be called every time the gradients for the module
are computed. The hook should have the following signature:
hook(module, grad_output) -> Tensor or None
The "grad_output" is a tuple. The hook should not modify its
arguments, but it can optionally return a new gradient with
respect to the output that will be used in place of
"grad_output" in subsequent computations. Entries in
"grad_output" will be "None" for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module,
its forward function will receive a view of each Tensor passed
to the Module. Similarly the caller will receive a view of each
Tensor returned by the Module's forward function.
Warning:
Modifying inputs inplace is not allowed when using backward
hooks and will raise an error.
Parameters:
* **hook** (*Callable*) -- The user-defined hook to be
registered.
* **prepend** (*bool*) -- If true, the provided "hook" will
be fired before all existing "backward_pre" hooks on this
"torch.nn.modules.Module". Otherwise, the provided "hook"
will be fired after all existing "backward_pre" hooks on
this "torch.nn.modules.Module". Note that global
"backward_pre" hooks registered with
"register_module_full_backward_pre_hook()" will fire before
all hooks registered by this method.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_load_state_dict_post_hook(hook)
Registers a post hook to be run after module's "load_state_dict"
is called.
        It should have the following signature:
hook(module, incompatible_keys) -> None
The "module" argument is the current module that this hook is
registered on, and the "incompatible_keys" argument is a
"NamedTuple" consisting of attributes "missing_keys" and
"unexpected_keys". "missing_keys" is a "list" of "str"
containing the missing keys and "unexpected_keys" is a "list" of
"str" containing the unexpected keys.
The given incompatible_keys can be modified inplace if needed.
Note that the checks performed when calling "load_state_dict()"
with "strict=True" are affected by modifications the hook makes
to "missing_keys" or "unexpected_keys", as expected. Additions
to either set of keys will result in an error being thrown when
"strict=True", and clearing out both missing and unexpected keys
will avoid an error.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle"
register_module(name, module)
Alias for "add_module()".
register_parameter(name, param)
Adds a parameter to the module.
The parameter can be accessed as an attribute using given name.
Parameters:
* **name** (*str*) -- name of the parameter. The parameter
can be accessed from this module using the given name
* **param** (*Parameter** or **None*) -- parameter to be
added to the module. If "None", then operations that run on
parameters, such as "cuda", are ignored. If "None", the
parameter is **not** included in the module's "state_dict".
register_state_dict_pre_hook(hook)
These hooks will be called with arguments: "self", "prefix", and
"keep_vars" before calling "state_dict" on "self". The
registered hooks can be used to perform pre-processing before
the "state_dict" call is made.
requires_grad_(requires_grad=True)
Change if autograd should record operations on parameters in
this module.
This method sets the parameters' "requires_grad" attributes in-
place.
This method is helpful for freezing part of the module for
finetuning or training parts of a model individually (e.g., GAN
training).
See Locally disabling gradient computation for a comparison
between *.requires_grad_()* and several similar mechanisms that
may be confused with it.
Parameters:
**requires_grad** (*bool*) -- whether autograd should record
operations on parameters in this module. Default: "True".
Returns:
self
Return type:
Module
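    Example (a minimal freezing sketch; "backbone" is an arbitrary
    name):
        >>> backbone = nn.Linear(4, 2)
        >>> backbone.requires_grad_(False)
        Linear(in_features=4, out_features=2, bias=True)
        >>> any(p.requires_grad for p in backbone.parameters())
        False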
set_extra_state(state)
This function is called from "load_state_dict()" to handle any
extra state found within the *state_dict*. Implement this
function and a corresponding "get_extra_state()" for your module
if you need to store extra state within its *state_dict*.
Parameters:
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
state (dict) -- Extra state from the state_dict
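    Example (an illustrative sketch of a matching "get_extra_state()"
    / "set_extra_state()" pair; "MyModule" and "note" are placeholder
    names):
        >>> class MyModule(nn.Module):
        ...     def __init__(self):
        ...         super().__init__()
        ...         self.note = "default"
        ...     def get_extra_state(self):
        ...         # stored under this module's prefix in the state_dict
        ...         return {"note": self.note}
        ...     def set_extra_state(self, state):
        ...         self.note = state["note"]
        >>> m = MyModule()
        >>> m.note = "finetuned"
        >>> m2 = MyModule()
        >>> m2.load_state_dict(m.state_dict())
        <All keys matched successfully>
        >>> m2.note
        'finetuned'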
share_memory()
See "torch.Tensor.share_memory_()"
Return type:
*T*
state_dict(*, destination: T_destination, prefix: str = '', keep_vars: bool = False) -> T_destination
state_dict(*, prefix: str = '', keep_vars: bool = False) -> Dict[str, Any]
Returns a dictionary containing references to the whole state of
the module.
Both parameters and persistent buffers (e.g. running averages)
are included. Keys are corresponding parameter and buffer names.
Parameters and buffers set to "None" are not included.
Note:
The returned object is a shallow copy. It contains references
to the module's parameters and buffers.
Warning:
Currently "state_dict()" also accepts positional arguments for
"destination", "prefix" and "keep_vars" in order. However,
this is being deprecated and keyword arguments will be
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
enforced in future releases.
Warning:
Please avoid the use of argument "destination" as it is not
designed for end-users.
Parameters:
* **destination** (*dict**, **optional*) -- If provided, the
state of module will be updated into the dict and the same
object is returned. Otherwise, an "OrderedDict" will be
created and returned. Default: "None".
* **prefix** (*str**, **optional*) -- a prefix added to
parameter and buffer names to compose the keys in
state_dict. Default: "''".
* **keep_vars** (*bool**, **optional*) -- by default the
"Tensor" s returned in the state dict are detached from
autograd. If it's set to "True", detaching will not be
performed. Default: "False".
Returns:
a dictionary containing a whole state of the module
Return type:
dict
Example:
>>> module.state_dict().keys()
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
['bias', 'weight']
to(device: Optional[Union[int, device]] = ..., dtype: Optional[Union[dtype, str]] = ..., non_blocking: bool = ...) -> T
to(dtype: Union[dtype, str], non_blocking: bool = ...) -> T
to(tensor: Tensor, non_blocking: bool = ...) -> T
Moves and/or casts the parameters and buffers.
This can be called as
to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)
Its signature is similar to "torch.Tensor.to()", but only
accepts floating point or complex "dtype"s. In addition, this
method will only cast the floating point or complex parameters
and buffers to "dtype" (if given). The integral parameters and
buffers will be moved "device", if that is given, but with
dtypes unchanged. When "non_blocking" is set, it tries to
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
convert/move asynchronously with respect to the host if
possible, e.g., moving CPU Tensors with pinned memory to CUDA
devices.
See below for examples.
Note:
This method modifies the module in-place.
Parameters:
* **device** ("torch.device") -- the desired device of the
parameters and buffers in this module
* **dtype** ("torch.dtype") -- the desired floating point or
complex dtype of the parameters and buffers in this module
* **tensor** (*torch.Tensor*) -- Tensor whose dtype and
device are the desired dtype and device for all parameters
and buffers in this module
* **memory_format** ("torch.memory_format") -- the desired
memory format for 4D parameters and buffers in this module
(keyword only argument)
Returns:
self
Return type:
Module
Examples:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
[-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
[-0.5112, -0.2324]], dtype=torch.float16)
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
        >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j, 0.2382+0.j],
[ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j],
[0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
to_empty(*, device)
Moves the parameters and buffers to the specified device without
copying storage.
Parameters:
**device** ("torch.device") -- The desired device of the
parameters and buffers in this module.
Returns:
self
Return type:
Module
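    Example (a typical sketch: materializing a module that was created
    on the "meta" device; the resulting values are uninitialized):
        >>> m = nn.Linear(3, 3, device="meta")
        >>> m = m.to_empty(device="cpu")  # allocates storage, no copy
        >>> m.weight.shape
        torch.Size([3, 3])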
train(mode=True)
Sets the module in training mode.
    This has an effect only on certain modules. See the documentation
    of particular modules for details of their behaviors in
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
training/evaluation mode, if they are affected, e.g. "Dropout",
"BatchNorm", etc.
Parameters:
**mode** (*bool*) -- whether to set training mode ("True") or
evaluation mode ("False"). Default: "True".
Returns:
self
Return type:
Module
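    Example (a minimal, illustrative sketch):
        >>> m = nn.Dropout(p=0.5)
        >>> m.train()
        Dropout(p=0.5, inplace=False)
        >>> m.training
        True
        >>> m.train(False)  # same as m.eval()
        Dropout(p=0.5, inplace=False)
        >>> m.training
        False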
type(dst_type)
Casts all parameters and buffers to "dst_type".
Note:
This method modifies the module in-place.
Parameters:
**dst_type** (*type** or **string*) -- the desired type
Returns:
self
Return type:
Module
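    Example (a minimal sketch; here a "torch.dtype" is passed as
    "dst_type"):
        >>> m = nn.Linear(2, 2)
        >>> m.type(torch.float64)
        Linear(in_features=2, out_features=2, bias=True)
        >>> m.weight.dtype
        torch.float64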
xpu(device=None)
Moves all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different
    objects. So it should be called before constructing the optimizer if
the module will live on XPU while being optimized.
Note:
This method modifies the module in-place.
Parameters:
**device** (*int**, **optional*) -- if specified, all
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
parameters will be copied to that device
Returns:
self
Return type:
Module
zero_grad(set_to_none=False)
Sets gradients of all model parameters to zero. See similar
function under "torch.optim.Optimizer" for more context.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. See "torch.optim.Optimizer.zero_grad()"
for details.
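    Example (a minimal, illustrative sketch):
        >>> net = nn.Linear(2, 2)
        >>> net(torch.randn(1, 2)).sum().backward()
        >>> net.zero_grad(set_to_none=True)
        >>> net.weight.grad is None
        True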
| https://pytorch.org/docs/stable/generated/torch.nn.Module.html | pytorch docs |
torch.meshgrid
torch.meshgrid(*tensors, indexing=None)
Creates grids of coordinates specified by the 1D inputs in
    "tensors".
This is helpful when you want to visualize data over some range of
inputs. See below for a plotting example.
Given N 1D tensors T_0 \ldots T_{N-1} as inputs with corresponding
sizes S_0 \ldots S_{N-1}, this creates N N-dimensional tensors G_0
\ldots G_{N-1}, each with shape (S_0, ..., S_{N-1}) where the
output G_i is constructed by expanding T_i to the result shape.
Note:
0D inputs are treated equivalently to 1D inputs of a single
element.
Warning:
        *torch.meshgrid(*tensors)* currently has the same behavior as
        calling *numpy.meshgrid(*arrays, indexing='ij')*. In the future
        *torch.meshgrid* will transition to *indexing='xy'* as the
        default. https://github.com/pytorch/pytorch/issues/50276 tracks
        this issue with the goal of migrating to NumPy's behavior.
See also: | https://pytorch.org/docs/stable/generated/torch.meshgrid.html | pytorch docs |
"torch.cartesian_prod()" has the same effect but it collects the
data in a tensor of vectors.
Parameters:
* tensors (list of Tensor) -- list of scalars or 1
dimensional tensors. Scalars will be treated as tensors of
size (1,) automatically
* **indexing** (*Optional**[**str**]*) --
(str, optional): the indexing mode, either "xy" or "ij",
defaults to "ij". See warning for future changes.
If "xy" is selected, the first dimension corresponds to the
cardinality of the second input and the second dimension
corresponds to the cardinality of the first input.
If "ij" is selected, the dimensions are in the same order as
the cardinality of the inputs.
Returns:
        If the input has N tensors of size S_0 \ldots S_{N-1}, then the
output will also have N tensors, where each tensor is of shape
(S_0, ..., S_{N-1}).
Return type:
seq (sequence of Tensors)
Example: | https://pytorch.org/docs/stable/generated/torch.meshgrid.html | pytorch docs |
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([4, 5, 6])
Observe the element-wise pairings across the grid, (1, 4),
(1, 5), ..., (3, 6). This is the same thing as the
cartesian product.
>>> grid_x, grid_y = torch.meshgrid(x, y, indexing='ij')
>>> grid_x
tensor([[1, 1, 1],
[2, 2, 2],
[3, 3, 3]])
>>> grid_y
tensor([[4, 5, 6],
[4, 5, 6],
[4, 5, 6]])
This correspondence can be seen when these grids are
stacked properly.
>>> torch.equal(torch.cat(tuple(torch.dstack([grid_x, grid_y]))),
... torch.cartesian_prod(x, y))
True
`torch.meshgrid` is commonly used to produce a grid for
plotting.
>>> import matplotlib.pyplot as plt
>>> xs = torch.linspace(-5, 5, steps=100)
>>> ys = torch.linspace(-5, 5, steps=100)
>>> x, y = torch.meshgrid(xs, ys, indexing='xy')
| https://pytorch.org/docs/stable/generated/torch.meshgrid.html | pytorch docs |
        >>> z = torch.sin(torch.sqrt(x * x + y * y))
>>> ax = plt.axes(projection='3d')
>>> ax.plot_surface(x.numpy(), y.numpy(), z.numpy())
>>> plt.show()
[image] | https://pytorch.org/docs/stable/generated/torch.meshgrid.html | pytorch docs |
torch.nn.functional.dropout3d
torch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False)
Randomly zero out entire channels (a channel is a 3D feature map,
e.g., the j-th channel of the i-th sample in the batched input is a
    3D tensor \text{input}[i, j]) of the input tensor. Each channel
will be zeroed out independently on every forward call with
probability "p" using samples from a Bernoulli distribution.
See "Dropout3d" for details.
Parameters:
* p (float) -- probability of a channel to be zeroed.
Default: 0.5
* **training** (*bool*) -- apply dropout if is "True". Default:
"True"
* **inplace** (*bool*) -- If set to "True", will do this
operation in-place. Default: "False"
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout3d.html | pytorch docs |
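    Example (an illustrative sketch; the input follows the
    (N, C, D, H, W) layout of batched 3D feature maps):
        >>> import torch.nn.functional as F
        >>> input = torch.randn(2, 4, 8, 8, 8)
        >>> output = F.dropout3d(input, p=0.5, training=True)
        >>> output.shape
        torch.Size([2, 4, 8, 8, 8])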
MSELoss
class torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')
Creates a criterion that measures the mean squared error (squared
L2 norm) between each element in the input x and target y.
The unreduced (i.e. with "reduction" set to "'none'") loss can be
described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n
- y_n \right)^2,
where N is the batch size. If "reduction" is not "'none'" (default
"'mean'"), then:
\ell(x, y) = \begin{cases} \operatorname{mean}(L), &
\text{if reduction} = \text{`mean';}\\
\operatorname{sum}(L), & \text{if reduction} = \text{`sum'.}
\end{cases}
x and y are tensors of arbitrary shapes with a total of n elements
each.
The mean operation still operates over all the elements, and
divides by n.
The division by n can be avoided if one sets "reduction = 'sum'".
Parameters:
* size_average (bool, optional) -- Deprecated (see | https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html | pytorch docs |
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
| https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html | pytorch docs |
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input: (*), where * means any number of dimensions.
* Target: (*), same shape as the input.
Examples:
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
| https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html | pytorch docs |
torch.seed
torch.seed()
Sets the seed for generating random numbers to a non-deterministic
    random number. Returns a 64-bit number used to seed the RNG.
Return type:
int | https://pytorch.org/docs/stable/generated/torch.seed.html | pytorch docs |
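    Example (a minimal sketch; the returned value can later be passed
    to "torch.manual_seed()" to reproduce the same random stream):
        >>> s = torch.seed()
        >>> _ = torch.manual_seed(s)  # restore the same RNG state later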
torch.linalg.eigh
torch.linalg.eigh(A, UPLO='L', *, out=None)
Computes the eigenvalue decomposition of a complex Hermitian or
real symmetric matrix.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the eigenvalue
decomposition of a complex Hermitian or real symmetric matrix A
\in \mathbb{K}^{n \times n} is defined as
A = Q \operatorname{diag}(\Lambda) Q^{\text{H}}\mathrlap{\qquad
Q \in \mathbb{K}^{n \times n}, \Lambda \in \mathbb{R}^n}
where Q^{\text{H}} is the conjugate transpose when Q is complex,
and the transpose when Q is real-valued. Q is orthogonal in the
real case and unitary in the complex case.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
"A" is assumed to be Hermitian (resp. symmetric), but this is not
checked internally, instead: | https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html | pytorch docs |
If "UPLO"= 'L' (default), only the lower triangular part of the
matrix is used in the computation.
If "UPLO"= 'U', only the upper triangular part of the matrix is
used.
The eigenvalues are returned in ascending order.
Note:
When inputs are on a CUDA device, this function synchronizes that
device with the CPU.
Note:
The eigenvalues of real symmetric or complex Hermitian matrices
are always real.
Warning:
The eigenvectors of a symmetric matrix are not unique, nor are
they continuous with respect to "A". Due to this lack of
uniqueness, different hardware and software may compute different
        eigenvectors. This non-uniqueness is caused by the fact that
multiplying an eigenvector by *-1* in the real case or by e^{i
\phi}, \phi \in \mathbb{R} in the complex case produces another
set of valid eigenvectors of the matrix. For this reason, the
| https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html | pytorch docs |
loss function shall not depend on the phase of the eigenvectors,
as this quantity is not well-defined. This is checked for complex
inputs when computing the gradients of this function. As such,
when inputs are complex and are on a CUDA device, the computation
of the gradients of this function synchronizes that device with
the CPU.
Warning:
Gradients computed using the *eigenvectors* tensor will only be
finite when "A" has distinct eigenvalues. Furthermore, if the
distance between any two eigenvalues is close to zero, the
gradient will be numerically unstable, as it depends on the
eigenvalues \lambda_i through the computation of \frac{1}{\min_{i
\neq j} \lambda_i - \lambda_j}.
See also:
"torch.linalg.eigvalsh()" computes only the eigenvalues of a
Hermitian matrix. Unlike "torch.linalg.eigh()", the gradients of
"eigvalsh()" are always numerically stable.
"torch.linalg.cholesky()" for a different decomposition of a
| https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html | pytorch docs |
Hermitian matrix. The Cholesky decomposition gives less
information about the matrix but is much faster to compute than
the eigenvalue decomposition.
"torch.linalg.eig()" for a (slower) function that computes the
eigenvalue decomposition of a not necessarily Hermitian square
matrix.
"torch.linalg.svd()" for a (slower) function that computes the
more general SVD decomposition of matrices of any shape.
"torch.linalg.qr()" for another (much faster) decomposition that
works on general matrices.
Parameters:
    * A (Tensor) -- tensor of shape (*, n, n) where * is
zero or more batch dimensions consisting of symmetric or
Hermitian matrices.
* **UPLO** (*'L'**, **'U'**, **optional*) -- controls whether to
use the upper or lower triangular part of "A" in the
computations. Default: *'L'*.
Keyword Arguments:
out (tuple, optional) -- output tuple of two tensors. | https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html | pytorch docs |
Ignored if None. Default: None.
Returns:
A named tuple (eigenvalues, eigenvectors) which corresponds to
\Lambda and Q above.
*eigenvalues* will always be real-valued, even when "A" is
complex. It will also be ordered in ascending order.
*eigenvectors* will have the same dtype as "A" and will contain
the eigenvectors as its columns.
Examples::
>>> A = torch.randn(2, 2, dtype=torch.complex128)
>>> A = A + A.T.conj() # creates a Hermitian matrix
>>> A
tensor([[2.9228+0.0000j, 0.2029-0.0862j],
[0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)
>>> L, Q = torch.linalg.eigh(A)
>>> L
tensor([0.3277, 2.9415], dtype=torch.float64)
>>> Q
tensor([[-0.0846+-0.0000j, -0.9964+0.0000j],
[ 0.9170+0.3898j, -0.0779-0.0331j]], dtype=torch.complex128)
>>> torch.dist(Q @ torch.diag(L.cdouble()) @ Q.T.conj(), A)
tensor(6.1062e-16, dtype=torch.float64) | https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html | pytorch docs |
>>> A = torch.randn(3, 2, 2, dtype=torch.float64)
>>> A = A + A.mT # creates a batch of symmetric matrices
>>> L, Q = torch.linalg.eigh(A)
>>> torch.dist(Q @ torch.diag_embed(L) @ Q.mH, A)
tensor(1.5423e-15, dtype=torch.float64)
| https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html | pytorch docs |
torch.Tensor.index_fill
Tensor.index_fill(dim, index, value) -> Tensor
Out-of-place version of "torch.Tensor.index_fill_()". | https://pytorch.org/docs/stable/generated/torch.Tensor.index_fill.html | pytorch docs |
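    Example (a minimal, illustrative sketch):
        >>> x = torch.zeros(3, 4)
        >>> index = torch.tensor([0, 2])
        >>> x.index_fill(1, index, 5.0)
        tensor([[5., 0., 5., 0.],
                [5., 0., 5., 0.],
                [5., 0., 5., 0.]])
        >>> x  # unchanged; use "index_fill_()" for the in-place variant
        tensor([[0., 0., 0., 0.],
                [0., 0., 0., 0.],
                [0., 0., 0., 0.]])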