LazyConv2d
class torch.nn.LazyConv2d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
A "torch.nn.Conv2d" module with lazy initialization of the
"in_channels" argument of the "Conv2d" that is inferred from the
"input.size(1)". The attributes that will be lazily initialized are
weight and bias.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* out_channels (int) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- Zero-padding
added to both sides of the input. Default: 0
* **padding_mode** (*str**, **optional*) -- "'zeros'",
| https://pytorch.org/docs/stable/generated/torch.nn.LazyConv2d.html | pytorch docs |
"'reflect'", "'replicate'" or "'circular'". Default: "'zeros'"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
See also:
"torch.nn.Conv2d" and "torch.nn.modules.lazy.LazyModuleMixin"
cls_to_become
alias of "Conv2d"
| https://pytorch.org/docs/stable/generated/torch.nn.LazyConv2d.html | pytorch docs |
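For illustration, a minimal sketch of the lazy initialization flow; the shapes below are arbitrary placeholders:
    import torch
    import torch.nn as nn

    # in_channels is not given here; it is inferred on the first forward pass
    conv = nn.LazyConv2d(out_channels=16, kernel_size=3)
    x = torch.randn(1, 8, 32, 32)      # a batch with 8 input channels
    y = conv(x)                        # weight is materialized as (16, 8, 3, 3)
    print(conv.weight.shape)           # torch.Size([16, 8, 3, 3])
    print(y.shape)                     # torch.Size([1, 16, 30, 30])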
torch.Tensor.xlogy_
Tensor.xlogy_(other) -> Tensor
In-place version of "xlogy()" | https://pytorch.org/docs/stable/generated/torch.Tensor.xlogy_.html | pytorch docs |
torch.cuda.get_device_properties
torch.cuda.get_device_properties(device)
Gets the properties of a device.
Parameters:
device (torch.device or int or str) -- the device for
which to return the properties.
Returns:
the properties of the device
Return type:
_CudaDeviceProperties | https://pytorch.org/docs/stable/generated/torch.cuda.get_device_properties.html | pytorch docs |
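For illustration, a minimal sketch that queries a few standard fields of the returned properties object (name, total_memory, major, minor):
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        # the returned object exposes the device name, total memory in bytes,
        # and the compute capability (major, minor)
        print(props.name)
        print(props.total_memory)
        print(props.major, props.minor)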
torch.Tensor.ldexp_
Tensor.ldexp_(other) -> Tensor
In-place version of "ldexp()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ldexp_.html | pytorch docs |
torch.kron
torch.kron(input, other, *, out=None) -> Tensor
Computes the Kronecker product, denoted by \otimes, of "input" and
"other".
If "input" is a (a_0 \times a_1 \times \dots \times a_n) tensor and
"other" is a (b_0 \times b_1 \times \dots \times b_n) tensor, the
result will be a (a_0 b_0 \times a_1 b_1 \times \dots \times
a_n b_n) tensor with the following entries:
(\text{input} \otimes \text{other})_{k_0, k_1, \dots, k_n} =
\text{input}_{i_0, i_1, \dots, i_n} * \text{other}_{j_0, j_1,
\dots, j_n},
where k_t = i_t * b_t + j_t for 0 \leq t \leq n. If one tensor has
fewer dimensions than the other it is unsqueezed until it has the
same number of dimensions.
Supports real-valued and complex-valued inputs.
Note:
This function generalizes the typical definition of the Kronecker
product for two matrices to two tensors, as described above. When
"input" is a (m \times n) matrix and "other" is a (p \times q)
| https://pytorch.org/docs/stable/generated/torch.kron.html | pytorch docs |
matrix, the result will be a (pm \times qn) block matrix:
\mathbf{A} \otimes \mathbf{B}=\begin{bmatrix} a_{11}
\mathbf{B} & \cdots & a_{1 n} \mathbf{B} \\ \vdots & \ddots &
\vdots \\ a_{m 1} \mathbf{B} & \cdots & a_{m n} \mathbf{B}
\end{bmatrix}
where "input" is \mathbf{A} and "other" is \mathbf{B}.
Parameters:
* input (Tensor) --
* **other** (*Tensor*) --
Keyword Arguments:
out (Tensor, optional) -- The output tensor. Ignored
if "None". Default: "None"
Examples:
>>> mat1 = torch.eye(2)
>>> mat2 = torch.ones(2, 2)
>>> torch.kron(mat1, mat2)
tensor([[1., 1., 0., 0.],
[1., 1., 0., 0.],
[0., 0., 1., 1.],
[0., 0., 1., 1.]])
>>> mat1 = torch.eye(2)
>>> mat2 = torch.arange(1, 5).reshape(2, 2)
>>> torch.kron(mat1, mat2)
tensor([[1., 2., 0., 0.],
[3., 4., 0., 0.],
[0., 0., 1., 2.],
| https://pytorch.org/docs/stable/generated/torch.kron.html | pytorch docs |
[0., 0., 3., 4.]]) | https://pytorch.org/docs/stable/generated/torch.kron.html | pytorch docs |
torch.fft.ihfft
torch.fft.ihfft(input, n=None, dim=-1, norm=None, *, out=None) -> Tensor
Computes the inverse of "hfft()".
"input" must be a real-valued signal, interpreted in the Fourier
domain. The IFFT of a real signal is Hermitian-symmetric, "X[i] =
conj(X[-i])". "ihfft()" represents this in the one-sided form where
only the positive frequencies below the Nyquist frequency are
included. To compute the full output, use "ifft()".
Note:
Supports torch.half on CUDA with GPU Architecture SM53 or
greater. However, it only supports signal lengths that are powers
of 2 in every transformed dimension.
Parameters:
* input (Tensor) -- the real input tensor
* **n** (*int**, **optional*) -- Signal length. If given, the
input will either be zero-padded or trimmed to this length
before computing the Hermitian IFFT.
* **dim** (*int**, **optional*) -- The dimension along which to
take the one dimensional Hermitian IFFT.
| https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html | pytorch docs |
* **norm** (*str**, **optional*) --
Normalization mode. For the backward transform ("ihfft()"),
these correspond to:
* ""forward"" - no normalization
* ""backward"" - normalize by "1/n"
* ""ortho"" - normalize by "1/sqrt(n)" (making the IFFT
orthonormal)
Calling the forward transform ("hfft()") with the same
normalization mode will apply an overall normalization of
"1/n" between the two transforms. This is required to make
"ihfft()" the exact inverse.
Default is ""backward"" (normalize by "1/n").
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
t = torch.arange(5)
t
tensor([0, 1, 2, 3, 4])
torch.fft.ihfft(t)
tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j])
Compare against the full output from "ifft()":
torch.fft.ifft(t)
| https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html | pytorch docs |
tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j, -0.5000+0.1625j,
-0.5000+0.6882j])
| https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html | pytorch docs |
torch.nn.modules.module.register_module_full_backward_hook
torch.nn.modules.module.register_module_full_backward_hook(hook)
Registers a backward hook common to all the modules.
Warning:
This adds global state to the *nn.module* module and it is only
intended for debugging/profiling purposes.
The hook will be called every time the gradients with respect to a
module are computed, i.e. the hook will execute if and only if the
gradients with respect to module outputs are computed. The hook
should have the following signature:
hook(module, grad_input, grad_output) -> Tensor or None
The "grad_input" and "grad_output" are tuples. The hook should not
modify its arguments, but it can optionally return a new gradient
with respect to the input that will be used in place of
"grad_input" in subsequent computations. "grad_input" will only
correspond to the inputs given as positional arguments and all | https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_full_backward_hook.html | pytorch docs |
kwarg arguments will not appear in the hook. Entries in
"grad_input" and "grad_output" will be "None" for all non-Tensor
arguments.
For technical reasons, when this hook is applied to a Module, its
forward function will receive a view of each Tensor passed to the
Module. Similarly the caller will receive a view of each Tensor
returned by the Module's forward function.
Global hooks are called before hooks registered with
register_backward_hook.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemovableHandle" | https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_full_backward_hook.html | pytorch docs |
torch.Tensor.polygamma
Tensor.polygamma(n) -> Tensor
See "torch.polygamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.polygamma.html | pytorch docs |
torch.jit.annotate
torch.jit.annotate(the_type, the_value)
This function is a pass-through that returns the_value and is
used to hint the type of the_value to the TorchScript compiler.
It is a no-op when running outside of TorchScript.
Though TorchScript can infer the correct type for most Python
expressions, there are some cases where type inference can be
wrong, including:
Empty containers like [] and {}, which TorchScript assumes to
be containers of Tensor
Optional types like Optional[T] that are assigned a valid value of
type T, in which case TorchScript would assume the type is T
rather than Optional[T]
Note that annotate() does not help in the __init__ method of
torch.nn.Module subclasses because it is executed in eager mode.
To annotate types of torch.nn.Module attributes, use "Annotate()"
instead.
Example:
import torch
from typing import Dict
@torch.jit.script
def fn():
| https://pytorch.org/docs/stable/generated/torch.jit.annotate.html | pytorch docs |
# Telling TorchScript that this empty dictionary is a (str -> int) dictionary
# instead of default dictionary type of (str -> Tensor).
d = torch.jit.annotate(Dict[str, int], {})
# Without `torch.jit.annotate` above, following statement would fail because of
# type mismatch.
d["name"] = 20
Parameters:
* the_type -- Python type that should be passed to
TorchScript compiler as type hint for the_value
* **the_value** -- Value or expression to hint type for.
Returns:
the_value is passed back as return value. | https://pytorch.org/docs/stable/generated/torch.jit.annotate.html | pytorch docs |
torch.isfinite
torch.isfinite(input) -> Tensor
Returns a new tensor with boolean elements representing if each
element is finite or not.
Real values are finite when they are not NaN, negative infinity, or
infinity. Complex values are finite when both their real and
imaginary parts are finite.
Parameters:
input (Tensor) -- the input tensor.
Returns:
A boolean tensor that is True where "input" is finite and False
elsewhere
Example:
>>> torch.isfinite(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))
tensor([True, False, True, False, False])
| https://pytorch.org/docs/stable/generated/torch.isfinite.html | pytorch docs |
torch.set_rng_state
torch.set_rng_state(new_state)
Sets the random number generator state.
Parameters:
new_state (torch.ByteTensor) -- The desired state | https://pytorch.org/docs/stable/generated/torch.set_rng_state.html | pytorch docs |
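For illustration, a minimal sketch pairing this with torch.get_rng_state() to replay a sequence of random draws:
    import torch

    state = torch.get_rng_state()   # capture the current CPU RNG state
    a = torch.rand(3)
    torch.set_rng_state(state)      # restore the captured state
    b = torch.rand(3)
    print(torch.equal(a, b))        # True: the same numbers are drawn again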
FixedQParamsFakeQuantize
class torch.quantization.fake_quantize.FixedQParamsFakeQuantize(observer)
Simulate quantize and dequantize with fixed quantization parameters
in training time. Only per tensor quantization is supported. | https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FixedQParamsFakeQuantize.html | pytorch docs |
torch.greater
torch.greater(input, other, *, out=None) -> Tensor
Alias for "torch.gt()". | https://pytorch.org/docs/stable/generated/torch.greater.html | pytorch docs |
torch.Tensor.greater_equal
Tensor.greater_equal(other) -> Tensor
See "torch.greater_equal()". | https://pytorch.org/docs/stable/generated/torch.Tensor.greater_equal.html | pytorch docs |
torch.Tensor.sort
Tensor.sort(dim=-1, descending=False)
See "torch.sort()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sort.html | pytorch docs |
torch.linspace
torch.linspace(start, end, steps, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Creates a one-dimensional tensor of size "steps" whose values are
evenly spaced from "start" to "end", inclusive. That is, the values
are:
(\text{start}, \text{start} + \frac{\text{end} -
\text{start}}{\text{steps} - 1}, \ldots, \text{start} +
(\text{steps} - 2) * \frac{\text{end} -
\text{start}}{\text{steps} - 1}, \text{end})
From PyTorch 1.11 linspace requires the steps argument. Use
steps=100 to restore the previous behavior.
Parameters:
* start (float) -- the starting value for the set of
points
* **end** (*float*) -- the ending value for the set of points
* **steps** (*int*) -- size of the constructed tensor
Keyword Arguments:
* out (Tensor, optional) -- the output tensor.
* **dtype** (*torch.dtype**, **optional*) -- the data type to
| https://pytorch.org/docs/stable/generated/torch.linspace.html | pytorch docs |
perform the computation in. Default: if None, uses the global
default dtype (see torch.get_default_dtype()) when both
"start" and "end" are real, and corresponding complex dtype
when either is complex.
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Example:
>>> torch.linspace(3, 10, steps=5)
tensor([ 3.0000, 4.7500, 6.5000, 8.2500, 10.0000])
>>> torch.linspace(-10, 10, steps=5)
| https://pytorch.org/docs/stable/generated/torch.linspace.html | pytorch docs |
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=5)
tensor([-10., -5., 0., 5., 10.])
>>> torch.linspace(start=-10, end=10, steps=1)
tensor([-10.])
| https://pytorch.org/docs/stable/generated/torch.linspace.html | pytorch docs |
elu
class torch.ao.nn.quantized.functional.elu(input, scale, zero_point, alpha=1.0)
This is the quantized version of "elu()".
Parameters:
* input (Tensor) -- quantized input
* **scale** (*float*) -- quantization scale of the output tensor
* **zero_point** (*int*) -- quantization zero point of the
output tensor
* **alpha** (*float*) -- the alpha constant
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.elu.html | pytorch docs |
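For illustration, a rough sketch; the scale and zero_point values are arbitrary choices, and the input must already be a quantized tensor:
    import torch
    import torch.ao.nn.quantized.functional as qF

    x = torch.randn(2, 4)
    # quantize the input; zero_point=128 lets quint8 represent negative values
    qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=128, dtype=torch.quint8)
    # scale / zero_point here describe the quantization of the output tensor
    qy = qF.elu(qx, scale=0.05, zero_point=128, alpha=1.0)
    print(qy.dequantize())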
torch.nn.functional.pairwise_distance
torch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-6, keepdim=False) -> Tensor
See "torch.nn.PairwiseDistance" for details | https://pytorch.org/docs/stable/generated/torch.nn.functional.pairwise_distance.html | pytorch docs |
torch.nn.functional.multi_margin_loss
torch.nn.functional.multi_margin_loss(input, target, p=1, margin=1, weight=None, size_average=None, reduce=None, reduction='mean') -> Tensor
See "MultiMarginLoss" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.multi_margin_loss.html | pytorch docs |
PolynomialLR
class torch.optim.lr_scheduler.PolynomialLR(optimizer, total_iters=5, power=1.0, last_epoch=-1, verbose=False)
Decays the learning rate of each parameter group using a polynomial
function in the given total_iters. When last_epoch=-1, sets initial
lr as lr.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **total_iters** (*int*) -- The number of steps that the
scheduler decays the learning rate. Default: 5.
* **power** (*int*) -- The power of the polynomial. Default:
1.0.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
Assuming optimizer uses lr = 0.001 for all groups
lr = 0.001 if epoch == 0
lr = 0.00075 if epoch == 1
lr = 0.00050 if epoch == 2
lr = 0.00025 if epoch == 3
lr = 0.0 if epoch >= 4
scheduler = PolynomialLR(self.opt, total_iters=4, power=1.0)
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.PolynomialLR.html | pytorch docs |
for epoch in range(100):
train(...)
validate(...)
scheduler.step()
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.PolynomialLR.html | pytorch docs |
torch.Tensor.flip
Tensor.flip(dims) -> Tensor
See "torch.flip()" | https://pytorch.org/docs/stable/generated/torch.Tensor.flip.html | pytorch docs |
ReflectionPad2d
class torch.nn.ReflectionPad2d(padding)
Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use "torch.nn.functional.pad()".
Parameters:
padding (int, tuple) -- the size of the padding. If it is an
int, uses the same padding in all boundaries. If a 4-tuple,
uses (\text{padding_left}, \text{padding_right},
\text{padding_top}, \text{padding_bottom})
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out})
where
H_{out} = H_{in} + \text{padding\_top} +
\text{padding\_bottom}
W_{out} = W_{in} + \text{padding\_left} +
\text{padding\_right}
Examples:
>>> m = nn.ReflectionPad2d(2)
>>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
>>> input
tensor([[[[0., 1., 2.],
[3., 4., 5.],
[6., 7., 8.]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html | pytorch docs |
>>> m(input)
tensor([[[[8., 7., 6., 7., 8., 7., 6.],
[5., 4., 3., 4., 5., 4., 3.],
[2., 1., 0., 1., 2., 1., 0.],
[5., 4., 3., 4., 5., 4., 3.],
[8., 7., 6., 7., 8., 7., 6.],
[5., 4., 3., 4., 5., 4., 3.],
[2., 1., 0., 1., 2., 1., 0.]]]])
>>> # using different paddings for different sides
>>> m = nn.ReflectionPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[7., 6., 7., 8., 7.],
[4., 3., 4., 5., 4.],
[1., 0., 1., 2., 1.],
[4., 3., 4., 5., 4.],
[7., 6., 7., 8., 7.]]]]) | https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html | pytorch docs |
torch.Tensor.take
Tensor.take(indices) -> Tensor
See "torch.take()" | https://pytorch.org/docs/stable/generated/torch.Tensor.take.html | pytorch docs |
torch.matmul
torch.matmul(input, other, *, out=None) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as
follows:
If both tensors are 1-dimensional, the dot product (scalar) is
returned.
If both arguments are 2-dimensional, the matrix-matrix product is
returned.
If the first argument is 1-dimensional and the second argument is
2-dimensional, a 1 is prepended to its dimension for the purpose
of the matrix multiply. After the matrix multiply, the prepended
dimension is removed.
If the first argument is 2-dimensional and the second argument is
1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one
argument is N-dimensional (where N > 2), then a batched matrix
multiply is returned. If the first argument is 1-dimensional, a
1 is prepended to its dimension for the purpose of the batched
| https://pytorch.org/docs/stable/generated/torch.matmul.html | pytorch docs |
matrix multiply and removed after. If the second argument is
1-dimensional, a 1 is appended to its dimension for the purpose
of the batched matrix multiply and removed after. The non-matrix
(i.e. batch) dimensions are broadcasted (and thus must be
broadcastable). For example, if "input" is a (j \times 1 \times
n \times n) tensor and "other" is a (k \times n \times n) tensor,
"out" will be a (j \times k \times n \times n) tensor.
Note that the broadcasting logic only looks at the batch
dimensions when determining if the inputs are broadcastable, and
not the matrix dimensions. For example, if "input" is a (j \times
1 \times n \times m) tensor and "other" is a (k \times m \times
p) tensor, these inputs are valid for broadcasting even though
the final two dimensions (i.e. the matrix dimensions) are
different. "out" will be a (j \times k \times n \times p) tensor.
This operation has support for arguments with sparse layouts. In | https://pytorch.org/docs/stable/generated/torch.matmul.html | pytorch docs |
particular the matrix-matrix (both arguments 2-dimensional)
supports sparse arguments with the same restrictions as
"torch.mm()"
Warning:
Sparse support is a beta feature and some layout(s)/dtype/device
combinations may not be supported, or may not have autograd
support. If you notice missing functionality please open a
feature request.
This operator supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Note:
The 1-dimensional dot product version of this function does not
support an "out" parameter.
Parameters:
* input (Tensor) -- the first tensor to be multiplied
* **other** (*Tensor*) -- the second tensor to be multiplied
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> # vector x vector
>>> tensor1 = torch.randn(3)
>>> tensor2 = torch.randn(3)
| https://pytorch.org/docs/stable/generated/torch.matmul.html | pytorch docs |
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([])
>>> # matrix x vector
>>> tensor1 = torch.randn(3, 4)
>>> tensor2 = torch.randn(4)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([3])
>>> # batched matrix x broadcasted vector
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(4)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3])
>>> # batched matrix x batched matrix
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(10, 4, 5)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3, 5])
>>> # batched matrix x broadcasted matrix
>>> tensor1 = torch.randn(10, 3, 4)
>>> tensor2 = torch.randn(4, 5)
>>> torch.matmul(tensor1, tensor2).size()
torch.Size([10, 3, 5])
| https://pytorch.org/docs/stable/generated/torch.matmul.html | pytorch docs |
default_eval_fn
class torch.quantization.default_eval_fn(model, calib_data)
Default evaluation function takes a torch.utils.data.Dataset or a
list of input Tensors and runs the model on the dataset | https://pytorch.org/docs/stable/generated/torch.quantization.default_eval_fn.html | pytorch docs |
Linear
class torch.ao.nn.quantized.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)
A quantized linear module with quantized tensor as inputs and
outputs. We adopt the same interface as torch.nn.Linear, please
see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for
documentation.
Similar to "Linear", attributes will be randomly initialized at
module creation time and will be overwritten later.
Variables:
* weight (Tensor) -- the non-learnable quantized weights
of the module of shape (\text{out_features},
\text{in_features}).
* **bias** (*Tensor*) -- the non-learnable bias of the module of
shape (\text{out\_features}). If "bias" is "True", the values
are initialized to zero.
* **scale** -- *scale* parameter of output Quantized Tensor,
type: double
* **zero_point** -- *zero_point* parameter for output Quantized
Tensor, type: long
Examples: | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html | pytorch docs |
>>> m = nn.quantized.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
classmethod from_float(mod)
Create a quantized module from an observed float module
Parameters:
**mod** (*Module*) -- a float module, either produced by
torch.ao.quantization utilities or provided by the user
classmethod from_reference(ref_qlinear, output_scale, output_zero_point)
Create a (fbgemm/qnnpack) quantized module from a reference
quantized module
Parameters:
* **ref_qlinear** (*Module*) -- a reference quantized linear
module, either produced by torch.ao.quantization utilities
or provided by the user
* **output_scale** (*float*) -- scale for output Tensor
* **output_zero_point** (*int*) -- zero point for output
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html | pytorch docs |
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html | pytorch docs |
torch.lerp
torch.lerp(input, end, weight, *, out=None)
Does a linear interpolation of two tensors "start" (given by
"input") and "end" based on a scalar or tensor "weight" and returns
the resulting "out" tensor.
\text{out}_i = \text{start}_i + \text{weight}_i \times
(\text{end}_i - \text{start}_i)
The shapes of "start" and "end" must be broadcastable. If "weight"
is a tensor, then the shapes of "weight", "start", and "end" must
be broadcastable.
Parameters:
* input (Tensor) -- the tensor with the starting points
* **end** (*Tensor*) -- the tensor with the ending points
* **weight** (*float** or **tensor*) -- the weight for the
interpolation formula
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> start = torch.arange(1., 5.)
>>> end = torch.empty(4).fill_(10)
>>> start
tensor([ 1., 2., 3., 4.])
>>> end
tensor([ 10., 10., 10., 10.])
| https://pytorch.org/docs/stable/generated/torch.lerp.html | pytorch docs |
>>> torch.lerp(start, end, 0.5)
tensor([ 5.5000, 6.0000, 6.5000, 7.0000])
>>> torch.lerp(start, end, torch.full_like(start, 0.5))
tensor([ 5.5000, 6.0000, 6.5000, 7.0000]) | https://pytorch.org/docs/stable/generated/torch.lerp.html | pytorch docs |
torch.Tensor.cfloat
Tensor.cfloat(memory_format=torch.preserve_format) -> Tensor
"self.cfloat()" is equivalent to "self.to(torch.complex64)". See
"to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.cfloat.html | pytorch docs |
torch.Tensor.atanh
Tensor.atanh() -> Tensor
See "torch.atanh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.atanh.html | pytorch docs |
torch.nn.functional.softmax
torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None)
Applies a softmax function.
Softmax is defined as:
\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
It is applied to all slices along dim, and will re-scale them so
that the elements lie in the range [0, 1] and sum to 1.
See "Softmax" for more details.
Parameters:
* input (Tensor) -- input
* **dim** (*int*) -- A dimension along which softmax will be
computed.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is cast
to "dtype" before the operation is performed. This is useful
for preventing data type overflows. Default: None.
Return type:
Tensor
Note:
This function doesn't work directly with NLLLoss, which expects
the Log to be computed between the Softmax and itself. Use
| https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html | pytorch docs |
log_softmax instead (it's faster and has better numerical
properties). | https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html | pytorch docs |
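For illustration, a minimal sketch of the functional call and the dtype keyword:
    import torch
    import torch.nn.functional as F

    logits = torch.randn(2, 5)
    probs = F.softmax(logits, dim=-1)
    print(probs.sum(dim=-1))                      # each row sums to 1
    # accumulate in a wider dtype to avoid overflow in the exponentials
    probs64 = F.softmax(logits, dim=-1, dtype=torch.float64)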
torch.sym_float
torch.sym_float(a)
SymInt-aware utility for float casting.
Parameters:
a (SymInt, SymFloat, or object) -- Object to cast | https://pytorch.org/docs/stable/generated/torch.sym_float.html | pytorch docs |
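For illustration, a minimal sketch; outside of symbolic tracing this is expected to behave like a plain float cast:
    import torch

    x = torch.sym_float(3)   # on a plain Python int this acts like float(3)
    print(x)                 # 3.0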
torch.addr
torch.addr(input, vec1, vec2, *, beta=1, alpha=1, out=None) -> Tensor
Performs the outer-product of vectors "vec1" and "vec2" and adds it
to the matrix "input".
Optional values "beta" and "alpha" are scaling factors on the outer
product between "vec1" and "vec2" and the added matrix "input"
respectively.
\text{out} = \beta\ \text{input} + \alpha\ (\text{vec1} \otimes
\text{vec2})
If "beta" is 0, then "input" will be ignored, and nan and inf
in it will not be propagated.
If "vec1" is a vector of size n and "vec2" is a vector of size
m, then "input" must be broadcastable with a matrix of size (n
\times m) and "out" will be a matrix of size (n \times m).
Parameters:
* input (Tensor) -- matrix to be added
* **vec1** (*Tensor*) -- the first vector of the outer product
* **vec2** (*Tensor*) -- the second vector of the outer product
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.addr.html | pytorch docs |
Keyword Arguments:
* beta (Number, optional) -- multiplier for "input"
(\beta)
* **alpha** (*Number**, **optional*) -- multiplier for
\text{vec1} \otimes \text{vec2} (\alpha)
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> vec1 = torch.arange(1., 4.)
>>> vec2 = torch.arange(1., 3.)
>>> M = torch.zeros(3, 2)
>>> torch.addr(M, vec1, vec2)
tensor([[ 1., 2.],
[ 2., 4.],
[ 3., 6.]])
| https://pytorch.org/docs/stable/generated/torch.addr.html | pytorch docs |
torch.Tensor.index_select
Tensor.index_select(dim, index) -> Tensor
See "torch.index_select()" | https://pytorch.org/docs/stable/generated/torch.Tensor.index_select.html | pytorch docs |
torch.linalg.pinv
torch.linalg.pinv(A, *, atol=None, rtol=None, hermitian=False, out=None) -> Tensor
Computes the pseudoinverse (Moore-Penrose inverse) of a matrix.
The pseudoinverse may be defined algebraically but it is more
computationally convenient to understand it through the SVD.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
If "hermitian"= True, "A" is assumed to be Hermitian if complex
or symmetric if real, but this is not checked internally. Instead,
just the lower triangular part of the matrix is used in the
computations.
The singular values (or the norm of the eigenvalues when
"hermitian"= True) that are below \max(\text{atol}, \sigma_1
\cdot \text{rtol}) threshold are treated as zero and discarded in
the computation, where \sigma_1 is the largest singular value (or
eigenvalue). | https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html | pytorch docs |
eigenvalue).
If "rtol" is not specified and "A" is a matrix of dimensions (m,
n), the relative tolerance is set to be \text{rtol} = \max(m, n)
\varepsilon and \varepsilon is the epsilon value for the dtype of
"A" (see "finfo"). If "rtol" is not specified and "atol" is
specified to be larger than zero then "rtol" is set to zero.
If "atol" or "rtol" is a "torch.Tensor", its shape must be
broadcastable to that of the singular values of "A" as returned by
"torch.linalg.svd()".
Note:
This function uses "torch.linalg.svd()" if "hermitian"= False
and "torch.linalg.eigh()" if "hermitian"= True. For CUDA
inputs, this function synchronizes that device with the CPU.
Note:
Consider using "torch.linalg.lstsq()" if possible for multiplying
a matrix on the left by the pseudoinverse, as:
torch.linalg.lstsq(A, B).solution == A.pinv() @ B
It is always preferred to use "lstsq()" when possible, as it is
| https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html | pytorch docs |
faster and more numerically stable than computing the
pseudoinverse explicitly.
Note:
This function has NumPy compatible variant *linalg.pinv(A, rcond,
hermitian=False)*. However, use of the positional argument
"rcond" is deprecated in favor of "rtol".
Warning:
This function internally uses "torch.linalg.svd()" (or
"torch.linalg.eigh()" when "hermitian"= True), so its
derivative has the same problems as those of these functions. See
the warnings in "torch.linalg.svd()" and "torch.linalg.eigh()"
for more details.
See also:
"torch.linalg.inv()" computes the inverse of a square matrix.
"torch.linalg.lstsq()" computes "A"*.pinv() @ *"B" with a
numerically stable algorithm.
Parameters:
* A (Tensor) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
* **rcond** (*float**, **Tensor**, **optional*) -- [NumPy
Compat]. Alias for "rtol". Default: *None*.
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html | pytorch docs |
* atol (float, Tensor, optional) -- the absolute
tolerance value. When None it's considered to be zero.
Default: None.
* **rtol** (*float**, **Tensor**, **optional*) -- the relative
tolerance value. See above for the value it takes when *None*.
Default: *None*.
* **hermitian** (*bool**, **optional*) -- indicates whether "A"
is Hermitian if complex or symmetric if real. Default:
*False*.
* **out** (*Tensor**, **optional*) -- output tensor. Ignored if
*None*. Default: *None*.
Examples:
>>> A = torch.randn(3, 5)
>>> A
tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132],
[-1.1143, -0.3662, 0.3042, 1.6374, -0.9294],
[-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])
>>> torch.linalg.pinv(A)
tensor([[ 0.0600, -0.1933, -0.2090],
[-0.0903, -0.0817, -0.4752],
[-0.7124, -0.1631, -0.2272],
| https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html | pytorch docs |
[ 0.1356, 0.3933, -0.5023],
[-0.0308, -0.1725, -0.5216]])
>>> A = torch.randn(2, 6, 3)
>>> Apinv = torch.linalg.pinv(A)
>>> torch.dist(Apinv @ A, torch.eye(3))
tensor(8.5633e-07)
>>> A = torch.randn(3, 3, dtype=torch.complex64)
>>> A = A + A.T.conj() # creates a Hermitian matrix
>>> Apinv = torch.linalg.pinv(A, hermitian=True)
>>> torch.dist(Apinv @ A, torch.eye(3))
tensor(1.0830e-06)
| https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html | pytorch docs |
HingeEmbeddingLoss
class torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')
Measures the loss given an input tensor x and a labels tensor y
(containing 1 or -1). This is usually used for measuring whether
two inputs are similar or dissimilar, e.g. using the L1 pairwise
distance as x, and is typically used for learning nonlinear
embeddings or semi-supervised learning.
The loss function for n-th sample in the mini-batch is
l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max
\{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}
and the total loss functions is
\ell(x, y) = \begin{cases} \operatorname{mean}(L), &
\text{if reduction} = \text{`mean';}\\
\operatorname{sum}(L), & \text{if reduction} = \text{`sum'.}
\end{cases}
where L = \{l_1,\dots,l_N\}^\top.
Parameters:
* margin (float, optional) -- Has a default value of
1. | https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html | pytorch docs |
1.
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
| https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html | pytorch docs |
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input: (*) where * means, any number of dimensions. The sum
operation operates over all the elements.
* Target: (*), same shape as the input
* Output: scalar. If "reduction" is "'none'", then same shape as
the input
| https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html | pytorch docs |
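For illustration, a minimal sketch pairing the loss with L1 pairwise distances as described above; the embedding tensors are arbitrary placeholders:
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    a = torch.randn(4, 8, requires_grad=True)
    b = torch.randn(4, 8)
    x = F.pairwise_distance(a, b, p=1)        # L1 distance per pair
    y = torch.tensor([1., -1., 1., -1.])      # 1 = similar pair, -1 = dissimilar
    loss = nn.HingeEmbeddingLoss(margin=1.0)(x, y)
    loss.backward()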
torch.set_default_device
torch.set_default_device(device)
Sets the default "torch.Tensor" to be allocated on "device". This
does not affect factory function calls which are called with an
explicit "device" argument. Factory calls will be performed as if
they were passed "device" as an argument.
To only temporarily change the default device instead of setting it
globally, use "with torch.device(device):" instead.
The default device is initially "cpu". If you set the default
tensor device to another device (e.g., "cuda") without a device
index, tensors will be allocated on whatever the current device for
that device type is, even after "torch.cuda.set_device()" is called.
Warning:
This function imposes a slight performance cost on every Python
call to the torch API (not just factory functions). If this is
causing problems for you, please comment on
https://github.com/pytorch/pytorch/issues/92701
Parameters: | https://pytorch.org/docs/stable/generated/torch.set_default_device.html | pytorch docs |
device (device or string) -- the device to set as
default
Example:
>>> torch.tensor([1.2, 3]).device
device(type='cpu')
>>> torch.set_default_device('cuda') # current device is 0
>>> torch.tensor([1.2, 3]).device
device(type='cuda', index=0)
>>> torch.set_default_device('cuda:1')
>>> torch.tensor([1.2, 3]).device
device(type='cuda', index=1)
| https://pytorch.org/docs/stable/generated/torch.set_default_device.html | pytorch docs |
torch.nn.functional.max_pool1d
torch.nn.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
Applies a 1D max pooling over an input signal composed of several
input planes.
Note:
The order of "ceil_mode" and "return_indices" is different from
what seen in "MaxPool1d", and will change in a future release.
See "MaxPool1d" for details.
Parameters:
* input -- input tensor of shape (\text{minibatch} ,
\text{in_channels} , iW), minibatch dim optional.
* **kernel_size** -- the size of the window. Can be a single
number or a tuple *(kW,)*
* **stride** -- the stride of the window. Can be a single number
or a tuple *(sW,)*. Default: "kernel_size"
* **padding** -- Implicit negative infinity padding to be added
on both sides, must be >= 0 and <= kernel_size / 2.
* **dilation** -- The stride between elements within a sliding
| https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool1d.html | pytorch docs |
window, must be > 0.
* **ceil_mode** -- If "True", will use *ceil* instead of *floor*
to compute the output shape. This ensures that every element
in the input tensor is covered by a sliding window.
* **return_indices** -- If "True", will return the argmax along
with the max values. Useful for
"torch.nn.functional.max_unpool1d" later
| https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool1d.html | pytorch docs |
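For illustration, a minimal sketch of pooling a batch of 1D signals, including the return_indices form mentioned above:
    import torch
    import torch.nn.functional as F

    signal = torch.randn(2, 4, 16)                 # (minibatch, channels, width)
    pooled = F.max_pool1d(signal, kernel_size=2)   # stride defaults to kernel_size
    print(pooled.shape)                            # torch.Size([2, 4, 8])

    # also return the argmax positions, e.g. for max_unpool1d later
    pooled, indices = F.max_pool1d(signal, kernel_size=2, return_indices=True)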
torch.Tensor.to_sparse_csc
Tensor.to_sparse_csc() -> Tensor
Convert a tensor to compressed column storage (CSC) format. Except
for strided tensors, only works with 2D tensors. If the "self" is
strided, then the number of dense dimensions could be specified,
and a hybrid CSC tensor will be created, with dense_dim dense
dimensions and self.dim() - 2 - dense_dim batch dimension.
Parameters:
dense_dim (int, optional) -- Number of dense
dimensions of the resulting CSC tensor. This argument should be
used only if "self" is a strided tensor, and must be a value
between 0 and dimension of "self" tensor minus two.
Example:
>>> dense = torch.randn(5, 5)
>>> sparse = dense.to_sparse_csc()
>>> sparse._nnz()
25
>>> dense = torch.zeros(3, 3, 1, 1)
>>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1
>>> dense.to_sparse_csc(dense_dim=2)
tensor(ccol_indices=tensor([0, 1, 2, 3]),
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csc.html | pytorch docs |
row_indices=tensor([0, 2, 1]),
values=tensor([[[1.]],
[[1.]],
[[1.]]]), size=(3, 3, 1, 1), nnz=3,
layout=torch.sparse_csc)
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csc.html | pytorch docs |
torch.save
torch.save(obj, f, pickle_module=pickle, pickle_protocol=DEFAULT_PROTOCOL, _use_new_zipfile_serialization=True)
Saves an object to a disk file.
See also: Saving and loading tensors
Parameters:
* obj (object) -- saved object
* **f** (*Union**[**str**, **PathLike**, **BinaryIO**,
**IO**[**bytes**]**]*) -- a file-like object (has to implement
write and flush) or a string or os.PathLike object containing
a file name
* **pickle_module** (*Any*) -- module used for pickling metadata
and objects
* **pickle_protocol** (*int*) -- can be specified to override
the default protocol
Note:
A common PyTorch convention is to save tensors using .pt file
extension.
Note:
PyTorch preserves storage sharing across serialization. See
Saving and loading tensors preserves views for more details.
Note:
The 1.6 release of PyTorch switched "torch.save" to use a new
| https://pytorch.org/docs/stable/generated/torch.save.html | pytorch docs |
zipfile-based file format. "torch.load" still retains the ability
to load files in the old format. If for any reason you want
"torch.save" to use the old format, pass the kwarg
"_use_new_zipfile_serialization=False".
-[ Example ]-
Save to file
x = torch.tensor([0, 1, 2, 3, 4])
torch.save(x, 'tensor.pt')
Save to io.BytesIO buffer
import io
buffer = io.BytesIO()
torch.save(x, buffer)
| https://pytorch.org/docs/stable/generated/torch.save.html | pytorch docs |
torch.triu
torch.triu(input, diagonal=0, *, out=None) -> Tensor
Returns the upper triangular part of a matrix (2-D tensor) or batch
of matrices "input", the other elements of the result tensor "out"
are set to 0.
The upper triangular part of the matrix is defined as the elements
on and above the diagonal.
The argument "diagonal" controls which diagonal to consider. If
"diagonal" = 0, all elements on and above the main diagonal are
retained. A positive value excludes just as many diagonals above
the main diagonal, and similarly a negative value includes just as
many diagonals below the main diagonal. The main diagonal is the
set of indices \lbrace (i, i) \rbrace for i \in [0, \min\{d_{1},
d_{2}\} - 1] where d_{1}, d_{2} are the dimensions of the matrix.
Parameters:
* input (Tensor) -- the input tensor.
* **diagonal** (*int**, **optional*) -- the diagonal to consider
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.triu.html | pytorch docs |
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(3, 3)
>>> a
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.2072, -1.0680, 0.6602],
[ 0.3480, -0.5211, -0.4573]])
>>> torch.triu(a)
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.0000, -1.0680, 0.6602],
[ 0.0000, 0.0000, -0.4573]])
>>> torch.triu(a, diagonal=1)
tensor([[ 0.0000, 0.5207, 2.0049],
[ 0.0000, 0.0000, 0.6602],
[ 0.0000, 0.0000, 0.0000]])
>>> torch.triu(a, diagonal=-1)
tensor([[ 0.2309, 0.5207, 2.0049],
[ 0.2072, -1.0680, 0.6602],
[ 0.0000, -0.5211, -0.4573]])
>>> b = torch.randn(4, 6)
>>> b
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.4333, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
| https://pytorch.org/docs/stable/generated/torch.triu.html | pytorch docs |
[-0.9888, 1.0679, -1.3337, -1.6556, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=1)
tensor([[ 0.0000, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[ 0.0000, 0.0000, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.0000, 0.0000, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.4798, 0.2830]])
>>> torch.triu(b, diagonal=-1)
tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],
[-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],
[ 0.0000, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],
[ 0.0000, 0.0000, -1.3337, -1.6556, 0.4798, 0.2830]]) | https://pytorch.org/docs/stable/generated/torch.triu.html | pytorch docs |
torch.Tensor.select_scatter
Tensor.select_scatter(src, dim, index) -> Tensor
See "torch.select_scatter()" | https://pytorch.org/docs/stable/generated/torch.Tensor.select_scatter.html | pytorch docs |
torch.linalg.svdvals
torch.linalg.svdvals(A, *, driver=None, out=None) -> Tensor
Computes the singular values of a matrix.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
The singular values are returned in descending order.
Note:
This function is equivalent to NumPy's *linalg.svd(A,
compute_uv=False)*.
Note:
When inputs are on a CUDA device, this function synchronizes that
device with the CPU.
See also:
"torch.linalg.svd()" computes the full singular value
decomposition.
Parameters:
A (Tensor) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
Keyword Arguments:
* driver (str, optional) -- name of the cuSOLVER
method to be used. This keyword argument only works on CUDA | https://pytorch.org/docs/stable/generated/torch.linalg.svdvals.html | pytorch docs |
inputs. Available options are: None, gesvd, gesvdj, and
gesvda. Check "torch.linalg.svd()" for details. Default:
None.
* **out** (*Tensor**, **optional*) -- output tensor. Ignored if
*None*. Default: *None*.
Returns:
A real-valued tensor, even when "A" is complex.
Examples:
>>> A = torch.randn(5, 3)
>>> S = torch.linalg.svdvals(A)
>>> S
tensor([2.5139, 2.1087, 1.1066])
>>> torch.dist(S, torch.linalg.svd(A, full_matrices=False).S)
tensor(2.4576e-07)
| https://pytorch.org/docs/stable/generated/torch.linalg.svdvals.html | pytorch docs |
AdamW
class torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False, *, maximize=False, foreach=None, capturable=False, differentiable=False)
Implements AdamW algorithm.
\begin{aligned}
&\textbf{input}: \gamma \text{ (lr)},\ \beta_1, \beta_2 \text{ (betas)},\ \theta_0 \text{ (params)},\ f(\theta) \text{ (objective)},\ \epsilon \text{ (epsilon)}, \\
&\hspace{13mm} \lambda \text{ (weight decay)},\ \textit{amsgrad},\ \textit{maximize} \\
&\textbf{initialize}: m_0 \leftarrow 0 \text{ (first moment)},\ v_0 \leftarrow 0 \text{ (second moment)},\ \widehat{v_0}^{max} \leftarrow 0 \\
&\textbf{for}\ t = 1\ \textbf{to}\ \ldots\ \textbf{do} \\
&\hspace{5mm}\textbf{if}\ \textit{maximize}: \\
&\hspace{10mm} g_t \leftarrow -\nabla_{\theta} f_t(\theta_{t-1}) \\
&\hspace{5mm}\textbf{else} \\
&\hspace{10mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
&\hspace{5mm} \theta_t \leftarrow \theta_{t-1} - \gamma \lambda \theta_{t-1} \\
&\hspace{5mm} m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
&\hspace{5mm} v_t \leftarrow \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
&\hspace{5mm} \widehat{m_t} \leftarrow m_t / (1 - \beta_1^t) \\
&\hspace{5mm} \widehat{v_t} \leftarrow v_t / (1 - \beta_2^t) \\
&\hspace{5mm}\textbf{if}\ \textit{amsgrad} \\
&\hspace{10mm} \widehat{v_t}^{max} \leftarrow \max(\widehat{v_t}^{max}, \widehat{v_t}) \\
&\hspace{10mm} \theta_t \leftarrow \theta_t - \gamma\,\widehat{m_t} / \big(\sqrt{\widehat{v_t}^{max}} + \epsilon\big) \\
&\hspace{5mm}\textbf{else} \\
&\hspace{10mm} \theta_t \leftarrow \theta_t - \gamma\,\widehat{m_t} / \big(\sqrt{\widehat{v_t}} + \epsilon\big) \\
&\textbf{return}\ \theta_t
\end{aligned}
For further details regarding the algorithm we refer to Decoupled
Weight Decay Regularization.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
1e-3)
* **betas** (*Tuple**[**float**, **float**]**, **optional*) --
coefficients used for computing running averages of gradient
| https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html | pytorch docs |
and its square (default: (0.9, 0.999))
* **eps** (*float**, **optional*) -- term added to the
denominator to improve numerical stability (default: 1e-8)
* **weight_decay** (*float**, **optional*) -- weight decay
coefficient (default: 1e-2)
* **amsgrad** (*bool**, **optional*) -- whether to use the
AMSGrad variant of this algorithm from the paper On the
Convergence of Adam and Beyond (default: False)
* **maximize** (*bool**, **optional*) -- maximize the params
based on the objective, instead of minimizing (default: False)
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used. If unspecified by the
user (so foreach is None), we will try to use foreach over the
for-loop implementation on CUDA, since it is usually
significantly more performant. (default: None)
* **capturable** (*bool**, **optional*) -- whether this instance
| https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html | pytorch docs |
is safe to capture in a CUDA graph. Passing True can impair
ungraphed performance, so if you don't intend to graph capture
this instance, leave it False (default: False)
* **differentiable** (*bool**, **optional*) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict) | https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html | pytorch docs |
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
| https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html | pytorch docs |
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups where
each
parameter group is a dict
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
| https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html | pytorch docs |
set_to_none (bool) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html | pytorch docs |
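For illustration, a minimal training-loop sketch; the model and data are arbitrary placeholders:
    import torch

    model = torch.nn.Linear(10, 1)
    # weight decay is decoupled from the gradient update, unlike Adam's L2 penalty
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)

    for _ in range(5):
        loss = model(torch.randn(16, 10)).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()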
torch.Tensor.cummin
Tensor.cummin(dim)
See "torch.cummin()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cummin.html | pytorch docs |
FuseCustomConfig
class torch.ao.quantization.fx.custom_config.FuseCustomConfig
Custom configuration for "fuse_fx()".
Example usage:
fuse_custom_config = FuseCustomConfig().set_preserved_attributes(["attr1", "attr2"])
classmethod from_dict(fuse_custom_config_dict)
Create a "ConvertCustomConfig" from a dictionary with the
following items:
"preserved_attributes": a list of attributes that persist
even if they are not used in "forward"
This function is primarily for backward compatibility and may be
removed in the future.
Return type:
*FuseCustomConfig*
set_preserved_attributes(attributes)
Set the names of the attributes that will persist in the graph
module even if they are not used in the model's "forward"
method.
Return type:
*FuseCustomConfig*
to_dict()
Convert this "FuseCustomConfig" to a dictionary with the items
described in "from_dict()".
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.FuseCustomConfig.html | pytorch docs |
described in "from_dict()".
Return type:
*Dict*[str, *Any*]
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.FuseCustomConfig.html | pytorch docs |
Adam
class torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False, *, foreach=None, maximize=False, capturable=False, differentiable=False, fused=None)
Implements Adam algorithm.
\begin{aligned}
&\textbf{input}: \gamma \text{ (lr)},\ \beta_1, \beta_2 \text{ (betas)},\ \theta_0 \text{ (params)},\ f(\theta) \text{ (objective)}, \\
&\hspace{13mm} \lambda \text{ (weight decay)},\ \textit{amsgrad},\ \textit{maximize} \\
&\textbf{initialize}: m_0 \leftarrow 0 \text{ (first moment)},\ v_0 \leftarrow 0 \text{ (second moment)},\ \widehat{v_0}^{max} \leftarrow 0 \\
&\textbf{for}\ t = 1\ \textbf{to}\ \ldots\ \textbf{do} \\
&\hspace{5mm}\textbf{if}\ \textit{maximize}: \\
&\hspace{10mm} g_t \leftarrow -\nabla_{\theta} f_t(\theta_{t-1}) \\
&\hspace{5mm}\textbf{else} \\
&\hspace{10mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
&\hspace{5mm}\textbf{if}\ \lambda \neq 0 \\
&\hspace{10mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
&\hspace{5mm} m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
&\hspace{5mm} v_t \leftarrow \beta_2 v_{t-1} + (1 - \beta_2) g_t^2 \\
&\hspace{5mm} \widehat{m_t} \leftarrow m_t / (1 - \beta_1^t) \\
&\hspace{5mm} \widehat{v_t} \leftarrow v_t / (1 - \beta_2^t) \\
&\hspace{5mm}\textbf{if}\ \textit{amsgrad} \\
&\hspace{10mm} \widehat{v_t}^{max} \leftarrow \max(\widehat{v_t}^{max}, \widehat{v_t}) \\
&\hspace{10mm} \theta_t \leftarrow \theta_{t-1} - \gamma\,\widehat{m_t} / \big(\sqrt{\widehat{v_t}^{max}} + \epsilon\big) \\
&\hspace{5mm}\textbf{else} \\
&\hspace{10mm} \theta_t \leftarrow \theta_{t-1} - \gamma\,\widehat{m_t} / \big(\sqrt{\widehat{v_t}} + \epsilon\big) \\
&\textbf{return}\ \theta_t
\end{aligned}
For further details regarding the algorithm we refer to Adam: A
Method for Stochastic Optimization.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
1e-3)
* **betas** (*Tuple**[**float**, **float**]**, **optional*) --
coefficients used for computing running averages of gradient
and its square (default: (0.9, 0.999))
| https://pytorch.org/docs/stable/generated/torch.optim.Adam.html | pytorch docs |
* **eps** (*float**, **optional*) -- term added to the
denominator to improve numerical stability (default: 1e-8)
* **weight_decay** (*float**, **optional*) -- weight decay (L2
penalty) (default: 0)
* **amsgrad** (*bool**, **optional*) -- whether to use the
AMSGrad variant of this algorithm from the paper On the
Convergence of Adam and Beyond (default: False)
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used (default: None)
* **maximize** (*bool**, **optional*) -- maximize the params
based on the objective, instead of minimizing (default: False)
* **capturable** (*bool**, **optional*) -- whether this instance
is safe to capture in a CUDA graph. Passing True can impair
ungraphed performance, so if you don't intend to graph capture
this instance, leave it False (default: False)
| https://pytorch.org/docs/stable/generated/torch.optim.Adam.html | pytorch docs |
* **differentiable** (*bool**, **optional*) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
* **fused** (*bool**, **optional*) -- whether the fused
implementation (CUDA only) is used. Currently,
torch.float64, torch.float32, torch.float16, and
torch.bfloat16 are supported. Since the fused implementation
is usually significantly faster than the for-loop
implementation, we try to use it whenever possible (all
parameters are on CUDA and are of a supported type). Else, we
continue with the for-loop implementation. (default: None)
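A minimal usage sketch (the model, data, and hyperparameters below are placeholders, not part of the documented API):
    import torch

    model = torch.nn.Linear(10, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

    x, y = torch.randn(32, 10), torch.randn(32, 1)
    for _ in range(5):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()  # applies the update rule shown above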
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
| https://pytorch.org/docs/stable/generated/torch.optim.Adam.html | pytorch docs |
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
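A hedged sketch of the fine-tuning pattern described above; the layer split and learning rates are illustrative only:
    import torch

    backbone = torch.nn.Linear(10, 10)
    head = torch.nn.Linear(10, 1)
    for p in backbone.parameters():
        p.requires_grad_(False)  # backbone starts out frozen

    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

    # Later in training: unfreeze the backbone and give it its own lr.
    for p in backbone.parameters():
        p.requires_grad_(True)
    optimizer.add_param_group({"params": backbone.parameters(), "lr": 1e-4})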
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
| https://pytorch.org/docs/stable/generated/torch.optim.Adam.html | pytorch docs |
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
| https://pytorch.org/docs/stable/generated/torch.optim.Adam.html | pytorch docs |
differs between optimizer classes.
* param_groups - a list containing all parameter groups where each
  parameter group is a dict
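A minimal checkpointing sketch built on "state_dict()" and "load_state_dict()" (the file name is arbitrary):
    import torch

    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Save model and optimizer state together.
    torch.save({"model": model.state_dict(),
                "optim": optimizer.state_dict()}, "checkpoint.pt")

    # Restore into freshly constructed objects.
    checkpoint = torch.load("checkpoint.pt")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optim"])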
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
| https://pytorch.org/docs/stable/generated/torch.optim.Adam.html | pytorch docs |
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.Adam.html | pytorch docs |
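A short sketch of the behavioral difference described above:
    import torch

    model = torch.nn.Linear(4, 1)
    optimizer = torch.optim.Adam(model.parameters())

    model(torch.randn(2, 4)).sum().backward()
    optimizer.zero_grad(set_to_none=False)
    print(model.weight.grad)   # a tensor of zeros

    model(torch.randn(2, 4)).sum().backward()
    optimizer.zero_grad(set_to_none=True)
    print(model.weight.grad)   # None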
torch.nn.utils.skip_init
torch.nn.utils.skip_init(module_cls, *args, **kwargs)
Given a module class object and args / kwargs, instantiates the
module without initializing parameters / buffers. This can be
useful if initialization is slow or if custom initialization will
be performed, making the default initialization unnecessary. There
are some caveats to this, due to the way this function is
implemented:
1. The module must accept a device arg in its constructor that is
   passed to any parameters or buffers created during construction.
2. The module must not perform any computation on parameters in its
   constructor except initialization (i.e. functions from
   "torch.nn.init").
If these conditions are satisfied, the module can be instantiated
with parameter / buffer values uninitialized, as if having been
created using "torch.empty()".
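A hedged sketch of a custom module that satisfies both caveats and can therefore be used with "skip_init()" (the module itself is illustrative, not part of PyTorch):
    import torch
    from torch import nn

    class MyLinear(nn.Module):
        def __init__(self, in_features, out_features, device=None):
            super().__init__()
            # The device arg is forwarded to parameter construction, and the
            # constructor only does initialization via torch.nn.init.
            self.weight = nn.Parameter(
                torch.empty(out_features, in_features, device=device))
            nn.init.kaiming_uniform_(self.weight)

        def forward(self, x):
            return x @ self.weight.t()

    m = torch.nn.utils.skip_init(MyLinear, 8, 2)  # weight left uninitialized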
Parameters:
* module_cls -- Class object; should be a subclass of
"torch.nn.Module" | https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html | pytorch docs |
"torch.nn.Module"
* **args** -- args to pass to the module's constructor
* **kwargs** -- kwargs to pass to the module's constructor
Returns:
Instantiated module with uninitialized parameters / buffers
Example:
>>> import torch
>>> m = torch.nn.utils.skip_init(torch.nn.Linear, 5, 1)
>>> m.weight
Parameter containing:
tensor([[0.0000e+00, 1.5846e+29, 7.8307e+00, 2.5250e-29, 1.1210e-44]],
requires_grad=True)
>>> m2 = torch.nn.utils.skip_init(torch.nn.Linear, in_features=6, out_features=1)
>>> m2.weight
Parameter containing:
tensor([[-1.4677e+24, 4.5915e-41, 1.4013e-45, 0.0000e+00, -1.4677e+24,
4.5915e-41]], requires_grad=True)
| https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html | pytorch docs |
QConfig
class torch.quantization.qconfig.QConfig(activation, weight)
Describes how to quantize a layer or a part of the network by
providing settings (observer classes) for activations and weights
respectively.
Note that QConfig needs to contain observer classes (like
MinMaxObserver) or a callable that returns instances on invocation,
not the concrete observer instances themselves. Quantization
preparation function will instantiate observers multiple times for
each of the layers.
Observer classes usually have reasonable default arguments, but
they can be overridden with the with_args method (which behaves
like functools.partial):
my_qconfig = QConfig(
activation=MinMaxObserver.with_args(dtype=torch.qint8),
weight=default_observer.with_args(dtype=torch.qint8))
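A hedged sketch of attaching such a QConfig to a model for eager-mode static quantization; the tiny model and observer choices are illustrative:
    import torch
    from torch.ao.quantization import (QConfig, MinMaxObserver, QuantStub,
                                       DeQuantStub, prepare, convert)

    model = torch.nn.Sequential(QuantStub(), torch.nn.Linear(8, 4), DeQuantStub())
    model.eval()
    model.qconfig = QConfig(
        activation=MinMaxObserver.with_args(dtype=torch.quint8),
        weight=MinMaxObserver.with_args(dtype=torch.qint8,
                                        qscheme=torch.per_tensor_symmetric))

    prepared = prepare(model)        # instantiates observers per layer
    prepared(torch.randn(2, 8))      # calibration pass
    quantized = convert(prepared)    # swaps in quantized modules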
| https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.QConfig.html | pytorch docs |
torch.nn.functional.gelu
torch.nn.functional.gelu(input, approximate='none') -> Tensor
When the approximate argument is 'none', it applies element-wise
the function \text{GELU}(x) = x * \Phi(x)
where \Phi(x) is the Cumulative Distribution Function of the
Gaussian distribution.
When the approximate argument is 'tanh', GELU is estimated with
\text{GELU}(x) = 0.5 * x * (1 + \text{Tanh}(\sqrt{2 / \pi} * (x
+ 0.044715 * x^3)))
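A quick comparison of the two variants (only the closeness of the results is meaningful, not the exact values):
    import torch
    import torch.nn.functional as F

    x = torch.randn(5)
    exact = F.gelu(x)                        # Gaussian-CDF form
    approx = F.gelu(x, approximate='tanh')   # tanh-based estimate
    print((exact - approx).abs().max())      # small difference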
See Gaussian Error Linear Units (GELUs). | https://pytorch.org/docs/stable/generated/torch.nn.functional.gelu.html | pytorch docs |
torch.linalg.ldl_factor
torch.linalg.ldl_factor(A, *, hermitian=False, out=None)
Computes a compact representation of the LDL factorization of a
Hermitian or symmetric (possibly indefinite) matrix.
When "A" is complex valued it can be Hermitian ("hermitian"=
True) or symmetric ("hermitian"= False).
The factorization is of the form A = L D L^T. If "hermitian" is
True, the transpose operation is the conjugate transpose.
L (or U) and D are stored in compact form in "LD". They follow the
format specified by LAPACK's sytrf function. These tensors may be
used in "torch.linalg.ldl_solve()" to solve linear systems.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
Note:
When inputs are on a CUDA device, this function synchronizes that
| https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html | pytorch docs |
device with the CPU. For a version of this function that does not
synchronize, see "torch.linalg.ldl_factor_ex()".
Parameters:
A (Tensor) -- tensor of shape (*, n, n) where * is zero or
more batch dimensions consisting of symmetric or Hermitian
matrices.
Keyword Arguments:
* hermitian (bool, optional) -- whether to consider
the input to be Hermitian or symmetric. For real-valued
matrices, this switch has no effect. Default: False.
* **out** (*tuple**, **optional*) -- tuple of two tensors to
write the output to. Ignored if *None*. Default: *None*.
Returns:
A named tuple (LD, pivots).
Examples:
>>> A = torch.randn(3, 3)
>>> A = A @ A.mT # make symmetric
>>> A
tensor([[7.2079, 4.2414, 1.9428],
[4.2414, 3.4554, 0.3264],
[1.9428, 0.3264, 1.3823]])
>>> LD, pivots = torch.linalg.ldl_factor(A)
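The returned factors can then be passed to "torch.linalg.ldl_solve()"; a hedged continuation of the example above:
    >>> B = torch.randn(3, 2)
    >>> X = torch.linalg.ldl_solve(LD, pivots, B)
    >>> torch.linalg.norm(A @ X - B)  # close to zero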
| https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html | pytorch docs |