torch.Tensor.bitwise_and_
Tensor.bitwise_and_() -> Tensor
In-place version of "bitwise_and()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_and_.html | pytorch docs |
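A minimal sketch of the in-place update (example values chosen for illustration):
>>> a = torch.tensor([12, 10])
>>> a.bitwise_and_(torch.tensor([10, 6]))
tensor([8, 2])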
torch.chain_matmul
torch.chain_matmul(*matrices, out=None)
Returns the matrix product of the N 2-D tensors. This product is
efficiently computed using the matrix chain order algorithm, which
selects the multiplication order that incurs the lowest cost in terms of
arithmetic operations ([CLRS]). Note that since this is a function
to compute the product, N needs to be greater than or equal to 2;
if equal to 2 then a trivial matrix-matrix product is returned. If
N is 1, then this is a no-op - the original matrix is returned as
is.
Warning:
"torch.chain_matmul()" is deprecated and will be removed in a
future PyTorch release. Use "torch.linalg.multi_dot()" instead,
which accepts a list of two or more tensors rather than multiple
arguments.
Parameters:
* matrices (Tensors...) -- a sequence of 2 or more 2-D
tensors whose product is to be determined.
* **out** (*Tensor**, **optional*) -- the output tensor. Ignored
if "out" = "None".
Returns:
if the i^{th} tensor was of dimensions p_{i} \times p_{i + 1},
then the product would be of dimensions p_{1} \times p_{N + 1}.
Return type:
Tensor
Example:
>>> a = torch.randn(3, 4)
>>> b = torch.randn(4, 5)
>>> c = torch.randn(5, 6)
>>> d = torch.randn(6, 7)
>>> # will raise a deprecation warning
>>> torch.chain_matmul(a, b, c, d)
tensor([[ -2.3375, -3.9790, -4.1119, -6.6577, 9.5609, -11.5095, -3.2614],
[ 21.4038, 3.3378, -8.4982, -5.2457, -10.2561, -2.4684, 2.7163],
[ -0.9647, -5.8917, -2.3213, -5.2284, 12.8615, -12.2816, -2.5095]])
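Since "torch.chain_matmul()" is deprecated, the same product can be computed with the suggested replacement; a minimal sketch reusing the tensors above:
>>> torch.linalg.multi_dot([a, b, c, d])  # same result, without the deprecation warning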
| https://pytorch.org/docs/stable/generated/torch.chain_matmul.html | pytorch docs |
torch._foreach_log2_
torch._foreach_log2_(self: List[Tensor]) -> None
Apply "torch.log2()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_log2_.html | pytorch docs |
torch.cuda.utilization
torch.cuda.utilization(device=None)
Returns the percent of time over the past sample period during
which one or more kernels were executing on the GPU as given by
nvidia-smi.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
int
Warning: Each sample period may be between 1 second and 1/6 second,
depending on the product being queried. | https://pytorch.org/docs/stable/generated/torch.cuda.utilization.html | pytorch docs |
torch.get_num_threads
torch.get_num_threads() -> int
Returns the number of threads used for parallelizing CPU operations | https://pytorch.org/docs/stable/generated/torch.get_num_threads.html | pytorch docs |
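A quick sketch pairing it with "torch.set_num_threads()"; the initial value depends on the machine:
>>> torch.set_num_threads(4)
>>> torch.get_num_threads()
4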
torch.Tensor.hsplit
Tensor.hsplit(split_size_or_sections) -> List of Tensors
See "torch.hsplit()" | https://pytorch.org/docs/stable/generated/torch.Tensor.hsplit.html | pytorch docs |
CrossEntropyLoss
class torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean', label_smoothing=0.0)
This criterion computes the cross entropy loss between input logits
and target.
It is useful when training a classification problem with C
classes. If provided, the optional argument "weight" should be a 1D
Tensor assigning weight to each of the classes. This is
particularly useful when you have an unbalanced training set.
The input is expected to contain the unnormalized logits for each
class (which do not need to be positive or sum to 1, in general).
input has to be a Tensor of size (C) for unbatched input,
(minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1
for the K-dimensional case. The latter is useful for
higher-dimensional inputs, such as computing cross entropy loss
per-pixel for 2D images.
The target that this criterion expects should contain either:
Class indices in the range [0, C) where C is the number of
classes; if ignore_index is specified, this loss also accepts
this class index (this index may not necessarily be in the class
range). The unreduced (i.e. with "reduction" set to "'none'")
loss for this case can be described as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n}
\log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^C \exp(x_{n,c})} \cdot
\mathbb{1}\{y_n \not= \text{ignore\_index}\}
where x is the input, y is the target, w is the weight, C is the
number of classes, and N spans the minibatch dimension as well as
d_1, ..., d_k for the K-dimensional case. If "reduction" is not
"'none'" (default "'mean'"), then
\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N
w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore\_index}\}} l_n, &
\text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, &
\text{if reduction} = \text{`sum'.} \end{cases}
Note that this case is equivalent to the combination of
"LogSoftmax" and "NLLLoss".
Probabilities for each class; useful when labels beyond a single
class per minibatch item are required, such as for blended
labels, label smoothing, etc. The unreduced (i.e. with
"reduction" set to "'none'") loss for this case can be described
as:
\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = -
\sum_{c=1}^C w_c \log \frac{\exp(x_{n,c})}{\sum_{i=1}^C
\exp(x_{n,i})} y_{n,c}
where x is the input, y is the target, w is the weight, C is the
number of classes, and N spans the minibatch dimension as well as
d_1, ..., d_k for the K-dimensional case. If "reduction" is not
"'none'" (default "'mean'"), then
\ell(x, y) = \begin{cases} \frac{\sum_{n=1}^N l_n}{N}, &
\text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, &
\text{if reduction} = \text{`sum'.} \end{cases}
Note:
The performance of this criterion is generally better when
*target* contains class indices, as this allows for optimized
computation. Consider providing *target* as class probabilities
only when a single class label per minibatch item is too
restrictive.
Parameters:
* weight (Tensor, optional) -- a manual rescaling
weight given to each class. If given, has to be a Tensor of
size C
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **ignore_index** (*int**, **optional*) -- Specifies a target
value that is ignored and does not contribute to the input
gradient. When "size_average" is "True", the loss is averaged
over non-ignored targets. Note that "ignore_index" is only
applicable when the target contains class indices.
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the weighted
mean of the output is taken, "'sum'": the output will be
summed. Note: "size_average" and "reduce" are in the process
of being deprecated, and in the meantime, specifying either of
those two args will override "reduction". Default: "'mean'"
* **label_smoothing** (*float**, **optional*) -- A float in
[0.0, 1.0]. Specifies the amount of smoothing when computing
the loss, where 0.0 means no smoothing. The targets become a
mixture of the original ground truth and a uniform
distribution as described in Rethinking the Inception
Architecture for Computer Vision. Default: 0.0.
Shape:
* Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K
\geq 1 in the case of K-dimensional loss.
* Target: If containing class indices, shape (), (N) or (N, d_1,
d_2, ..., d_K) with K \geq 1 in the case of K-dimensional loss
where each value should be between [0, C). If containing class
probabilities, same shape as the input and each value should
be between [0, 1].
* Output: If reduction is 'none', shape (), (N) or (N, d_1, d_2,
..., d_K) with K \geq 1 in the case of K-dimensional loss,
depending on the shape of the input. Otherwise, scalar.
where:
\begin{aligned} C ={} & \text{number of classes} \\ N
={} & \text{batch size} \\ \end{aligned}
Examples:
>>> # Example of target with class indices
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
>>>
>>> # Example of target with class probabilities
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5).softmax(dim=1)
>>> output = loss(input, target)
>>> output.backward()
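>>> # A sanity-check sketch (not part of the upstream docs): for class
>>> # indices, CrossEntropyLoss matches LogSoftmax followed by NLLLoss
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> ce = nn.CrossEntropyLoss()(input, target)
>>> nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(input), target)
>>> torch.allclose(ce, nll)
True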
| https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html | pytorch docs |
prepare_qat
class torch.quantization.prepare_qat(model, mapping=None, inplace=False)
Prepares a copy of the model for quantization calibration or
quantization-aware training and converts it to a quantized version.
Quantization configuration should be assigned preemptively to
individual submodules in .qconfig attribute.
Parameters:
* model -- input model to be modified in-place
* **mapping** -- dictionary that maps float modules to quantized
modules to be replaced.
* **inplace** -- carry out model transformations in-place, the
original module is mutated
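A minimal usage sketch; the model and the 'fbgemm' qconfig choice here are illustrative assumptions, not part of the upstream docs:
>>> import torch.ao.quantization as tq
>>> model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).train()
>>> model.qconfig = tq.get_default_qat_qconfig('fbgemm')  # assign qconfig first
>>> qat_model = tq.prepare_qat(model, inplace=False)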
| https://pytorch.org/docs/stable/generated/torch.quantization.prepare_qat.html | pytorch docs |
torch.Tensor.acos_
Tensor.acos_() -> Tensor
In-place version of "acos()" | https://pytorch.org/docs/stable/generated/torch.Tensor.acos_.html | pytorch docs |
torch.nn.functional.threshold
torch.nn.functional.threshold(input, threshold, value, inplace=False)
Thresholds each element of the input Tensor.
See "Threshold" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.threshold.html | pytorch docs |
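A minimal sketch; elements at or below "threshold" are replaced by "value":
>>> import torch.nn.functional as F
>>> F.threshold(torch.tensor([-1.0, 0.5, 2.0]), 0.5, 0.0)
tensor([0., 0., 2.])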
torch.xlogy
torch.xlogy(input, other, *, out=None) -> Tensor
Alias for "torch.special.xlogy()". | https://pytorch.org/docs/stable/generated/torch.xlogy.html | pytorch docs |
torch.sum
torch.sum(input, *, dtype=None) -> Tensor
Returns the sum of all elements in the "input" tensor.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
dtype ("torch.dtype", optional) -- the desired data type of
returned tensor. If specified, the input tensor is casted to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 0.1133, -0.9567, 0.2958]])
>>> torch.sum(a)
tensor(-0.5475)
torch.sum(input, dim, keepdim=False, *, dtype=None) -> Tensor
Returns the sum of each row of the "input" tensor in the given
dimension "dim". If "dim" is a list of dimensions, reduce over all
of them.
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1. | https://pytorch.org/docs/stable/generated/torch.sum.html | pytorch docs |
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints**, **optional*) -- the
dimension or dimensions to reduce. If "None", all dimensions
are reduced.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
dtype ("torch.dtype", optional) -- the desired data type of
returned tensor. If specified, the input tensor is casted to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.0569, -0.2475, 0.0737, -0.3429],
[-0.2993, 0.9138, 0.9337, -1.6864],
[ 0.1132, 0.7892, -0.1003, 0.5688],
[ 0.3637, -0.9906, -0.4752, -1.5197]])
>>> torch.sum(a, 1)
tensor([-0.4598, -0.1381, 1.3708, -2.6217])
>>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)
>>> torch.sum(b, (2, 1))
tensor([ 435., 1335., 2235., 3135.])
| https://pytorch.org/docs/stable/generated/torch.sum.html | pytorch docs |
torch.Tensor.is_cuda
Tensor.is_cuda
Is "True" if the Tensor is stored on the GPU, "False" otherwise. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_cuda.html | pytorch docs |
torch.autograd.grad
torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False, is_grads_batched=False)
Computes and returns the sum of gradients of outputs with respect
to the inputs.
"grad_outputs" should be a sequence of length matching "output"
containing the "vector" in vector-Jacobian product, usually the
pre-computed gradients w.r.t. each of the outputs. If an output
doesn't require_grad, then the gradient can be "None").
Note:
If you run any forward ops, create "grad_outputs", and/or call
"grad" in a user-specified CUDA stream context, see Stream
semantics of backward passes.
Note:
"only_inputs" argument is deprecated and is ignored now (defaults
to "True"). To accumulate gradient for other parts of the graph,
please use "torch.autograd.backward".
Parameters:
* outputs (sequence of Tensor) -- outputs of the
  differentiated function.
* **inputs** (*sequence of Tensor*) -- Inputs w.r.t. which the
gradient will be returned (and not accumulated into ".grad").
* **grad_outputs** (*sequence of Tensor*) -- The "vector" in the
vector-Jacobian product. Usually gradients w.r.t. each output.
None values can be specified for scalar Tensors or ones that
don't require grad. If a None value would be acceptable for
all grad_tensors, then this argument is optional. Default:
None.
* **retain_graph** (*bool**, **optional*) -- If "False", the
graph used to compute the grad will be freed. Note that in
nearly all cases setting this option to "True" is not needed
and often can be worked around in a much more efficient way.
Defaults to the value of "create_graph".
* **create_graph** (*bool**, **optional*) -- If "True", graph of
the derivative will be constructed, allowing to compute higher
order derivative products. Default: "False".
* **allow_unused** (*bool**, **optional*) -- If "False",
specifying inputs that were not used when computing outputs
(and therefore their grad is always zero) is an error.
Defaults to "False".
* **is_grads_batched** (*bool**, **optional*) -- If "True", the
first dimension of each tensor in "grad_outputs" will be
interpreted as the batch dimension. Instead of computing a
single vector-Jacobian product, we compute a batch of vector-
Jacobian products for each "vector" in the batch. We use the
vmap prototype feature as the backend to vectorize calls to
the autograd engine so that this computation can be performed
in a single call. This should lead to performance improvements
when compared to manually looping and performing backward
multiple times. Note that due to this feature being
experimental, there may be performance cliffs. Please use
"torch._C._debug_only_display_vmap_fallback_warnings(True)" to
show any performance warnings and file an issue on github if
warnings exist for your use case. Defaults to "False".
Return type:
Tuple[Tensor, ...] | https://pytorch.org/docs/stable/generated/torch.autograd.grad.html | pytorch docs |
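A minimal sketch for a scalar output, where "grad_outputs" can be omitted:
>>> x = torch.tensor([2., 3.], requires_grad=True)
>>> y = (x ** 2).sum()
>>> torch.autograd.grad(y, x)
(tensor([4., 6.]),)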
torch.Tensor.index_add_
Tensor.index_add_(dim, index, source, *, alpha=1) -> Tensor
Accumulate the elements of "alpha" times "source" into the "self"
tensor by adding to the indices in the order given in "index". For
example, if "dim == 0", "index[i] == j", and "alpha=-1", then the
"i"th row of "source" is subtracted from the "j"th row of "self".
The "dim"th dimension of "source" must have the same size as the
length of "index" (which must be a vector), and all other
dimensions must match "self", or an error will be raised.
For a 3-D tensor the output is given as:
self[index[i], :, :] += alpha * src[i, :, :] # if dim == 0
self[:, index[i], :] += alpha * src[:, i, :] # if dim == 1
self[:, :, index[i]] += alpha * src[:, :, i] # if dim == 2
Note:
This operation may behave nondeterministically when given tensors
on a CUDA device. See Reproducibility for more information.
Parameters:
* dim (int) -- dimension along which to index
* **index** (*Tensor*) -- indices of "source" to select from,
should have dtype either *torch.int64* or *torch.int32*
* **source** (*Tensor*) -- the tensor containing values to add
Keyword Arguments:
alpha (Number) -- the scalar multiplier for "source"
Example:
>>> x = torch.ones(5, 3)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2])
>>> x.index_add_(0, index, t)
tensor([[ 2., 3., 4.],
[ 1., 1., 1.],
[ 8., 9., 10.],
[ 1., 1., 1.],
[ 5., 6., 7.]])
>>> x.index_add_(0, index, t, alpha=-1)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]])
| https://pytorch.org/docs/stable/generated/torch.Tensor.index_add_.html | pytorch docs |
torch.Tensor.ceil
Tensor.ceil() -> Tensor
See "torch.ceil()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ceil.html | pytorch docs |
torch.Tensor.bfloat16
Tensor.bfloat16(memory_format=torch.preserve_format) -> Tensor
"self.bfloat16()" is equivalent to "self.to(torch.bfloat16)". See
"to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.bfloat16.html | pytorch docs |
torch.Tensor.matmul
Tensor.matmul(tensor2) -> Tensor
See "torch.matmul()" | https://pytorch.org/docs/stable/generated/torch.Tensor.matmul.html | pytorch docs |
torch.Tensor.adjoint
Tensor.adjoint() -> Tensor
Alias for "adjoint()" | https://pytorch.org/docs/stable/generated/torch.Tensor.adjoint.html | pytorch docs |
torch.tensordot
torch.tensordot(a, b, dims=2, out=None)
Returns a contraction of a and b over multiple dimensions.
"tensordot" implements a generalized matrix product.
Parameters:
* a (Tensor) -- Left tensor to contract
* **b** (*Tensor*) -- Right tensor to contract
* **dims** (*int** or **Tuple**[**List**[**int**]**,
**List**[**int**]**] or **List**[**List**[**int**]**]
**containing two lists** or **Tensor*) -- number of dimensions
to contract or explicit lists of dimensions for "a" and "b"
respectively
When called with a non-negative integer argument "dims" = d, and
the number of dimensions of "a" and "b" is m and n, respectively,
"tensordot()" computes
r_{i_0,...,i_{m-d}, i_d,...,i_n} = \sum_{k_0,...,k_{d-1}}
a_{i_0,...,i_{m-d},k_0,...,k_{d-1}} \times b_{k_0,...,k_{d-1},
i_d,...,i_n}.
When called with "dims" of the list form, the given dimensions will
be contracted in place of the last d of "a" and the first d of "b".
The sizes in these dimensions must match, but "tensordot()" will
deal with broadcasted dimensions.
Examples:
>>> a = torch.arange(60.).reshape(3, 4, 5)
>>> b = torch.arange(24.).reshape(4, 3, 2)
>>> torch.tensordot(a, b, dims=([1, 0], [0, 1]))
tensor([[4400., 4730.],
[4532., 4874.],
[4664., 5018.],
[4796., 5162.],
[4928., 5306.]])
>>> a = torch.randn(3, 4, 5, device='cuda')
>>> b = torch.randn(4, 5, 6, device='cuda')
>>> c = torch.tensordot(a, b, dims=2).cpu()
tensor([[ 8.3504, -2.5436, 6.2922, 2.7556, -1.0732, 3.2741],
[ 3.3161, 0.0704, 5.0187, -0.4079, -4.3126, 4.8744],
[ 0.8223, 3.9445, 3.2168, -0.2400, 3.4117, 1.7780]])
>>> a = torch.randn(3, 5, 4, 6)
>>> b = torch.randn(6, 4, 5, 3)
>>> torch.tensordot(a, b, dims=([2, 1, 3], [1, 2, 0]))
tensor([[ 7.7193, -2.4867, -10.3204],
[ 1.5513, -14.4737, -6.5113],
[ -0.2850, 4.2573, -3.5997]]) | https://pytorch.org/docs/stable/generated/torch.tensordot.html | pytorch docs |
torch.mvlgamma
torch.mvlgamma(input, p, *, out=None) -> Tensor
Alias for "torch.special.multigammaln()". | https://pytorch.org/docs/stable/generated/torch.mvlgamma.html | pytorch docs |
torch.signal.windows.nuttall
torch.signal.windows.nuttall(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the minimum 4-term Blackman-Harris window according to
Nuttall.
w_n = 1 - 0.36358 \cos{(z_n)} + 0.48917 \cos{(2z_n)} - 0.13659
\cos{(3z_n)} + 0.01064 \cos{(4z_n)}
where "z_n = 2 Ï n/ M".
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* sym (bool, optional) -- If False, returns a
periodic window suitable for use in spectral analysis. If
True, returns a symmetric window suitable for use in filter
design. Default: True.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
References:
- A. Nuttall, "Some windows with very good sidelobe behavior,"
  IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 1, pp. 84-91,
  Feb 1981. https://doi.org/10.1109/TASSP.1981.1163506
- Heinzel G. et al., "Spectrum and spectral density estimation by the Discrete Fourier transform (DFT),
  including a comprehensive list of window functions and some new flat-top windows",
  February 15, 2002 https://holometer.fnal.gov/GH_FFT.pdf
Examples:
>>> # Generates a symmetric Nuttall window.
>>> torch.signal.windows.nuttall(5, sym=True)
tensor([3.6280e-04, 2.2698e-01, 1.0000e+00, 2.2698e-01, 3.6280e-04])
>>> # Generates a periodic Nuttall window.
>>> torch.signal.windows.nuttall(5, sym=False)
tensor([3.6280e-04, 1.1052e-01, 7.9826e-01, 7.9826e-01, 1.1052e-01])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.nuttall.html | pytorch docs |
Upsample
class torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)
Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D
(volumetric) data.
The input data is assumed to be of the form minibatch x channels x
[optional depth] x [optional height] x width. Hence, for spatial
inputs, we expect a 4D Tensor and for volumetric inputs, we expect
a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and
linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input
Tensor, respectively.
One can either give a "scale_factor" or the target output "size" to
calculate the output size. (You cannot give both, as it is
ambiguous)
Parameters:
* size (int or Tuple[int] or Tuple[int,
int] or Tuple[int, int, int],
optional) -- output spatial sizes
* **scale_factor** (*float** or **Tuple**[**float**] or
Tuple[float, float] or Tuple[float,
float, float], optional*) -- multiplier for
spatial size. Has to match input size if it is a tuple.
* **mode** (*str**, **optional*) -- the upsampling algorithm:
one of "'nearest'", "'linear'", "'bilinear'", "'bicubic'" and
"'trilinear'". Default: "'nearest'"
* **align_corners** (*bool**, **optional*) -- if "True", the
corner pixels of the input and output tensors are aligned, and
thus preserving the values at those pixels. This only has
effect when "mode" is "'linear'", "'bilinear'", "'bicubic'",
or "'trilinear'". Default: "False"
* **recompute_scale_factor** (*bool**, **optional*) -- recompute
the scale_factor for use in the interpolation calculation. If
*recompute_scale_factor* is "True", then *scale_factor* must
be passed in and *scale_factor* is used to compute the output
size. The computed output size will be used to infer new
scales for the interpolation. Note that when scale_factor is
floating-point, it may differ from the recomputed
scale_factor due to rounding and precision issues. If
recompute_scale_factor is "False", then size or
scale_factor will be used directly for interpolation.
Shape:
* Input: (N, C, W_{in}), (N, C, H_{in}, W_{in}) or (N, C,
D_{in}, H_{in}, W_{in})
* Output: (N, C, W_{out}), (N, C, H_{out}, W_{out}) or (N, C,
D_{out}, H_{out}, W_{out}), where
D_{out} = \left\lfloor D_{in} \times \text{scale\_factor}
\right\rfloor
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor}
\right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor}
\right\rfloor
Warning:
With "align_corners = True", the linearly interpolating modes
(*linear*, *bilinear*, *bicubic*, and *trilinear*) don't
proportionally align the output and input pixels, and thus the
output values can depend on the input size. This was the default
behavior for these modes up to version 0.3.1. Since then, the
default behavior is "align_corners = False". See below for
concrete examples on how this affects the outputs.
Note:
If you want downsampling/general resizing, you should use
"interpolate()".
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[1., 2.],
[3., 4.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(input)
tensor([[[[1., 1., 2., 2.],
[1., 1., 2., 2.],
[3., 3., 4., 4.],
[3., 3., 4., 4.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False
>>> m(input)
tensor([[[[1.0000, 1.2500, 1.7500, 2.0000],
[1.5000, 1.7500, 2.2500, 2.5000],
[2.5000, 2.7500, 3.2500, 3.5000],
[3.0000, 3.2500, 3.7500, 4.0000]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> m(input)
tensor([[[[1.0000, 1.3333, 1.6667, 2.0000],
[1.6667, 2.0000, 2.3333, 2.6667],
[2.3333, 2.6667, 3.0000, 3.3333],
[3.0000, 3.3333, 3.6667, 4.0000]]]])
>>> # Try scaling the same data in a larger tensor
>>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)
>>> input_3x3[:, :, :2, :2].copy_(input)
tensor([[[[1., 2.],
[3., 4.]]]])
>>> input_3x3
tensor([[[[1., 2., 0.],
[3., 4., 0.],
[0., 0., 0.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False
>>> # Notice that values in top left corner are the same with the small input (except at boundary)
>>> m(input_3x3)
tensor([[[[1.0000, 1.2500, 1.7500, 1.5000, 0.5000, 0.0000],
[1.5000, 1.7500, 2.2500, 1.8750, 0.6250, 0.0000],
[2.5000, 2.7500, 3.2500, 2.6250, 0.8750, 0.0000],
[2.2500, 2.4375, 2.8125, 2.2500, 0.7500, 0.0000],
[0.7500, 0.8125, 0.9375, 0.7500, 0.2500, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> # Notice that values in top left corner are now changed
>>> m(input_3x3)
tensor([[[[1.0000, 1.4000, 1.8000, 1.6000, 0.8000, 0.0000],
[1.8000, 2.2000, 2.6000, 2.2400, 1.1200, 0.0000],
[2.6000, 3.0000, 3.4000, 2.8800, 1.4400, 0.0000],
[2.4000, 2.7200, 3.0400, 2.5600, 1.2800, 0.0000],
[1.2000, 1.3600, 1.5200, 1.2800, 0.6400, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html | pytorch docs |
Conv3d
class torch.ao.nn.quantized.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
Applies a 3D convolution over a quantized input signal composed of
several quantized input planes.
For details on input arguments, parameters, and implementation see
"Conv3d".
Note:
Only *zeros* is supported for the "padding_mode" argument.
Note:
Only *torch.quint8* is supported for the input data type.
Variables:
* weight (Tensor) -- packed tensor derived from the
learnable weight parameter.
* **scale** (*Tensor*) -- scalar for the output scale
* **zero_point** (*Tensor*) -- scalar for the output zero point
See "Conv3d" for other attributes.
Examples:
>>> # With square kernels and equal stride
>>> m = nn.quantized.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2), dilation=(1, 2, 2))
>>> input = torch.randn(20, 16, 56, 56, 56)
>>> # quantize input to quint8
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
classmethod from_float(mod)
Creates a quantized module from a float module or qparams_dict.
Parameters:
**mod** (*Module*) -- a float module, either produced by
torch.ao.quantization utilities or provided by the user
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv3d.html | pytorch docs |
torch.nn.utils.rnn.pad_sequence
torch.nn.utils.rnn.pad_sequence(sequences, batch_first=False, padding_value=0.0)
Pad a list of variable length Tensors with "padding_value".
"pad_sequence" stacks a list of Tensors along a new dimension, and
pads them to equal length. For example, if the input is a list of
sequences with size "L x *", the output is of size "T x B x *" if
"batch_first" is "False", and "B x T x *" otherwise.
B is the batch size, equal to the number of elements in
"sequences". T is the length of the longest sequence. L is the
length of each sequence. "*" is any number of trailing dimensions,
including none.
-[ Example ]-
>>> from torch.nn.utils.rnn import pad_sequence
>>> a = torch.ones(25, 300)
>>> b = torch.ones(22, 300)
>>> c = torch.ones(15, 300)
>>> pad_sequence([a, b, c]).size()
torch.Size([25, 3, 300])
Note:
This function returns a Tensor of size "T x B x *" or "B x T x *"
where T is the length of the longest sequence. This function
assumes trailing dimensions and type of all the Tensors in
sequences are same.
Parameters:
* sequences (list[Tensor]) -- list of variable
length sequences.
* **batch_first** (*bool**, **optional*) -- output will be in "B
x T x *" if True, or in "T x B x *" otherwise. Default: False.
* **padding_value** (*float**, **optional*) -- value for padded
elements. Default: 0.
Returns:
Tensor of size "T x B x *" if "batch_first" is "False". Tensor
of size "B x T x *" otherwise.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_sequence.html | pytorch docs |
torch.initial_seed
torch.initial_seed()
Returns the initial seed for generating random numbers as a Python
long.
Return type:
int | https://pytorch.org/docs/stable/generated/torch.initial_seed.html | pytorch docs |
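A minimal sketch; after "torch.manual_seed()", the same seed is reported back:
>>> torch.manual_seed(42)
<torch._C.Generator object at ...>
>>> torch.initial_seed()
42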
load_observer_state_dict
class torch.quantization.observer.load_observer_state_dict(mod, obs_dict)
Given input model and a state_dict containing model observer stats,
load the stats back into the model. The observer state_dict can be
saved using torch.ao.quantization.get_observer_state_dict | https://pytorch.org/docs/stable/generated/torch.quantization.observer.load_observer_state_dict.html | pytorch docs |
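A minimal round-trip sketch; "calibrated" and "fresh" are illustrative stand-ins for two instances of the same observed model:
>>> from torch.ao.quantization import get_observer_state_dict
>>> obs_dict = get_observer_state_dict(calibrated)
>>> load_observer_state_dict(fresh, obs_dict)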
torch.scatter_add
torch.scatter_add(input, dim, index, src) -> Tensor
Out-of-place version of "torch.Tensor.scatter_add_()" | https://pytorch.org/docs/stable/generated/torch.scatter_add.html | pytorch docs |
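A minimal sketch; with "dim=0", "input[index[i][j]][j]" accumulates "src[i][j]":
>>> src = torch.ones(2, 3)
>>> index = torch.tensor([[0, 1, 2], [0, 1, 2]])
>>> torch.scatter_add(torch.zeros(3, 3), 0, index, src)
tensor([[2., 0., 0.],
        [0., 2., 0.],
        [0., 0., 2.]])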
torch.trapezoid
torch.trapezoid(y, x=None, *, dx=None, dim=-1) -> Tensor
Computes the trapezoidal rule along "dim". By default the spacing
between elements is assumed to be 1, but "dx" can be used to
specify a different constant spacing, and "x" can be used to
specify arbitrary spacing along "dim".
Assuming "y" is a one-dimensional tensor with elements {y_0, y_1,
..., y_n}, the default computation is
\begin{aligned} \sum_{i = 1}^{n-1} \frac{1}{2} (y_i +
y_{i-1}) \end{aligned}
When "dx" is specified the computation becomes
\begin{aligned} \sum_{i = 1}^{n-1} \frac{\Delta x}{2} (y_i +
y_{i-1}) \end{aligned}
effectively multiplying the result by "dx". When "x" is specified,
assuming "x" is also a one-dimensional tensor with elements {x_0,
x_1, ..., x_n}, the computation becomes
\begin{aligned} \sum_{i = 1}^{n-1} \frac{(x_i - x_{i-1})}{2}
(y_i + y_{i-1}) \end{aligned}
When "x" and "y" have the same size, the computation is as
described above and no broadcasting is needed. The broadcasting
behavior of this function is as follows when their sizes are
different. For both "x" and "y", the function computes the
difference between consecutive elements along dimension "dim". This
effectively creates two tensors, x_diff and y_diff, that have
the same shape as the original tensors except their lengths along
the dimension "dim" is reduced by 1. After that, those two tensors
are broadcast together to compute final output as part of the
trapezoidal rule. See the examples below for details.
Note:
The trapezoidal rule is a technique for approximating the
definite integral of a function by averaging its left and right
Riemann sums. The approximation becomes more accurate as the
resolution of the partition increases.
Parameters:
* y (Tensor) -- Values to use when computing the
  trapezoidal rule.
* **x** (*Tensor*) -- If specified, defines spacing between
values as specified above.
Keyword Arguments:
* dx (float) -- constant spacing between values. If
neither "x" or "dx" are specified then this defaults to 1.
Effectively multiplies the result by its value.
* **dim** (*int*) -- The dimension along which to compute the
trapezoidal rule. The last (inner-most) dimension by default.
Examples:
>>> # Computes the trapezoidal rule in 1D, spacing is implicitly 1
>>> y = torch.tensor([1, 5, 10])
>>> torch.trapezoid(y)
tensor(10.5)
>>> # Computes the same trapezoidal rule directly to verify
>>> (1 + 10 + 10) / 2
10.5
>>> # Computes the trapezoidal rule in 1D with constant spacing of 2
>>> # NOTE: the result is the same as before, but multiplied by 2
>>> torch.trapezoid(y, dx=2)
21.0
>>> # Computes the trapezoidal rule in 1D with arbitrary spacing
| https://pytorch.org/docs/stable/generated/torch.trapezoid.html | pytorch docs |
x = torch.tensor([1, 3, 6])
>>> torch.trapezoid(y, x)
28.5
>>> # Computes the same trapezoidal rule directly to verify
>>> ((3 - 1) * (1 + 5) + (6 - 3) * (5 + 10)) / 2
28.5
>>> # Computes the trapezoidal rule for each row of a 3x3 matrix
>>> y = torch.arange(9).reshape(3, 3)
>>> y
tensor([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
>>> torch.trapezoid(y)
tensor([ 2., 8., 14.])
>>> # Computes the trapezoidal rule for each column of the matrix
>>> torch.trapezoid(y, dim=0)
tensor([ 6., 8., 10.])
>>> # Computes the trapezoidal rule for each row of a 3x3 ones matrix
>>> # with the same arbitrary spacing
>>> y = torch.ones(3, 3)
>>> x = torch.tensor([1, 3, 6])
>>> torch.trapezoid(y, x)
tensor([5., 5., 5.])
>>> # Computes the trapezoidal rule for each row of a 3x3 ones matrix
>>> # with different arbitrary spacing per row
>>> y = torch.ones(3, 3)
>>> x = torch.tensor([[1, 2, 3], [1, 3, 5], [1, 4, 7]])
>>> torch.trapezoid(y, x)
tensor([2., 4., 6.])
| https://pytorch.org/docs/stable/generated/torch.trapezoid.html | pytorch docs |
RAdam
class torch.optim.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, *, foreach=None, differentiable=False)
Implements RAdam algorithm.
\begin{aligned}
    &\rule{110mm}{0.4pt} \\
    &\textbf{input} : \gamma \text{ (lr)}, \: \beta_1, \beta_2 \text{ (betas)},
        \: \theta_0 \text{ (params)}, \: f(\theta) \text{ (objective)},
        \: \lambda \text{ (weight decay)}, \: \epsilon \text{ (epsilon)} \\
    &\textbf{initialize} : m_0 \leftarrow 0 \text{ (first moment)},
        \: v_0 \leftarrow 0 \text{ (second moment)},
        \: \rho_{\infty} \leftarrow 2/(1-\beta_2) - 1 \\
    &\rule{110mm}{0.4pt} \\
    &\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\
    &\hspace{6mm} g_t \leftarrow \nabla_{\theta} f_t(\theta_{t-1}) \\
    &\hspace{6mm} \textbf{if} \: \lambda \neq 0 \\
    &\hspace{12mm} g_t \leftarrow g_t + \lambda \theta_{t-1} \\
    &\hspace{6mm} m_t \leftarrow \beta_1 m_{t-1} + (1 - \beta_1) g_t \\
    &\hspace{6mm} v_t \leftarrow \beta_2 v_{t-1} + (1 - \beta_2) g^2_t \\
    &\hspace{6mm} \widehat{m_t} \leftarrow m_t / \big(1 - \beta_1^t\big) \\
    &\hspace{6mm} \rho_t \leftarrow \rho_{\infty} - 2 t \beta_2^t / \big(1 - \beta_2^t\big) \\
    &\hspace{6mm} \textbf{if} \: \rho_t > 5 \\
    &\hspace{12mm} l_t \leftarrow \frac{\sqrt{1 - \beta_2^t}}{\sqrt{v_t} + \epsilon} \\
    &\hspace{12mm} r_t \leftarrow \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\rho_{\infty}}
        {(\rho_{\infty} - 4)(\rho_{\infty} - 2)\rho_t}} \\
    &\hspace{12mm} \theta_t \leftarrow \theta_{t-1} - \gamma \widehat{m_t} r_t l_t \\
    &\hspace{6mm} \textbf{else} \\
    &\hspace{12mm} \theta_t \leftarrow \theta_{t-1} - \gamma \widehat{m_t} \\
    &\rule{110mm}{0.4pt} \\
    &\textbf{return} \: \theta_t \\
    &\rule{110mm}{0.4pt}
\end{aligned}
For further details regarding the algorithm we refer to On the
variance of the adaptive learning rate and beyond.
This implementation uses the same weight_decay implementation as
Adam (where the weight_decay is applied to the gradient) and not the
one from AdamW (where weight_decay is applied to the update). This
is different from the author's implementation.
Parameters:
* params (iterable) -- iterable of parameters to optimize
or dicts defining parameter groups
* **lr** (*float**, **optional*) -- learning rate (default:
1e-3)
* **betas** (*Tuple**[**float**, **float**]**, **optional*) --
coefficients used for computing running averages of gradient
and its square (default: (0.9, 0.999))
* **eps** (*float**, **optional*) -- term added to the
denominator to improve numerical stability (default: 1e-8)
* **weight_decay** (*float**, **optional*) -- weight decay (L2
penalty) (default: 0)
* **foreach** (*bool**, **optional*) -- whether foreach
implementation of optimizer is used. If unspecified by the
user (so foreach is None), we will try to use foreach over the
for-loop implementation on CUDA, since it is usually
significantly more performant. (default: None)
* **differentiable** (*bool**, **optional*) -- whether autograd
should occur through the optimizer step in training.
Otherwise, the step() function runs in a torch.no_grad()
context. Setting to True can impair performance, so leave it
False if you don't intend to run autograd through this
instance (default: False)
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict() | https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html | pytorch docs |
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
  differs between optimizer classes.
* param_groups - a list containing all parameter groups where each
  parameter group is a dict
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
**set_to_none** (*bool*) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html | pytorch docs |
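A minimal single-step training sketch; the model and data are illustrative:
>>> model = nn.Linear(10, 1)
>>> optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)
>>> loss = model(torch.randn(4, 10)).sum()
>>> loss.backward()
>>> optimizer.step()
>>> optimizer.zero_grad()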
PixelUnshuffle
class torch.nn.PixelUnshuffle(downscale_factor)
Reverses the "PixelShuffle" operation by rearranging elements in a
tensor of shape (*, C, H \times r, W \times r) to a tensor of shape
(*, C \times r^2, H, W), where r is a downscale factor.
See the paper: Real-Time Single Image and Video Super-Resolution
Using an Efficient Sub-Pixel Convolutional Neural Network by Shi
et. al (2016) for more details.
Parameters:
downscale_factor (int) -- factor to decrease spatial
resolution by
Shape:
* Input: (*, C_{in}, H_{in}, W_{in}), where * is zero or more
batch dimensions
* Output: (*, C_{out}, H_{out}, W_{out}), where
C_{out} = C_{in} \times \text{downscale\_factor}^2
H_{out} = H_{in} \div \text{downscale\_factor}
W_{out} = W_{in} \div \text{downscale\_factor}
Examples:
>>> pixel_unshuffle = nn.PixelUnshuffle(3)
>>> input = torch.randn(1, 1, 12, 12)
>>> output = pixel_unshuffle(input)
>>> print(output.size())
torch.Size([1, 9, 4, 4])
| https://pytorch.org/docs/stable/generated/torch.nn.PixelUnshuffle.html | pytorch docs |
torch._foreach_exp_
torch._foreach_exp_(self: List[Tensor]) -> None
Apply "torch.exp()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_exp_.html | pytorch docs |
torch.nn.functional.log_softmax
torch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None)
Applies a softmax followed by a logarithm.
While mathematically equivalent to log(softmax(x)), doing these two
operations separately is slower and numerically unstable. This
function uses an alternative formulation to compute the output and
gradient correctly.
See "LogSoftmax" for more details.
Parameters:
* input (Tensor) -- input
* **dim** (*int*) -- A dimension along which log_softmax will be
computed.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is cast to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.log_softmax.html | pytorch docs |
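A quick numerical sketch of the mathematical equivalence noted above (the two agree within floating-point tolerance for well-scaled inputs):
>>> import torch.nn.functional as F
>>> x = torch.randn(2, 3)
>>> torch.allclose(F.log_softmax(x, dim=1), torch.log(torch.softmax(x, dim=1)))
True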
AvgPool3d
class torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)
Applies a 3D average pooling over an input signal composed of
several input planes.
In the simplest case, the output value of the layer with input size
(N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and
"kernel_size" (kD, kH, kW) can be precisely described as:
\begin{aligned} \text{out}(N_i, C_j, d, h, w) ={} &
\sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \\
& \frac{\text{input}(N_i, C_j, \text{stride}[0] \times d + k,
\text{stride}[1] \times h + m, \text{stride}[2] \times w + n)}
{kD \times kH \times kW} \end{aligned}
If "padding" is non-zero, then the input is implicitly zero-padded
on all three sides for "padding" number of points.
Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds
if they start within the left padding or the input. Sliding
| https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html | pytorch docs |
windows that would start in the right padded region are ignored.
The parameters "kernel_size", "stride" can either be:
* a single "int" -- in which case the same value is used for the
depth, height and width dimension
* a "tuple" of three ints -- in which case, the first *int* is
used for the depth dimension, the second *int* for the height
dimension and the third *int* for the width dimension
Parameters:
* kernel_size (Union[int, Tuple[int, int,
int]]) -- the size of the window
* **stride** (*Union**[**int**, **Tuple**[**int**, **int**,
**int**]**]*) -- the stride of the window. Default value is
"kernel_size"
* **padding** (*Union**[**int**, **Tuple**[**int**, **int**,
**int**]**]*) -- implicit zero padding to be added on all
three sides
* **ceil_mode** (*bool*) -- when True, will use *ceil* instead
of *floor* to compute the output shape
* **count_include_pad** (*bool*) -- when True, will include the
zero-padding in the averaging calculation
* **divisor_override** (*Optional**[**int**]*) -- if specified,
it will be used as divisor, otherwise "kernel_size" will be
used
Shape:
* Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},
W_{in}).
* Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},
H_{out}, W_{out}), where
D_{out} = \left\lfloor\frac{D_{in} + 2 \times
\text{padding}[0] -
\text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor
H_{out} = \left\lfloor\frac{H_{in} + 2 \times
\text{padding}[1] -
\text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor
W_{out} = \left\lfloor\frac{W_{in} + 2 \times
\text{padding}[2] -
\text{kernel\_size}[2]}{\text{stride}[2]} + 1\right\rfloor
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = torch.randn(20, 16, 50, 44, 31)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html | pytorch docs |
torch.Tensor.masked_fill
Tensor.masked_fill(mask, value) -> Tensor
Out-of-place version of "torch.Tensor.masked_fill_()" | https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill.html | pytorch docs |
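A minimal sketch (the in-place variant "masked_fill_()" would modify "t" directly):
>>> t = torch.tensor([1., 2., 3.])
>>> t.masked_fill(torch.tensor([True, False, True]), 0.0)
tensor([0., 2., 0.])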
torch.Tensor.sparse_resize_and_clear_
Tensor.sparse_resize_and_clear_(size, sparse_dim, dense_dim) -> Tensor
Removes all specified elements from a sparse tensor "self" and
resizes "self" to the desired size and the number of sparse and
dense dimensions.
Parameters:
* size (torch.Size) -- the desired size.
* **sparse_dim** (*int*) -- the number of sparse dimensions
* **dense_dim** (*int*) -- the number of dense dimensions
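A minimal sketch on a 1-D sparse COO tensor; the values are illustrative:
>>> s = torch.sparse_coo_tensor(torch.tensor([[0, 1]]), torch.tensor([1., 2.]), (2,))
>>> s.sparse_resize_and_clear_((4,), 1, 0)  # s is now an empty sparse tensor of size (4,)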
| https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_and_clear_.html | pytorch docs |
torch.nn.utils.stateless.functional_call
torch.nn.utils.stateless.functional_call(module, parameters_and_buffers, args, kwargs=None, *, tie_weights=True)
Performs a functional call on the module by replacing the module
parameters and buffers with the provided ones.
Warning:
This API is deprecated as of PyTorch 2.0 and will be removed in a
future version of PyTorch. Please use
"torch.func.functional_call()" instead, which is a drop-in
replacement for this API.
Note:
If the module has active parametrizations, passing a value in the
"parameters_and_buffers" argument with the name set to the
regular parameter name will completely disable the
parametrization. If you want to apply the parametrization
function to the value passed please set the key as
"{submodule_name}.parametrizations.{parameter_name}.original".
Note:
If the module performs in-place operations on parameters/buffers,
these will be reflected in the parameters_and_buffers
input. Example:
>>> a = {'foo': torch.zeros(())}
>>> mod = Foo() # does self.foo = self.foo + 1
>>> print(mod.foo) # tensor(0.)
>>> functional_call(mod, a, torch.ones(()))
>>> print(mod.foo) # tensor(0.)
>>> print(a['foo']) # tensor(1.)
Note:
If the module has tied weights, whether or not functional_call
respects the tying is determined by the tie_weights flag. Example:
>>> a = {'foo': torch.zeros(())}
>>> mod = Foo() # has both self.foo and self.foo_tied which are tied. Returns x + self.foo + self.foo_tied
>>> print(mod.foo) # tensor(1.)
>>> mod(torch.zeros(())) # tensor(2.)
>>> functional_call(mod, a, torch.zeros(())) # tensor(0.) since it will change self.foo_tied too
>>> functional_call(mod, a, torch.zeros(()), tie_weights=False) # tensor(1.)--self.foo_tied is not updated
>>> new_a = {'foo': torch.zeros(()), 'foo_tied': torch.zeros(())}
>>> functional_call(mod, new_a, torch.zeros(()))  # tensor(0.)
Parameters:
* module (torch.nn.Module) -- the module to call
* **parameters_and_buffers** (*dict of str and Tensor*) -- the
parameters that will be used in the module call.
* **args** (*Any** or **tuple*) -- arguments to be passed to the
module call. If not a tuple, considered a single argument.
* **kwargs** (*dict*) -- keyword arguments to be passed to the
module call
* **tie_weights** (*bool**, **optional*) -- If True, then
parameters and buffers tied in the original model will be
treated as tied in the reparameterized version. Therefore, if
True and different values are passed for the tied parameters
and buffers, it will error. If False, it will not respect the
originally tied parameters and buffers unless the values
passed for both weights are the same. Default: True.
Returns:
the result of calling "module".
Return type:
Any | https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html | pytorch docs |
torch.nn.utils.prune.custom_from_mask
torch.nn.utils.prune.custom_from_mask(module, name, mask)
Prunes tensor corresponding to parameter called "name" in "module"
by applying the pre-computed mask in "mask". Modifies module in
place (and also return the modified module) by:
1. adding a named buffer called "name+'_mask'" corresponding to the
   binary mask applied to the parameter "name" by the pruning
   method.
2. replacing the parameter "name" by its pruned version, while the
   original (unpruned) parameter is stored in a new parameter named
   "name+'_orig'".
Parameters:
* module (nn.Module) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **mask** (*Tensor*) -- binary mask to be applied to the
parameter.
Returns:
modified (i.e. pruned) version of the input module
Return type:
module (nn.Module)
-[ Examples ]-
>>> from torch.nn.utils import prune
>>> m = prune.custom_from_mask(
...     nn.Linear(5, 3), name='bias', mask=torch.tensor([0, 1, 0])
... )
>>> print(m.bias_mask)
tensor([0., 1., 0.])
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.custom_from_mask.html | pytorch docs |
torch.Tensor.xlogy
Tensor.xlogy(other) -> Tensor
See "torch.xlogy()" | https://pytorch.org/docs/stable/generated/torch.Tensor.xlogy.html | pytorch docs |
torch.Tensor.softmax
Tensor.softmax(dim) -> Tensor
Alias for "torch.nn.functional.softmax()". | https://pytorch.org/docs/stable/generated/torch.Tensor.softmax.html | pytorch docs |
torch.autograd.functional.jvp
torch.autograd.functional.jvp(func, inputs, v=None, create_graph=False, strict=False)
Function that computes the dot product between the Jacobian of the
given function at the point given by the inputs and a vector "v".
Parameters:
* func (function) -- a Python function that takes Tensor
inputs and returns a tuple of Tensors or a Tensor.
* **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the
function "func".
* **v** (*tuple of Tensors** or **Tensor*) -- The vector for
which the Jacobian vector product is computed. Must be the
same size as the input of "func". This argument is optional
when the input to "func" contains a single element and (if it
is not provided) will be set as a Tensor containing a single
"1".
* **create_graph** (*bool**, **optional*) -- If "True", both the
output and result will be computed in a differentiable way.
Note that when "strict" is "False", the result can not require
gradients or be disconnected from the inputs. Defaults to
"False".
* **strict** (*bool**, **optional*) -- If "True", an error will
be raised when we detect that there exists an input such that
all the outputs are independent of it. If "False", we return a
Tensor of zeros as the jvp for said inputs, which is the
expected mathematical value. Defaults to "False".
Returns:
tuple with:
func_output (tuple of Tensors or Tensor): output of
"func(inputs)"
jvp (tuple of Tensors or Tensor): result of the dot product
with the same shape as the output.
Return type:
output (tuple)
Note:
"autograd.functional.jvp" computes the jvp by using the backward
of the backward (sometimes called the double backwards trick).
This is not the most performant way of computing the jvp. Please
| https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html | pytorch docs |
consider using "torch.func.jvp()" or the low-level forward-mode
AD API instead.
-[ Example ]-
>>> def exp_reducer(x):
...     return x.exp().sum(dim=1)
>>> inputs = torch.rand(4, 4)
>>> v = torch.ones(4, 4)
>>> jvp(exp_reducer, inputs, v)
(tensor([6.3090, 4.6742, 7.9114, 8.2106]),
 tensor([6.3090, 4.6742, 7.9114, 8.2106]))
>>> jvp(exp_reducer, inputs, v, create_graph=True)
(tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=<SumBackward1>),
 tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=<SqueezeBackward1>))
>>> def adder(x, y):
...     return 2 * x + 3 * y
>>> inputs = (torch.rand(2), torch.rand(2))
>>> v = (torch.ones(2), torch.ones(2))
>>> jvp(adder, inputs, v)
(tensor([2.2399, 2.5005]),
 tensor([5., 5.]))
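A minimal sketch of the forward-mode alternative recommended in the
note above, using "torch.func.jvp()", which takes primals and
tangents as tuples:

    import torch
    from torch.func import jvp as fwd_jvp

    def exp_reducer(x):
        return x.exp().sum(dim=1)

    inputs = torch.rand(4, 4)
    v = torch.ones(4, 4)
    # Returns (exp_reducer(inputs), jvp) computed with true forward-mode AD.
    out, jvp_out = fwd_jvp(exp_reducer, (inputs,), (v,))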
| https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html | pytorch docs |
torch.func.grad_and_value
torch.func.grad_and_value(func, argnums=0, has_aux=False)
Returns a function to compute a tuple of the gradient and primal,
or forward, computation.
Parameters:
    * func (Callable) -- A Python function that takes one or
      more arguments. Must return a single-element Tensor. If
      "has_aux" is "True", the function can instead return a tuple
      of a single-element Tensor and other auxiliary objects:
      "(output, aux)".
* **argnums** (*int** or **Tuple**[**int**]*) -- Specifies
arguments to compute gradients with respect to. "argnums" can
be single integer or tuple of integers. Default: 0.
* **has_aux** (*bool*) -- Flag indicating that "func" returns a
tensor and other auxiliary objects: "(output, aux)". Default:
False.
Returns:
    Function to compute a tuple of gradients with respect to its
    inputs and the forward computation. By default, the output of
    the function is a tuple of the gradient tensor(s) with respect
    to the first argument and the primal computation. If "has_aux"
    is "True", the result is a tuple of the gradients and a tuple
    of the forward computation with the auxiliary objects. If
    "argnums" is a tuple of integers, the result is a tuple
    containing a tuple of the output gradients (one per "argnums"
    value) and the forward computation.
Return type:
Callable
See "grad()" for examples | https://pytorch.org/docs/stable/generated/torch.func.grad_and_value.html | pytorch docs |
torch.Tensor.size
Tensor.size(dim=None) -> torch.Size or int
Returns the size of the "self" tensor. If "dim" is not specified,
the returned value is a "torch.Size", a subclass of "tuple". If
"dim" is specified, returns an int holding the size of that
dimension.
Parameters:
dim (int, optional) -- The dimension for which to
retrieve the size.
Example:
>>> t = torch.empty(3, 4, 5)
>>> t.size()
torch.Size([3, 4, 5])
>>> t.size(dim=1)
4
| https://pytorch.org/docs/stable/generated/torch.Tensor.size.html | pytorch docs |
torch.Tensor.bitwise_xor
Tensor.bitwise_xor() -> Tensor
See "torch.bitwise_xor()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_xor.html | pytorch docs |
torch.func.hessian
torch.func.hessian(func, argnums=0)
Computes the Hessian of "func" with respect to the arg(s) at index
"argnum" via a forward-over-reverse strategy.
The forward-over-reverse strategy (composing
"jacfwd(jacrev(func))") is a good default that usually performs
well. Hessians can also be computed through other compositions of
"jacfwd()" and "jacrev()", such as "jacfwd(jacfwd(func))" or
"jacrev(jacrev(func))".
Parameters:
* func (function) -- A Python function that takes one or
more arguments, one of which must be a Tensor, and returns one
or more Tensors
* **argnums** (*int** or **Tuple**[**int**]*) -- Optional,
integer or tuple of integers, saying which arguments to get
the Hessian with respect to. Default: 0.
Returns:
Returns a function that takes in the same inputs as "func" and
returns the Hessian of "func" with respect to the arg(s) at
"argnums".
Note: | https://pytorch.org/docs/stable/generated/torch.func.hessian.html | pytorch docs |
"argnums".
Note:
You may see this API error out with "forward-mode AD not
implemented for operator X". If so, please file a bug report and
we will prioritize it. An alternative is to use
"jacrev(jacrev(func))", which has better operator coverage.
Basic usage with an R^N -> R^1 function gives an N x N Hessian:
>>> from torch.func import hessian
>>> def f(x):
...     return x.sin().sum()
>>> x = torch.randn(5)
>>> hess = hessian(f)(x)  # equivalent to jacfwd(jacrev(f))(x)
>>> assert torch.allclose(hess, torch.diag(-x.sin()))
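A minimal sketch of the fallback mentioned in the note above:
composing "jacrev()" twice when forward-mode AD is unavailable for
an operator.

    import torch
    from torch.func import jacrev

    def f(x):
        return x.sin().sum()

    x = torch.randn(5)
    # Reverse-over-reverse Hessian; slower but with broader operator coverage.
    hess_rr = jacrev(jacrev(f))(x)
    assert torch.allclose(hess_rr, torch.diag(-x.sin()))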
| https://pytorch.org/docs/stable/generated/torch.func.hessian.html | pytorch docs |
torch.slogdet
torch.slogdet(input)
Alias for "torch.linalg.slogdet()" | https://pytorch.org/docs/stable/generated/torch.slogdet.html | pytorch docs |
torch.broadcast_tensors
torch.broadcast_tensors(*tensors) -> List of Tensors
Broadcasts the given tensors according to Broadcasting semantics.
Parameters:
*tensors -- any number of tensors of the same type
Warning:
More than one element of a broadcasted tensor may refer to a
single memory location. As a result, in-place operations
(especially ones that are vectorized) may result in incorrect
behavior. If you need to write to the tensors, please clone them
first.
Example:
>>> x = torch.arange(3).view(1, 3)
>>> y = torch.arange(2).view(2, 1)
>>> a, b = torch.broadcast_tensors(x, y)
>>> a.size()
torch.Size([2, 3])
>>> a
tensor([[0, 1, 2],
[0, 1, 2]])
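A minimal sketch of the warning above: both rows of "a" alias the
single row of "x", so clone before any in-place write.

    import torch

    x = torch.arange(3).view(1, 3)
    y = torch.arange(2).view(2, 1)
    a, b = torch.broadcast_tensors(x, y)
    # Without the clone, writing a[0, 0] would also change a[1, 0] and x.
    a = a.clone()
    a[0, 0] = 100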
| https://pytorch.org/docs/stable/generated/torch.broadcast_tensors.html | pytorch docs |
torch.autograd.profiler.profile.total_average
profile.total_average()
Averages all events.
Returns:
A FunctionEventAvg object. | https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.total_average.html | pytorch docs |
torch.greater_equal
torch.greater_equal(input, other, *, out=None) -> Tensor
Alias for "torch.ge()". | https://pytorch.org/docs/stable/generated/torch.greater_equal.html | pytorch docs |
torch.Tensor.qr
Tensor.qr(some=True)
See "torch.qr()" | https://pytorch.org/docs/stable/generated/torch.Tensor.qr.html | pytorch docs |
torch.Tensor.mv
Tensor.mv(vec) -> Tensor
See "torch.mv()" | https://pytorch.org/docs/stable/generated/torch.Tensor.mv.html | pytorch docs |
ObservationType
class torch.ao.quantization.backend_config.ObservationType(value)
An enum that represents the different ways an operator/operator
pattern can be observed.
OUTPUT_SHARE_OBSERVER_WITH_INPUT = 1
   The output uses the same observer instance as the input, based
   on qconfig.activation. Examples: torch.cat, maxpool.
OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT = 0
   The input and output are observed with different observers,
   based on qconfig.activation. Examples: conv, linear, softmax.
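A minimal sketch, assuming the "BackendPatternConfig" API from the
same "backend_config" package, of attaching an observation type to
an operator pattern:

    import torch
    from torch.ao.quantization.backend_config import (
        BackendPatternConfig,
        ObservationType,
    )

    # Linear observes its input and output with different observers.
    linear_config = BackendPatternConfig(torch.nn.Linear).set_observation_type(
        ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT
    )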
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.ObservationType.html | pytorch docs |
torch.jit.script
torch.jit.script(obj, optimize=None, _frames_up=0, _rcb=None, example_inputs=None)
Scripting a function or "nn.Module" will inspect the source code,
compile it as TorchScript code using the TorchScript compiler, and
return a "ScriptModule" or "ScriptFunction". TorchScript itself is
a subset of the Python language, so not all features in Python
work, but we provide enough functionality to compute on tensors and
do control-dependent operations. For a complete guide, see the
TorchScript Language Reference.
Scripting a dictionary or list copies the data inside it into a
TorchScript instance that can subsequently be passed by reference
between Python and TorchScript with zero copy overhead.
"torch.jit.script" can be used as a function for modules,
functions, dictionaries and lists
and as a decorator "@torch.jit.script" for TorchScript Classes
and functions.
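A minimal sketch of both usages (the "double" and "triple" function
names are illustrative only):

    import torch

    def double(x: torch.Tensor) -> torch.Tensor:
        return x * 2

    scripted_double = torch.jit.script(double)  # used as a function

    @torch.jit.script
    def triple(x: torch.Tensor) -> torch.Tensor:  # used as a decorator
        return x * 3

    print(scripted_double(torch.ones(2)), triple(torch.ones(2)))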
Parameters: | https://pytorch.org/docs/stable/generated/torch.jit.script.html | pytorch docs |