LazyLinear
class torch.nn.LazyLinear(out_features, bias=True, device=None, dtype=None)
A "torch.nn.Linear" module where in_features is inferred.
In this module, the weight and bias are of
"torch.nn.UninitializedParameter" class. They will be initialized
after the first call to "forward" is done and the module will
become a regular "torch.nn.Linear" module. The "in_features"
argument of the "Linear" is inferred from the "input.shape[-1]".
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* out_features (int) -- size of each output sample
* **bias** (*UninitializedParameter*) -- If set to "False", the
layer will not learn an additive bias. Default: "True"
Variables:
* weight (torch.nn.parameter.UninitializedParameter) --
the learnable weights of the module of shape
(\text{out_features}, \text{in_features}). The values are
initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}), where k =
\frac{1}{\text{in_features}}
* **bias** (*torch.nn.parameter.UninitializedParameter*) -- the
learnable bias of the module of shape (\text{out\_features}).
If "bias" is "True", the values are initialized from
\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{1}{\text{in\_features}}
cls_to_become
alias of "Linear"
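A minimal sketch of the lazy-initialization behavior described above; the layer sizes and tensor shapes here are illustrative assumptions, not from the original page:
    import torch
    import torch.nn as nn

    layer = nn.LazyLinear(out_features=10)   # in_features not known yet
    x = torch.randn(4, 32)
    y = layer(x)                             # first forward infers in_features=32
    print(type(layer))                       # now a regular torch.nn.Linear (cls_to_become)
    print(layer.weight.shape)                # torch.Size([10, 32])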
torch.nn.functional.triplet_margin_with_distance_loss
torch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction='mean')
See "TripletMarginWithDistanceLoss" for details.
Return type:
Tensor
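A hedged usage sketch; the cosine-based distance function, margin value, and tensor shapes are illustrative assumptions:
    import torch
    import torch.nn.functional as F

    anchor = torch.randn(8, 128, requires_grad=True)
    positive = torch.randn(8, 128, requires_grad=True)
    negative = torch.randn(8, 128, requires_grad=True)

    # distance_function takes two batched tensors and returns per-row distances
    loss = F.triplet_margin_with_distance_loss(
        anchor, positive, negative,
        distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y),
        margin=0.5,
    )
    loss.backward()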
MultiheadAttention
class torch.nn.quantizable.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None)
dequantize()
Utility to convert the quantized MHA back to float.
The motivation for this is that it is not trivial to convert the
weights from the format that is used in the quantized version
back to the floating-point format.
forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False)
Note:
Please refer to "forward()" for more information.
Parameters:
* **query** (*Tensor*) -- map a query and a set of key-value
pairs to an output. See "Attention Is All You Need" for
more details.
* **key** (*Tensor*) -- map a query and a set of key-value
pairs to an output. See "Attention Is All You Need" for
more details.
* **value** (*Tensor*) -- map a query and a set of key-value
pairs to an output. See "Attention Is All You Need" for
more details.
* **key_padding_mask** (*Optional**[**Tensor**]*) -- if
provided, specified padding elements in the key will be
ignored by the attention. When given a binary mask and a
value is True, the corresponding value on the attention
layer will be ignored. When given a byte mask and a value
is non-zero, the corresponding value on the attention layer
will be ignored
* **need_weights** (*bool*) -- output attn_output_weights.
* **attn_mask** (*Optional**[**Tensor**]*) -- 2D or 3D mask
that prevents attention to certain positions. A 2D mask
will be broadcasted for all the batches while a 3D mask
allows to specify a different mask for the entries of each
batch.
Return type:
*Tuple*[*Tensor*, *Optional*[*Tensor*]]
Shape:
* Inputs:
* query: (L, N, E) where L is the target sequence length, N
is the batch size, E is the embedding dimension. (N, L, E)
if "batch_first" is "True".
* key: (S, N, E), where S is the source sequence length, N is
the batch size, E is the embedding dimension. (N, S, E) if
"batch_first" is "True".
* value: (S, N, E) where S is the source sequence length, N
is the batch size, E is the embedding dimension. (N, S, E)
if "batch_first" is "True".
* key_padding_mask: (N, S) where N is the batch size, S is
the source sequence length. If a ByteTensor is provided,
the non-zero positions will be ignored while the position
with the zero positions will be unchanged. If a BoolTensor
is provided, the positions with the value of "True" will be
ignored while the position with the value of "False" will
be unchanged.
* attn_mask: 2D mask (L, S) where L is the target sequence
length, S is the source sequence length. 3D mask
(N*num_heads, L, S) where N is the batch size, L is the
target sequence length, S is the source sequence length.
attn_mask ensures that position i is allowed to attend the
unmasked positions. If a ByteTensor is provided, the non-
zero positions are not allowed to attend while the zero
positions will be unchanged. If a BoolTensor is provided,
positions with "True" is not allowed to attend while
"False" values will be unchanged. If a FloatTensor is
provided, it will be added to the attention weight.
* is_causal: If specified, applies a causal mask as attention
mask. Mutually exclusive with providing attn_mask. Default:
"False".
* average_attn_weights: If true, indicates that the returned
"attn_weights" should be averaged across heads. Otherwise,
"attn_weights" are provided separately per head. Note that
this flag only has an effect when "need_weights=True".
Default: True (i.e. average weights across heads)
* Outputs:
* attn_output: (L, N, E) where L is the target sequence
length, N is the batch size, E is the embedding dimension.
(N, L, E) if "batch_first" is "True".
* attn_output_weights: If "average_attn_weights=True",
returns attention weights averaged across heads of shape
(N, L, S), where N is the batch size, L is the target
sequence length, S is the source sequence length. If
"average_attn_weights=False", returns attention weights per
head of shape (N, num_heads, L, S).
torch.ge
torch.ge(input, other, *, out=None) -> Tensor
Computes \text{input} \geq \text{other} element-wise.
The second argument can be a number or a tensor whose shape is
broadcastable with the first argument.
Parameters:
* input (Tensor) -- the tensor to compare
* **other** (*Tensor** or **float*) -- the tensor or value to
compare
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Returns:
A boolean tensor that is True where "input" is greater than or
equal to "other" and False elsewhere
Example:
>>> torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))
tensor([[True, True], [False, True]])
torch.Tensor.map_
Tensor.map_(tensor, callable)
Applies "callable" for each element in "self" tensor and the given
"tensor" and stores the results in "self" tensor. "self" tensor and
the given "tensor" must be broadcastable.
The "callable" should have the signature:
def callable(a, b) -> number
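A small illustrative sketch; note that "map_" evaluates the Python callable element by element on CPU tensors, so it is mainly useful for prototyping (values below are assumptions):
    import torch

    a = torch.tensor([1.0, 2.0, 3.0])
    b = torch.tensor([10.0, 20.0, 30.0])
    a.map_(b, lambda x, y: x + y)   # stores the results back into a
    print(a)                        # tensor([11., 22., 33.])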
Conv2d
class torch.ao.nn.qat.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None, device=None, dtype=None)
A Conv2d module attached with FakeQuantize modules for weight, used
for quantization aware training.
We adopt the same interface as torch.nn.Conv2d, please see
https://pytorch.org/docs/stable/nn.html?highlight=conv2d#torch.nn.Conv2d
for documentation.
Similar to torch.nn.Conv2d, with FakeQuantize modules initialized
to default.
Variables:
weight_fake_quant -- fake quant module for weight
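A minimal sketch of constructing the module directly; the channel counts are illustrative and this assumes the "fbgemm" backend's default QAT qconfig is available:
    import torch
    from torch.ao.nn.qat import Conv2d
    from torch.ao.quantization import get_default_qat_qconfig

    qconfig = get_default_qat_qconfig("fbgemm")
    m = Conv2d(3, 16, kernel_size=3, qconfig=qconfig)
    x = torch.randn(1, 3, 32, 32)
    y = m(x)   # the weight passes through m.weight_fake_quant during forward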
linear
class torch.ao.nn.quantized.functional.linear(input, weight, bias=None, scale=None, zero_point=None)
Applies a linear transformation to the incoming quantized data: y =
xA^T + b. See "Linear"
Note:
Current implementation packs weights on every call, which has
penalty on performance. If you want to avoid the overhead, use
"Linear".
Parameters:
* input (Tensor) -- Quantized input of type torch.quint8
* **weight** (*Tensor*) -- Quantized weight of type
*torch.qint8*
* **bias** (*Tensor*) -- None or fp32 bias of type *torch.float*
* **scale** (*double*) -- output scale. If None, derived from
the input scale
* **zero_point** (*python:long*) -- output zero point. If None,
derived from the input zero_point
Return type:
Tensor
Shape:
* Input: (N, *, in_features) where * means any number of
additional dimensions
* Weight: (out\_features, in\_features)
* Bias: (out\_features)
* Output: (N, *, out\_features)
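A hedged sketch of calling the quantized functional linear directly, assuming a quantized engine (e.g. fbgemm or qnnpack) is available; the scales and zero points are illustrative:
    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.quantize_per_tensor(torch.randn(4, 8), 0.1, 0, torch.quint8)
    w = torch.quantize_per_tensor(torch.randn(16, 8), 0.05, 0, torch.qint8)
    b = torch.zeros(16)                       # fp32 bias
    y = qF.linear(x, w, b, scale=0.2, zero_point=0)
    print(y.shape)                            # torch.Size([4, 16])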
MovingAveragePerChannelMinMaxObserver
class torch.quantization.observer.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None, eps=1.1920928955078125e-07, **kwargs)
Observer module for computing the quantization parameters based on
the running per channel min and max values.
This observer uses the tensor min/max statistics to compute the per
channel quantization parameters. The module records the running
minimum and maximum of incoming tensors, and uses this statistic to
compute the quantization parameters.
Parameters:
* averaging_constant -- Averaging constant for min/max.
* **ch_axis** -- Channel axis
* **dtype** -- Quantized data type
* **qscheme** -- Quantization scheme to be used
* **reduce_range** -- Reduces the range of the quantized data
type by 1 bit
* **quant_min** -- Minimum quantization value. If unspecified,
it will follow the 8-bit setup.
* **quant_max** -- Maximum quantization value. If unspecified,
it will follow the 8-bit setup.
* **eps** (*Tensor*) -- Epsilon value for float32, Defaults to
*torch.finfo(torch.float32).eps*.
The quantization parameters are computed the same way as in
"MovingAverageMinMaxObserver", with the difference that the running
min/max values are stored per channel. Scales and zero points are
thus computed per channel as well.
Note:
If the running minimum equals to the running maximum, the scales
and zero_points are set to 1.0 and 0.
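A short usage sketch; the per-channel symmetric settings and the weight-like tensor shape below are illustrative assumptions:
    import torch
    from torch.quantization.observer import MovingAveragePerChannelMinMaxObserver

    obs = MovingAveragePerChannelMinMaxObserver(
        ch_axis=0, dtype=torch.qint8, qscheme=torch.per_channel_symmetric)
    for _ in range(10):
        obs(torch.randn(16, 3, 3, 3))         # e.g. conv weights, channel axis 0
    scale, zero_point = obs.calculate_qparams()
    print(scale.shape, zero_point.shape)      # one entry per channel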
torch.nn.functional.hardswish
torch.nn.functional.hardswish(input, inplace=False)
Applies the hardswish function, element-wise, as described in the
paper:
Searching for MobileNetV3.
\text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3,
\\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3) /6 &
\text{otherwise} \end{cases}
See "Hardswish" for more details.
Return type:
Tensor
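A minimal illustrative call (the input values are assumptions):
    import torch
    import torch.nn.functional as F

    x = torch.tensor([-4.0, -1.0, 0.0, 1.0, 4.0])
    print(F.hardswish(x))   # 0 for x <= -3, x for x >= 3, x*(x+3)/6 in between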
torch.quantized_batch_norm
torch.quantized_batch_norm(input, weight=None, bias=None, mean, var, eps, output_scale, output_zero_point) -> Tensor
Applies batch normalization on a 4D (NCHW) quantized tensor.
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
Parameters:
* input (Tensor) -- quantized tensor
* **weight** (*Tensor*) -- float tensor that corresponds to the
gamma, size C
* **bias** (*Tensor*) -- float tensor that corresponds to the
beta, size C
* **mean** (*Tensor*) -- float mean value in batch
normalization, size C
* **var** (*Tensor*) -- float tensor for variance, size C
* **eps** (*float*) -- a value added to the denominator for
numerical stability.
* **output_scale** (*float*) -- output quantized tensor scale
* **output_zero_point** (*int*) -- output quantized tensor
zero_point
Returns:
A quantized tensor with batch normalization applied.
Return type:
Tensor
Example:
>>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)
>>> torch.quantized_batch_norm(qx, torch.ones(2), torch.zeros(2), torch.rand(2), torch.rand(2), 0.00001, 0.2, 2)
tensor([[[[-0.2000, -0.2000],
[ 1.6000, -0.2000]],
[[-0.4000, -0.4000],
[-0.4000, 0.6000]]],
[[[-0.2000, -0.2000],
[-0.2000, -0.2000]],
[[ 0.6000, -0.4000],
[ 0.6000, -0.4000]]]], size=(2, 2, 2, 2), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.2, zero_point=2)
torch.linalg.cholesky_ex
torch.linalg.cholesky_ex(A, *, upper=False, check_errors=False, out=None)
Computes the Cholesky decomposition of a complex Hermitian or real
symmetric positive-definite matrix.
This function skips the (slow) error checking and error message
construction of "torch.linalg.cholesky()", instead directly
returning the LAPACK error codes as part of a named tuple "(L,
info)". This makes this function a faster way to check if a matrix
is positive-definite, and it provides an opportunity to handle
decomposition errors more gracefully or performantly than
"torch.linalg.cholesky()" does.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
If "A" is not a Hermitian positive-definite matrix, or if it's a
batch of matrices and one or more of them is not a Hermitian
positive-definite matrix, then "info" stores a positive integer for
the corresponding matrix. The positive integer indicates the order
of the leading minor that is not positive-definite, and the
decomposition could not be completed. "info" filled with zeros
indicates that the decomposition was successful. If
"check_errors=True" and "info" contains positive integers, then a
RuntimeError is thrown.
Note:
When the inputs are on a CUDA device, this function synchronizes
only when "check_errors" is True.
Warning:
This function is "experimental" and it may change in a future
PyTorch release.
See also:
"torch.linalg.cholesky()" is a NumPy compatible variant that
always checks for errors.
Parameters:
A (Tensor) -- the Hermitian n times n matrix or the
batch of such matrices of size (*, n, n) where * is one or
more batch dimensions.
Keyword Arguments:
* upper (bool, optional) -- whether to return an upper
triangular matrix. The tensor returned with upper=True is the
conjugate transpose of the tensor returned with upper=False.
* **check_errors** (*bool**, **optional*) -- controls whether to
check the content of "infos". Default: *False*.
* **out** (*tuple**, **optional*) -- tuple of two tensors to
write the output to. Ignored if *None*. Default: *None*.
Examples:
>>> A = torch.randn(2, 2, dtype=torch.complex128)
>>> A = A @ A.t().conj() # creates a Hermitian positive-definite matrix
>>> L, info = torch.linalg.cholesky_ex(A)
>>> A
tensor([[ 2.3792+0.0000j, -0.9023+0.9831j],
[-0.9023-0.9831j, 0.8757+0.0000j]], dtype=torch.complex128)
>>> L
tensor([[ 1.5425+0.0000j, 0.0000+0.0000j],
[-0.5850-0.6374j, 0.3567+0.0000j]], dtype=torch.complex128)
>>> info
tensor(0, dtype=torch.int32)
default_observer
torch.quantization.observer.default_observer
alias of functools.partial(<class 'torch.ao.quantization.observer.MinMaxObserver'>,
quant_min=0, quant_max=127){}
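A small sketch of instantiating and using the partial; the calibration input is an illustrative assumption:
    import torch
    from torch.quantization.observer import default_observer

    obs = default_observer()            # builds the observer with quant_min=0, quant_max=127
    obs(torch.randn(8, 8))              # record min/max statistics
    scale, zero_point = obs.calculate_qparams()
    print(scale, zero_point)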
torch.Tensor.arctanh_
Tensor.arctanh_(other) -> Tensor
In-place version of "arctanh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.arctanh_.html | pytorch docs |
torch.Tensor.neg_
Tensor.neg_() -> Tensor
In-place version of "neg()" | https://pytorch.org/docs/stable/generated/torch.Tensor.neg_.html | pytorch docs |
torch.Tensor.sparse_mask
Tensor.sparse_mask(mask) -> Tensor
Returns a new sparse tensor with values from a strided tensor
"self" filtered by the indices of the sparse tensor "mask". The
values of "mask" sparse tensor are ignored. "self" and "mask"
tensors must have the same shape.
Note:
The returned sparse tensor might contain duplicate values if
"mask" is not coalesced. It is therefore advisable to pass
"mask.coalesce()" if such behavior is not desired.
Note:
The returned sparse tensor has the same indices as the sparse
tensor "mask", even when the corresponding values in "self" are
zeros.
Parameters:
mask (Tensor) -- a sparse tensor whose indices are used as
a filter
Example:
>>> nse = 5
>>> dims = (5, 5, 2, 2)
>>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)),
... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse)
>>> V = torch.randn(nse, dims[2], dims[3])
>>> S = torch.sparse_coo_tensor(I, V, dims).coalesce()
>>> D = torch.randn(dims)
>>> D.sparse_mask(S)
tensor(indices=tensor([[0, 0, 0, 2],
[0, 1, 4, 3]]),
values=tensor([[[ 1.6550, 0.2397],
[-0.1611, -0.0779]],
[[ 0.2326, -1.0558],
[ 1.4711, 1.9678]],
[[-0.5138, -0.0411],
[ 1.9417, 0.5158]],
[[ 0.0793, 0.0036],
[-0.2569, -0.1055]]]),
size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)
torch.cuda.max_memory_cached
torch.cuda.max_memory_cached(device=None)
Deprecated; see "max_memory_reserved()".
Return type:
int
KLDivLoss
class torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False)
The Kullback-Leibler divergence loss.
For tensors of the same shape y_{\text{pred}},\ y_{\text{true}},
where y_{\text{pred}} is the "input" and y_{\text{true}} is the
"target", we define the pointwise KL-divergence as
L(y_{\text{pred}},\ y_{\text{true}}) = y_{\text{true}} \cdot
\log \frac{y_{\text{true}}}{y_{\text{pred}}} =
y_{\text{true}} \cdot (\log y_{\text{true}} - \log
y_{\text{pred}})
To avoid underflow issues when computing this quantity, this loss
expects the argument "input" in the log-space. The argument
"target" may also be provided in the log-space if "log_target"=
True.
To summarise, this function is roughly equivalent to computing
if not log_target: # default
loss_pointwise = target * (target.log() - input)
else:
loss_pointwise = target.exp() * (target - input)
and then reducing this result depending on the argument "reduction"
as
if reduction == "mean": # default
loss = loss_pointwise.mean()
elif reduction == "batchmean": # mathematically correct
loss = loss_pointwise.sum() / input.size(0)
elif reduction == "sum":
loss = loss_pointwise.sum()
else: # reduction == "none"
loss = loss_pointwise
Note:
As all the other losses in PyTorch, this function expects the
first argument, "input", to be the output of the model (e.g. the
neural network) and the second, "target", to be the observations
in the dataset. This differs from the standard mathematical
notation KL(P\ ||\ Q) where P denotes the distribution of the
observations and Q denotes the model.
Warning:
"reduction"*= "mean"* doesn't return the true KL divergence
value, please use "reduction"*= "batchmean"* which aligns with
the mathematical definition. In a future release, *"mean"* will
be changed to be the same as "batchmean".
Parameters:
* size_average (bool, optional) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to False, the losses are instead summed for each
minibatch. Ignored when "reduce" is False. Default: True
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is *False*, returns a loss per
batch element instead and ignores "size_average". Default:
*True*
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output. Default: *"mean"*
* **log_target** (*bool**, **optional*) -- Specifies whether
target is the log space. Default: False
Shape:
* Input: (*), where * means any number of dimensions.
* Target: (*), same shape as the input.
* Output: scalar by default. If "reduction" is *'none'*, then
(*), same shape as the input.
Examples:
>>> import torch.nn.functional as F
>>> kl_loss = nn.KLDivLoss(reduction="batchmean")
>>> # input should be a distribution in the log space
>>> input = F.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)
>>> # Sample a batch of distributions. Usually this would come from the dataset
>>> target = F.softmax(torch.rand(3, 5), dim=1)
>>> output = kl_loss(input, target)
>>> kl_loss = nn.KLDivLoss(reduction="batchmean", log_target=True)
>>> log_target = F.log_softmax(torch.rand(3, 5), dim=1)
>>> output = kl_loss(input, log_target)
ChainedScheduler
class torch.optim.lr_scheduler.ChainedScheduler(schedulers)
Chains list of learning rate schedulers. It takes a list of
chainable learning rate schedulers and performs consecutive step()
functions belonging to them by just one call.
Parameters:
schedulers (list) -- List of chained schedulers.
-[ Example ]-
Assuming optimizer uses lr = 1. for all groups
lr = 0.09 if epoch == 0
lr = 0.081 if epoch == 1
lr = 0.729 if epoch == 2
lr = 0.6561 if epoch == 3
lr = 0.59049 if epoch >= 4
scheduler1 = ConstantLR(self.opt, factor=0.1, total_iters=2)
scheduler2 = ExponentialLR(self.opt, gamma=0.9)
scheduler = ChainedScheduler([scheduler1, scheduler2])
for epoch in range(100):
train(...)
validate(...)
scheduler.step()
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer. The wrapped scheduler states will also be
saved.
torch.fmax
torch.fmax(input, other, *, out=None) -> Tensor
Computes the element-wise maximum of "input" and "other".
This is like "torch.maximum()" except it handles NaNs differently:
if exactly one of the two elements being compared is a NaN then the
non-NaN element is taken as the maximum. Only if both elements are
NaN is NaN propagated.
This function is a wrapper around C++'s "std::fmax" and is similar
to NumPy's "fmax" function.
Supports broadcasting to a common shape, type promotion, and
integer and floating-point inputs.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([9.7, float('nan'), 3.1, float('nan')])
>>> b = torch.tensor([-2.2, 0.5, float('nan'), float('nan')])
>>> torch.fmax(a, b)
tensor([9.7000, 0.5000, 3.1000, nan])
get_observer_state_dict
class torch.quantization.observer.get_observer_state_dict(mod)
Returns the state dict corresponding to the observer stats.
Traverse the model state_dict and extract out the stats.
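A hedged end-to-end sketch, assuming eager-mode post-training quantization with the "fbgemm" backend; the toy model and calibration data are illustrative:
    import torch
    from torch.quantization import get_default_qconfig, prepare
    from torch.quantization.observer import get_observer_state_dict

    model = torch.nn.Sequential(torch.nn.Linear(4, 4))
    model.qconfig = get_default_qconfig("fbgemm")
    prepared = prepare(model)
    prepared(torch.randn(2, 4))                     # run calibration data through the observers
    obs_state = get_observer_state_dict(prepared)   # only the observer min/max stats
    print(list(obs_state.keys()))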
torch.acosh
torch.acosh(input, *, out=None) -> Tensor
Returns a new tensor with the inverse hyperbolic cosine of the
elements of "input".
\text{out}_{i} = \cosh^{-1}(\text{input}_{i})
Note:
The domain of the inverse hyperbolic cosine is *[1, inf)* and
values outside this range will be mapped to "NaN", except for *+
INF* for which the output is mapped to *+ INF*.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4).uniform_(1, 2)
>>> a
tensor([ 1.3192, 1.9915, 1.9674, 1.7151 ])
>>> torch.acosh(a)
tensor([ 0.7791, 1.3120, 1.2979, 1.1341 ])
torch.div
torch.div(input, other, *, rounding_mode=None, out=None) -> Tensor
Divides each element of the input "input" by the corresponding
element of "other".
\text{out}_i = \frac{\text{input}_i}{\text{other}_i}
Note:
By default, this performs a "true" division like Python 3. See
the "rounding_mode" argument for floor division.
Supports broadcasting to a common shape, type promotion, and
integer, float, and complex inputs. Always promotes integer types
to the default scalar type.
Parameters:
* input (Tensor) -- the dividend
* **other** (*Tensor** or **Number*) -- the divisor
Keyword Arguments:
* rounding_mode (str, optional) --
Type of rounding applied to the result:
* None - default behavior. Performs no rounding and, if both
"input" and "other" are integer types, promotes the inputs
to the default scalar type. Equivalent to true division in
Python (the "/" operator) and NumPy's "np.true_divide".
* ""trunc"" - rounds the results of the division towards zero.
Equivalent to C-style integer division.
* ""floor"" - rounds the results of the division down.
Equivalent to floor division in Python (the "//" operator)
and NumPy's "np.floor_divide".
* **out** (*Tensor**, **optional*) -- the output tensor.
Examples:
>>> x = torch.tensor([ 0.3810, 1.2774, -0.2972, -0.3719, 0.4637])
>>> torch.div(x, 0.5)
tensor([ 0.7620, 2.5548, -0.5944, -0.7438, 0.9274])
>>> a = torch.tensor([[-0.3711, -1.9353, -0.4605, -0.2917],
... [ 0.1815, -1.0111, 0.9805, -1.5923],
... [ 0.1062, 1.4581, 0.7759, -1.2344],
... [-0.1830, -0.0313, 1.1908, -1.4757]])
>>> b = torch.tensor([ 0.8032, 0.2930, -0.8113, -0.2308])
>>> torch.div(a, b)
tensor([[-0.4620, -6.6051, 0.5676, 1.2639],
[ 0.2260, -3.4509, -1.2086, 6.8990],
[ 0.1322, 4.9764, -0.9564, 5.3484],
[-0.2278, -0.1068, -1.4678, 6.3938]])
>>> torch.div(a, b, rounding_mode='trunc')
tensor([[-0., -6., 0., 1.],
[ 0., -3., -1., 6.],
[ 0., 4., -0., 5.],
[-0., -0., -1., 6.]])
>>> torch.div(a, b, rounding_mode='floor')
tensor([[-1., -7., 0., 1.],
[ 0., -4., -2., 6.],
[ 0., 4., -1., 5.],
[-1., -1., -2., 6.]])
torch.Tensor.index_reduce_
Tensor.index_reduce_(dim, index, source, reduce, *, include_self=True) -> Tensor
Accumulate the elements of "source" into the "self" tensor by
accumulating to the indices in the order given in "index" using the
reduction given by the "reduce" argument. For example, if "dim ==
0", "index[i] == j", "reduce == prod" and "include_self == True"
then the "i"th row of "source" is multiplied by the "j"th row of
"self". If "include_self="True"", the values in the "self" tensor
are included in the reduction, otherwise, rows in the "self" tensor
that are accumulated to are treated as if they were filled with the
reduction identities.
The "dim"th dimension of "source" must have the same size as the
length of "index" (which must be a vector), and all other
dimensions must match "self", or an error will be raised.
For a 3-D tensor with "reduce="prod"" and "include_self=True" the
output is given as:
self[index[i], :, :] *= src[i, :, :] # if dim == 0
self[:, index[i], :] *= src[:, i, :] # if dim == 1
self[:, :, index[i]] *= src[:, :, i] # if dim == 2
Note:
This operation may behave nondeterministically when given tensors
on a CUDA device. See Reproducibility for more information.
Note:
This function only supports floating point tensors.
Warning:
This function is in beta and may change in the near future.
Parameters:
* dim (int) -- dimension along which to index
* **index** (*Tensor*) -- indices of "source" to select from,
should have dtype either *torch.int64* or *torch.int32*
* **source** (*FloatTensor*) -- the tensor containing values to
accumulate
* **reduce** (*str*) -- the reduction operation to apply
(""prod"", ""mean"", ""amax"", ""amin"")
Keyword Arguments:
include_self (bool) -- whether the elements from the
"self" tensor are included in the reduction
Example:
>>> x = torch.empty(5, 3).fill_(2)
>>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], dtype=torch.float)
>>> index = torch.tensor([0, 4, 2, 0])
>>> x.index_reduce_(0, index, t, 'prod')
tensor([[20., 44., 72.],
[ 2., 2., 2.],
[14., 16., 18.],
[ 2., 2., 2.],
[ 8., 10., 12.]])
>>> x = torch.empty(5, 3).fill_(2)
>>> x.index_reduce_(0, index, t, 'prod', include_self=False)
tensor([[10., 22., 36.],
[ 2., 2., 2.],
[ 7., 8., 9.],
[ 2., 2., 2.],
[ 4., 5., 6.]])
torch.autograd.functional.vhp
torch.autograd.functional.vhp(func, inputs, v=None, create_graph=False, strict=False)
Function that computes the dot product between a vector "v" and the
Hessian of a given scalar function at the point given by the
inputs.
Parameters:
* func (function) -- a Python function that takes Tensor
inputs and returns a Tensor with a single element.
* **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the
function "func".
* **v** (*tuple of Tensors** or **Tensor*) -- The vector for
which the vector Hessian product is computed. Must be the same
size as the input of "func". This argument is optional when
"func"'s input contains a single element and (if it is not
provided) will be set as a Tensor containing a single "1".
* **create_graph** (*bool**, **optional*) -- If "True", both the
output and result will be computed in a differentiable way.
Note that when "strict" is "False", the result can not require
gradients or be disconnected from the inputs. Defaults to
"False".
* **strict** (*bool**, **optional*) -- If "True", an error will
be raised when we detect that there exists an input such that
all the outputs are independent of it. If "False", we return a
Tensor of zeros as the vhp for said inputs, which is the
expected mathematical value. Defaults to "False".
Returns:
tuple with:
func_output (tuple of Tensors or Tensor): output of
"func(inputs)"
vhp (tuple of Tensors or Tensor): result of the dot product
with the same shape as the inputs.
Return type:
output (tuple)
-[ Example ]-
>>> def pow_reducer(x):
...     return x.pow(3).sum()
>>> inputs = torch.rand(2, 2)
>>> v = torch.ones(2, 2)
>>> vhp(pow_reducer, inputs, v)
(tensor(0.5591),
 tensor([[1.0689, 1.2431],
         [3.0989, 4.4456]]))
>>> vhp(pow_reducer, inputs, v, create_graph=True)
(tensor(0.5591, grad_fn=<SumBackward0>),
 tensor([[1.0689, 1.2431],
         [3.0989, 4.4456]], grad_fn=<MulBackward0>))
>>> def pow_adder_reducer(x, y):
...     return (2 * x.pow(2) + 3 * y.pow(2)).sum()
>>> inputs = (torch.rand(2), torch.rand(2))
>>> v = (torch.zeros(2), torch.ones(2))
>>> vhp(pow_adder_reducer, inputs, v)
(tensor(4.8053),
 (tensor([0., 0.]),
  tensor([6., 6.])))
torch.nn.utils.prune.ln_structured
torch.nn.utils.prune.ln_structured(module, name, amount, n, dim, importance_scores=None)
Prunes tensor corresponding to parameter called "name" in "module"
by removing the specified "amount" of (currently unpruned) channels
along the specified "dim" with the lowest L"n"-norm. Modifies
module in place (and also return the modified module) by:
adding a named buffer called "name+'_mask'" corresponding to the
binary mask applied to the parameter "name" by the pruning
method.
replacing the parameter "name" by its pruned version, while the
original (unpruned) parameter is stored in a new parameter named
"name+'_orig'".
Parameters:
* module (nn.Module) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **amount** (*int** or **float*) -- quantity of parameters to
prune. If "float", should be between 0.0 and 1.0 and represent
the fraction of parameters to prune. If "int", it represents
the absolute number of parameters to prune.
* **n** (*int**, **float**, **inf**, **-inf**, **'fro'**,
**'nuc'*) -- See documentation of valid entries for argument
"p" in "torch.norm()".
* **dim** (*int*) -- index of the dim along which we define
channels to prune.
* **importance_scores** (*torch.Tensor*) -- tensor of importance
scores (of same shape as module parameter) used to compute
mask for pruning. The values in this tensor indicate the
importance of the corresponding elements in the parameter
being pruned. If unspecified or None, the module parameter
will be used in its place.
Returns:
modified (i.e. pruned) version of the input module
Return type:
module (nn.Module)
-[ Examples ]-
>>> from torch.nn.utils import prune
>>> m = prune.ln_structured(
...     nn.Conv2d(5, 3, 2), 'weight', amount=0.3, dim=1, n=float('-inf')
... )
torch._foreach_trunc_
torch._foreach_trunc_(self: List[Tensor]) -> None
Apply "torch.trunc()" to each Tensor of the input list.
default_dynamic_qconfig
torch.quantization.qconfig.default_dynamic_qconfig
alias of QConfig(activation=functools.partial(<class 'torch.ao.quantization.observer.PlaceholderObserver'>,
dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){},
weight=functools.partial(<class 'torch.ao.quantization.observer.MinMaxObserver'>,
dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})
torch.Tensor.renorm_
Tensor.renorm_(p, dim, maxnorm) -> Tensor
In-place version of "renorm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.renorm_.html | pytorch docs |
CTCLoss
class torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)
The Connectionist Temporal Classification loss.
Calculates loss between a continuous (unsegmented) time series and
a target sequence. CTCLoss sums over the probability of possible
alignments of input to target, producing a loss value which is
differentiable with respect to each input node. The alignment of
input to target is assumed to be "many-to-one", which limits the
length of the target sequence such that it must be \leq the input
length.
Parameters:
* blank (int, optional) -- blank label. Default 0.
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the output
losses will be divided by the target lengths and then the mean
over the batch is taken. Default: "'mean'"
* **zero_infinity** (*bool**, **optional*) -- Whether to zero
infinite losses and the associated gradients. Default: "False"
Infinite losses mainly occur when the inputs are too short to
be aligned to the targets.
Shape:
* Log_probs: Tensor of size (T, N, C) or (T, C), where T =
\text{input length}, N = \text{batch size}, and C =
\text{number of classes (including blank)}. The logarithmized
probabilities of the outputs (e.g. obtained with
"torch.nn.functional.log_softmax()").
* Targets: Tensor of size (N, S) or
(\operatorname{sum}(\text{target\_lengths})), where N =
\text{batch size} and S = \text{max target length, if shape is
} (N, S). It represents the target sequences. Each element in
the target sequence is a class index. And the target index
cannot be blank (default=0). In the (N, S) form, targets are
padded to the length of the longest sequence, and stacked. In
the (\operatorname{sum}(\text{target_lengths})) form, the
targets are assumed to be un-padded and concatenated within 1
dimension.
* Input_lengths: Tuple or tensor of size (N) or (), where N =
\text{batch size}. It represents the lengths of the inputs
(must each be \leq T). And the lengths are specified for each
sequence to achieve masking under the assumption that
sequences are padded to equal lengths.
* Target_lengths: Tuple or tensor of size (N) or (), where N =
\text{batch size}. It represents the lengths of the targets.
Lengths are specified for each sequence to achieve masking
under the assumption that sequences are padded to equal
lengths. If target shape is (N,S), target_lengths are
effectively the stop index s_n for each target sequence, such
that "target_n = targets[n,0:s_n]" for each target in a batch.
Lengths must each be \leq S. If the targets are given as a 1d
tensor that is the concatenation of individual targets, the
target_lengths must add up to the total length of the tensor.
* Output: scalar. If "reduction" is "'none'", then (N) if input
is batched or () if input is unbatched, where N = \text{batch
size}.
Examples:
>>> # Target are to be padded
>>> T = 50 # Input sequence length
>>> C = 20 # Number of classes (including blank)
>>> N = 16 # Batch size
>>> S = 30 # Target sequence length of longest target in batch (padding length)
>>> S_min = 10 # Minimum target length, for demonstration purposes
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
>>>
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
>>>
>>>
>>> # Target are to be un-padded
>>> T = 50 # Input sequence length
>>> C = 20 # Number of classes (including blank)
>>> N = 16 # Batch size
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
>>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
>>>
>>>
>>> # Target are to be un-padded and unbatched (effectively N=1)
>>> T = 50 # Input sequence length
>>> C = 20 # Number of classes (including blank)
>>>
>>> # Initialize random batch of input vectors, for *size = (T,C)
>>> input = torch.randn(T, C).log_softmax(2).detach().requires_grad_()
>>> input_lengths = torch.tensor(T, dtype=torch.long)
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target_lengths = torch.randint(low=1, high=T, size=(), dtype=torch.long)
>>> target = torch.randint(low=1, high=C, size=(target_lengths,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
Reference:
A. Graves et al.: Connectionist Temporal Classification:
Labelling Unsegmented Sequence Data with Recurrent Neural
Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Note:
In order to use CuDNN, the following must be satisfied: "targets"
must be in concatenated format, all "input_lengths" must be *T*.
blank=0, "target_lengths" \leq 256, the integer arguments must be
of dtype "torch.int32".The regular implementation uses the (more
common in PyTorch) *torch.long* dtype.
Note:
In some circumstances when using the CUDA backend with CuDNN,
this operator may select a nondeterministic algorithm to increase
performance. If this is undesirable, you can try to make the
operation deterministic (potentially at a performance cost) by
setting "torch.backends.cudnn.deterministic = True". Please see
the notes on Reproducibility for background.
torch.exp2
torch.exp2(input, *, out=None) -> Tensor
Alias for "torch.special.exp2()".
torch.Tensor.log1p
Tensor.log1p() -> Tensor
See "torch.log1p()" | https://pytorch.org/docs/stable/generated/torch.Tensor.log1p.html | pytorch docs |
torch.nn.functional.unfold
torch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1)
Extracts sliding local blocks from a batched input tensor.
Warning:
Currently, only 4-D input tensors (batched image-like tensors)
are supported.
Warning:
More than one element of the unfolded tensor may refer to a
single memory location. As a result, in-place operations
(especially ones that are vectorized) may result in incorrect
behavior. If you need to write to the tensor, please clone it
first.
See "torch.nn.Unfold" for details
Return type:
Tensor
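A small sketch showing the block extraction on a toy 4-D input; the shapes are illustrative assumptions:
    import torch
    import torch.nn.functional as F

    x = torch.arange(16.0).reshape(1, 1, 4, 4)       # (N, C, H, W)
    blocks = F.unfold(x, kernel_size=2, stride=2)
    print(blocks.shape)   # torch.Size([1, 4, 4]): C*2*2 values per block, 4 blocks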
torch._foreach_erfc
torch._foreach_erfc(self: List[Tensor]) -> List[Tensor]
Apply "torch.erfc()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_erfc.html | pytorch docs |
torch.Tensor.sqrt
Tensor.sqrt() -> Tensor
See "torch.sqrt()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sqrt.html | pytorch docs |
torch.masked_select
torch.masked_select(input, mask, *, out=None) -> Tensor
Returns a new 1-D tensor which indexes the "input" tensor according
to the boolean mask "mask" which is a BoolTensor.
The shapes of the "mask" tensor and the "input" tensor don't need
to match, but they must be broadcastable.
Note:
The returned tensor does **not** use the same storage as the
original tensor
Parameters:
* input (Tensor) -- the input tensor.
* **mask** (*BoolTensor*) -- the tensor containing the binary
mask to index with
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.randn(3, 4)
>>> x
tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],
[-1.2035, 1.2252, 0.5002, 0.6248],
[ 0.1307, -2.0608, 0.1244, 2.0139]])
>>> mask = x.ge(0.5)
>>> mask
tensor([[False, False, False, False],
[False, True, True, True],
[False, False, False, True]])
>>> torch.masked_select(x, mask)
tensor([ 1.2252, 0.5002, 0.6248, 2.0139])
torch.Tensor.is_sparse_csr
Tensor.is_sparse_csr
Is "True" if the Tensor uses sparse CSR storage layout, "False"
otherwise.
torch.Tensor.allclose
Tensor.allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor
See "torch.allclose()" | https://pytorch.org/docs/stable/generated/torch.Tensor.allclose.html | pytorch docs |
torch.log2
torch.log2(input, *, out=None) -> Tensor
Returns a new tensor with the logarithm to the base 2 of the
elements of "input".
y_{i} = \log_{2} (x_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.rand(5)
>>> a
tensor([ 0.8419, 0.8003, 0.9971, 0.5287, 0.0490])
>>> torch.log2(a)
tensor([-0.2483, -0.3213, -0.0042, -0.9196, -4.3504])
torch.autograd.profiler.profile.export_chrome_trace
profile.export_chrome_trace(path)
Exports an EventList as a Chrome tracing tools file.
The checkpoint can be later loaded and inspected under
"chrome://tracing" URL.
Parameters:
path (str) -- Path where the trace will be written.
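A hedged usage sketch with the legacy autograd profiler; the workload and file name are illustrative:
    import torch
    from torch.autograd import profiler

    with profiler.profile() as prof:
        torch.randn(128, 128) @ torch.randn(128, 128)
    prof.export_chrome_trace("trace.json")   # open later via chrome://tracing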
FusedMovingAvgObsFakeQuantize
class torch.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize(observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>, quant_min=0, quant_max=255, **observer_kwargs)
Fused module that is used to observe the input tensor (compute
min/max), compute scale/zero_point and fake_quantize the tensor.
This module uses a calculation similar to MovingAverageMinMaxObserver
for the inputs, to compute the min/max values in order to compute
the scale/zero_point. The qscheme input in the observer is used to
differentiate between symmetric/affine quantization scheme.
The output of this module is given by x_out = (clamp(round(x/scale
+ zero_point), quant_min, quant_max)-zero_point)*scale
Similar to "FakeQuantize", and accepts the same attributes as the
base class.
torch.linalg.diagonal
torch.linalg.diagonal(A, *, offset=0, dim1=- 2, dim2=- 1) -> Tensor
Alias for "torch.diagonal()" with defaults "dim1"= -2, "dim2"=
-1.
torch.sinc
torch.sinc(input, *, out=None) -> Tensor
Alias for "torch.special.sinc()".
quantize_qat
class torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False)
Do quantization aware training and output a quantized model
Parameters:
* model -- input model
* **run_fn** -- a function for evaluating the prepared model,
can be a function that simply runs the prepared model or a
training loop
* **run_args** -- positional arguments for *run_fn*
Returns:
Quantized model.
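A hedged sketch assuming the "fbgemm" backend; run_fn here is a stand-in calibration/training loop rather than a real optimizer step, and the toy model and data are illustrative:
    import torch
    from torch.ao.quantization import quantize_qat, get_default_qat_qconfig

    model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
    model.qconfig = get_default_qat_qconfig("fbgemm")

    def run_fn(model, data):
        # stand-in loop; a real QAT setup would also backprop and update weights
        for x in data:
            model(x)

    data = [torch.randn(2, 8) for _ in range(4)]
    qmodel = quantize_qat(model, run_fn, [data])   # returns a converted, quantized model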
torch.Tensor.ceil_
Tensor.ceil_() -> Tensor
In-place version of "ceil()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ceil_.html | pytorch docs |
torch.Tensor.index_copy
Tensor.index_copy(dim, index, tensor2) -> Tensor
Out-of-place version of "torch.Tensor.index_copy_()".
Stream
class torch.cuda.Stream(device=None, priority=0, **kwargs)
Wrapper around a CUDA stream.
A CUDA stream is a linear sequence of execution that belongs to a
specific device, independent from other streams. See CUDA
semantics for details.
Parameters:
* device (torch.device or int, optional) -- a
device on which to allocate the stream. If "device" is "None"
(default) or a negative integer, this will use the current
device.
* **priority** (*int**, **optional*) -- priority of the stream.
Can be either -1 (high priority) or 0 (low priority). By
default, streams have priority 0.
Note:
Although CUDA versions >= 11 support more than two levels of
priorities, in PyTorch, we only support two levels of priorities.
query()
Checks if all the work submitted has been completed.
Returns:
A boolean indicating if all kernels in this stream are
completed.
record_event(event=None)
Records an event.
Parameters:
**event** (*torch.cuda.Event**, **optional*) -- event to
record. If not given, a new one will be allocated.
Returns:
Recorded event.
synchronize()
Wait for all the kernels in this stream to complete.
Note:
This is a wrapper around "cudaStreamSynchronize()": see CUDA
Stream documentation for more info.
wait_event(event)
Makes all future work submitted to the stream wait for an event.
Parameters:
**event** (*torch.cuda.Event*) -- an event to wait for.
Note:
This is a wrapper around "cudaStreamWaitEvent()": see CUDA
Stream documentation for more info.This function returns
without waiting for "event": only future operations are
affected.
wait_stream(stream)
Synchronizes with another stream.
All future work submitted to this stream will wait until all
kernels submitted to a given stream at the time of call
complete.
Parameters:
**stream** (*Stream*) -- a stream to synchronize.
Note:
This function returns without waiting for currently enqueued
kernels in "stream": only future operations are affected.
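A hedged sketch of launching work on a side stream and synchronizing back, assuming a CUDA device is available; the tensor size is illustrative:
    import torch

    if torch.cuda.is_available():
        s = torch.cuda.Stream()
        with torch.cuda.stream(s):                   # enqueue work on the side stream
            y = torch.randn(1024, device="cuda") * 2
        torch.cuda.current_stream().wait_stream(s)   # default stream waits for s
        ev = s.record_event()
        ev.synchronize()                             # block the host until the work is done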
torch.nn.functional.pad
torch.nn.functional.pad(input, pad, mode='constant', value=None) -> Tensor
Pads tensor.
Padding size:
The padding size by which to pad some dimensions of "input" are
described starting from the last dimension and moving forward.
\left\lfloor\frac{\text{len(pad)}}{2}\right\rfloor dimensions of
"input" will be padded. For example, to pad only the last
dimension of the input tensor, then "pad" has the form
(\text{padding_left}, \text{padding_right}); to pad the last 2
dimensions of the input tensor, then use (\text{padding_left},
\text{padding_right}, \text{padding_top},
\text{padding_bottom}); to pad the last 3 dimensions, use
(\text{padding_left}, \text{padding_right},
\text{padding_top}, \text{padding_bottom},
\text{padding_front}, \text{padding_back}).
Padding mode:
See "torch.nn.ConstantPad2d", "torch.nn.ReflectionPad2d", and | https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html | pytorch docs |
"torch.nn.ReplicationPad2d" for concrete examples on how each of
the padding modes works. Constant padding is implemented for
arbitrary dimensions. Replicate and reflection padding are
implemented for padding the last 3 dimensions of a 4D or 5D
input tensor, the last 2 dimensions of a 3D or 4D input tensor,
or the last dimension of a 2D or 3D input tensor.
Note:
When using the CUDA backend, this operation may induce
nondeterministic behaviour in its backward pass that is not
easily switched off. Please see the notes on Reproducibility for
background.
Parameters:
* input (Tensor) -- N-dimensional tensor
* **pad** (*tuple*) -- m-elements tuple, where \frac{m}{2} \leq
input dimensions and m is even.
* **mode** -- "'constant'", "'reflect'", "'replicate'" or
"'circular'". Default: "'constant'"
* **value** -- fill value for "'constant'" padding. Default: "0"
Examples:
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p1d = (1, 1) # pad last dim by 1 on each side
>>> out = F.pad(t4d, p1d, "constant", 0) # effectively zero padding
>>> print(out.size())
torch.Size([3, 3, 4, 4])
>>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2)
>>> out = F.pad(t4d, p2d, "constant", 0)
>>> print(out.size())
torch.Size([3, 3, 8, 4])
>>> t4d = torch.empty(3, 3, 4, 2)
>>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3)
>>> out = F.pad(t4d, p3d, "constant", 0)
>>> print(out.size())
torch.Size([3, 9, 7, 3])
torch.sym_not
torch.sym_not(a)
SymInt-aware utility for logical negation.
Parameters:
a (SymBool or bool) -- Object to negate
torch.Tensor.requires_grad_
Tensor.requires_grad_(requires_grad=True) -> Tensor
Change if autograd should record operations on this tensor: sets
this tensor's "requires_grad" attribute in-place. Returns this
tensor.
"requires_grad_()"'s main use case is to tell autograd to begin
recording operations on a Tensor "tensor". If "tensor" has
"requires_grad=False" (because it was obtained through a
DataLoader, or required preprocessing or initialization),
"tensor.requires_grad_()" makes it so that autograd will begin to
record operations on "tensor".
Parameters:
requires_grad (bool) -- If autograd should record
operations on this tensor. Default: "True".
Example:
>>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights) # some function
>>> weights
tensor([-0.5503, 0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007, 0.9853, -4.2316, -1.6606])
torch.nn.functional.softshrink
torch.nn.functional.softshrink(input, lambd=0.5) -> Tensor
Applies the soft shrinkage function elementwise
See "Softshrink" for more details. | https://pytorch.org/docs/stable/generated/torch.nn.functional.softshrink.html | pytorch docs |
torch.cuda.current_stream
torch.cuda.current_stream(device=None)
Returns the currently selected "Stream" for a given device.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns the currently selected "Stream" for the current
device, given by "current_device()", if "device" is "None"
(default).
Return type:
Stream
torch.Tensor.bitwise_xor_
Tensor.bitwise_xor_() -> Tensor
In-place version of "bitwise_xor()"
torch.Tensor.contiguous
Tensor.contiguous(memory_format=torch.contiguous_format) -> Tensor
Returns a contiguous in memory tensor containing the same data as
"self" tensor. If "self" tensor is already in the specified memory
format, this function returns the "self" tensor.
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.contiguous_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html | pytorch docs |
torch.std_mean
torch.std_mean(input, dim=None, *, correction=1, keepdim=False, out=None)
Calculates the standard deviation and mean over the dimensions
specified by "dim". "dim" can be a single dimension, list of
dimensions, or "None" to reduce over all dimensions.
The standard deviation (\sigma) is calculated as
\sigma = \sqrt{\frac{1}{N - \delta
N}\sum_{i=0}^{N-1}(x_i-\bar{x})^2}
where x is the sample set of elements, \bar{x} is the sample mean,
N is the number of samples and \delta N is the "correction".
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints**, **optional*) -- the
dimension or dimensions to reduce. If "None", all dimensions
are reduced.
Keyword Arguments:
* correction (int) --
difference between the sample size and sample degrees of
freedom. Defaults to Bessel's correction, "correction=1".
Changed in version 2.0: Previously this argument was called
"unbiased" and was a boolean with "True" corresponding to
"correction=1" and "False" being "correction=0".
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
* **out** (*Tensor**, **optional*) -- the output tensor.
Returns:
A tuple (std, mean) containing the standard deviation and mean.
-[ Example ]-
>>> a = torch.tensor(
... [[ 0.2035, 1.2959, 1.8101, -0.4644],
... [ 1.5027, -0.3270, 0.5905, 0.6538],
... [-1.5745, 1.3330, -0.5596, -0.6548],
... [ 0.1264, -0.5080, 1.6420, 0.1992]])
>>> torch.std_mean(a, dim=0, keepdim=True)
(tensor([[1.2620, 1.0028, 1.0957, 0.6038]]),
tensor([[ 0.0645, 0.4485, 0.8707, -0.0665]]))
torch.cuda.manual_seed_all
torch.cuda.manual_seed_all(seed)
Sets the seed for generating random numbers on all GPUs. It's safe
to call this function if CUDA is not available; in that case, it is
silently ignored.
Parameters:
seed (int) -- The desired seed.
torch.nn.functional.adaptive_avg_pool1d
torch.nn.functional.adaptive_avg_pool1d(input, output_size) -> Tensor
Applies a 1D adaptive average pooling over an input signal composed
of several input planes.
See "AdaptiveAvgPool1d" for details and output shape.
Parameters:
output_size -- the target output size (single integer)
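An illustrative call; the input shape is an assumption:
    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 8, 32)                 # (N, C, L)
    out = F.adaptive_avg_pool1d(x, 4)         # pool to a fixed output length of 4
    print(out.shape)                          # torch.Size([1, 8, 4])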
Flatten
class torch.nn.Flatten(start_dim=1, end_dim=- 1)
Flattens a contiguous range of dims into a tensor. For use with
"Sequential".
Shape:
* Input: (*, S_{\text{start}},..., S_{i}, ..., S_{\text{end}},
*), where S_{i} is the size at dimension i and * means any
number of dimensions including none.
* Output: (*, \prod_{i=\text{start}}^{\text{end}} S_{i}, *).
Parameters:
* start_dim (int) -- first dim to flatten (default = 1).
* **end_dim** (*int*) -- last dim to flatten (default = -1).
Examples::
>>> input = torch.randn(32, 1, 5, 5)
>>> # With default parameters
>>> m = nn.Flatten()
>>> output = m(input)
>>> output.size()
torch.Size([32, 25])
>>> # With non-default parameters
>>> m = nn.Flatten(0, 2)
>>> output = m(input)
>>> output.size()
torch.Size([160, 5])
torch.Tensor.mul
Tensor.mul(value) -> Tensor
See "torch.mul()". | https://pytorch.org/docs/stable/generated/torch.Tensor.mul.html | pytorch docs |
torch.nn.utils.rnn.pad_packed_sequence
torch.nn.utils.rnn.pad_packed_sequence(sequence, batch_first=False, padding_value=0.0, total_length=None)
Pads a packed batch of variable length sequences.
It is an inverse operation to "pack_padded_sequence()".
The returned Tensor's data will be of size "T x B x *", where T
is the length of the longest sequence and B is the batch size. If
"batch_first" is True, the data will be transposed into "B x T x
*" format.
-[ Example ]-
>>> from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> seq = torch.tensor([[1, 2, 0], [3, 0, 0], [4, 5, 6]])
>>> lens = [2, 1, 3]
>>> packed = pack_padded_sequence(seq, lens, batch_first=True, enforce_sorted=False)
>>> packed
PackedSequence(data=tensor([4, 1, 3, 5, 2, 6]), batch_sizes=tensor([3, 2, 1]),
               sorted_indices=tensor([2, 0, 1]), unsorted_indices=tensor([1, 2, 0]))
>>> seq_unpacked, lens_unpacked = pad_packed_sequence(packed, batch_first=True)
>>> seq_unpacked
tensor([[1, 2, 0],
        [3, 0, 0],
        [4, 5, 6]])
>>> lens_unpacked
tensor([2, 1, 3])
Note:
"total_length" is useful to implement the "pack sequence ->
recurrent network -> unpack sequence" pattern in a "Module"
wrapped in "DataParallel". See this FAQ section for details.
Parameters:
* sequence (PackedSequence) -- batch to pad
* **batch_first** (*bool**, **optional*) -- if "True", the
output will be in "B x T x *" format.
* **padding_value** (*float**, **optional*) -- values for padded
elements.
* **total_length** (*int**, **optional*) -- if not "None", the
output will be padded to have length "total_length". This
method will throw "ValueError" if "total_length" is less than
the max sequence length in "sequence".
Returns:
Tuple of Tensor containing the padded sequence, and a Tensor
containing the list of lengths of each sequence in the batch.
Batch elements will be re-ordered as they were ordered
originally when the batch was passed to "pack_padded_sequence"
or "pack_sequence".
Return type:
Tuple[Tensor, Tensor]
torch.Tensor.arctan2_
Tensor.arctan2_()
atan2_(other) -> Tensor
In-place version of "arctan2()"
ConvBn1d
class torch.ao.nn.intrinsic.ConvBn1d(conv, bn)
This is a sequential container which calls the Conv 1d and Batch
Norm 1d modules. During quantization this will be replaced with the
corresponding fused module.
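A minimal sketch of wrapping existing float modules; the channel counts and input shape are illustrative assumptions:
    import torch
    from torch.ao.nn.intrinsic import ConvBn1d

    conv = torch.nn.Conv1d(3, 8, kernel_size=3)
    bn = torch.nn.BatchNorm1d(8)
    m = ConvBn1d(conv, bn)                    # runs conv then batch norm
    y = m(torch.randn(2, 3, 16))
    print(y.shape)                            # torch.Size([2, 8, 14])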