torch._foreach_floor
torch._foreach_floor(self: List[Tensor]) -> List[Tensor]
Apply "torch.floor()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_floor.html | pytorch docs |
torch.matrix_exp
torch.matrix_exp(A) -> Tensor
Alias for "torch.linalg.matrix_exp()". | https://pytorch.org/docs/stable/generated/torch.matrix_exp.html | pytorch docs |
torch.nanquantile
torch.nanquantile(input, q, dim=None, keepdim=False, *, interpolation='linear', out=None) -> Tensor
This is a variant of "torch.quantile()" that "ignores" "NaN"
values, computing the quantiles "q" as if "NaN" values in "input"
did not exist. If all values in a reduced row are "NaN" then the
quantiles for that reduction will be "NaN". See the documentation
for "torch.quantile()".
Parameters:
* input (Tensor) -- the input tensor.
* **q** (*float** or **Tensor*) -- a scalar or 1D tensor of
quantile values in the range [0, 1]
* **dim** (*int*) -- the dimension to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
* interpolation (str) -- interpolation method to use when
the desired quantile lies between two data points. Can be
"linear", "lower", "higher", "midpoint" and "nearest". Default
is "linear". | https://pytorch.org/docs/stable/generated/torch.nanquantile.html | pytorch docs |
is "linear".
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> t = torch.tensor([float('nan'), 1, 2])
>>> t.quantile(0.5)
tensor(nan)
>>> t.nanquantile(0.5)
tensor(1.5000)
>>> t = torch.tensor([[float('nan'), float('nan')], [1, 2]])
>>> t
tensor([[nan, nan],
[1., 2.]])
>>> t.nanquantile(0.5, dim=0)
tensor([1., 2.])
>>> t.nanquantile(0.5, dim=1)
tensor([ nan, 1.5000])
| https://pytorch.org/docs/stable/generated/torch.nanquantile.html | pytorch docs |
torch.aminmax
torch.aminmax(input, *, dim=None, keepdim=False, out=None) -> (Tensor min, Tensor max)
Computes the minimum and maximum values of the "input" tensor.
Parameters:
input (Tensor) -- The input tensor
Keyword Arguments:
* dim (Optional[int]) -- The dimension along which
to compute the values. If None, computes the values over the
entire "input" tensor. Default is None.
* **keepdim** (*bool*) -- If *True*, the reduced dimensions will
be kept in the output tensor as dimensions with size 1 for
broadcasting, otherwise they will be removed, as if calling
("torch.squeeze()"). Default is *False*.
* **out** (*Optional**[**Tuple**[**Tensor**, **Tensor**]**]*) --
Optional tensors on which to write the result. Must have the
same shape and dtype as the expected output. Default is
*None*.
Returns:
A named tuple (min, max) containing the minimum and maximum
values.
Raises:
RuntimeError -- If any of the dimensions to compute the
values over has size 0.
Note:
NaN values are propagated to the output if at least one value is
NaN.
See also:
"torch.amin()" computes just the minimum value "torch.amax()"
computes just the maximum value
Example:
>>> torch.aminmax(torch.tensor([1, -3, 5]))
torch.return_types.aminmax(
min=tensor(-3),
max=tensor(5))
>>> # aminmax propagates NaNs
>>> torch.aminmax(torch.tensor([1, -3, 5, torch.nan]))
torch.return_types.aminmax(
min=tensor(nan),
max=tensor(nan))
>>> t = torch.arange(10).view(2, 5)
>>> t
tensor([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
>>> t.aminmax(dim=0, keepdim=True)
torch.return_types.aminmax(
min=tensor([[0, 1, 2, 3, 4]]),
max=tensor([[5, 6, 7, 8, 9]]))
| https://pytorch.org/docs/stable/generated/torch.aminmax.html | pytorch docs |
torch.autograd.functional.jacobian
torch.autograd.functional.jacobian(func, inputs, create_graph=False, strict=False, vectorize=False, strategy='reverse-mode')
Function that computes the Jacobian of a given function.
Parameters:
* func (function) -- a Python function that takes Tensor
inputs and returns a tuple of Tensors or a Tensor.
* **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the
function "func".
* **create_graph** (*bool**, **optional*) -- If "True", the
Jacobian will be computed in a differentiable manner. Note
that when "strict" is "False", the result can not require
gradients or be disconnected from the inputs. Defaults to
"False".
* **strict** (*bool**, **optional*) -- If "True", an error will
be raised when we detect that there exists an input such that
all the outputs are independent of it. If "False", we return a
Tensor of zeros as the jacobian for said inputs, which is the
expected mathematical value. Defaults to "False".
* **vectorize** (*bool**, **optional*) -- This feature is
experimental. Please consider using "torch.func.jacrev()" or
"torch.func.jacfwd()" instead if you are looking for something
less experimental and more performant. When computing the
jacobian, usually we invoke "autograd.grad" once per row of
the jacobian. If this flag is "True", we perform only a single
"autograd.grad" call with "batched_grad=True" which uses the
vmap prototype feature. Though this should lead to performance
improvements in many cases, because this feature is still
experimental, there may be performance cliffs. See
"torch.autograd.grad()"'s "batched_grad" parameter for more
information.
* **strategy** (*str**, **optional*) -- Set to ""forward-mode""
or ""reverse-mode"" to determine whether the Jacobian will be
computed with forward or reverse mode AD. Currently,
""forward-mode"" requires "vectorized=True". Defaults to
""reverse-mode"". If "func" has more outputs than inputs,
""forward-mode"" tends to be more performant. Otherwise,
prefer to use ""reverse-mode"".
Returns:
if there is a single input and output, this will be a single
Tensor containing the Jacobian for the linearized inputs and
output. If one of the two is a tuple, then the Jacobian will be
a tuple of Tensors. If both of them are tuples, then the
Jacobian will be a tuple of tuple of Tensors where
"Jacobian[i][j]" will contain the Jacobian of the "i"th output
and "j"th input and will have as size the concatenation of the
sizes of the corresponding output and the corresponding input
and will have same dtype and device as the corresponding input.
If strategy is "forward-mode", the dtype will be that of the
output; otherwise, the input.
Return type:
Jacobian (Tensor or nested tuple of Tensors)
-[ Example ]-
>>> def exp_reducer(x):
...     return x.exp().sum(dim=1)
>>> inputs = torch.rand(2, 2)
>>> jacobian(exp_reducer, inputs)
tensor([[[1.4917, 2.4352],
         [0.0000, 0.0000]],
        [[0.0000, 0.0000],
         [2.4369, 2.3799]]])
>>> jacobian(exp_reducer, inputs, create_graph=True)
tensor([[[1.4917, 2.4352],
         [0.0000, 0.0000]],
        [[0.0000, 0.0000],
         [2.4369, 2.3799]]], grad_fn=<ViewBackward0>)
>>> def exp_adder(x, y):
...     return 2 * x.exp() + 3 * y
>>> inputs = (torch.rand(2), torch.rand(2))
>>> jacobian(exp_adder, inputs)
(tensor([[2.8052, 0.0000],
         [0.0000, 3.3963]]),
 tensor([[3., 0.],
         [0., 3.]]))
| https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html | pytorch docs |
BatchNorm3d
class torch.ao.nn.quantized.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)
This is the quantized version of "BatchNorm3d". | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.BatchNorm3d.html | pytorch docs |
torch.nonzero
torch.nonzero(input, *, out=None, as_tuple=False) -> LongTensor or tuple of LongTensors
Note:
"torch.nonzero(..., as_tuple=False)" (default) returns a 2-D
tensor where each row is the index for a nonzero
value."torch.nonzero(..., as_tuple=True)" returns a tuple of 1-D
index tensors, allowing for advanced indexing, so
"x[x.nonzero(as_tuple=True)]" gives all nonzero values of tensor
"x". Of the returned tuple, each index tensor contains nonzero
indices for a certain dimension.See below for more details on the
two behaviors.When "input" is on CUDA, "torch.nonzero()" causes
host-device synchronization.
When "as_tuple" is "False" (default):
Returns a tensor containing the indices of all non-zero elements of
"input". Each row in the result contains the indices of a non-zero
element in "input". The result is sorted lexicographically, with
the last index changing the fastest (C-style).
If "input" has n dimensions, then the resulting indices tensor
"out" is of size (z \times n), where z is the total number of non-
zero elements in the "input" tensor.
When "as_tuple" is "True":
Returns a tuple of 1-D tensors, one for each dimension in "input",
each containing the indices (in that dimension) of all non-zero
elements of "input" .
If "input" has n dimensions, then the resulting tuple contains n
tensors of size z, where z is the total number of non-zero elements
in the "input" tensor.
As a special case, when "input" has zero dimensions and a nonzero
scalar value, it is treated as a one-dimensional tensor with one
element.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (LongTensor, optional) -- the output tensor
containing indices
Returns:
If "as_tuple" is "False", the output tensor containing indices. | https://pytorch.org/docs/stable/generated/torch.nonzero.html | pytorch docs |
If "as_tuple" is "True", one 1-D tensor for each dimension,
containing the indices of each nonzero element along that
dimension.
Return type:
LongTensor or tuple of LongTensor
Example:
>>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))
tensor([[ 0],
[ 1],
[ 2],
[ 4]])
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
... [0.0, 0.4, 0.0, 0.0],
... [0.0, 0.0, 1.2, 0.0],
... [0.0, 0.0, 0.0,-0.4]]))
tensor([[ 0, 0],
[ 1, 1],
[ 2, 2],
[ 3, 3]])
>>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]), as_tuple=True)
(tensor([0, 1, 2, 4]),)
>>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],
... [0.0, 0.4, 0.0, 0.0],
... [0.0, 0.0, 1.2, 0.0],
... [0.0, 0.0, 0.0,-0.4]]), as_tuple=True)
(tensor([0, 1, 2, 3]), tensor([0, 1, 2, 3]))
>>> torch.nonzero(torch.tensor(5), as_tuple=True)
(tensor([0]),) | https://pytorch.org/docs/stable/generated/torch.nonzero.html | pytorch docs |
torch.set_default_dtype
torch.set_default_dtype(d)
Sets the default floating point dtype to "d". Supports
torch.float32 and torch.float64 as inputs. Other dtypes may be
accepted without complaint but are not supported and are unlikely
to work as expected.
When PyTorch is initialized its default floating point dtype is
torch.float32, and the intent of set_default_dtype(torch.float64)
is to facilitate NumPy-like type inference. The default floating
point dtype is used to:
Implicitly determine the default complex dtype. When the default
floating point type is float32 the default complex dtype is
complex64, and when the default floating point type is float64
the default complex type is complex128.
Infer the dtype for tensors constructed using Python floats or
complex Python numbers. See examples below.
Determine the result of type promotion between bool and integer
tensors and Python floats and complex Python numbers.
Parameters:
d ("torch.dtype") -- the floating point dtype to make the
default. Either torch.float32 or torch.float64.
-[ Example ]-
>>> # initial default for floating point is torch.float32
>>> # Python floats are interpreted as float32
>>> torch.tensor([1.2, 3]).dtype
torch.float32
>>> # initial default for complex is torch.complex64
>>> # Complex Python numbers are interpreted as complex64
>>> torch.tensor([1.2, 3j]).dtype
torch.complex64
>>> torch.set_default_dtype(torch.float64)
>>> # Python floats are now interpreted as float64
>>> torch.tensor([1.2, 3]).dtype    # a new floating point tensor
torch.float64
>>> # Complex Python numbers are now interpreted as complex128
>>> torch.tensor([1.2, 3j]).dtype   # a new complex tensor
torch.complex128
| https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html | pytorch docs |
torch.arctan2
torch.arctan2(input, other, *, out=None) -> Tensor
Alias for "torch.atan2()". | https://pytorch.org/docs/stable/generated/torch.arctan2.html | pytorch docs |
torch.Tensor.trunc_
Tensor.trunc_() -> Tensor
In-place version of "trunc()" | https://pytorch.org/docs/stable/generated/torch.Tensor.trunc_.html | pytorch docs |
RandomStructured
class torch.nn.utils.prune.RandomStructured(amount, dim=-1)
Prune entire (currently unpruned) channels in a tensor at random.
Parameters:
* amount (int or float) -- quantity of parameters to
prune. If "float", should be between 0.0 and 1.0 and represent
the fraction of parameters to prune. If "int", it represents
the absolute number of parameters to prune.
* **dim** (*int**, **optional*) -- index of the dim along which
we define channels to prune. Default: -1.
classmethod apply(module, name, amount, dim=-1)
Adds the forward pre-hook that enables pruning on the fly and
the reparametrization of a tensor in terms of the original
tensor and the pruning mask.
Parameters:
* **module** (*nn.Module*) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **amount** (*int** or **float*) -- quantity of parameters
to prune. If "float", should be between 0.0 and 1.0 and
represent the fraction of parameters to prune. If "int", it
represents the absolute number of parameters to prune.
* **dim** (*int**, **optional*) -- index of the dim along
which we define channels to prune. Default: -1.
apply_mask(module)
Simply handles the multiplication between the parameter being
pruned and the generated mask. Fetches the mask and the original
tensor from the module and returns the pruned version of the
tensor.
Parameters:
**module** (*nn.Module*) -- module containing the tensor to
prune
Returns:
pruned version of the input tensor
Return type:
pruned_tensor (torch.Tensor)
compute_mask(t, default_mask)
Computes and returns a mask for the input tensor "t". Starting
from a base "default_mask" (which should be a mask of ones if
the tensor has not been pruned yet), generate a random mask to
apply on top of the "default_mask" by randomly zeroing out
channels along the specified dim of the tensor.
Parameters:
* **t** (*torch.Tensor*) -- tensor representing the parameter
to prune
* **default_mask** (*torch.Tensor*) -- Base mask from
previous pruning iterations, that need to be respected
after the new mask is applied. Same dims as "t".
Returns:
mask to apply to "t", of same dims as "t"
Return type:
mask (torch.Tensor)
Raises:
**IndexError** -- if "self.dim >= len(t.shape)"
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor "t"
according to the pruning rule specified in "compute_mask()".
Parameters:
* **t** (*torch.Tensor*) -- tensor to prune (of same
dimensions as "default_mask").
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as "t") used to compute
mask for pruning "t". The values in this tensor indicate
the importance of the corresponding elements in the "t"
that is being pruned. If unspecified or None, the tensor
"t" will be used in its place.
* **default_mask** (*torch.Tensor**, **optional*) -- mask
from previous pruning iteration, if any. To be considered
when determining what portion of the tensor that pruning
should act on. If None, default to a mask of ones.
Returns:
pruned version of tensor "t".
remove(module)
Removes the pruning reparameterization from a module. The pruned
parameter named "name" remains permanently pruned, and the
parameter named "name+'_orig'" is removed from the parameter
list. Similarly, the buffer named "name+'_mask'" is removed from
the buffers.
Note:
Pruning itself is NOT undone or reversed!
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html | pytorch docs |
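A minimal usage sketch (editor's addition; layer sizes are arbitrary), using the companion helper "torch.nn.utils.prune.random_structured", which applies this pruning method to a module parameter:
>>> import torch.nn as nn
>>> import torch.nn.utils.prune as prune
>>> m = nn.Linear(4, 3)
>>> m = prune.random_structured(m, name="weight", amount=1, dim=0)
>>> int((m.weight.sum(dim=1) == 0).sum())  # one output channel zeroed
1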
torch.cuda.get_rng_state_all
torch.cuda.get_rng_state_all()
Returns a list of ByteTensor representing the random number states
of all devices.
Return type:
List[Tensor] | https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state_all.html | pytorch docs |
torch.fix
torch.fix(input, *, out=None) -> Tensor
Alias for "torch.trunc()" | https://pytorch.org/docs/stable/generated/torch.fix.html | pytorch docs |
torch.cuda.seed_all
torch.cuda.seed_all()
Sets the seed for generating random numbers to a random number on
all GPUs. It's safe to call this function if CUDA is not available;
in that case, it is silently ignored. | https://pytorch.org/docs/stable/generated/torch.cuda.seed_all.html | pytorch docs |
set_grad_enabled
class torch.set_grad_enabled(mode)
Context-manager that sets gradient calculation on or off.
"set_grad_enabled" will enable or disable grads based on its
argument "mode". It can be used as a context-manager or as a
function.
This context manager is thread local; it will not affect
computation in other threads.
Parameters:
mode (bool) -- Flag whether to enable grad ("True"), or
disable ("False"). This can be used to conditionally enable
gradients.
Note:
set_grad_enabled is one of several mechanisms that can enable or
disable gradients locally; see Locally disabling gradient
computation for more information on how they compare.
Note:
This API does not apply to forward-mode AD.
Example:
>>> x = torch.tensor([1.], requires_grad=True)
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
... y = x * 2
>>> y.requires_grad
False
>>> _ = torch.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> _ = torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
| https://pytorch.org/docs/stable/generated/torch.set_grad_enabled.html | pytorch docs |
torch.nn.functional.fractional_max_pool2d
torch.nn.functional.fractional_max_pool2d(*args, **kwargs)
Applies 2D fractional max pooling over an input signal composed of
several input planes.
Fractional MaxPooling is described in detail in the paper
Fractional MaxPooling by Ben Graham
The max-pooling operation is applied in kH \times kW regions by a
stochastic step size determined by the target output size. The
number of output features is equal to the number of input planes.
Parameters:
* kernel_size -- the size of the window to take a max over.
Can be a single number k (for a square kernel of k \times k)
or a tuple (kH, kW)
* **output_size** -- the target output size of the image of the
form oH \times oW. Can be a tuple *(oH, oW)* or a single
number oH for a square image oH \times oH
* **output_ratio** -- If one wants to have an output size as a
ratio of the input size, this option can be given. This has to
be a number or tuple in the range (0, 1)
* **return_indices** -- if "True", will return the indices along
with the outputs. Useful to pass to "max_unpool2d()".
Examples::
>>> input = torch.randn(20, 16, 50, 32)
>>> # pool of square window of size=3, and target output size 13x12
>>> F.fractional_max_pool2d(input, 3, output_size=(13, 12))
>>> # pool of square window and target output size being half of input image size
>>> F.fractional_max_pool2d(input, 3, output_ratio=(0.5, 0.5)) | https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool2d.html | pytorch docs |
torch.nn.functional.batch_norm
torch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)
Applies Batch Normalization for each channel across a batch of
data.
See "BatchNorm1d", "BatchNorm2d", "BatchNorm3d" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.batch_norm.html | pytorch docs |
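A minimal call sketch (editor's addition; shapes are arbitrary). With "training=True" the batch statistics are used and the running statistics are updated in place:
>>> import torch.nn.functional as F
>>> x = torch.randn(8, 4, 10)      # (N, C, L)
>>> running_mean = torch.zeros(4)  # one entry per channel
>>> running_var = torch.ones(4)
>>> out = F.batch_norm(x, running_mean, running_var, training=True)
>>> out.shape
torch.Size([8, 4, 10])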
torch.jit.set_fusion_strategy
torch.jit.set_fusion_strategy(strategy)
Sets the type and number of specializations that can occur during
fusion.
Usage: provide a list of pairs (type, depth) where type is one of
"STATIC" or "DYNAMIC" and depth is an integer.
Behavior - static vs dynamic:
In STATIC fusion, fused ops are compiled to have fixed input
shapes. The shape is determined based on some initial profiling
runs. In DYNAMIC fusion, fused ops are compiled to have variable
input shapes, so that multiple shapes are possible.
In both cases, we also recompile on new striding behavior, device,
or dtype.
Behavior - fallback functions & depth:
When an input doesn't match the format required by the
specialized compiled op, it will run a fallback function.
Fallback functions are recursively compiled and specialized
based on the observed tensor shapes. Since compilation can be
slow, the "depth" parameter is provided to limit the number of
specializations that can be compiled, before giving up on
recompiling and falling back to a completely un-fused,
un-specialized implementation.
The list of (type, depth) pairs controls the type of
specializations and the number of specializations. For example:
[("STATIC", 2), ("DYNAMIC", 2)] indicates that the first two
specializations will use static fusions, the following two
specializations will use dynamic fusion, and any inputs that
satisfy none of the 4 options will run an unfused implementation.
NB: in the future, as more fusion backends are added, there may
be more granular APIs for specific fusers. | https://pytorch.org/docs/stable/generated/torch.jit.set_fusion_strategy.html | pytorch docs |
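A minimal call sketch (editor's addition), using the pair list from the paragraph above; the call may return the previous strategy, which is discarded here:
>>> _ = torch.jit.set_fusion_strategy([("STATIC", 2), ("DYNAMIC", 2)])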
torch.Tensor.lgamma_
Tensor.lgamma_() -> Tensor
In-place version of "lgamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.lgamma_.html | pytorch docs |
torch.linalg.matmul
torch.linalg.matmul(input, other, *, out=None) -> Tensor
Alias for "torch.matmul()" | https://pytorch.org/docs/stable/generated/torch.linalg.matmul.html | pytorch docs |
AdaptiveMaxPool2d
class torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False)
Applies a 2D adaptive max pooling over an input signal composed of
several input planes.
The output is of size H_{out} \times W_{out}, for any input size.
The number of output features is equal to the number of input
planes.
Parameters:
* output_size (Union[int, None,
Tuple[Optional[int],
Optional[int]]]) -- the target output size of the
image of the form H_{out} \times W_{out}. Can be a tuple
(H_{out}, W_{out}) or a single H_{out} for a square image
H_{out} \times H_{out}. H_{out} and W_{out} can be either a
"int", or "None" which means the size will be the same as that
of the input.
* **return_indices** (*bool*) -- if "True", will return the
indices along with the outputs. Useful to pass to
nn.MaxUnpool2d. Default: "False"
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where (H_{out}, W_{out})=\text{output\_size}.
-[ Examples ]-
>>> # target output size of 5x7
>>> m = nn.AdaptiveMaxPool2d((5, 7))
>>> input = torch.randn(1, 64, 8, 9)
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveMaxPool2d(7)
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
>>> # target output size of 10x7
>>> m = nn.AdaptiveMaxPool2d((None, 7))
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool2d.html | pytorch docs |
torch.Tensor.slice_scatter
Tensor.slice_scatter(src, dim=0, start=None, end=None, step=1) -> Tensor
See "torch.slice_scatter()" | https://pytorch.org/docs/stable/generated/torch.Tensor.slice_scatter.html | pytorch docs |
torch.Tensor.cov
Tensor.cov(*, correction=1, fweights=None, aweights=None) -> Tensor
See "torch.cov()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cov.html | pytorch docs |
UpsamplingBilinear2d
class torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None)
Applies a 2D bilinear upsampling to an input signal composed of
several input channels.
To specify the scale, it takes either the "size" or the
"scale_factor" as its constructor argument.
When "size" is given, it is the output size of the image (h, w).
Parameters:
* size (int or Tuple[int, int],
optional) -- output spatial sizes
* **scale_factor** (*float** or **Tuple**[**float**,
**float**]**, **optional*) -- multiplier for spatial size.
Warning:
This class is deprecated in favor of "interpolate()". It is
equivalent to "nn.functional.interpolate(..., mode='bilinear',
align_corners=True)".
Shape:
* Input: (N, C, H_{in}, W_{in})
* Output: (N, C, H_{out}, W_{out}) where
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor}
\right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor}
\right\rfloor
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[1., 2.],
[3., 4.]]]])
>>> m = nn.UpsamplingBilinear2d(scale_factor=2)
>>> m(input)
tensor([[[[1.0000, 1.3333, 1.6667, 2.0000],
[1.6667, 2.0000, 2.3333, 2.6667],
[2.3333, 2.6667, 3.0000, 3.3333],
[3.0000, 3.3333, 3.6667, 4.0000]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingBilinear2d.html | pytorch docs |
AvgPool2d
class torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)
Applies a 2D average pooling over an input signal composed of
several input planes.
In the simplest case, the output value of the layer with input size
(N, C, H, W), output (N, C, H_{out}, W_{out}) and "kernel_size"
(kH, kW) can be precisely described as:
out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1}
\sum_{n=0}^{kW-1} input(N_i, C_j,
stride[0] \times h + m, stride[1] \times w + n)
If "padding" is non-zero, then the input is implicitly zero-padded
on both sides for "padding" number of points.
Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds
if they start within the left padding or the input. Sliding
windows that would start in the right padded region are ignored.
The parameters "kernel_size", "stride", "padding" can either be: | https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html | pytorch docs |
a single "int" -- in which case the same value is used for the
height and width dimension
a "tuple" of two ints -- in which case, the first int is
used for the height dimension, and the second int for the
width dimension
Parameters:
* kernel_size (Union[int, Tuple[int,
int]]) -- the size of the window
* **stride** (*Union**[**int**, **Tuple**[**int**, **int**]**]*)
-- the stride of the window. Default value is "kernel_size"
* **padding** (*Union**[**int**, **Tuple**[**int**,
**int**]**]*) -- implicit zero padding to be added on both
sides
* **ceil_mode** (*bool*) -- when True, will use *ceil* instead
of *floor* to compute the output shape
* **count_include_pad** (*bool*) -- when True, will include the
zero-padding in the averaging calculation
* **divisor_override** (*Optional**[**int**]*) -- if specified,
it will be used as divisor, otherwise size of the pooling
region will be used.
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where
H_{out} = \left\lfloor\frac{H_{in} + 2 \times
\text{padding}[0] -
\text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor
W_{out} = \left\lfloor\frac{W_{in} + 2 \times
\text{padding}[1] -
\text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool2d((3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html | pytorch docs |
torch.Tensor.fix
Tensor.fix() -> Tensor
See "torch.fix()". | https://pytorch.org/docs/stable/generated/torch.Tensor.fix.html | pytorch docs |
torch.nn.functional.feature_alpha_dropout
torch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False)
Randomly masks out entire channels (a channel is a feature map,
e.g. the j-th channel of the i-th sample in the batch input is a
tensor \text{input}[i, j]) of the input tensor. Instead of setting
activations to zero, as in regular Dropout, the activations are set
to the negative saturation value of the SELU activation function.
Each element will be masked independently on every forward call
with probability "p" using samples from a Bernoulli distribution.
The elements to be masked are randomized on every forward call, and
scaled and shifted to maintain zero mean and unit variance.
See "FeatureAlphaDropout" for details.
Parameters:
* p (float) -- dropout probability of a channel to be
zeroed. Default: 0.5
* **training** (*bool*) -- apply dropout if "True". Default:
  "False"
* **inplace** (*bool*) -- If set to "True", will do this
operation in-place. Default: "False"
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.feature_alpha_dropout.html | pytorch docs |
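A minimal call sketch (editor's addition; shapes are arbitrary). Entire channels are masked, so pass "training=True" to actually apply the dropout:
>>> import torch.nn.functional as F
>>> x = torch.randn(4, 3, 16)   # (batch, channels, features)
>>> out = F.feature_alpha_dropout(x, p=0.5, training=True)
>>> out.shape
torch.Size([4, 3, 16])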
torch.Tensor.to_sparse_csr
Tensor.to_sparse_csr(dense_dim=None) -> Tensor
Convert a tensor to compressed row storage format (CSR). Except
for strided tensors, only works with 2D tensors. If the "self" is
strided, then the number of dense dimensions could be specified,
and a hybrid CSR tensor will be created, with dense_dim dense
dimensions and self.dim() - 2 - dense_dim batch dimensions.
Parameters:
dense_dim (int, optional) -- Number of dense
dimensions of the resulting CSR tensor. This argument should be
used only if "self" is a strided tensor, and must be a value
between 0 and dimension of "self" tensor minus two.
Example:
>>> dense = torch.randn(5, 5)
>>> sparse = dense.to_sparse_csr()
>>> sparse._nnz()
25
>>> dense = torch.zeros(3, 3, 1, 1)
>>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1
>>> dense.to_sparse_csr(dense_dim=2)
tensor(crow_indices=tensor([0, 1, 2, 3]),
col_indices=tensor([0, 2, 1]),
values=tensor([[[1.]],
[[1.]],
[[1.]]]), size=(3, 3, 1, 1), nnz=3,
layout=torch.sparse_csr)
| https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csr.html | pytorch docs |
torch.nn.functional.glu
torch.nn.functional.glu(input, dim=-1) -> Tensor
The gated linear unit. Computes:
\text{GLU}(a, b) = a \otimes \sigma(b)
where input is split in half along dim to form a and b,
\sigma is the sigmoid function and \otimes is the element-wise
product between matrices.
See Language Modeling with Gated Convolutional Networks.
Parameters:
* input (Tensor) -- input tensor
* **dim** (*int*) -- dimension on which to split the input.
Default: -1
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.glu.html | pytorch docs |
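A short shape check (editor's addition; sizes are arbitrary): the split halves the chosen dimension, so that dimension must be even.
>>> import torch.nn.functional as F
>>> x = torch.randn(4, 6)
>>> F.glu(x, dim=1).shape
torch.Size([4, 3])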
torch.nn.functional.conv1d
torch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor
Applies a 1D convolution over an input signal composed of several
input planes.
This operator supports TensorFloat32.
See "Conv1d" for details and output shape.
Note:
In some circumstances when given tensors on a CUDA device and
using CuDNN, this operator may select a nondeterministic
algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a
performance cost) by setting "torch.backends.cudnn.deterministic
= True". See Reproducibility for more information.
Note:
This operator supports complex data types i.e. "complex32,
complex64, complex128".
Parameters:
* input -- input tensor of shape (\text{minibatch} ,
  \text{in\_channels} , iW)
* **weight** -- filters of shape (\text{out\_channels} ,
  \frac{\text{in\_channels}}{\text{groups}} , kW)
* **bias** -- optional bias of shape (\text{out\_channels}).
Default: "None"
* **stride** -- the stride of the convolving kernel. Can be a
single number or a one-element tuple *(sW,)*. Default: 1
* **padding** --
implicit paddings on both sides of the input. Can be a string
{'valid', 'same'}, single number or a one-element tuple
*(padW,)*. Default: 0 "padding='valid'" is the same as no
padding. "padding='same'" pads the input so the output has the
same shape as the input. However, this mode doesn't support
any stride values other than 1.
Warning:
For "padding='same'", if the "weight" is even-length and
"dilation" is odd in any dimension, a full "pad()" operation
may be needed internally. Lowering performance.
* **dilation** -- the spacing between kernel elements. Can be a
single number or a one-element tuple (dW,). Default: 1
* **groups** -- split input into groups, \text{in\_channels}
should be divisible by the number of groups. Default: 1
Examples:
>>> inputs = torch.randn(33, 16, 30)
>>> filters = torch.randn(20, 16, 5)
>>> F.conv1d(inputs, filters)
| https://pytorch.org/docs/stable/generated/torch.nn.functional.conv1d.html | pytorch docs |
default_weight_observer
torch.quantization.observer.default_weight_observer
alias of functools.partial(<class 'torch.ao.quantization.observer.MinMaxObserver'>,
dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){} | https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_weight_observer.html | pytorch docs |
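A minimal usage sketch (editor's addition): calling the partial instantiates a "MinMaxObserver"; feeding it a weight tensor lets it derive quantization parameters.
>>> from torch.quantization.observer import default_weight_observer
>>> obs = default_weight_observer()
>>> _ = obs(torch.randn(3, 3))       # observe a weight tensor
>>> scale, zero_point = obs.calculate_qparams()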
torch.gradient
torch.gradient(input, *, spacing=1, dim=None, edge_order=1) -> List of Tensors
Estimates the gradient of a function g : \mathbb{R}^n \rightarrow
\mathbb{R} in one or more dimensions using the second-order
accurate central differences method.
The gradient of g is estimated using samples. By default, when
"spacing" is not specified, the samples are entirely described by
"input", and the mapping of input coordinates to an output is the
same as the tensor's mapping of indices to values. For example, for
a three-dimensional "input" the function described is g :
\mathbb{R}^3 \rightarrow \mathbb{R}, and g(1, 2, 3)\ == input[1, 2,
3].
When "spacing" is specified, it modifies the relationship between
"input" and input coordinates. This is detailed in the "Keyword
Arguments" section below.
The gradient is estimated by estimating each partial derivative of
g independently. This estimation is accurate if g is in C^3 (it has
at least 3 continuous derivatives), and the estimation can be
improved by providing closer samples. Mathematically, the value at
each interior point of a partial derivative is estimated using
Taylor's theorem with remainder. Letting x be an interior point and
x+h_r be point neighboring it, the partial gradient at f(x+h_r) is
estimated using:
\begin{aligned} f(x+h_r) = f(x) + h_r f'(x) + {h_r}^2
\frac{f''(x)}{2} + {h_r}^3 \frac{f'''(x_r)}{6} \\ \end{aligned}
where x_r is a number in the interval [x, x+ h_r] and using the
fact that f \in C^3 we derive :
f'(x) \approx \frac{ {h_l}^2 f(x+h_r) - {h_r}^2 f(x-h_l) +
({h_r}^2-{h_l}^2 ) f(x) }{ {h_r} {h_l}^2 + {h_r}^2 {h_l} }
Note:
We estimate the gradient of functions in complex domain g :
\mathbb{C}^n \rightarrow \mathbb{C} in the same way.
The value of each partial derivative at the boundary points is
computed differently. See edge_order below.
Parameters:
input ("Tensor") -- the tensor that represents the values of
the function
Keyword Arguments:
* spacing ("scalar", "list of scalar", "list of Tensor",
optional) -- "spacing" can be used to modify how the "input"
tensor's indices relate to sample coordinates. If "spacing" is
a scalar then the indices are multiplied by the scalar to
produce the coordinates. For example, if "spacing=2" the
indices (1, 2, 3) become coordinates (2, 4, 6). If "spacing"
is a list of scalars then the corresponding indices are
multiplied. For example, if "spacing=(2, -1, 3)" the indices
(1, 2, 3) become coordinates (2, -2, 9). Finally, if "spacing"
is a list of one-dimensional tensors then each tensor
specifies the coordinates for the corresponding dimension. For
example, if the indices are (1, 2, 3) and the tensors are (t0,
t1, t2), then the coordinates are (t0[1], t1[2], t2[3])
* **dim** ("int", "list of int", optional) -- the dimension or
  dimensions to approximate the gradient over. By default the
  partial gradient in every dimension is computed. Note that
  when "dim" is specified the elements of the "spacing"
  argument must correspond with the specified dims.
* **edge_order** ("int", optional) -- 1 or 2, for first-order or
  second-order estimation of the boundary ("edge") values,
  respectively.
Examples:
>>> # Estimates the gradient of f(x)=x^2 at points [-2, -1, 2, 4]
>>> coordinates = (torch.tensor([-2., -1., 1., 4.]),)
>>> values = torch.tensor([4., 1., 1., 16.], )
>>> torch.gradient(values, spacing = coordinates)
(tensor([-3., -2., 2., 5.]),)
>>> # Estimates the gradient of the R^2 -> R function whose samples are
>>> # described by the tensor t. Implicit coordinates are [0, 1] for the outermost
>>> # dimension and [0, 1, 2, 3] for the innermost dimension, and function estimates
>>> # partial derivative for both dimensions.
>>> t = torch.tensor([[1, 2, 4, 8], [10, 20, 40, 80]])
>>> torch.gradient(t)
(tensor([[ 9., 18., 36., 72.],
[ 9., 18., 36., 72.]]),
tensor([[ 1.0000, 1.5000, 3.0000, 4.0000],
[10.0000, 15.0000, 30.0000, 40.0000]]))
>>> # A scalar value for spacing modifies the relationship between tensor indices
>>> # and input coordinates by multiplying the indices to find the
>>> # coordinates. For example, below the indices of the innermost
>>> # 0, 1, 2, 3 translate to coordinates of [0, 2, 4, 6], and the indices of
>>> # the outermost dimension 0, 1 translate to coordinates of [0, 2].
>>> torch.gradient(t, spacing = 2.0) # dim = None (implicitly [0, 1])
(tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],
[ 4.5000, 9.0000, 18.0000, 36.0000]]),
tensor([[ 0.5000, 0.7500, 1.5000, 2.0000],
[ 5.0000, 7.5000, 15.0000, 20.0000]]))
>>> # doubling the spacing between samples halves the estimated partial gradients.
>>>
>>> # Estimates only the partial derivative for dimension 1
>>> torch.gradient(t, dim = 1) # spacing = None (implicitly 1.)
(tensor([[ 1.0000, 1.5000, 3.0000, 4.0000],
[10.0000, 15.0000, 30.0000, 40.0000]]),)
>>> # When spacing is a list of scalars, the relationship between the tensor
>>> # indices and input coordinates changes based on dimension.
>>> # For example, below, the indices of the innermost dimension 0, 1, 2, 3 translate
>>> # to coordinates of [0, 3, 6, 9], and the indices of the outermost dimension
>>> # 0, 1 translate to coordinates of [0, 2].
>>> torch.gradient(t, spacing = [3., 2.])
(tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],
[ 4.5000, 9.0000, 18.0000, 36.0000]]),
tensor([[ 0.3333, 0.5000, 1.0000, 1.3333],
[ 3.3333, 5.0000, 10.0000, 13.3333]]))
>>> # The following example is a replication of the previous one with explicit
>>> # coordinates.
>>> coords = (torch.tensor([0, 2]), torch.tensor([0, 3, 6, 9]))
>>> torch.gradient(t, spacing = coords)
(tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],
[ 4.5000, 9.0000, 18.0000, 36.0000]]),
tensor([[ 0.3333, 0.5000, 1.0000, 1.3333],
[ 3.3333, 5.0000, 10.0000, 13.3333]]))
| https://pytorch.org/docs/stable/generated/torch.gradient.html | pytorch docs |
torch.select_scatter
torch.select_scatter(input, src, dim, index) -> Tensor
Embeds the values of the "src" tensor into "input" at the given
index. This function returns a tensor with fresh storage; it does
not create a view.
Parameters:
* input (Tensor) -- the input tensor.
* **src** (*Tensor*) -- The tensor to embed into "input"
* **dim** (*int*) -- the dimension to insert the slice into.
* **index** (*int*) -- the index to select with
Note:
"src" must be of the proper size in order to be embedded into
"input". Specifically, it should have the same shape as
"torch.select(input, dim, index)"
Example:
>>> a = torch.zeros(2, 2)
>>> b = torch.ones(2)
>>> a.select_scatter(b, 0, 0)
tensor([[1., 1.],
[0., 0.]])
| https://pytorch.org/docs/stable/generated/torch.select_scatter.html | pytorch docs |
torch.Tensor.q_per_channel_zero_points
Tensor.q_per_channel_zero_points() -> Tensor
Given a Tensor quantized by linear (affine) per-channel
quantization, returns a tensor of zero_points of the underlying
quantizer. It has the number of elements that matches the
corresponding dimensions (from q_per_channel_axis) of the tensor. | https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_zero_points.html | pytorch docs |
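A minimal illustrative sketch (editor's addition; scales and zero points are arbitrary):
>>> x = torch.randn(2, 3)
>>> q = torch.quantize_per_channel(x, scales=torch.tensor([0.1, 0.05]),
...                                zero_points=torch.tensor([0, 10]),
...                                axis=0, dtype=torch.quint8)
>>> q.q_per_channel_zero_points()
tensor([ 0, 10])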
LSTM
class torch.nn.quantizable.LSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, device=None, dtype=None)
A quantizable long short-term memory (LSTM).
For the description and the argument types, please, refer to "LSTM"
Variables:
layers -- instances of the _LSTMLayer
Note:
To access the weights and biases, you need to access them per
layer. See examples below.
Examples:
>>> import torch.nn.quantizable as nnqa
>>> rnn = nnqa.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
>>> # To get the weights:
>>> print(rnn.layers[0].weight_ih)
tensor([[...]])
>>> print(rnn.layers[0].weight_hh)
AssertionError: There is no reverse path in the non-bidirectional layer
| https://pytorch.org/docs/stable/generated/torch.nn.quantizable.LSTM.html | pytorch docs |
torch.result_type
torch.result_type(tensor1, tensor2) -> dtype
Returns the "torch.dtype" that would result from performing an
arithmetic operation on the provided input tensors. See type
promotion documentation for more information on the type promotion
logic.
Parameters:
* tensor1 (Tensor or Number) -- an input tensor or
number
* **tensor2** (*Tensor** or **Number*) -- an input tensor or
number
Example:
>>> torch.result_type(torch.tensor([1, 2], dtype=torch.int), 1.0)
torch.float32
>>> torch.result_type(torch.tensor([1, 2], dtype=torch.uint8), torch.tensor(1))
torch.uint8
| https://pytorch.org/docs/stable/generated/torch.result_type.html | pytorch docs |
torch._foreach_atan_
torch._foreach_atan_(self: List[Tensor]) -> None
Apply "torch.atan()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_atan_.html | pytorch docs |
torch.cuda.nvtx.mark
torch.cuda.nvtx.mark(msg)
Describe an instantaneous event that occurred at some point.
Parameters:
msg (str) -- ASCII message to associate with the event. | https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.mark.html | pytorch docs |
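A minimal call sketch (editor's addition); the marker only becomes visible when the program runs under an NVIDIA profiler:
>>> torch.cuda.nvtx.mark("validation start")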
torch.Tensor.cumsum
Tensor.cumsum(dim, dtype=None) -> Tensor
See "torch.cumsum()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cumsum.html | pytorch docs |
torch.resolve_neg
torch.resolve_neg(input) -> Tensor
Returns a new tensor with materialized negation if "input"'s
negative bit is set to True, else returns "input". The output
tensor will always have its negative bit set to False.
Parameters:
    input (Tensor) -- the input tensor.
Example:
>>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])
>>> y = x.conj()
>>> z = y.imag
>>> z.is_neg()
True
>>> out = z.resolve_neg()
>>> out
tensor([-1., -2., 3.])
>>> out.is_neg()
False
| https://pytorch.org/docs/stable/generated/torch.resolve_neg.html | pytorch docs |
torch._foreach_atan
torch._foreach_atan(self: List[Tensor]) -> List[Tensor]
Apply "torch.atan()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_atan.html | pytorch docs |
torch.conj_physical
torch.conj_physical(input, *, out=None) -> Tensor
Computes the element-wise conjugate of the given "input" tensor. If
"input" has a non-complex dtype, this function just returns
"input".
Note:
This performs the conjugate operation regardless of whether the
conjugate bit is set.
Warning:
In the future, "torch.conj_physical()" may return a non-writeable
view for an "input" of non-complex dtype. It's recommended that
programs not modify the tensor returned by
"torch.conj_physical()" when "input" is of non-complex dtype to
be compatible with this change.
\text{out}_{i} = conj(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.conj_physical(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))
tensor([-1 - 1j, -2 - 2j, 3 + 3j])
| https://pytorch.org/docs/stable/generated/torch.conj_physical.html | pytorch docs |
torch.Tensor.new_zeros
Tensor.new_zeros(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor
Returns a Tensor of size "size" filled with "0". By default, the
returned Tensor has the same "torch.dtype" and "torch.device" as
this tensor.
Parameters:
size (int...) -- a list, tuple, or "torch.Size" of
integers defining the shape of the output tensor.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired type of
returned tensor. Default: if None, same "torch.dtype" as this
tensor.
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, same "torch.device" as this
tensor.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **layout** ("torch.layout", optional) -- the desired layout of
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_zeros.html | pytorch docs |
returned Tensor. Default: "torch.strided".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
Example:
>>> tensor = torch.tensor((), dtype=torch.float64)
>>> tensor.new_zeros((2, 3))
tensor([[ 0., 0., 0.],
[ 0., 0., 0.]], dtype=torch.float64)
| https://pytorch.org/docs/stable/generated/torch.Tensor.new_zeros.html | pytorch docs |
propagate_qconfig_
class torch.quantization.propagate_qconfig_(module, qconfig_dict=None, prepare_custom_config_dict=None)
Propagate qconfig through the module hierarchy and assign qconfig
attribute on each leaf module
Parameters:
* module -- input module
* **qconfig_dict** -- dictionary that maps from name or type of
submodule to quantization configuration, qconfig applies to
all submodules of a given module unless qconfig for the
submodules are specified (when the submodule already has
qconfig attribute)
* **prepare_custom_config_dict** -- dictionary for custom
handling of modules see docs for "prepare_fx()"
Returns:
None, module is modified inplace with qconfig attached | https://pytorch.org/docs/stable/generated/torch.quantization.propagate_qconfig_.html | pytorch docs |
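A minimal sketch (editor's addition; the model is arbitrary): setting "qconfig" on the parent and propagating assigns the same qconfig object to the leaf modules.
>>> import torch.nn as nn
>>> from torch.quantization import propagate_qconfig_, default_qconfig
>>> model = nn.Sequential(nn.Linear(4, 4), nn.ReLU())
>>> model.qconfig = default_qconfig
>>> propagate_qconfig_(model)
>>> model[0].qconfig is model.qconfig
True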
torch.sign
torch.sign(input, *, out=None) -> Tensor
Returns a new tensor with the signs of the elements of "input".
\text{out}_{i} = \operatorname{sgn}(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([0.7, -1.2, 0., 2.3])
>>> a
tensor([ 0.7000, -1.2000, 0.0000, 2.3000])
>>> torch.sign(a)
tensor([ 1., -1., 0., 1.])
| https://pytorch.org/docs/stable/generated/torch.sign.html | pytorch docs |
torch.Tensor.ravel
Tensor.ravel() -> Tensor
see "torch.ravel()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ravel.html | pytorch docs |
torch.swapaxes
torch.swapaxes(input, axis0, axis1) -> Tensor
Alias for "torch.transpose()".
This function is equivalent to NumPy's swapaxes function.
Examples:
>>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.swapaxes(x, 0, 1)
tensor([[[0, 1],
[4, 5]],
[[2, 3],
[6, 7]]])
>>> torch.swapaxes(x, 0, 2)
tensor([[[0, 4],
[2, 6]],
[[1, 5],
[3, 7]]])
| https://pytorch.org/docs/stable/generated/torch.swapaxes.html | pytorch docs |
torch.nn.utils.remove_spectral_norm
torch.nn.utils.remove_spectral_norm(module, name='weight')
Removes the spectral normalization reparameterization from a
module.
Parameters:
* module (Module) -- containing module
* **name** (*str**, **optional*) -- name of weight parameter
Return type:
T_module
-[ Example ]-
>>> m = spectral_norm(nn.Linear(40, 10))
>>> remove_spectral_norm(m)
| https://pytorch.org/docs/stable/generated/torch.nn.utils.remove_spectral_norm.html | pytorch docs |
torch.cuda.seed
torch.cuda.seed()
Sets the seed for generating random numbers to a random number for
the current GPU. It's safe to call this function if CUDA is not
available; in that case, it is silently ignored.
Warning:
If you are working with a multi-GPU model, this function will
only initialize the seed on one GPU. To initialize all GPUs, use
"seed_all()".
| https://pytorch.org/docs/stable/generated/torch.cuda.seed.html | pytorch docs |
torch.nn.functional.cross_entropy
torch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)
This criterion computes the cross entropy loss between input logits
and target.
See "CrossEntropyLoss" for details.
Parameters:
* input (Tensor) -- Predicted unnormalized logits; see
Shape section below for supported shapes.
* **target** (*Tensor*) -- Ground truth class indices or class
probabilities; see Shape section below for supported shapes.
* **weight** (*Tensor**, **optional*) -- a manual rescaling
weight given to each class. If given, has to be a Tensor of
size *C*
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
loss element in the batch. Note that for some losses, there are
multiple elements per sample. If the field "size_average" is
set to "False", the losses are instead summed for each
minibatch. Ignored when reduce is "False". Default: "True"
* **ignore_index** (*int**, **optional*) -- Specifies a target
value that is ignored and does not contribute to the input
gradient. When "size_average" is "True", the loss is averaged
over non-ignored targets. Note that "ignore_index" is only
applicable when the target contains class indices. Default:
-100
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
* **label_smoothing** (*float**, **optional*) -- A float in
[0.0, 1.0]. Specifies the amount of smoothing when computing
the loss, where 0.0 means no smoothing. The targets become a
mixture of the original ground truth and a uniform
distribution as described in Rethinking the Inception
Architecture for Computer Vision. Default: 0.0.
Return type:
Tensor
Shape:
* Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K
\geq 1 in the case of K-dimensional loss.
* Target: If containing class indices, shape (), (N) or (N, d_1,
d_2, ..., d_K) with K \geq 1 in the case of K-dimensional loss
where each value should be between [0, C). If containing class
probabilities, same shape as the input and each value should
be between [0, 1].
where:
\begin{aligned} C ={} & \text{number of classes} \\ N
={} & \text{batch size} \\ \end{aligned}
Examples:
>>> # Example of target with class indices
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randint(5, (3,), dtype=torch.int64)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
>>>
>>> # Example of target with class probabilities
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5).softmax(dim=1)
>>> loss = F.cross_entropy(input, target)
>>> loss.backward()
| https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html | pytorch docs |
torch.arctanh
torch.arctanh(input, *, out=None) -> Tensor
Alias for "torch.atanh()". | https://pytorch.org/docs/stable/generated/torch.arctanh.html | pytorch docs |
torch.conj
torch.conj(input) -> Tensor
Returns a view of "input" with a flipped conjugate bit. If "input"
has a non-complex dtype, this function just returns "input".
Note:
"torch.conj()" performs a lazy conjugation, but the actual
conjugated tensor can be materialized at any time using
"torch.resolve_conj()".
Warning:
In the future, "torch.conj()" may return a non-writeable view for
an "input" of non-complex dtype. It's recommended that programs
not modify the tensor returned by "torch.conj_physical()" when
"input" is of non-complex dtype to be compatible with this
change.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])
>>> x.is_conj()
False
>>> y = torch.conj(x)
>>> y.is_conj()
True
| https://pytorch.org/docs/stable/generated/torch.conj.html | pytorch docs |
torch.Tensor.logical_and
Tensor.logical_and() -> Tensor
See "torch.logical_and()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logical_and.html | pytorch docs |
torch.Tensor.sinh
Tensor.sinh() -> Tensor
See "torch.sinh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sinh.html | pytorch docs |
DeQuantStub
class torch.quantization.DeQuantStub(qconfig=None)
Dequantize stub module. Before calibration this is the same as
identity; it will be swapped as nnq.DeQuantize in convert.
Parameters:
qconfig -- quantization configuration for the tensor, if
qconfig is not provided, we will get qconfig from parent modules | https://pytorch.org/docs/stable/generated/torch.quantization.DeQuantStub.html | pytorch docs |
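A minimal sketch of the usual pattern (editor's addition; the conv layer is arbitrary): QuantStub and DeQuantStub bracket the region of the model to be quantized.
>>> from torch.quantization import QuantStub, DeQuantStub
>>> class M(torch.nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.quant = QuantStub()      # entry point to quantized region
...         self.conv = torch.nn.Conv2d(1, 1, 1)
...         self.dequant = DeQuantStub()  # exit point back to float
...     def forward(self, x):
...         return self.dequant(self.conv(self.quant(x)))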
torch.nn.utils.remove_weight_norm
torch.nn.utils.remove_weight_norm(module, name='weight')
Removes the weight normalization reparameterization from a module.
Parameters:
* module (Module) -- containing module
* **name** (*str**, **optional*) -- name of weight parameter
Return type:
T_module
-[ Example ]-
>>> m = weight_norm(nn.Linear(20, 40))
>>> remove_weight_norm(m)
| https://pytorch.org/docs/stable/generated/torch.nn.utils.remove_weight_norm.html | pytorch docs |
StepLR
class torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=-1, verbose=False)
Decays the learning rate of each parameter group by gamma every
step_size epochs. Notice that such decay can happen simultaneously
with other changes to the learning rate from outside this
scheduler. When last_epoch=-1, sets initial lr as lr.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **step_size** (*int*) -- Period of learning rate decay.
* **gamma** (*float*) -- Multiplicative factor of learning rate
decay. Default: 0.1.
* **last_epoch** (*int*) -- The index of last epoch. Default:
-1.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
# Assuming optimizer uses lr = 0.05 for all groups
# lr = 0.05     if epoch < 30
# lr = 0.005    if 30 <= epoch < 60
# lr = 0.0005   if 60 <= epoch < 90
# ...
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html | pytorch docs |
scheduler = StepLR(optimizer, step_size=30, gamma=0.1)
for epoch in range(100):
train(...)
validate(...)
scheduler.step()
get_last_lr()
Return last computed learning rate by current scheduler.
load_state_dict(state_dict)
Loads the schedulers state.
Parameters:
**state_dict** (*dict*) -- scheduler state. Should be an
object returned from a call to "state_dict()".
print_lr(is_verbose, group, lr, epoch=None)
Display the current learning rate.
state_dict()
Returns the state of the scheduler as a "dict".
It contains an entry for every variable in self.__dict__ which
is not the optimizer.
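A minimal sketch of checkpointing a scheduler with these methods
(persisting and restoring the model and optimizer is elided):
    checkpoint = scheduler.state_dict()     # serializable, e.g. via torch.save
    restored = StepLR(optimizer, step_size=30, gamma=0.1)
    restored.load_state_dict(checkpoint)    # resumes the decay schedule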
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html | pytorch docs |
PReLU
class torch.nn.PReLU(num_parameters=1, init=0.25, device=None, dtype=None)
Applies the element-wise function:
\text{PReLU}(x) = \max(0,x) + a * \min(0,x)
or
\text{PReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ ax,
& \text{ otherwise } \end{cases}
Here a is a learnable parameter. When called without arguments,
nn.PReLU() uses a single parameter a across all input channels.
If called with nn.PReLU(nChannels), a separate a is used for each
input channel.
Note:
weight decay should not be used when learning a for good
performance.
Note:
Channel dim is the 2nd dim of input. When input has dims < 2,
then there is no channel dim and the number of channels = 1.
Parameters:
* num_parameters (int) -- number of a to learn. Although
  it takes an int as input, only two values are
  legitimate: 1, or the number of channels of the input. Default: 1
* init (float) -- the initial value of a. Default: 0.25
Shape:
* Input: (*) where * means any number of additional
  dimensions.
* Output: (*), same shape as the input.
Variables:
weight (Tensor) -- the learnable weights of shape
("num_parameters").
Examples:
>>> import torch.nn as nn
>>> m = nn.PReLU()
>>> input = torch.randn(2)
>>> output = m(input)
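The per-channel form described above can be checked directly;
"weight" then holds one a per input channel:
    >>> m = nn.PReLU(3)                  # one learnable a per channel
    >>> input = torch.randn(4, 3, 8, 8)  # channel dim is the 2nd dim
    >>> output = m(input)
    >>> m.weight.shape
    torch.Size([3])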
| https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html | pytorch docs |
torch.transpose
torch.transpose(input, dim0, dim1) -> Tensor
Returns a tensor that is a transposed version of "input". The given
dimensions "dim0" and "dim1" are swapped.
If "input" is a strided tensor then the resulting "out" tensor
shares its underlying storage with the "input" tensor, so changing
the content of one would change the content of the other.
If "input" is a sparse tensor then the resulting "out" tensor does
not share the underlying storage with the "input" tensor.
If "input" is a sparse tensor with compressed layout (SparseCSR,
SparseBSR, SparseCSC or SparseBSC) the arguments "dim0" and "dim1"
must be both batch dimensions, or must both be sparse dimensions.
The batch dimensions of a sparse tensor are the dimensions
preceding the sparse dimensions.
Note:
Transpositions which interchange the sparse dimensions of a
*SparseCSR* or *SparseCSC* layout tensor will result in the
| https://pytorch.org/docs/stable/generated/torch.transpose.html | pytorch docs |
layout changing between the two options. Transposition of the
sparse dimensions of a SparseBSR or SparseBSC layout tensor
will likewise generate a result with the opposite layout.
Parameters:
* input (Tensor) -- the input tensor.
* **dim0** (*int*) -- the first dimension to be transposed
* **dim1** (*int*) -- the second dimension to be transposed
Example:
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 1.0028, -0.9893, 0.5809],
[-0.1669, 0.7299, 0.4942]])
>>> torch.transpose(x, 0, 1)
tensor([[ 1.0028, -0.1669],
[-0.9893, 0.7299],
[ 0.5809, 0.4942]])
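The storage sharing described above for strided tensors can be
verified directly; a small sketch:
    >>> x = torch.zeros(2, 3)
    >>> y = torch.transpose(x, 0, 1)
    >>> y[0, 1] = 7.0    # writes through the shared storage
    >>> x[1, 0]
    tensor(7.)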
See also "torch.t()". | https://pytorch.org/docs/stable/generated/torch.transpose.html | pytorch docs |
torch.cdist
torch.cdist(x1, x2, p=2.0, compute_mode='use_mm_for_euclid_dist_if_necessary')
Computes the batched p-norm distance between each pair of the two
collections of row vectors.
Parameters:
* x1 (Tensor) -- input tensor of shape B \times P \times
M.
* **x2** (*Tensor*) -- input tensor of shape B \times R \times
M.
* **p** (*float*) -- p value for the p-norm distance to
calculate between each vector pair \in [0, \infty].
* **compute_mode** (*str*) --
'use_mm_for_euclid_dist_if_necessary' - will use the matrix
multiplication approach to calculate euclidean distance (p = 2)
if P > 25 or R > 25
'use_mm_for_euclid_dist' - will always use the matrix
multiplication approach to calculate euclidean distance (p = 2)
'donot_use_mm_for_euclid_dist' - will never use the matrix
multiplication approach to calculate euclidean distance (p = 2)
Default: 'use_mm_for_euclid_dist_if_necessary'.
| https://pytorch.org/docs/stable/generated/torch.cdist.html | pytorch docs |
Return type:
Tensor
If x1 has shape B \times P \times M and x2 has shape B \times R
\times M then the output will have shape B \times P \times R.
This function is equivalent to
scipy.spatial.distance.cdist(input, 'minkowski', p=p) if p \in (0,
\infty). When p = 0 it is equivalent to
scipy.spatial.distance.cdist(input, 'hamming') * M. When p =
\infty, the closest scipy function is
scipy.spatial.distance.cdist(xn, lambda x, y: np.abs(x -
y).max()).
-[ Example ]-
a = torch.tensor([[0.9041, 0.0196], [-0.3108, -2.4423], [-0.4821, 1.059]])
a
tensor([[ 0.9041, 0.0196],
[-0.3108, -2.4423],
[-0.4821, 1.0590]])
b = torch.tensor([[-2.1763, -0.4713], [-0.6986, 1.3702]])
b
tensor([[-2.1763, -0.4713],
[-0.6986, 1.3702]])
torch.cdist(a, b, p=2)
tensor([[3.1193, 2.0959],
[2.7138, 3.8322],
[2.2830, 0.3791]])
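As a sanity check, the first entry above equals the plain Euclidean
distance between a[0] and b[0]:
    torch.dist(a[0], b[0], p=2)
    tensor(3.1193)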
| https://pytorch.org/docs/stable/generated/torch.cdist.html | pytorch docs |
torch.split
torch.split(tensor, split_size_or_sections, dim=0)
Splits the tensor into chunks. Each chunk is a view of the original
tensor.
If "split_size_or_sections" is an integer type, then "tensor" will
be split into equally sized chunks (if possible). Last chunk will
be smaller if the tensor size along the given dimension "dim" is
not divisible by "split_size".
If "split_size_or_sections" is a list, then "tensor" will be split
into "len(split_size_or_sections)" chunks with sizes in "dim"
according to "split_size_or_sections".
Parameters:
* tensor (Tensor) -- tensor to split.
* **split_size_or_sections** (*int*) or (*list*(*int*))
  -- size of a single chunk or list of sizes for each chunk
* **dim** (*int*) -- dimension along which to split the tensor.
Return type:
List[Tensor]
Example:
>>> a = torch.arange(10).reshape(5, 2)
>>> a
| https://pytorch.org/docs/stable/generated/torch.split.html | pytorch docs |
tensor([[0, 1],
[2, 3],
[4, 5],
[6, 7],
[8, 9]])
>>> torch.split(a, 2)
(tensor([[0, 1],
[2, 3]]),
tensor([[4, 5],
[6, 7]]),
tensor([[8, 9]]))
>>> torch.split(a, [1, 4])
(tensor([[0, 1]]),
tensor([[2, 3],
[4, 5],
[6, 7],
[8, 9]])) | https://pytorch.org/docs/stable/generated/torch.split.html | pytorch docs |
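Splitting along a different dimension works the same way; a brief
sketch reusing the same a:
    >>> torch.split(a, 1, dim=1)
    (tensor([[0],
            [2],
            [4],
            [6],
            [8]]),
     tensor([[1],
            [3],
            [5],
            [7],
            [9]]))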