text | source | category
---|---|---|
torch.Tensor.sinh_
Tensor.sinh_() -> Tensor
In-place version of "sinh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sinh_.html | pytorch docs |
ConvReLU1d
class torch.ao.nn.intrinsic.ConvReLU1d(conv, relu)
This is a sequential container which calls the Conv1d and ReLU
modules. During quantization this will be replaced with the
corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU1d.html | pytorch docs |
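A minimal usage sketch for ConvReLU1d (hedged; the layer sizes below are arbitrary assumptions): the container is built from an existing Conv1d/ReLU pair and behaves like calling them in sequence.
    import torch
    from torch import nn
    from torch.ao.nn.intrinsic import ConvReLU1d

    # Wrap an existing Conv1d/ReLU pair in the fused container.
    conv = nn.Conv1d(3, 8, kernel_size=3)
    relu = nn.ReLU()
    fused = ConvReLU1d(conv, relu)

    x = torch.randn(1, 3, 16)
    out = fused(x)   # equivalent to relu(conv(x)) before quantization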
torch.cuda.jiterator._create_jit_fn
torch.cuda.jiterator._create_jit_fn(code_string, **kwargs)
Create a jiterator-generated cuda kernel for an elementwise op.
The code string has to be a valid CUDA function that describes the
computation for a single element. It has to follow the C++ template
pattern, as shown in the example below. The function will be inlined
into the elementwise kernel template and compiled on the fly. The
compiled kernel is cached in memory, as well as in a local temp dir.
Jiterator-generated kernels accept noncontiguous tensors and
support broadcasting and type promotion.
Parameters:
* code_string (str) -- CUDA code string to be compiled by
jiterator. The entry functor must return by value.
* **kwargs** (*Dict**, **optional*) -- Keyword arguments for
generated function
Return type:
Callable
Example: | https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html | pytorch docs |
code_string = "template <typename T> T my_kernel(T x, T y, T alpha) { return -x + alpha * y; }"
jitted_fn = create_jit_fn(code_string, alpha=1.0)
a = torch.rand(3, device='cuda')
b = torch.rand(3, device='cuda')
# invoke jitted function like a regular python function
result = jitted_fn(a, b, alpha=3.14)
code_string also allows multiple function definitions, and the last
function will be treated as the entry function.
Example:
code_string = "template <typename T> T util_fn(T x, T y) { return ::sin(x) + ::cos(y); }"
code_string += "template <typename T> T my_kernel(T x, T y, T val) { return ::min(val, util_fn(x, y)); }"
jitted_fn = create_jit_fn(code_string, val=0.0)
a = torch.rand(3, device='cuda')
b = torch.rand(3, device='cuda')
# invoke jitted function like a regular python function
result = jitted_fn(a, b) # using default val=0.0
| https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html | pytorch docs |
Jiterator can be used together with Python registration to override
an operator's CUDA kernel. The following example overrides gelu's
CUDA kernel with relu.
Example:
code_string = "template <typename T> T my_gelu(T a) { return a > 0 ? a : 0; }"
my_gelu = create_jit_fn(code_string)
my_lib = torch.library.Library("aten", "IMPL")
my_lib.impl('aten::gelu', my_gelu, "CUDA")
# torch.nn.GELU and torch.nn.functional.gelu are now overridden
a = torch.rand(3, device='cuda')
torch.allclose(torch.nn.functional.gelu(a), torch.nn.functional.relu(a))
Warning:
This API is in beta and may change in future releases.
Warning:
This API only supports up to 8 inputs and 1 output.
Warning:
All input tensors must be on a CUDA device.
| https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html | pytorch docs |
default_fake_quant
torch.quantization.fake_quantize.default_fake_quant
alias of functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>,
observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
quant_min=0, quant_max=255, dtype=torch.quint8,
qscheme=torch.per_tensor_affine, reduce_range=True){} | https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fake_quant.html | pytorch docs |
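A small, hedged illustration of how this alias might be used: calling the partial constructs a FakeQuantize module that observes its input and fake-quantizes it with the settings listed above.
    import torch
    from torch.quantization.fake_quantize import default_fake_quant

    fq = default_fake_quant()   # instantiate the partial -> FakeQuantize module
    x = torch.randn(4)
    y = fq(x)                   # observes x, then fake-quantizes it to 8-bit affine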
torch.logaddexp
torch.logaddexp(input, other, *, out=None) -> Tensor
Logarithm of the sum of exponentiations of the inputs.
Calculates pointwise \log\left(e^x + e^y\right). This function is
useful in statistics where the calculated probabilities of events
may be so small as to fall below the range of normal floating point
numbers. In such cases the logarithm of the calculated probability
is stored. This function allows adding probabilities stored in such
a fashion.
This op should not be confused with "torch.logsumexp()", which
performs a reduction over a single tensor.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.logaddexp(torch.tensor([-1.0]), torch.tensor([-1.0, -2, -3]))
tensor([-0.3069, -0.6867, -0.8731])
| https://pytorch.org/docs/stable/generated/torch.logaddexp.html | pytorch docs |
>>> torch.logaddexp(torch.tensor([-100.0, -200, -300]), torch.tensor([-1.0, -2, -3]))
tensor([-1., -2., -3.])
>>> torch.logaddexp(torch.tensor([1.0, 2000, 30000]), torch.tensor([-1.0, -2, -3]))
tensor([1.1269e+00, 2.0000e+03, 3.0000e+04]) | https://pytorch.org/docs/stable/generated/torch.logaddexp.html | pytorch docs |
torch.nn.utils.prune.random_structured
torch.nn.utils.prune.random_structured(module, name, amount, dim)
Prunes the tensor corresponding to the parameter called "name" in
"module" by removing the specified "amount" of (currently unpruned)
channels along the specified "dim", selected at random. Modifies the
module in place (and also returns the modified module) by:
1. adding a named buffer called "name+'_mask'" corresponding to the
binary mask applied to the parameter "name" by the pruning
method.
2. replacing the parameter "name" with its pruned version, while the
original (unpruned) parameter is stored in a new parameter named
"name+'_orig'".
Parameters:
* module (nn.Module) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **amount** (*int** or **float*) -- quantity of parameters to
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_structured.html | pytorch docs |
prune. If "float", should be between 0.0 and 1.0 and represent
the fraction of parameters to prune. If "int", it represents
the absolute number of parameters to prune.
* **dim** (*int*) -- index of the dim along which we define
channels to prune.
Returns:
modified (i.e. pruned) version of the input module
Return type:
module (nn.Module)
-[ Examples ]-
m = prune.random_structured(
... nn.Linear(5, 3), 'weight', amount=3, dim=1
... )
columns_pruned = int(sum(torch.sum(m.weight, dim=0) == 0))
print(columns_pruned)
3
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_structured.html | pytorch docs |
torch.argmax
torch.argmax(input) -> LongTensor
Returns the indices of the maximum value of all elements in the
"input" tensor.
This is the second value returned by "torch.max()". See its
documentation for the exact semantics of this method.
Note:
If there are multiple maximal values then the indices of the
first maximal value are returned.
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],
[-0.7401, -0.8805, -0.3402, -1.1936],
[ 0.4907, -1.3948, -1.0691, -0.3132],
[-1.6092, 0.5419, -0.2993, 0.3195]])
>>> torch.argmax(a)
tensor(0)
torch.argmax(input, dim, keepdim=False) -> LongTensor
Returns the indices of the maximum values of a tensor across a
dimension.
This is the second value returned by "torch.max()". See its
documentation for the exact semantics of this method. | https://pytorch.org/docs/stable/generated/torch.argmax.html | pytorch docs |
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to reduce. If "None", the
argmax of the flattened input is returned.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not. Ignored if "dim=None".
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],
[-0.7401, -0.8805, -0.3402, -1.1936],
[ 0.4907, -1.3948, -1.0691, -0.3132],
[-1.6092, 0.5419, -0.2993, 0.3195]])
>>> torch.argmax(a, dim=1)
tensor([ 0, 2, 0, 1])
| https://pytorch.org/docs/stable/generated/torch.argmax.html | pytorch docs |
QuantWrapper
class torch.quantization.QuantWrapper(module)
A wrapper class that wraps the input module, adds QuantStub and
DeQuantStub, and surrounds the call to the module with calls to the
quant and dequant modules.
This is used by the quantization utility functions to add the
quant and dequant modules. Before the convert function is called,
QuantStub is just an observer: it observes the input tensor. After
convert, QuantStub is swapped to nnq.Quantize, which does the
actual quantization. Similarly for DeQuantStub. | https://pytorch.org/docs/stable/generated/torch.quantization.QuantWrapper.html | pytorch docs |
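A hedged sketch of what wrapping looks like (the inner model here is an arbitrary assumption): the wrapper exposes "quant", "module" and "dequant" submodules, and its forward runs quant -> module -> dequant.
    import torch
    from torch import nn
    from torch.quantization import QuantWrapper

    float_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
    wrapped = QuantWrapper(float_model)   # adds .quant, .module and .dequant

    x = torch.randn(1, 3, 8, 8)
    out = wrapped(x)   # before prepare/convert the stubs are pass-through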
torch.Tensor.dsplit
Tensor.dsplit(split_size_or_sections) -> List of Tensors
See "torch.dsplit()" | https://pytorch.org/docs/stable/generated/torch.Tensor.dsplit.html | pytorch docs |
torch.Tensor.gt_
Tensor.gt_(other) -> Tensor
In-place version of "gt()". | https://pytorch.org/docs/stable/generated/torch.Tensor.gt_.html | pytorch docs |
torch.Tensor.sign
Tensor.sign() -> Tensor
See "torch.sign()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sign.html | pytorch docs |
AdaptiveAvgPool3d
class torch.nn.AdaptiveAvgPool3d(output_size)
Applies a 3D adaptive average pooling over an input signal composed
of several input planes.
The output is of size D x H x W, for any input size. The number of
output features is equal to the number of input planes.
Parameters:
output_size (Union[int, None,
Tuple[Optional[int], Optional[int],
Optional[int]]]) -- the target output size of the
form D x H x W. Can be a tuple (D, H, W) or a single number D
for a cube D x D x D. D, H and W can each be either an "int" or
"None", which means the size will be the same as that of the
input.
Shape:
* Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},
W_{in}).
* Output: (N, C, S_{0}, S_{1}, S_{2}) or (C, S_{0}, S_{1},
S_{2}), where S=\text{output\_size}.
-[ Examples ]-
target output size of 5x7x9
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool3d.html | pytorch docs |
m = nn.AdaptiveAvgPool3d((5, 7, 9))
input = torch.randn(1, 64, 8, 9, 10)
output = m(input)
target output size of 7x7x7 (cube)
m = nn.AdaptiveAvgPool3d(7)
input = torch.randn(1, 64, 10, 9, 8)
output = m(input)
target output size of 7x9x8
m = nn.AdaptiveAvgPool3d((7, None, None))
input = torch.randn(1, 64, 10, 9, 8)
output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool3d.html | pytorch docs |
torch.bitwise_right_shift
torch.bitwise_right_shift(input, other, *, out=None) -> Tensor
Computes the right arithmetic shift of "input" by "other" bits. The
input tensor must be of integral type. This operator supports
broadcasting to a common shape and type promotion.
The operation applied is:
\text{out}_i = \text{input}_i >> \text{other}_i
Parameters:
* input (Tensor or Scalar) -- the first input tensor
* **other** (*Tensor** or **Scalar*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.bitwise_right_shift(torch.tensor([-2, -7, 31], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))
tensor([-1, -7, 3], dtype=torch.int8)
| https://pytorch.org/docs/stable/generated/torch.bitwise_right_shift.html | pytorch docs |
torch.Tensor.normal_
Tensor.normal_(mean=0, std=1, *, generator=None) -> Tensor
Fills "self" tensor with elements samples from the normal
distribution parameterized by "mean" and "std". | https://pytorch.org/docs/stable/generated/torch.Tensor.normal_.html | pytorch docs |
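For example (a minimal sketch; the sampled values will of course vary):
    >>> import torch
    >>> x = torch.empty(3)
    >>> x.normal_(mean=0.0, std=2.0)   # fills x in place with samples from N(0, 2**2)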
hardsigmoid
class torch.ao.nn.quantized.functional.hardsigmoid(input, inplace=False)
This is the quantized version of "hardsigmoid()".
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardsigmoid.html | pytorch docs |
celu
class torch.ao.nn.quantized.functional.celu(input, scale, zero_point, alpha=1.)
Applies the quantized CELU function element-wise.
\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x / \alpha)
- 1))
Parameters:
* input (Tensor) -- quantized input
* **alpha** (*float*) -- the \alpha value for the CELU
formulation. Default: 1.0
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.celu.html | pytorch docs |
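A hedged usage sketch for the quantized celu (the scale and zero_point below are arbitrary assumptions): the input must already be a quantized tensor, e.g. produced by "torch.quantize_per_tensor()".
    import torch
    from torch.ao.nn.quantized.functional import celu

    x = torch.randn(4)
    qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=64, dtype=torch.quint8)
    qy = celu(qx, scale=0.1, zero_point=64, alpha=1.0)   # quantized CELU, elementwise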
torch.nn.functional.huber_loss
torch.nn.functional.huber_loss(input, target, reduction='mean', delta=1.0)
Function that uses a squared term if the absolute element-wise
error falls below delta and a delta-scaled L1 term otherwise.
See "HuberLoss" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.huber_loss.html | pytorch docs |
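A short, hedged example of the functional form (the shapes are arbitrary assumptions):
    import torch
    import torch.nn.functional as F

    input = torch.randn(8, requires_grad=True)
    target = torch.randn(8)
    loss = F.huber_loss(input, target, reduction='mean', delta=1.0)
    loss.backward()   # quadratic below delta, delta-scaled L1 above it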
torch.linalg.lu
torch.linalg.lu(A, *, pivot=True, out=None)
Computes the LU decomposition with partial pivoting of a matrix.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the LU
decomposition with partial pivoting of a matrix A \in
\mathbb{K}^{m \times n} is defined as
A = PLU\mathrlap{\qquad P \in \mathbb{K}^{m \times m}, L \in
\mathbb{K}^{m \times k}, U \in \mathbb{K}^{k \times n}}
where k = min(m,n), P is a permutation matrix, L is lower
triangular with ones on the diagonal and U is upper triangular.
If "pivot"= False and "A" is on GPU, then the LU decomposition
without pivoting is computed
A = LU\mathrlap{\qquad L \in \mathbb{K}^{m \times k}, U \in
\mathbb{K}^{k \times n}}
When "pivot"= False, the returned matrix "P" will be empty. The
LU decomposition without pivoting may not exist if any of the
principal minors of "A" is singular. In this case, the output
matrix may contain inf or NaN. | https://pytorch.org/docs/stable/generated/torch.linalg.lu.html | pytorch docs |
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
See also:
"torch.linalg.solve()" solves a system of linear equations using
the LU decomposition with partial pivoting.
Warning:
The LU decomposition is almost never unique, as often there are
different permutation matrices that can yield different LU
decompositions. As such, different platforms, like SciPy, or
inputs on different devices, may produce different valid
decompositions.
Warning:
Gradient computations are only supported if the input matrix is
full-rank. If this condition is not met, no error will be thrown,
but the gradient may not be finite. This is because the LU
decomposition with pivoting is not differentiable at these
points.
Parameters: | https://pytorch.org/docs/stable/generated/torch.linalg.lu.html | pytorch docs |
* A (Tensor) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
* **pivot** (*bool**, **optional*) -- Controls whether to
compute the LU decomposition with partial pivoting or no
pivoting. Default: *True*.
Keyword Arguments:
out (tuple, optional) -- output tuple of three
tensors. Ignored if None. Default: None.
Returns:
A named tuple (P, L, U).
Examples:
>>> A = torch.randn(3, 2)
>>> P, L, U = torch.linalg.lu(A)
>>> P
tensor([[0., 1., 0.],
[0., 0., 1.],
[1., 0., 0.]])
>>> L
tensor([[1.0000, 0.0000],
[0.5007, 1.0000],
[0.0633, 0.9755]])
>>> U
tensor([[0.3771, 0.0489],
[0.0000, 0.9644]])
>>> torch.dist(A, P @ L @ U)
tensor(5.9605e-08)
>>> A = torch.randn(2, 5, 7, device="cuda")
>>> P, L, U = torch.linalg.lu(A, pivot=False)
| https://pytorch.org/docs/stable/generated/torch.linalg.lu.html | pytorch docs |
>>> P
tensor([], device='cuda:0')
>>> torch.dist(A, L @ U)
tensor(1.0376e-06, device='cuda:0')
| https://pytorch.org/docs/stable/generated/torch.linalg.lu.html | pytorch docs |
torch.Tensor.data_ptr
Tensor.data_ptr() -> int
Returns the address of the first element of "self" tensor. | https://pytorch.org/docs/stable/generated/torch.Tensor.data_ptr.html | pytorch docs |
quantize
class torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False)
Quantize the input float model with post training static
quantization.
First it prepares the model for calibration, then it calls run_fn,
which runs the calibration step, and after that it converts the
model to a quantized model.
Parameters:
* model -- input float model
* **run_fn** -- a calibration function for calibrating the
prepared model
* **run_args** -- positional arguments for *run_fn*
* **inplace** -- carry out model transformations in-place, the
original module is mutated
* **mapping** -- correspondence between original module types
and quantized counterparts
Returns:
Quantized model. | https://pytorch.org/docs/stable/generated/torch.quantization.quantize.html | pytorch docs |
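A hedged end-to-end sketch (the model, qconfig choice and calibration data are illustrative assumptions, not part of the original docs): the float model is wrapped so quant/dequant stubs exist, a qconfig is attached, and quantize() runs prepare -> calibration -> convert.
    import torch
    from torch import nn
    from torch.quantization import quantize, QuantWrapper, get_default_qconfig

    # Hypothetical float model with quant/dequant stubs added by QuantWrapper.
    model = QuantWrapper(nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()))
    model.qconfig = get_default_qconfig('fbgemm')
    model.eval()

    def calibrate(m, data):
        # run_fn: feed representative data through the prepared model
        with torch.no_grad():
            for x in data:
                m(x)

    calib_data = [torch.randn(1, 3, 16, 16) for _ in range(4)]
    qmodel = quantize(model, calibrate, [calib_data])   # run_args are unpacked into run_fn
    out = qmodel(torch.randn(1, 3, 16, 16))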
torch.log1p
torch.log1p(input, *, out=None) -> Tensor
Returns a new tensor with the natural logarithm of (1 + "input").
y_i = \log_{e} (x_i + 1)
Note:
This function is more accurate than "torch.log()" for small
values of "input"
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(5)
>>> a
tensor([-1.0090, -0.9923, 1.0249, -0.5372, 0.2492])
>>> torch.log1p(a)
tensor([ nan, -4.8653, 0.7055, -0.7705, 0.2225])
| https://pytorch.org/docs/stable/generated/torch.log1p.html | pytorch docs |
torch.diagflat
torch.diagflat(input, offset=0) -> Tensor
If "input" is a vector (1-D tensor), then returns a 2-D square
tensor with the elements of "input" as the diagonal.
If "input" is a tensor with more than one dimension, then returns
a 2-D tensor with diagonal elements equal to a flattened "input".
The argument "offset" controls which diagonal to consider:
If "offset" = 0, it is the main diagonal.
If "offset" > 0, it is above the main diagonal.
If "offset" < 0, it is below the main diagonal.
Parameters:
* input (Tensor) -- the input tensor.
* **offset** (*int**, **optional*) -- the diagonal to consider.
Default: 0 (main diagonal).
Examples:
>>> a = torch.randn(3)
>>> a
tensor([-0.2956, -0.9068, 0.1695])
>>> torch.diagflat(a)
tensor([[-0.2956, 0.0000, 0.0000],
[ 0.0000, -0.9068, 0.0000],
[ 0.0000, 0.0000, 0.1695]])
>>> torch.diagflat(a, 1)
| https://pytorch.org/docs/stable/generated/torch.diagflat.html | pytorch docs |
tensor([[ 0.0000, -0.2956, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.9068, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.1695],
[ 0.0000, 0.0000, 0.0000, 0.0000]])
>>> a = torch.randn(2, 2)
>>> a
tensor([[ 0.2094, -0.3018],
[-0.1516, 1.9342]])
>>> torch.diagflat(a)
tensor([[ 0.2094, 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.3018, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.1516, 0.0000],
[ 0.0000, 0.0000, 0.0000, 1.9342]])
| https://pytorch.org/docs/stable/generated/torch.diagflat.html | pytorch docs |
torch.Tensor.masked_scatter_
Tensor.masked_scatter_(mask, source)
Copies elements from "source" into "self" tensor at positions where
the "mask" is True. The shape of "mask" must be broadcastable with
the shape of the underlying tensor. The "source" should have at
least as many elements as the number of ones in "mask".
Parameters:
* mask (BoolTensor) -- the boolean mask
* **source** (*Tensor*) -- the tensor to copy from
Note:
The "mask" operates on the "self" tensor, not on the given
"source" tensor.
| https://pytorch.org/docs/stable/generated/torch.Tensor.masked_scatter_.html | pytorch docs |
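A small, hedged example (tensors chosen purely for illustration): source values are written, in order, at the True positions of the mask.
    >>> import torch
    >>> t = torch.zeros(2, 3)
    >>> mask = torch.tensor([[True, False, True], [False, True, False]])
    >>> source = torch.tensor([1., 2., 3., 4., 5., 6.])
    >>> t.masked_scatter_(mask, source)
    tensor([[1., 0., 2.],
            [0., 3., 0.]])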
dual_level
class torch.autograd.forward_ad.dual_level
Context-manager that enables forward AD. All forward AD computation
must be performed in a "dual_level" context.
Note:
The "dual_level" context appropriately enters and exit the dual
level to controls the current forward AD level, which is used by
default by the other functions in this API.We currently don't
plan to support nested "dual_level" contexts, however, so only a
single forward AD level is supported. To compute higher-order
forward grads, one can use "torch.func.jvp()".
Example:
>>> x = torch.tensor([1])
>>> x_t = torch.tensor([1])
>>> with dual_level():
... inp = make_dual(x, x_t)
... # Do computations with inp
... out = your_fn(inp)
... _, grad = unpack_dual(out)
>>> grad is None
False
>>> # After exiting the level, the grad is deleted
>>> _, grad_after = unpack_dual(out)
>>> grad is None
True
| https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.dual_level.html | pytorch docs |
Please see the forward-mode AD tutorial for detailed steps on how
to use this API. | https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.dual_level.html | pytorch docs |
torch.Tensor.share_memory_
Tensor.share_memory_()
Moves the underlying storage to shared memory.
This is a no-op if the underlying storage is already in shared
memory and for CUDA tensors. Tensors in shared memory cannot be
resized. | https://pytorch.org/docs/stable/generated/torch.Tensor.share_memory_.html | pytorch docs |
LBFGS
class torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None)
Implements the L-BFGS algorithm, heavily inspired by minFunc.
Warning:
This optimizer doesn't support per-parameter options and
parameter groups (there can be only one).
Warning:
Right now all parameters have to be on a single device. This will
be improved in the future.
Note:
This is a very memory intensive optimizer (it requires additional
"param_bytes * (history_size + 1)" bytes). If it doesn't fit in
memory try reducing the history size, or use a different
algorithm.
Parameters:
* lr (float) -- learning rate (default: 1)
* **max_iter** (*int*) -- maximal number of iterations per
optimization step (default: 20)
* **max_eval** (*int*) -- maximal number of function evaluations
per optimization step (default: max_iter * 1.25).
| https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html | pytorch docs |
* **tolerance_grad** (*float*) -- termination tolerance on first
order optimality (default: 1e-7).
* **tolerance_change** (*float*) -- termination tolerance on
function value/parameter changes (default: 1e-9).
* **history_size** (*int*) -- update history size (default: 100).
* **line_search_fn** (*str*) -- either 'strong_wolfe' or None
(default: None).
add_param_group(param_group)
Add a param group to the "Optimizer" s *param_groups*.
This can be useful when fine tuning a pre-trained network as
frozen layers can be made trainable and added to the "Optimizer"
as training progresses.
Parameters:
**param_group** (*dict*) -- Specifies what Tensors should be
optimized along with group specific optimization options.
load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
**state_dict** (*dict*) -- optimizer state. Should be an
| https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html | pytorch docs |
object returned from a call to "state_dict()".
register_step_post_hook(hook)
Register an optimizer step post hook which will be called after
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The "optimizer" argument is the optimizer instance being used.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
register_step_pre_hook(hook)
Register an optimizer step pre hook which will be called before
optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The "optimizer" argument is the optimizer instance being used.
If args and kwargs are modified by the pre-hook, then the
| https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html | pytorch docs |
transformed values are returned as a tuple containing the
new_args and new_kwargs.
Parameters:
**hook** (*Callable*) -- The user defined hook to be
registered.
Returns:
a handle that can be used to remove the added hook by calling
"handle.remove()"
Return type:
"torch.utils.hooks.RemoveableHandle"
state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
* state - a dict holding current optimization state. Its content
differs between optimizer classes.
* param_groups - a list containing all parameter groups where
each parameter group is a dict
step(closure)
Performs a single optimization step.
Parameters:
**closure** (*Callable*) -- A closure that reevaluates the
model and returns the loss.
zero_grad(set_to_none=False)
Sets the gradients of all optimized "torch.Tensor" s to zero.
Parameters:
| https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html | pytorch docs |
set_to_none (bool) -- instead of setting to zero, set
the grads to None. This will in general have lower memory
footprint, and can modestly improve performance. However, it
changes certain behaviors. For example: 1. When the user
tries to access a gradient and perform manual ops on it, a
None attribute or a Tensor full of 0s will behave
differently. 2. If the user requests
"zero_grad(set_to_none=True)" followed by a backward pass,
".grad"s are guaranteed to be None for params that did not
receive a gradient. 3. "torch.optim" optimizers have a
different behavior if the gradient is 0 or None (in one case
it does the step with a gradient of 0 and in the other it
skips the step altogether). | https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html | pytorch docs |
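A hedged usage sketch (the model, data and hyperparameters are illustrative assumptions): unlike most optimizers, "step()" must be passed a closure that re-evaluates the loss, because L-BFGS needs multiple function evaluations per optimization step.
    import torch
    from torch import nn
    import torch.nn.functional as F

    model = nn.Linear(10, 1)
    x, y = torch.randn(32, 10), torch.randn(32, 1)
    optimizer = torch.optim.LBFGS(model.parameters(), lr=1,
                                  line_search_fn='strong_wolfe')

    def closure():
        optimizer.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        return loss

    optimizer.step(closure)   # L-BFGS may call the closure several times internally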
torch.Tensor.addr
Tensor.addr(vec1, vec2, *, beta=1, alpha=1) -> Tensor
See "torch.addr()" | https://pytorch.org/docs/stable/generated/torch.Tensor.addr.html | pytorch docs |
torch.Tensor.type
Tensor.type(dtype=None, non_blocking=False, **kwargs) -> str or Tensor
Returns the type if dtype is not provided, else casts this object
to the specified type.
If this is already of the correct type, no copy is performed and
the original object is returned.
Parameters:
* dtype (dtype or string) -- The desired type
* **non_blocking** (*bool*) -- If "True", and the source is in
pinned memory and destination is on the GPU or vice versa, the
copy is performed asynchronously with respect to the host.
Otherwise, the argument has no effect.
* ****kwargs** -- For compatibility, may contain the key "async"
in place of the "non_blocking" argument. The "async" arg is
deprecated.
| https://pytorch.org/docs/stable/generated/torch.Tensor.type.html | pytorch docs |
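For example (a minimal illustration):
    >>> import torch
    >>> x = torch.ones(2, dtype=torch.int32)
    >>> x.type()
    'torch.IntTensor'
    >>> x.type(torch.float64)
    tensor([1., 1.], dtype=torch.float64)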
torch.Tensor.narrow_copy
Tensor.narrow_copy(dimension, start, length) -> Tensor
See "torch.narrow_copy()". | https://pytorch.org/docs/stable/generated/torch.Tensor.narrow_copy.html | pytorch docs |
LazyInstanceNorm3d
class torch.nn.LazyInstanceNorm3d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
A "torch.nn.InstanceNorm3d" module with lazy initialization of the
"num_features" argument of the "InstanceNorm3d" that is inferred
from the "input.size(1)". The attributes that will be lazily
initialized are weight, bias, running_mean and running_var.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* num_features -- C from an expected input of size (N, C, D,
H, W) or (C, D, H, W)
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
| https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm3d.html | pytorch docs |
"True", this module has learnable affine parameters,
initialized the same way as done for batch normalization.
Default: "False".
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics and always uses batch statistics in both
training and eval modes. Default: "True"
Shape:
* Input: (N, C, D, H, W) or (C, D, H, W)
* Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input)
cls_to_become
alias of "InstanceNorm3d"
| https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm3d.html | pytorch docs |
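A brief, hedged example (the input shape is an arbitrary assumption): "num_features" is inferred from the channel dimension on the first forward pass.
    >>> import torch
    >>> from torch import nn
    >>> m = nn.LazyInstanceNorm3d()
    >>> input = torch.randn(2, 4, 8, 8, 8)
    >>> output = m(input)   # num_features is inferred as 4 here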
ConstantPad2d
class torch.nn.ConstantPad2d(padding, value)
Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use "torch.nn.functional.pad()".
Parameters:
padding (int, tuple) -- the size of the padding. If it is an
int, uses the same padding on all boundaries. If a 4-tuple,
uses (\text{padding_left}, \text{padding_right},
\text{padding_top}, \text{padding_bottom})
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where
H_{out} = H_{in} + \text{padding\_top} +
\text{padding\_bottom}
W_{out} = W_{in} + \text{padding\_left} +
\text{padding\_right}
Examples:
>>> m = nn.ConstantPad2d(2, 3.5)
>>> input = torch.randn(1, 2, 2)
>>> input
tensor([[[ 1.6585, 0.4320],
[-0.8701, -0.4649]]])
>>> m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad2d.html | pytorch docs |
tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 1.6585, 0.4320, 3.5000, 3.5000],
[ 3.5000, 3.5000, -0.8701, -0.4649, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])
>>> # using different paddings for different sides
>>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)
>>> m(input)
tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],
[ 3.5000, 3.5000, 3.5000, 1.6585, 0.4320],
[ 3.5000, 3.5000, 3.5000, -0.8701, -0.4649],
[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])
| https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad2d.html | pytorch docs |
torch.Tensor.polygamma_
Tensor.polygamma_(n) -> Tensor
In-place version of "polygamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.polygamma_.html | pytorch docs |
GRUCell
class torch.nn.GRUCell(input_size, hidden_size, bias=True, device=None, dtype=None)
A gated recurrent unit (GRU) cell
\begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h +
b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\
n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' =
(1 - z) * n + z * h \end{array}
where \sigma is the sigmoid function, and * is the Hadamard
product.
Parameters:
* input_size (int) -- The number of expected features in
the input x
* **hidden_size** (*int*) -- The number of features in the
hidden state *h*
* **bias** (*bool*) -- If "False", then the layer does not use
bias weights *b_ih* and *b_hh*. Default: "True"
Inputs: input, hidden
* input : tensor containing input features
* **hidden** : tensor containing the initial hidden state for
each element in the batch. Defaults to zero if not provided.
Outputs: h' | https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html | pytorch docs |
* h' : tensor containing the next hidden state for each
element in the batch
Shape:
* input: (N, H_{in}) or (H_{in}) tensor containing input
features where H_{in} = input_size.
* hidden: (N, H_{out}) or (H_{out}) tensor containing the
initial hidden state where H_{out} = *hidden_size*. Defaults
to zero if not provided.
* output: (N, H_{out}) or (H_{out}) tensor containing the next
hidden state.
Variables:
* weight_ih (torch.Tensor) -- the learnable input-hidden
weights, of shape *(3*hidden_size, input_size)*
* **weight_hh** (*torch.Tensor*) -- the learnable hidden-hidden
weights, of shape *(3*hidden_size, hidden_size)*
* **bias_ih** -- the learnable input-hidden bias, of shape
*(3*hidden_size)*
* **bias_hh** -- the learnable hidden-hidden bias, of shape
*(3*hidden_size)*
Note:
All the weights and biases are initialized from
| https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html | pytorch docs |
\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{1}{\text{hidden_size}}
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Examples:
>>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
... hx = rnn(input[i], hx)
... output.append(hx)
| https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html | pytorch docs |
torch.erfinv
torch.erfinv(input, *, out=None) -> Tensor
Alias for "torch.special.erfinv()". | https://pytorch.org/docs/stable/generated/torch.erfinv.html | pytorch docs |
torch.Tensor.asin_
Tensor.asin_() -> Tensor
In-place version of "asin()" | https://pytorch.org/docs/stable/generated/torch.Tensor.asin_.html | pytorch docs |
torch.Tensor.smm
Tensor.smm(mat) -> Tensor
See "torch.smm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.smm.html | pytorch docs |
torch.fft.ifftshift
torch.fft.ifftshift(input, dim=None) -> Tensor
Inverse of "fftshift()".
Parameters:
* input (Tensor) -- the tensor in FFT order
* **dim** (*int**, **Tuple**[**int**]**, **optional*) -- The
dimensions to rearrange. Only dimensions specified here will
be rearranged, any other dimensions will be left in their
original order. Default: All dimensions of "input".
-[ Example ]-
f = torch.fft.fftfreq(5)
f
tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])
A round-trip through "fftshift()" and "ifftshift()" gives the same
result:
shifted = torch.fft.fftshift(f)
torch.fft.ifftshift(shifted)
tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])
| https://pytorch.org/docs/stable/generated/torch.fft.ifftshift.html | pytorch docs |
torch.Tensor.repeat
Tensor.repeat(*sizes) -> Tensor
Repeats this tensor along the specified dimensions.
Unlike "expand()", this function copies the tensor's data.
Warning:
"repeat()" behaves differently from numpy.repeat, but is more
similar to numpy.tile. For the operator similar to
*numpy.repeat*, see "torch.repeat_interleave()".
Parameters:
sizes (torch.Size or int...) -- The number of times
to repeat this tensor along each dimension
Example:
>>> x = torch.tensor([1, 2, 3])
>>> x.repeat(4, 2)
tensor([[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3],
[ 1, 2, 3, 1, 2, 3]])
>>> x.repeat(4, 2, 1).size()
torch.Size([4, 2, 3])
| https://pytorch.org/docs/stable/generated/torch.Tensor.repeat.html | pytorch docs |
torch.func.jacrev
torch.func.jacrev(func, argnums=0, *, has_aux=False, chunk_size=None, _preallocate_and_copy=False)
Computes the Jacobian of "func" with respect to the arg(s) at index
"argnum" using reverse mode autodiff
Note:
Using "chunk_size=1" is equivalent to computing the jacobian row-
by-row with a for-loop i.e. the constraints of "vmap()" are not
applicable.
Parameters:
* func (function) -- A Python function that takes one or
more arguments, one of which must be a Tensor, and returns one
or more Tensors
* **argnums** (*int** or **Tuple**[**int**]*) -- Optional,
integer or tuple of integers, saying which arguments to get
the Jacobian with respect to. Default: 0.
* **has_aux** (*bool*) -- Flag indicating that "func" returns a
"(output, aux)" tuple where the first element is the output of
the function to be differentiated and the second element is
| https://pytorch.org/docs/stable/generated/torch.func.jacrev.html | pytorch docs |
auxiliary objects that will not be differentiated. Default:
False.
* **chunk_size** (*None** or **int*) -- If None (default), use
the maximum chunk size (equivalent to doing a single vmap over
vjp to compute the jacobian). If 1, then compute the jacobian
row-by-row with a for-loop. If not None, then compute the
jacobian "chunk_size" rows at a time (equivalent to doing
multiple vmap over vjp). If you run into memory issues
computing the jacobian, please try to specify a non-None
chunk_size.
Returns:
Returns a function that takes in the same inputs as "func" and
returns the Jacobian of "func" with respect to the arg(s) at
"argnums". If "has_aux is True", then the returned function
instead returns a "(jacobian, aux)" tuple where "jacobian" is
the Jacobian and "aux" is auxiliary objects returned by "func".
A basic usage with a pointwise, unary operation will give a
diagonal array as the Jacobian | https://pytorch.org/docs/stable/generated/torch.func.jacrev.html | pytorch docs |
from torch.func import jacrev
x = torch.randn(5)
jacobian = jacrev(torch.sin)(x)
expected = torch.diag(torch.cos(x))
assert torch.allclose(jacobian, expected)
If you would like to compute the output of the function as well as
the jacobian of the function, use the "has_aux" flag to return the
output as an auxiliary object:
from torch.func import jacrev
x = torch.randn(5)
def f(x):
return x.sin()
def g(x):
result = f(x)
return result, result
jacobian_f, f_x = jacrev(g, has_aux=True)(x)
assert torch.allclose(f_x, f(x))
"jacrev()" can be composed with vmap to produce batched Jacobians:
from torch.func import jacrev, vmap
x = torch.randn(64, 5)
jacobian = vmap(jacrev(torch.sin))(x)
assert jacobian.shape == (64, 5, 5)
Additionally, "jacrev()" can be composed with itself to produce
Hessians | https://pytorch.org/docs/stable/generated/torch.func.jacrev.html | pytorch docs |
from torch.func import jacrev
def f(x):
return x.sin().sum()
x = torch.randn(5)
hessian = jacrev(jacrev(f))(x)
assert torch.allclose(hessian, torch.diag(-x.sin()))
By default, "jacrev()" computes the Jacobian with respect to the
first input. However, it can compute the Jacobian with respect to a
different argument by using "argnums":
from torch.func import jacrev
def f(x, y):
return x + y ** 2
x, y = torch.randn(5), torch.randn(5)
jacobian = jacrev(f, argnums=1)(x, y)
expected = torch.diag(2 * y)
assert torch.allclose(jacobian, expected)
Additionally, passing a tuple to "argnums" will compute the
Jacobian with respect to multiple arguments
from torch.func import jacrev
def f(x, y):
return x + y ** 2
x, y = torch.randn(5), torch.randn(5)
jacobian = jacrev(f, argnums=(0, 1))(x, y)
expectedX = torch.diag(torch.ones_like(x))
| https://pytorch.org/docs/stable/generated/torch.func.jacrev.html | pytorch docs |
expectedY = torch.diag(2 * y)
assert torch.allclose(jacobian[0], expectedX)
assert torch.allclose(jacobian[1], expectedY)
Note:
Using PyTorch "torch.no_grad" together with "jacrev". Case 1:
Using "torch.no_grad" inside a function:
>>> def f(x):
>>> with torch.no_grad():
>>> c = x ** 2
>>> return x - c
In this case, "jacrev(f)(x)" will respect the inner
"torch.no_grad".Case 2: Using "jacrev" inside "torch.no_grad"
context manager:
>>> with torch.no_grad():
>>> jacrev(f)(x)
In this case, "jacrev" will respect the inner "torch.no_grad",
but not the outer one. This is because "jacrev" is a "function
transform": its result should not depend on the result of a
context manager outside of "f".
| https://pytorch.org/docs/stable/generated/torch.func.jacrev.html | pytorch docs |
Conv2d
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
Applies a 2D convolution over an input signal composed of several
input planes.
In the simplest case, the output value of the layer with input size
(N, C_{\text{in}}, H, W) and output (N, C_{\text{out}},
H_{\text{out}}, W_{\text{out}}) can be precisely described as:
\text{out}(N_i, C_{\text{out}_j}) =
\text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1}
\text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)
where \star is the valid 2D cross-correlation operator, N is a
batch size, C denotes a number of channels, H is a height of input
planes in pixels, and W is width in pixels.
This module supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward. | https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html | pytorch docs |
"stride" controls the stride for the cross-correlation, a single
number or a tuple.
"padding" controls the amount of padding applied to the input. It
can be either a string {'valid', 'same'} or an int / a tuple of
ints giving the amount of implicit padding applied on both sides.
"dilation" controls the spacing between the kernel points; also
known as the à trous algorithm. It is harder to describe, but
this link has a nice visualization of what "dilation" does.
"groups" controls the connections between inputs and outputs.
"in_channels" and "out_channels" must both be divisible by
"groups". For example,
* At groups=1, all inputs are convolved to all outputs.
* At groups=2, the operation becomes equivalent to having two
conv layers side by side, each seeing half the input
channels and producing half the output channels, and both
subsequently concatenated.
| https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html | pytorch docs |
* At groups= "in_channels", each input channel is convolved
with its own set of filters (of size
\frac{\text{out\_channels}}{\text{in\_channels}}).
The parameters "kernel_size", "stride", "padding", "dilation" can
either be:
* a single "int" -- in which case the same value is used for the
height and width dimension
* a "tuple" of two ints -- in which case, the first *int* is
used for the height dimension, and the second *int* for the
width dimension
Note:
When *groups == in_channels* and *out_channels == K *
in_channels*, where *K* is a positive integer, this operation is
also known as a "depthwise convolution".In other words, for an
input of size (N, C_{in}, L_{in}), a depthwise convolution with a
depthwise multiplier *K* can be performed with the arguments
(C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times
\text{K}, ..., \text{groups}=C_\text{in}).
| https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html | pytorch docs |
Note:
In some circumstances when given tensors on a CUDA device and
using CuDNN, this operator may select a nondeterministic
algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a
performance cost) by setting "torch.backends.cudnn.deterministic
= True". See Reproducibility for more information.
Note:
"padding='valid'" is the same as no padding. "padding='same'"
pads the input so the output has the shape as the input. However,
this mode doesn't support any stride values other than 1.
Note:
This module supports complex data types i.e. "complex32,
complex64, complex128".
Parameters:
* in_channels (int) -- Number of channels in the input
image
* **out_channels** (*int*) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
| https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html | pytorch docs |
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int**, **tuple** or **str**, **optional*) --
Padding added to all four sides of the input. Default: 0
* **padding_mode** (*str**, **optional*) -- "'zeros'",
"'reflect'", "'replicate'" or "'circular'". Default: "'zeros'"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
Shape:
* Input: (N, C_{in}, H_{in}, W_{in}) or (C_{in}, H_{in}, W_{in})
* Output: (N, C_{out}, H_{out}, W_{out}) or (C_{out}, H_{out},
W_{out}), where
H_{out} = \left\lfloor\frac{H_{in} + 2 \times
| https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html | pytorch docs |
\text{padding}[0] - \text{dilation}[0] \times
(\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} +
1\right\rfloor
W_{out} = \left\lfloor\frac{W_{in} + 2 \times
\text{padding}[1] - \text{dilation}[1] \times
(\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} +
1\right\rfloor
Variables:
* weight (Tensor) -- the learnable weights of the module
of shape (\text{out_channels},
\frac{\text{in_channels}}{\text{groups}},
\text{kernel_size[0]}, \text{kernel_size[1]}). The values of
these weights are sampled from \mathcal{U}(-\sqrt{k},
\sqrt{k}) where k = \frac{groups}{C_\text{in} *
\prod_{i=0}^{1}\text{kernel_size}[i]}
* **bias** (*Tensor*) -- the learnable bias of the module of
shape (out_channels). If "bias" is "True", then the values of
these weights are sampled from \mathcal{U}(-\sqrt{k},
| https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html | pytorch docs |
\sqrt{k}) where k = \frac{groups}{C_\text{in} *
\prod_{i=0}^{1}\text{kernel_size}[i]}
-[ Examples ]-
With square kernels and equal stride
m = nn.Conv2d(16, 33, 3, stride=2)
non-square kernels and unequal stride and with padding
m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
non-square kernels and unequal stride and with padding and dilation
m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
input = torch.randn(20, 16, 50, 100)
output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html | pytorch docs |
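A hedged addition to the examples above, illustrating the depthwise case from the earlier note (groups == in_channels, out_channels == K * in_channels, here with K = 2):
    >>> import torch
    >>> from torch import nn
    >>> m = nn.Conv2d(16, 32, 3, groups=16)   # depthwise convolution with multiplier K=2
    >>> input = torch.randn(20, 16, 50, 100)
    >>> output = m(input)
    >>> output.shape
    torch.Size([20, 32, 48, 98])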
torch.Tensor.detach_
Tensor.detach_()
Detaches the Tensor from the graph that created it, making it a
leaf. Views cannot be detached in-place.
This method also affects forward mode AD gradients and the result
will never have forward mode AD gradients. | https://pytorch.org/docs/stable/generated/torch.Tensor.detach_.html | pytorch docs |
torch.mode
torch.mode(input, dim=-1, keepdim=False, *, out=None)
Returns a namedtuple "(values, indices)" where "values" is the mode
value of each row of the "input" tensor in the given dimension
"dim", i.e. a value which appears most often in that row, and
"indices" is the index location of each mode value found.
By default, "dim" is the last dimension of the "input" tensor.
If "keepdim" is "True", the output tensors are of the same size as
"input" except in the dimension "dim" where they are of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensors having 1 fewer dimension than "input".
Note:
This function is not defined for "torch.cuda.Tensor" yet.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.mode.html | pytorch docs |
out (tuple, optional) -- the result tuple of two
output tensors (values, indices)
Example:
>>> a = torch.randint(10, (5,))
>>> a
tensor([6, 5, 1, 0, 2])
>>> b = a + (torch.randn(50, 1) * 5).long()
>>> torch.mode(b, 0)
torch.return_types.mode(values=tensor([6, 5, 1, 0, 2]), indices=tensor([2, 2, 2, 2, 2]))
| https://pytorch.org/docs/stable/generated/torch.mode.html | pytorch docs |
torch.signal.windows.cosine
torch.signal.windows.cosine(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes a window with a simple cosine waveform. Also known as the
sine window.
The cosine window is defined as follows:
w_n = \cos{\left(\frac{\pi n}{M} - \frac{\pi}{2}\right)} =
\sin{\left(\frac{\pi n}{M}\right)}
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* sym (bool, optional) -- If False, returns a
periodic window suitable for use in spectral analysis. If
True, returns a symmetric window suitable for use in filter
design. Default: True.
* **dtype** ("torch.dtype", optional) -- the desired data type
| https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html | pytorch docs |
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric cosine window.
>>> torch.signal.windows.cosine(10)
tensor([0.1564, 0.4540, 0.7071, 0.8910, 0.9877, 0.9877, 0.8910, 0.7071, 0.4540, 0.1564])
>>> # Generates a periodic cosine window.
| https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html | pytorch docs |
>>> torch.signal.windows.cosine(10, sym=False)
tensor([0.1423, 0.4154, 0.6549, 0.8413, 0.9595, 1.0000, 0.9595, 0.8413, 0.6549, 0.4154])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html | pytorch docs |
torch.pow
torch.pow(input, exponent, *, out=None) -> Tensor
Takes the power of each element in "input" with "exponent" and
returns a tensor with the result.
"exponent" can be either a single "float" number or a Tensor with
the same number of elements as "input".
When "exponent" is a scalar value, the operation applied is:
\text{out}_i = x_i ^ \text{exponent}
When "exponent" is a tensor, the operation applied is:
\text{out}_i = x_i ^ {\text{exponent}_i}
When "exponent" is a tensor, the shapes of "input" and "exponent"
must be broadcastable.
Parameters:
* input (Tensor) -- the input tensor.
* **exponent** (*float** or **tensor*) -- the exponent value
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([ 0.4331, 1.2475, 0.6834, -0.2791])
>>> torch.pow(a, 2)
tensor([ 0.1875, 1.5561, 0.4670, 0.0779])
| https://pytorch.org/docs/stable/generated/torch.pow.html | pytorch docs |
>>> exp = torch.arange(1., 5.)
>>> a = torch.arange(1., 5.)
>>> a
tensor([ 1., 2., 3., 4.])
>>> exp
tensor([ 1., 2., 3., 4.])
>>> torch.pow(a, exp)
tensor([ 1., 4., 27., 256.])
torch.pow(self, exponent, *, out=None) -> Tensor
"self" is a scalar "float" value, and "exponent" is a tensor. The
returned tensor "out" is of the same shape as "exponent"
The operation applied is:
\text{out}_i = \text{self} ^ {\text{exponent}_i}
Parameters:
* self (float) -- the scalar base value for the power
operation
* **exponent** (*Tensor*) -- the exponent tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> exp = torch.arange(1., 5.)
>>> base = 2
>>> torch.pow(base, exp)
tensor([ 2., 4., 8., 16.])
| https://pytorch.org/docs/stable/generated/torch.pow.html | pytorch docs |
torch.logsumexp
torch.logsumexp(input, dim, keepdim=False, *, out=None)
Returns the log of summed exponentials of each row of the "input"
tensor in the given dimension "dim". The computation is numerically
stabilized.
For summation index j given by dim and other indices i, the
result is
\text{logsumexp}(x)_{i} = \log \sum_j \exp(x_{ij})
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints**, **optional*) -- the
dimension or dimensions to reduce. If "None", all dimensions
are reduced.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.logsumexp.html | pytorch docs |
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(3, 3)
>>> torch.logsumexp(a, 1)
tensor([1.4907, 1.0593, 1.5696])
>>> torch.dist(torch.logsumexp(a, 1), torch.log(torch.sum(torch.exp(a), 1)))
tensor(1.6859e-07)
| https://pytorch.org/docs/stable/generated/torch.logsumexp.html | pytorch docs |
torch.Tensor.clamp
Tensor.clamp(min=None, max=None) -> Tensor
See "torch.clamp()" | https://pytorch.org/docs/stable/generated/torch.Tensor.clamp.html | pytorch docs |
torch.Tensor.cdouble
Tensor.cdouble(memory_format=torch.preserve_format) -> Tensor
"self.cdouble()" is equivalent to "self.to(torch.complex128)". See
"to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.cdouble.html | pytorch docs |
torch.Tensor.inverse
Tensor.inverse() -> Tensor
See "torch.inverse()" | https://pytorch.org/docs/stable/generated/torch.Tensor.inverse.html | pytorch docs |
torch.nn.functional.triplet_margin_loss
torch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')
See "TripletMarginLoss" for details
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.triplet_margin_loss.html | pytorch docs |
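A short, hedged example (batch and embedding sizes are arbitrary assumptions):
    import torch
    import torch.nn.functional as F

    anchor = torch.randn(8, 128, requires_grad=True)
    positive = torch.randn(8, 128, requires_grad=True)
    negative = torch.randn(8, 128, requires_grad=True)
    loss = F.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2)
    loss.backward()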
torch.Tensor.grad
Tensor.grad
This attribute is "None" by default and becomes a Tensor the first
time a call to "backward()" computes gradients for "self". The
attribute will then contain the gradients computed and future calls
to "backward()" will accumulate (add) gradients into it. | https://pytorch.org/docs/stable/generated/torch.Tensor.grad.html | pytorch docs |
torch.Tensor.sigmoid_
Tensor.sigmoid_() -> Tensor
In-place version of "sigmoid()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sigmoid_.html | pytorch docs |
torch.Tensor.bincount
Tensor.bincount(weights=None, minlength=0) -> Tensor
See "torch.bincount()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bincount.html | pytorch docs |
torch.cuda.memory_cached
torch.cuda.memory_cached(device=None)
Deprecated; see "memory_reserved()".
Return type:
int | https://pytorch.org/docs/stable/generated/torch.cuda.memory_cached.html | pytorch docs |
torch.Tensor.short
Tensor.short(memory_format=torch.preserve_format) -> Tensor
"self.short()" is equivalent to "self.to(torch.int16)". See "to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.short.html | pytorch docs |
torch.cuda.set_rng_state
torch.cuda.set_rng_state(new_state, device='cuda')
Sets the random number generator state of the specified GPU.
Parameters:
* new_state (torch.ByteTensor) -- The desired state
* **device** (*torch.device** or **int**, **optional*) -- The
device to set the RNG state. Default: "'cuda'" (i.e.,
"torch.device('cuda')", the current CUDA device).
| https://pytorch.org/docs/stable/generated/torch.cuda.set_rng_state.html | pytorch docs |
torch.unbind
torch.unbind(input, dim=0) -> seq
Removes a tensor dimension.
Returns a tuple of all slices along a given dimension, already
without it.
Parameters:
* input (Tensor) -- the tensor to unbind
* **dim** (*int*) -- dimension to remove
Example:
>>> torch.unbind(torch.tensor([[1, 2, 3],
>>> [4, 5, 6],
>>> [7, 8, 9]]))
(tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))
| https://pytorch.org/docs/stable/generated/torch.unbind.html | pytorch docs |
torch.Tensor.logical_xor
Tensor.logical_xor() -> Tensor
See "torch.logical_xor()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logical_xor.html | pytorch docs |
LnStructured
class torch.nn.utils.prune.LnStructured(amount, n, dim=-1)
Prune entire (currently unpruned) channels in a tensor based on
their L"n"-norm.
Parameters:
* amount (int or float) -- quantity of channels to
prune. If "float", should be between 0.0 and 1.0 and represent
the fraction of parameters to prune. If "int", it represents
the absolute number of parameters to prune.
* **n** (*int**, **float**, **inf**, **-inf**, **'fro'**,
**'nuc'*) -- See documentation of valid entries for argument
"p" in "torch.norm()".
* **dim** (*int**, **optional*) -- index of the dim along which
we define channels to prune. Default: -1.
classmethod apply(module, name, amount, n, dim, importance_scores=None)
Adds the forward pre-hook that enables pruning on the fly and
the reparametrization of a tensor in terms of the original
tensor and the pruning mask.
Parameters:
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html | pytorch docs |
Parameters:
* module (nn.Module) -- module containing the tensor to
prune
* **name** (*str*) -- parameter name within "module" on which
pruning will act.
* **amount** (*int** or **float*) -- quantity of parameters
to prune. If "float", should be between 0.0 and 1.0 and
represent the fraction of parameters to prune. If "int", it
represents the absolute number of parameters to prune.
* **n** (*int**, **float**, **inf**, **-inf**, **'fro'**,
**'nuc'*) -- See documentation of valid entries for
argument "p" in "torch.norm()".
* **dim** (*int*) -- index of the dim along which we define
channels to prune.
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as module parameter) used
to compute mask for pruning. The values in this tensor
indicate the importance of the corresponding elements in
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html | pytorch docs |
the parameter being pruned. If unspecified or None, the
module parameter will be used in its place.
apply_mask(module)
Simply handles the multiplication between the parameter being
pruned and the generated mask. Fetches the mask and the original
tensor from the module and returns the pruned version of the
tensor.
Parameters:
**module** (*nn.Module*) -- module containing the tensor to
prune
Returns:
pruned version of the input tensor
Return type:
pruned_tensor (torch.Tensor)
compute_mask(t, default_mask)
Computes and returns a mask for the input tensor "t". Starting
from a base "default_mask" (which should be a mask of ones if
the tensor has not been pruned yet), generate a mask to apply on
top of the "default_mask" by zeroing out the channels along the
specified dim with the lowest L"n"-norm.
Parameters:
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html | pytorch docs |
Parameters:
* t (torch.Tensor) -- tensor representing the parameter
to prune
* **default_mask** (*torch.Tensor*) -- Base mask from
previous pruning iterations, that need to be respected
after the new mask is applied. Same dims as "t".
Returns:
mask to apply to "t", of same dims as "t"
Return type:
mask (torch.Tensor)
Raises:
**IndexError** -- if "self.dim >= len(t.shape)"
prune(t, default_mask=None, importance_scores=None)
Computes and returns a pruned version of input tensor "t"
according to the pruning rule specified in "compute_mask()".
Parameters:
* **t** (*torch.Tensor*) -- tensor to prune (of same
dimensions as "default_mask").
* **importance_scores** (*torch.Tensor*) -- tensor of
importance scores (of same shape as "t") used to compute
mask for pruning "t". The values in this tensor indicate
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html | pytorch docs |
the importance of the corresponding elements in the "t"
that is being pruned. If unspecified or None, the tensor
"t" will be used in its place.
* **default_mask** (*torch.Tensor**, **optional*) -- mask
from previous pruning iteration, if any. To be considered
when determining what portion of the tensor that pruning
should act on. If None, default to a mask of ones.
Returns:
pruned version of tensor "t".
remove(module)
Removes the pruning reparameterization from a module. The pruned
parameter named "name" remains permanently pruned, and the
parameter named "name+'_orig'" is removed from the parameter
list. Similarly, the buffer named "name+'_mask'" is removed from
the buffers.
Note:
Pruning itself is NOT undone or reversed!
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html | pytorch docs |
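A hedged usage sketch (the layer shape and pruning amount are illustrative assumptions): "apply()" attaches the pruning reparametrization to a module parameter, zeroing out the channels along "dim" with the smallest L"n"-norm.
    import torch
    from torch import nn
    from torch.nn.utils import prune

    m = nn.Linear(5, 3)
    # prune the 2 columns (dim=1) of m.weight with the smallest L1 norm
    prune.LnStructured.apply(m, name='weight', amount=2, n=1, dim=1)
    print(m.weight_mask.sum(dim=0))   # two of the five columns are fully masked out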
torch.Tensor.nanmean
Tensor.nanmean(dim=None, keepdim=False, *, dtype=None) -> Tensor
See "torch.nanmean()" | https://pytorch.org/docs/stable/generated/torch.Tensor.nanmean.html | pytorch docs |
torch.Tensor.half
Tensor.half(memory_format=torch.preserve_format) -> Tensor
"self.half()" is equivalent to "self.to(torch.float16)". See
"to()".
Parameters:
memory_format ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format". | https://pytorch.org/docs/stable/generated/torch.Tensor.half.html | pytorch docs |
torch.Tensor.nextafter
Tensor.nextafter(other) -> Tensor
See "torch.nextafter()" | https://pytorch.org/docs/stable/generated/torch.Tensor.nextafter.html | pytorch docs |
torch.Tensor.acosh_
Tensor.acosh_() -> Tensor
In-place version of "acosh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.acosh_.html | pytorch docs |