torch.dsplit
torch.dsplit(input, indices_or_sections) -> List of Tensors
Splits "input", a tensor with three or more dimensions, into
multiple tensors depthwise according to "indices_or_sections". Each
split is a view of "input".
This is equivalent to calling torch.tensor_split(input,
indices_or_sections, dim=2) (the split dimension is 2), except that
if "indices_or_sections" is an integer it must evenly divide the
split dimension or a runtime error will be thrown.
This function is based on NumPy's "numpy.dsplit()".
Parameters:
* input (Tensor) -- tensor to split.
* **indices_or_sections** (*int** or **list** or **tuple of
ints*) -- See argument in "torch.tensor_split()".
Example::
>>> t = torch.arange(16.0).reshape(2, 2, 4)
>>> t
tensor([[[ 0., 1., 2., 3.],
[ 4., 5., 6., 7.]],
[[ 8., 9., 10., 11.],
[12., 13., 14., 15.]]])
>>> torch.dsplit(t, 2)
(tensor([[[ 0., 1.],
[ 4., 5.]],
[[ 8., 9.],
[12., 13.]]]),
tensor([[[ 2., 3.],
[ 6., 7.]],
[[10., 11.],
[14., 15.]]]))
>>> torch.dsplit(t, [3, 6])
(tensor([[[ 0., 1., 2.],
[ 4., 5., 6.]],
[[ 8., 9., 10.],
[12., 13., 14.]]]),
tensor([[[ 3.],
[ 7.]],
[[11.],
[15.]]]),
tensor([], size=(2, 2, 0)))
| https://pytorch.org/docs/stable/generated/torch.dsplit.html | pytorch docs |
torch._foreach_log1p
torch._foreach_log1p(self: List[Tensor]) -> List[Tensor]
Apply "torch.log1p()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_log1p.html | pytorch docs |
torch.linalg.matrix_rank
torch.linalg.matrix_rank(A, *, atol=None, rtol=None, hermitian=False, out=None) -> Tensor
Computes the numerical rank of a matrix.
The matrix rank is computed as the number of singular values (or
eigenvalues in absolute value when "hermitian"= True) that are
greater than \max(\text{atol}, \sigma_1 * \text{rtol}) threshold,
where \sigma_1 is the largest singular value (or eigenvalue).
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
If "hermitian"= True, "A" is assumed to be Hermitian if complex
or symmetric if real, but this is not checked internally. Instead,
just the lower triangular part of the matrix is used in the
computations.
If "rtol" is not specified and "A" is a matrix of dimensions (m,
n), the relative tolerance is set to be \text{rtol} = \max(m, n)
\varepsilon and \varepsilon is the epsilon value for the dtype of
"A" (see "finfo"). If "rtol" is not specified and "atol" is
specified to be larger than zero then "rtol" is set to zero.
If "atol" or "rtol" is a "torch.Tensor", its shape must be
broadcastable to that of the singular values of "A" as returned by
"torch.linalg.svdvals()".
Note:
This function has NumPy compatible variant *linalg.matrix_rank(A,
tol, hermitian=False)*. However, use of the positional argument
"tol" is deprecated in favor of "atol" and "rtol".
Note:
The matrix rank is computed using a singular value decomposition
"torch.linalg.svdvals()" if "hermitian"*= False* (default) and
the eigenvalue decomposition "torch.linalg.eigvalsh()" when
"hermitian"*= True*. When inputs are on a CUDA device, this
function synchronizes that device with the CPU.
Parameters:
* A (Tensor) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
* **tol** (*float**, **Tensor**, **optional*) -- [NumPy Compat]
Alias for "atol". Default: *None*.
Keyword Arguments:
* atol (float, Tensor, optional) -- the absolute
tolerance value. When None it's considered to be zero.
Default: None.
* **rtol** (*float**, **Tensor**, **optional*) -- the relative
tolerance value. See above for the value it takes when *None*.
Default: *None*.
* **hermitian** (*bool*) -- indicates whether "A" is Hermitian
if complex or symmetric if real. Default: *False*.
* **out** (*Tensor**, **optional*) -- output tensor. Ignored if
*None*. Default: *None*.
Examples:
>>> A = torch.eye(10)
>>> torch.linalg.matrix_rank(A)
tensor(10)
>>> B = torch.eye(10)
>>> B[0, 0] = 0
>>> torch.linalg.matrix_rank(B)
tensor(9)
>>> A = torch.randn(4, 3, 2)
>>> torch.linalg.matrix_rank(A)
tensor([2, 2, 2, 2])
>>> A = torch.randn(2, 4, 2, 3)
>>> torch.linalg.matrix_rank(A)
tensor([[2, 2, 2, 2],
[2, 2, 2, 2]])
>>> A = torch.randn(2, 4, 3, 3, dtype=torch.complex64)
>>> torch.linalg.matrix_rank(A)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(A, hermitian=True)
tensor([[3, 3, 3, 3],
[3, 3, 3, 3]])
>>> torch.linalg.matrix_rank(A, atol=1.0, rtol=0.0)
tensor([[3, 2, 2, 2],
[1, 2, 1, 2]])
>>> torch.linalg.matrix_rank(A, atol=1.0, rtol=0.0, hermitian=True)
tensor([[2, 2, 2, 1],
[1, 2, 2, 2]])
| https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html | pytorch docs |
torch.from_numpy
torch.from_numpy(ndarray) -> Tensor
Creates a "Tensor" from a "numpy.ndarray".
The returned tensor and "ndarray" share the same memory.
Modifications to the tensor will be reflected in the "ndarray" and
vice versa. The returned tensor is not resizable.
It currently accepts "ndarray" with dtypes of "numpy.float64",
"numpy.float32", "numpy.float16", "numpy.complex64",
"numpy.complex128", "numpy.int64", "numpy.int32", "numpy.int16",
"numpy.int8", "numpy.uint8", and "numpy.bool".
Warning:
Writing to a tensor created from a read-only NumPy array is not
supported and will result in undefined behavior.
Example:
>>> a = numpy.array([1, 2, 3])
>>> t = torch.from_numpy(a)
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([-1, 2, 3])
| https://pytorch.org/docs/stable/generated/torch.from_numpy.html | pytorch docs |
torch.diag_embed
torch.diag_embed(input, offset=0, dim1=- 2, dim2=- 1) -> Tensor
Creates a tensor whose diagonals of certain 2D planes (specified by
"dim1" and "dim2") are filled by "input". To facilitate creating
batched diagonal matrices, the 2D planes formed by the last two
dimensions of the returned tensor are chosen by default.
The argument "offset" controls which diagonal to consider:
If "offset" = 0, it is the main diagonal.
If "offset" > 0, it is above the main diagonal.
If "offset" < 0, it is below the main diagonal.
The size of the new matrix will be calculated to make the specified
diagonal of the size of the last input dimension. Note that for
"offset" other than 0, the order of "dim1" and "dim2" matters.
Exchanging them is equivalent to changing the sign of "offset".
Applying "torch.diagonal()" to the output of this function with the
same arguments yields a matrix identical to input. However, | https://pytorch.org/docs/stable/generated/torch.diag_embed.html | pytorch docs |
"torch.diagonal()" has different default dimensions, so those need
to be explicitly specified.
Parameters:
* input (Tensor) -- the input tensor. Must be at least
1-dimensional.
* **offset** (*int**, **optional*) -- which diagonal to
consider. Default: 0 (main diagonal).
* **dim1** (*int**, **optional*) -- first dimension with respect
to which to take diagonal. Default: -2.
* **dim2** (*int**, **optional*) -- second dimension with
respect to which to take diagonal. Default: -1.
Example:
>>> a = torch.randn(2, 3)
>>> torch.diag_embed(a)
tensor([[[ 1.5410, 0.0000, 0.0000],
[ 0.0000, -0.2934, 0.0000],
[ 0.0000, 0.0000, -2.1788]],
[[ 0.5684, 0.0000, 0.0000],
[ 0.0000, -1.0845, 0.0000],
[ 0.0000, 0.0000, -1.3986]]])
>>> torch.diag_embed(a, offset=1, dim1=0, dim2=2)
tensor([[[ 0.0000, 1.5410, 0.0000, 0.0000],
[ 0.0000, 0.5684, 0.0000, 0.0000]],
[[ 0.0000, 0.0000, -0.2934, 0.0000],
[ 0.0000, 0.0000, -1.0845, 0.0000]],
[[ 0.0000, 0.0000, 0.0000, -2.1788],
[ 0.0000, 0.0000, 0.0000, -1.3986]],
[[ 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000]]])
| https://pytorch.org/docs/stable/generated/torch.diag_embed.html | pytorch docs |
torch.Tensor.count_nonzero
Tensor.count_nonzero(dim=None) -> Tensor
See "torch.count_nonzero()" | https://pytorch.org/docs/stable/generated/torch.Tensor.count_nonzero.html | pytorch docs |
torch.Tensor.take_along_dim
Tensor.take_along_dim(indices, dim) -> Tensor
See "torch.take_along_dim()" | https://pytorch.org/docs/stable/generated/torch.Tensor.take_along_dim.html | pytorch docs |
torch.optim.Optimizer.load_state_dict
Optimizer.load_state_dict(state_dict)
Loads the optimizer state.
Parameters:
state_dict (dict) -- optimizer state. Should be an object
returned from a call to "state_dict()". | https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.load_state_dict.html | pytorch docs |
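A short round-trip sketch (the model and optimizer below are illustrative):
    >>> model = torch.nn.Linear(2, 2)
    >>> opt = torch.optim.SGD(model.parameters(), lr=0.1)
    >>> saved = opt.state_dict()            # snapshot the optimizer state
    >>> opt2 = torch.optim.SGD(model.parameters(), lr=0.1)
    >>> opt2.load_state_dict(saved)         # restore the snapshot into a fresh optimizer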
torch.nn.functional.relu6
torch.nn.functional.relu6(input, inplace=False) -> Tensor
Applies the element-wise function \text{ReLU6}(x) = \min(\max(0,x),
6).
See "ReLU6" for more details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.relu6.html | pytorch docs |
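A minimal sketch of the clamping behaviour (the input values are chosen for illustration):
    >>> x = torch.tensor([-1.0, 3.0, 8.0])
    >>> torch.nn.functional.relu6(x)        # negatives clamp to 0, values above 6 clamp to 6
    tensor([0., 3., 6.])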
torch.cuda.memory_allocated
torch.cuda.memory_allocated(device=None)
Returns the current GPU memory occupied by tensors in bytes for a
given device.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
int
Note:
This is likely less than the amount shown in *nvidia-smi* since
some unused memory can be held by the caching allocator and some
context needs to be created on GPU. See Memory management for
more details about GPU memory management.
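A minimal sketch, assuming a CUDA device is available:
    >>> before = torch.cuda.memory_allocated()
    >>> x = torch.empty(1024, 1024, device="cuda")   # allocate a tensor on the GPU
    >>> torch.cuda.memory_allocated() - before > 0   # the allocation shows up in the counter
    True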
| https://pytorch.org/docs/stable/generated/torch.cuda.memory_allocated.html | pytorch docs |
torch.Tensor.not_equal_
Tensor.not_equal_(other) -> Tensor
In-place version of "not_equal()". | https://pytorch.org/docs/stable/generated/torch.Tensor.not_equal_.html | pytorch docs |
torch.atleast_3d
torch.atleast_3d(*tensors)
Returns a 3-dimensional view of each input tensor with zero
dimensions. Input tensors with three or more dimensions are
returned as-is.
Parameters:
input (Tensor or list of Tensors) --
Returns:
output (Tensor or tuple of Tensors)
-[ Example ]-
x = torch.tensor(0.5)
x
tensor(0.5000)
torch.atleast_3d(x)
tensor([[[0.5000]]])
y = torch.arange(4).view(2, 2)
y
tensor([[0, 1],
[2, 3]])
torch.atleast_3d(y)
tensor([[[0],
[1]],
[[2],
[3]]])
x = torch.tensor(1).view(1, 1, 1)
x
tensor([[[1]]])
torch.atleast_3d(x)
tensor([[[1]]])
x = torch.tensor(0.5)
y = torch.tensor(1.)
torch.atleast_3d((x, y))
(tensor([[[0.5000]]]), tensor([[[1.]]]))
| https://pytorch.org/docs/stable/generated/torch.atleast_3d.html | pytorch docs |
torch.cummin
torch.cummin(input, dim, *, out=None)
Returns a namedtuple "(values, indices)" where "values" is the
cumulative minimum of elements of "input" in the dimension "dim".
And "indices" is the index location of each maximum value found in
the dimension "dim".
y_i = min(x_1, x_2, x_3, \dots, x_i)
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to do the operation over
Keyword Arguments:
out (tuple, optional) -- the result tuple of two
output tensors (values, indices)
Example:
>>> a = torch.randn(10)
>>> a
tensor([-0.2284, -0.6628, 0.0975, 0.2680, -1.3298, -0.4220, -0.3885, 1.1762,
0.9165, 1.6684])
>>> torch.cummin(a, dim=0)
torch.return_types.cummin(
values=tensor([-0.2284, -0.6628, -0.6628, -0.6628, -1.3298, -1.3298, -1.3298, -1.3298,
-1.3298, -1.3298]),
indices=tensor([0, 1, 1, 1, 4, 4, 4, 4, 4, 4]))
| https://pytorch.org/docs/stable/generated/torch.cummin.html | pytorch docs |
torch.cuda.set_rng_state_all
torch.cuda.set_rng_state_all(new_states)
Sets the random number generator state of all devices.
Parameters:
new_states (Iterable of torch.ByteTensor) -- The desired
state for each device | https://pytorch.org/docs/stable/generated/torch.cuda.set_rng_state_all.html | pytorch docs |
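A round-trip sketch, assuming at least one CUDA device is available:
    >>> states = torch.cuda.get_rng_state_all()   # capture the RNG state of every device
    >>> torch.cuda.set_rng_state_all(states)      # restore exactly those states later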
torch.Tensor.deg2rad
Tensor.deg2rad() -> Tensor
See "torch.deg2rad()" | https://pytorch.org/docs/stable/generated/torch.Tensor.deg2rad.html | pytorch docs |
torch.ldexp
torch.ldexp(input, other, *, out=None) -> Tensor
Multiplies "input" by 2 ** "other".
\text{out}_i = \text{input}_i * 2^{\text{other}_i}
Typically this function is used to construct floating point numbers
by multiplying mantissas in "input" with integral powers of two
created from the exponents in "other".
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- a tensor of exponents, typically
integers.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.ldexp(torch.tensor([1.]), torch.tensor([1]))
tensor([2.])
>>> torch.ldexp(torch.tensor([1.0]), torch.tensor([1, 2, 3, 4]))
tensor([ 2., 4., 8., 16.])
| https://pytorch.org/docs/stable/generated/torch.ldexp.html | pytorch docs |
Sigmoid
class torch.nn.Sigmoid
Applies the element-wise function:
\text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.Sigmoid()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html | pytorch docs |
torch.cuda.graph_pool_handle
torch.cuda.graph_pool_handle()
Returns an opaque token representing the id of a graph memory pool.
See Graph memory management.
Warning:
This API is in beta and may change in future releases.
| https://pytorch.org/docs/stable/generated/torch.cuda.graph_pool_handle.html | pytorch docs |
torch.Tensor.roll
Tensor.roll(shifts, dims) -> Tensor
See "torch.roll()" | https://pytorch.org/docs/stable/generated/torch.Tensor.roll.html | pytorch docs |
torch.jit.enable_onednn_fusion
torch.jit.enable_onednn_fusion(enabled)
Enables or disables onednn JIT fusion based on the parameter
enabled. | https://pytorch.org/docs/stable/generated/torch.jit.enable_onednn_fusion.html | pytorch docs |
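A minimal sketch of toggling the flag around scripting a module (the module is illustrative):
    >>> torch.jit.enable_onednn_fusion(True)                  # allow oneDNN fusion in the JIT
    >>> scripted = torch.jit.script(torch.nn.Linear(4, 4))    # subsequently compiled code may fuse ops via oneDNN
    >>> torch.jit.enable_onednn_fusion(False)                 # turn it back off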
ReLU6
class torch.nn.ReLU6(inplace=False)
Applies the element-wise function:
\text{ReLU6}(x) = \min(\max(0,x), 6)
Parameters:
inplace (bool) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.ReLU6()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.ReLU6.html | pytorch docs |
adaptive_avg_pool2d
class torch.ao.nn.quantized.functional.adaptive_avg_pool2d(input, output_size)
Applies a 2D adaptive average pooling over a quantized input signal
composed of several quantized input planes.
Note:
The input quantization parameters propagate to the output.
See "AdaptiveAvgPool2d" for details and output shape.
Parameters:
output_size (None) -- the target output size (single
integer or double-integer tuple)
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.adaptive_avg_pool2d.html | pytorch docs |
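A minimal sketch (the quantization scale and zero_point are chosen arbitrarily for illustration):
    >>> import torch.ao.nn.quantized.functional as qF
    >>> x = torch.rand(1, 3, 8, 8)
    >>> qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
    >>> qF.adaptive_avg_pool2d(qx, output_size=(2, 2)).shape
    torch.Size([1, 3, 2, 2])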
torch.Tensor.atanh_
Tensor.atanh_(other) -> Tensor
In-place version of "atanh()" | https://pytorch.org/docs/stable/generated/torch.Tensor.atanh_.html | pytorch docs |
DistributedDataParallel
class torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False, gradient_as_bucket_view=False, static_graph=False)
Implements distributed data parallelism that is based on
"torch.distributed" package at the module level.
This container provides data parallelism by synchronizing gradients
across each model replica. The devices to synchronize across are
specified by the input "process_group", which is the entire world
by default. Note that "DistributedDataParallel" does not chunk or
otherwise shard the input across participating GPUs; the user is
responsible for defining how to do so, for example through the use
of a "DistributedSampler".
See also: Basics and Use nn.parallel.DistributedDataParallel
instead of multiprocessing or nn.DataParallel. The same constraints
on input as in "torch.nn.DataParallel" apply.
Creation of this class requires that "torch.distributed" to be
already initialized, by calling
"torch.distributed.init_process_group()".
"DistributedDataParallel" is proven to be significantly faster than
"torch.nn.DataParallel" for single-node multi-GPU data parallel
training.
To use "DistributedDataParallel" on a host with N GPUs, you should
spawn up "N" processes, ensuring that each process exclusively
works on a single GPU from 0 to N-1. This can be done by either
setting "CUDA_VISIBLE_DEVICES" for every process or by calling:
torch.cuda.set_device(i)
where i is from 0 to N-1. In each process, you should refer to the
following to construct this module:
torch.distributed.init_process_group(
backend='nccl', world_size=N, init_method='...'
)
model = DistributedDataParallel(model, device_ids=[i], output_device=i)
In order to spawn up multiple processes per node, you can use
either "torch.distributed.launch" or "torch.multiprocessing.spawn".
Note:
Please refer to PyTorch Distributed Overview for a brief
introduction to all features related to distributed training.
Note:
"DistributedDataParallel" can be used in conjunction with
"torch.distributed.optim.ZeroRedundancyOptimizer" to reduce per-
rank optimizer states memory footprint. Please refer to
ZeroRedundancyOptimizer recipe for more details.
Note:
"nccl" backend is currently the fastest and highly recommended
backend when using GPUs. This applies to both single-node and
multi-node distributed training.
Note:
This module also supports mixed-precision distributed training.
This means that your model can have different types of parameters
such as mixed types of "fp16" and "fp32", the gradient reduction
on these mixed types of parameters will just work fine.
Note:
If you use "torch.save" on one process to checkpoint the module,
and "torch.load" on some other processes to recover it, make sure
that "map_location" is configured properly for every process.
Without "map_location", "torch.load" would recover the module to
devices where the module was saved from.
Note:
When a model is trained on "M" nodes with "batch=N", the gradient
will be "M" times smaller when compared to the same model trained
on a single node with "batch=M*N" if the loss is summed (NOT
averaged as usual) across instances in a batch (because the
gradients between different nodes are averaged). You should take
this into consideration when you want to obtain a mathematically
equivalent training process compared to the local training
counterpart. But in most cases, you can just treat a
DistributedDataParallel wrapped model, a DataParallel wrapped
model and an ordinary model on a single GPU as the same (E.g.
using the same learning rate for equivalent batch size).
Note:
Parameters are never broadcast between processes. The module
performs an all-reduce step on gradients and assumes that they
will be modified by the optimizer in all processes in the same
way. Buffers (e.g. BatchNorm stats) are broadcast from the module
in process of rank 0, to all other replicas in the system in
every iteration.
Note:
If you are using DistributedDataParallel in conjunction with the
Distributed RPC Framework, you should always use
"torch.distributed.autograd.backward()" to compute gradients and
"torch.distributed.optim.DistributedOptimizer" for optimizing
parameters.
Example:
>>> import torch.distributed.autograd as dist_autograd
>>> from torch.nn.parallel import DistributedDataParallel as DDP
>>> import torch
>>> from torch import optim
>>> from torch.distributed.optim import DistributedOptimizer
>>> import torch.distributed.rpc as rpc
>>> from torch.distributed.rpc import RRef
>>>
>>> t1 = torch.rand((3, 3), requires_grad=True)
>>> t2 = torch.rand((3, 3), requires_grad=True)
>>> rref = rpc.remote("worker1", torch.add, args=(t1, t2))
>>> ddp_model = DDP(my_model)
>>>
>>> # Setup optimizer
>>> optimizer_params = [rref]
>>> for param in ddp_model.parameters():
>>> optimizer_params.append(RRef(param))
>>>
>>> dist_optim = DistributedOptimizer(
>>> optim.SGD,
>>> optimizer_params,
>>> lr=0.05,
>>> )
>>>
>>> with dist_autograd.context() as context_id:
>>> pred = ddp_model(rref.to_here())
>>> loss = loss_func(pred, target)
>>> dist_autograd.backward(context_id, [loss])
>>> dist_optim.step(context_id)
Note:
DistributedDataParallel currently offers limited support for
gradient checkpointing with "torch.utils.checkpoint()". DDP will
work as expected when there are no unused parameters in the model
and each layer is checkpointed at most once (make sure you are
not passing find_unused_parameters=True to DDP). We currently
do not support the case where a layer is checkpointed multiple
times, or when there are unused parameters in the checkpointed model.
Note:
To let a non-DDP model load a state dict from a DDP model,
"consume_prefix_in_state_dict_if_present()" needs to be applied
to strip the prefix "module." in the DDP state dict before
loading.
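A sketch of that stripping step; the checkpoint path and "plain_model" below are hypothetical:
    >>> from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
    >>> state_dict = torch.load("ddp_checkpoint.pt", map_location="cpu")   # hypothetical DDP checkpoint
    >>> consume_prefix_in_state_dict_if_present(state_dict, "module.")     # strips the prefix in place
    >>> plain_model.load_state_dict(state_dict)                            # plain_model is the unwrapped model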
Warning:
Constructor, forward method, and differentiation of the output
(or a function of the output of this module) are distributed
synchronization points. Take that into account in case different
processes might be executing different code.
Warning:
This module assumes all parameters are registered in the model by
the time it is created. No parameters should be added nor removed
later. Same applies to buffers.
Warning:
This module assumes all parameters are registered in the model of
each distributed process in the same order. The module
itself will conduct gradient "allreduce" following the reverse
order of the registered parameters of the model. In other words,
it is users' responsibility to ensure that each distributed
process has the exact same model and thus the exact same
parameter registration order.
Warning:
This module allows parameters with non-rowmajor-contiguous
strides. For example, your model may contain some parameters
whose "torch.memory_format" is "torch.contiguous_format" and
others whose format is "torch.channels_last". However,
corresponding parameters in different processes must have the
same strides.
Warning:
This module doesn't work with "torch.autograd.grad()" (i.e. it
will only work if gradients are to be accumulated in ".grad"
attributes of parameters).
Warning:
If you plan on using this module with a "nccl" backend or a
"gloo" backend (that uses Infiniband), together with a DataLoader
that uses multiple workers, please change the multiprocessing
start method to "forkserver" (Python 3 only) or "spawn".
Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork
safe, and you will likely experience deadlocks if you don't
change this setting.
Warning:
You should never try to change your model's parameters after
wrapping up your model with "DistributedDataParallel". Because,
when wrapping up your model with "DistributedDataParallel", the
constructor of "DistributedDataParallel" will register the
additional gradient reduction functions on all the parameters of
the model itself at the time of construction. If you change the
model's parameters afterwards, gradient reduction functions no
longer match the correct set of parameters.
Warning:
Using "DistributedDataParallel" in conjunction with the
Distributed RPC Framework is experimental and subject to change.
Parameters:
* module (Module) -- module to be parallelized
* **device_ids** (*list of python:int** or **torch.device*) --
CUDA devices. 1) For single-device modules, "device_ids" can
contain exactly one device id, which represents the only CUDA
device where the input module corresponding to this process
resides. Alternatively, "device_ids" can also be "None". 2)
For multi-device modules and CPU modules, "device_ids" must be
"None".
When "device_ids" is "None" for both cases, both the input
data for the forward pass and the actual module must be placed
on the correct device. (default: "None")
* **output_device** (*int** or **torch.device*) -- Device
location of output for single-device CUDA modules. For multi-
device modules and CPU modules, it must be "None", and the
module itself dictates the output location. (default:
"device_ids[0]" for single-device modules)
* **broadcast_buffers** (*bool*) -- Flag that enables syncing
(broadcasting) buffers of the module at beginning of the
"forward" function. (default: "True")
* **process_group** -- The process group to be used for
distributed data all-reduction. If "None", the default process
group, which is created by
"torch.distributed.init_process_group()", will be used.
(default: "None")
* **bucket_cap_mb** -- "DistributedDataParallel" will bucket
parameters into multiple buckets so that gradient reduction of
each bucket can potentially overlap with backward computation.
"bucket_cap_mb" controls the bucket size in MegaBytes (MB).
(default: 25)
* **find_unused_parameters** (*bool*) -- Traverse the autograd
graph from all tensors contained in the return value of the
wrapped module's "forward" function. Parameters that don't
receive gradients as part of this graph are preemptively
marked as being ready to be reduced. In addition, parameters
that may have been used in the wrapped module's "forward"
function but were not part of loss computation and thus would
also not receive gradients are preemptively marked as ready to
be reduced. (default: "False")
* **check_reduction** -- This argument is deprecated.
* **gradient_as_bucket_view** (*bool*) -- When set to "True",
gradients will be views pointing to different offsets of
"allreduce" communication buckets. This can reduce peak memory
usage, where the saved memory size will be equal to the total
gradients size. Moreover, it avoids the overhead of copying
between gradients and "allreduce" communication buckets. When
gradients are views, "detach_()" cannot be called on the
gradients. If hitting such errors, please fix it by referring
to the "zero_grad()" function in "torch/optim/optimizer.py" as
a solution. Note that gradients will be views after first
iteration, so the peak memory saving should be checked after
first iteration.
* **static_graph** (*bool*) --
When set to "True", DDP knows the trained graph is static.
Static graph means 1) The set of used and unused parameters
will not change during the whole training loop; in this case,
it does not matter whether users set "find_unused_parameters =
True" or not. 2) How the graph is trained will not change
during the whole training loop (meaning there is no control
flow depending on iterations). When static_graph is set to be
"True", DDP will support cases that can not be supported in
the past: 1) Reentrant backwards. 2) Activation checkpointing
multiple times. 3) Activation checkpointing when model has
unused parameters. 4) There are model parameters that are
outside of forward function. 5) Potentially improve
performance when there are unused parameters, as DDP will not
search graph in each iteration to detect unused parameters
when static_graph is set to be "True". To check whether you
can set static_graph to be "True", one way is to check ddp
logging data at the end of your previous model training, if
"ddp_logging_data.get("can_set_static_graph") == True", mostly
you can set "static_graph = True" as well.
Example::
>>> model_DDP = torch.nn.parallel.DistributedDataParallel(model)
>>> # Training loop
>>> ...
>>> ddp_logging_data = model_DDP._get_ddp_logging_data()
>>> static_graph = ddp_logging_data.get("can_set_static_graph")
Variables:
module (Module) -- the module to be parallelized.
Example:
>>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')
>>> net = torch.nn.parallel.DistributedDataParallel(model)
join(divide_by_initial_world_size=True, enable=True, throw_on_early_termination=False)
A context manager to be used in conjunction with an instance of
"torch.nn.parallel.DistributedDataParallel" to be able to train
with uneven inputs across participating processes.
This context manager will keep track of already-joined DDP
processes, and "shadow" the forward and backward passes by
inserting collective communication operations to match with the
ones created by non-joined DDP processes. This will ensure each
collective call has a corresponding call by already-joined DDP
processes, preventing hangs or errors that would otherwise
happen when training with uneven inputs across processes.
Alternatively, if the flag "throw_on_early_termination" is
specified to be "True", all trainers will throw an error once
one rank runs out of inputs, allowing these errors to be caught
and handled according to application logic.
Once all DDP processes have joined, the context manager will
broadcast the model corresponding to the last joined process to
all processes to ensure the model is the same across all
processes (which is guaranteed by DDP).
To use this to enable training with uneven inputs across
processes, simply wrap this context manager around your training
loop. No further modifications to the model or data loading is
required.
Warning:
If the model or training loop this context manager is wrapped
around has additional distributed collective operations, such
as "SyncBatchNorm" in the model's forward pass, then the flag
"throw_on_early_termination" must be enabled. This is because
this context manager is not aware of non-DDP collective
communication. This flag will cause all ranks to throw when
any one rank exhausts inputs, allowing these errors to be
caught and recovered from across all ranks.
Parameters:
* **divide_by_initial_world_size** (*bool*) -- If "True",
will divide gradients by the initial "world_size" DDP
training was launched with. If "False", will compute the
effective world size (number of ranks that have not
depleted their inputs yet) and divide gradients by that
during allreduce. Set "divide_by_initial_world_size=True"
to ensure every input sample including the uneven inputs
have equal weight in terms of how much they contribute to
the global gradient. This is achieved by always dividing
the gradient by the initial "world_size" even when we
encounter uneven inputs. If you set this to "False", we
divide the gradient by the remaining number of nodes. This
ensures parity with training on a smaller "world_size"
although it also means the uneven inputs would contribute
more towards the global gradient. Typically, you would want
to set this to "True" for cases where the last few inputs
of your training job are uneven. In extreme cases, where
there is a large discrepancy in the number of inputs,
setting this to "False" might provide better results.
* **enable** (*bool*) -- Whether to enable uneven input
detection or not. Pass in "enable=False" to disable in
cases where you know that inputs are even across
participating processes. Default is "True".
* **throw_on_early_termination** (*bool*) -- Whether to throw
an error or continue training when at least one rank has
exhausted inputs. If "True", will throw upon the first rank
reaching end of data. If "False", will continue training
with a smaller effective world size until all ranks are
joined. Note that if this flag is specified, then the flag
"divide_by_initial_world_size" would be ignored. Default is
"False".
Example:
>>> import torch
>>> import torch.distributed as dist
>>> import os
>>> import torch.multiprocessing as mp
>>> import torch.nn as nn
>>> # On each spawned worker
>>> def worker(rank):
>>> dist.init_process_group("nccl", rank=rank, world_size=2)
>>> torch.cuda.set_device(rank)
>>> model = nn.Linear(1, 1, bias=False).to(rank)
>>> model = torch.nn.parallel.DistributedDataParallel(
>>> model, device_ids=[rank], output_device=rank
>>> )
>>> # Rank 1 gets one more input than rank 0.
>>> inputs = [torch.tensor([1]).float() for _ in range(10 + rank)]
>>> with model.join():
>>> for _ in range(5):
>>> for inp in inputs:
>>> loss = model(inp).sum()
>>> loss.backward()
>>> # Without the join() API, the below synchronization will hang
>>> # blocking for rank 1's allreduce to complete.
>>> torch.cuda.synchronize(device=rank)
join_hook(**kwargs)
Returns the DDP join hook, which enables training on uneven
inputs by shadowing the collective communications in the forward
and backward passes.
Parameters:
**kwargs** (*dict*) -- a "dict" containing any keyword
arguments to modify the behavior of the join hook at run
time; all "Joinable" instances sharing the same join context
manager are forwarded the same value for "kwargs".
The hook supports the following keyword arguments:
divide_by_initial_world_size (bool, optional):
If "True", then gradients are divided by the initial world
size that DDP was launched with. If "False", then
gradients are divided by the effective world size (i.e.
the number of non-joined processes), meaning that the
uneven inputs contribute more toward the global gradient.
Typically, this should be set to "True" if the degree of
unevenness is small but can be set to "False" in extreme
cases for possibly better results. Default is "True".
no_sync()
A context manager to disable gradient synchronizations across
DDP processes. Within this context, gradients will be
accumulated on module variables, which will later be
synchronized in the first forward-backward pass exiting the
context.
Example:
>>> ddp = torch.nn.parallel.DistributedDataParallel(model, pg)
>>> with ddp.no_sync():
>>> for input in inputs:
>>> ddp(input).backward() # no synchronization, accumulate grads
>>> ddp(another_input).backward() # synchronize grads
Warning:
The forward pass should be included inside the context
manager, or else gradients will still be synchronized.
register_comm_hook(state, hook)
Registers a communication hook which is an enhancement that
provides a flexible hook to users where they can specify how DDP
aggregates gradients across multiple workers.
This hook would be very useful for researchers to try out new
ideas. For example, this hook can be used to implement several
algorithms like GossipGrad and gradient compression which
involve different communication strategies for parameter syncs
while running Distributed DataParallel training.
Parameters:
* **state** (*object*) -- Passed to the hook to maintain any state information during
the training process. Examples include error feedback in
gradient compression, peers to communicate with next in
GossipGrad, etc.
It is locally stored by each worker and shared by all the
gradient tensors on the worker.
* **hook** (*Callable*) --
Callable with the following signature: "hook(state: object,
bucket: dist.GradBucket) ->
torch.futures.Future[torch.Tensor]":
This function is called once the bucket is ready. The hook
can perform whatever processing is needed and return a
Future indicating completion of any async work (ex:
allreduce). If the hook doesn't perform any communication,
it still must return a completed Future. The Future should
hold the new value of grad bucket's tensors. Once a bucket
is ready, c10d reducer would call this hook and use the
tensors returned by the Future and copy grads to individual
parameters. Note that the future's return type must be a
single tensor.
We also provide an API called "get_future" to retrieve a
Future associated with the completion of
"c10d.ProcessGroup.Work". "get_future" is currently
supported for NCCL and also supported for most operations
on GLOO and MPI, except for peer to peer operations
(send/recv).
Warning:
Grad bucket's tensors will not be predivided by world_size.
User is responsible to divide by the world_size in case of
operations like allreduce.
Warning:
DDP communication hook can only be registered once and should
be registered before calling backward.
Warning:
The Future object that hook returns should contain a single
tensor that has the same shape with the tensors inside grad
bucket.
Warning:
"get_future" API supports NCCL, and partially GLOO and MPI
backends (no support for peer-to-peer operations like
send/recv) and will return a "torch.futures.Future".
Example::
Below is an example of a noop hook that returns the same
tensor.
>>> def noop(state: object, bucket: dist.GradBucket) -> torch.futures.Future[torch.Tensor]:
>>> fut = torch.futures.Future()
>>> fut.set_result(bucket.buffer())
>>> return fut
>>> ddp.register_comm_hook(state=None, hook=noop)
Example::
Below is an example of a Parallel SGD algorithm where
gradients are encoded before allreduce, and then decoded
after allreduce.
>>> def encode_and_decode(state: object, bucket: dist.GradBucket) -> torch.futures.Future[torch.Tensor]:
>>> encoded_tensor = encode(bucket.buffer()) # encode gradients
>>> fut = torch.distributed.all_reduce(encoded_tensor).get_future()
>>> # Define the then callback to decode.
>>> def decode(fut):
>>> decoded_tensor = decode(fut.value()[0]) # decode gradients
>>> return decoded_tensor
>>> return fut.then(decode)
>>> ddp.register_comm_hook(state=None, hook=encode_and_decode)
| https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html | pytorch docs |
torch.sparse_compressed_tensor
torch.sparse_compressed_tensor(compressed_indices, plain_indices, values, size=None, *, dtype=None, layout=None, device=None, requires_grad=False, check_invariants=None) -> Tensor
Constructs a sparse tensor in Compressed Sparse format - CSR, CSC,
BSR, or BSC - with specified values at the given
"compressed_indices" and "plain_indices". Sparse matrix
multiplication operations in Compressed Sparse format are typically
faster than those for sparse tensors in COO format. Make sure you have a
look at the note on the data type of the indices.
Note:
If the "device" argument is not specified the device of the given
"values" and indices tensor(s) must match. If, however, the
argument is specified the input Tensors will be converted to the
given device and in turn determine the device of the constructed
sparse tensor.
Parameters:
* compressed_indices (array_like) -- (B+1)-dimensional
array of size "(*batchsize, compressed_dim_size + 1)". The
last element of each batch is the number of non-zero elements
or blocks. This tensor encodes the index in "values" and
"plain_indices" depending on where the given compressed
dimension (row or column) starts. Each successive number in
the tensor subtracted by the number before it denotes the
number of elements or blocks in a given compressed dimension.
* **plain_indices** (*array_like*) -- Plain dimension (column or
row) co-ordinates of each element or block in values.
(B+1)-dimensional tensor with the same length as values.
* **values** (*array_like*) -- Initial values for the tensor.
Can be a list, tuple, NumPy "ndarray", scalar, and other
types that represent a (1+K)-dimensional (for CSR and CSC
layouts) or (1+2+K)-dimensional tensor (for BSR and BSC
layouts) where "K" is the number of dense dimensions.
* **size** (*list, tuple, torch.Size, optional*) -- Size of the
sparse tensor: "(batchsize, nrows * blocksize[0], ncols *
blocksize[1], densesize)" where "blocksize[0] == blocksize[1]
== 1" for CSR and CSC formats. If not provided, the size will
be inferred as the minimum size big enough to hold all non-
zero elements or blocks.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if None, infers data type from
"values".
* **layout** ("torch.layout", required) -- the desired layout of
returned tensor: "torch.sparse_csr", "torch.sparse_csc",
"torch.sparse_bsr", or "torch.sparse_bsc".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **check_invariants** (*bool**, **optional*) -- If sparse
tensor invariants are checked. Default: as returned by
"torch.sparse.check_sparse_tensor_invariants.is_enabled()",
initially False.
Example::
>>> compressed_indices = [0, 2, 4]
>>> plain_indices = [0, 1, 0, 1]
>>> values = [1, 2, 3, 4]
>>> torch.sparse_compressed_tensor(torch.tensor(compressed_indices, dtype=torch.int64),
... torch.tensor(plain_indices, dtype=torch.int64),
... torch.tensor(values), dtype=torch.double, layout=torch.sparse_csr)
tensor(crow_indices=tensor([0, 2, 4]),
col_indices=tensor([0, 1, 0, 1]),
values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4, | https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html | pytorch docs |
dtype=torch.float64, layout=torch.sparse_csr) | https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html | pytorch docs |
InstanceNorm3d
class torch.nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)
Applies Instance Normalization over a 5D input (a mini-batch of 3D
inputs with additional channel dimension) as described in the paper
Instance Normalization: The Missing Ingredient for Fast
Stylization.
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
The mean and standard-deviation are calculated per-dimension
separately for each object in a mini-batch. \gamma and \beta are
learnable parameter vectors of size C (where C is the input size)
if "affine" is "True". The standard-deviation is calculated via the
biased estimator, equivalent to torch.var(input, unbiased=False).
By default, this layer uses instance statistics computed from input
data in both training and evaluation modes.
If "track_running_stats" is set to "True", during training this | https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html | pytorch docs |
layer keeps running estimates of its computed mean and variance,
which are then used for normalization during evaluation. The
running estimates are kept with a default "momentum" of 0.1.
Note:
This "momentum" argument is different from one used in optimizer
classes and the conventional notion of momentum. Mathematically,
the update rule for running statistics here is \hat{x}_\text{new}
= (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times
x_t, where \hat{x} is the estimated statistic and x_t is the new
observed value.
Note:
"InstanceNorm3d" and "LayerNorm" are very similar, but have some
subtle differences. "InstanceNorm3d" is applied on each channel
of channeled data like 3D models with RGB color, but "LayerNorm"
is usually applied on entire sample and often in NLP tasks.
Additionally, "LayerNorm" applies elementwise affine transform,
while "InstanceNorm3d" usually don't apply affine transform.
Parameters: | https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html | pytorch docs |
Parameters:
* num_features (int) -- C from an expected input of size
(N, C, D, H, W) or (C, D, H, W)
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters,
initialized the same way as done for batch normalization.
Default: "False".
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics and always uses batch statistics in both
training and eval modes. Default: "False"
Shape:
* Input: (N, C, D, H, W) or (C, D, H, W)
* Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input)
Examples:
>>> # Without Learnable Parameters
>>> m = nn.InstanceNorm3d(100)
>>> # With Learnable Parameters
>>> m = nn.InstanceNorm3d(100, affine=True)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html | pytorch docs |
torch.nn.functional.bilinear
torch.nn.functional.bilinear(input1, input2, weight, bias=None) -> Tensor
Applies a bilinear transformation to the incoming data: y = x_1^T A
x_2 + b
Shape:
* input1: (N, *, H_{in1}) where H_{in1}=\text{in1\_features} and
* means any number of additional dimensions. All but the last
dimension of the inputs should be the same.
* input2: (N, *, H_{in2}) where H_{in2}=\text{in2\_features}
* weight: (\text{out\_features}, \text{in1\_features},
\text{in2\_features})
* bias: (\text{out\_features})
* output: (N, *, H_{out}) where H_{out}=\text{out\_features} and
all but the last dimension are the same shape as the input.
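A small shape-check sketch (the dimensions are chosen for illustration):
    >>> x1 = torch.randn(4, 5)     # (N, in1_features)
    >>> x2 = torch.randn(4, 3)     # (N, in2_features)
    >>> W = torch.randn(7, 5, 3)   # (out_features, in1_features, in2_features)
    >>> b = torch.randn(7)
    >>> torch.nn.functional.bilinear(x1, x2, W, b).shape
    torch.Size([4, 7])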
| https://pytorch.org/docs/stable/generated/torch.nn.functional.bilinear.html | pytorch docs |
torch.Tensor.bitwise_not
Tensor.bitwise_not() -> Tensor
See "torch.bitwise_not()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_not.html | pytorch docs |
torch.linalg.householder_product
torch.linalg.householder_product(A, tau, *, out=None) -> Tensor
Computes the first n columns of a product of Householder
matrices.
Let \mathbb{K} be \mathbb{R} or \mathbb{C}, and let V \in
\mathbb{K}^{m \times n} be a matrix with columns v_i \in
\mathbb{K}^m for i=1,\ldots,n with m \geq n. Denote by w_i the
vector resulting from zeroing out the first i-1 components of v_i
and setting to 1 the i-th. For a vector \tau \in \mathbb{K}^k
with k \leq n, this function computes the first n columns of the
matrix
H_1H_2 ... H_k \qquad\text{with}\qquad H_i = \mathrm{I}_m -
\tau_i w_i w_i^{\text{H}}
where \mathrm{I}_m is the m-dimensional identity matrix and
w^{\text{H}} is the conjugate transpose when w is complex, and the
transpose when w is real-valued. The output matrix is the same size
as the input matrix "A".
See Representation of Orthogonal or Unitary Matrices for further
details.
Supports inputs of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if the inputs are batches of
matrices then the output has the same batch dimensions.
See also:
"torch.geqrf()" can be used together with this function to form
the *Q* from the "qr()" decomposition.
"torch.ormqr()" is a related function that computes the matrix
multiplication of a product of Householder matrices with another
matrix. However, that function is not supported by autograd.
Warning:
Gradient computations are only well-defined if tau_i \neq
\frac{1}{||v_i||^2}. If this condition is not met, no error will
be thrown, but the gradient produced may contain *NaN*.
Parameters:
* A (Tensor) -- tensor of shape (*, m, n) where * is
zero or more batch dimensions.
* **tau** (*Tensor*) -- tensor of shape (*, k) where * is
zero or more batch dimensions.
Keyword Arguments:
out (Tensor, optional) -- output tensor. Ignored if
None. Default: None.
Raises:
RuntimeError -- if "A" doesn't satisfy the requirement m >=
n, or "tau" doesn't satisfy the requirement n >= k.
Examples:
>>> A = torch.randn(2, 2)
>>> h, tau = torch.geqrf(A)
>>> Q = torch.linalg.householder_product(h, tau)
>>> torch.dist(Q, torch.linalg.qr(A).Q)
tensor(0.)
>>> h = torch.randn(3, 2, 2, dtype=torch.complex128)
>>> tau = torch.randn(3, 1, dtype=torch.complex128)
>>> Q = torch.linalg.householder_product(h, tau)
>>> Q
tensor([[[ 1.8034+0.4184j, 0.2588-1.0174j],
[-0.6853+0.7953j, 2.0790+0.5620j]],
[[ 1.4581+1.6989j, -1.5360+0.1193j],
[ 1.3877-0.6691j, 1.3512+1.3024j]],
[[ 1.4766+0.5783j, 0.0361+0.6587j],
[ 0.6396+0.1612j, 1.3693+0.4481j]]], dtype=torch.complex128)
| https://pytorch.org/docs/stable/generated/torch.linalg.householder_product.html | pytorch docs |
torch.set_num_interop_threads
torch.set_num_interop_threads(int)
Sets the number of threads used for interop parallelism (e.g. in
JIT interpreter) on CPU.
Warning:
Can only be called once and before any inter-op parallel work is
started (e.g. JIT execution).
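A minimal sketch; the thread count is illustrative and the call must happen early, before any inter-op work:
    >>> torch.set_num_interop_threads(4)    # configure inter-op parallelism once, at startup
    >>> torch.get_num_interop_threads()
    4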
| https://pytorch.org/docs/stable/generated/torch.set_num_interop_threads.html | pytorch docs |
torch.stack
torch.stack(tensors, dim=0, *, out=None) -> Tensor
Concatenates a sequence of tensors along a new dimension.
All tensors need to be of the same size.
Parameters:
* tensors (sequence of Tensors) -- sequence of tensors to
concatenate
* **dim** (*int*) -- dimension to insert. Has to be between 0
and the number of dimensions of concatenated tensors
(inclusive)
Keyword Arguments:
out (Tensor, optional) -- the output tensor. | https://pytorch.org/docs/stable/generated/torch.stack.html | pytorch docs |
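A minimal sketch (the tensor shapes are chosen for illustration):
    >>> a = torch.zeros(2, 3)
    >>> b = torch.ones(2, 3)
    >>> torch.stack((a, b)).shape           # new leading dimension of size 2
    torch.Size([2, 2, 3])
    >>> torch.stack((a, b), dim=2).shape    # new dimension inserted at position 2
    torch.Size([2, 3, 2])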
torch.Tensor.multiply_
Tensor.multiply_(value) -> Tensor
In-place version of "multiply()". | https://pytorch.org/docs/stable/generated/torch.Tensor.multiply_.html | pytorch docs |
torch.nextafter
torch.nextafter(input, other, *, out=None) -> Tensor
Return the next floating-point value after "input" towards "other",
elementwise.
The shapes of "input" and "other" must be broadcastable.
Parameters:
* input (Tensor) -- the first input tensor
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> eps = torch.finfo(torch.float32).eps
>>> torch.nextafter(torch.tensor([1.0, 2.0]), torch.tensor([2.0, 1.0])) == torch.tensor([eps + 1, 2 - eps])
tensor([True, True])
| https://pytorch.org/docs/stable/generated/torch.nextafter.html | pytorch docs |
torch.Tensor.fmod
Tensor.fmod(divisor) -> Tensor
See "torch.fmod()" | https://pytorch.org/docs/stable/generated/torch.Tensor.fmod.html | pytorch docs |
torch.Tensor.log
Tensor.log() -> Tensor
See "torch.log()" | https://pytorch.org/docs/stable/generated/torch.Tensor.log.html | pytorch docs |
torch.Tensor.bitwise_or_
Tensor.bitwise_or_() -> Tensor
In-place version of "bitwise_or()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_or_.html | pytorch docs |
torch.Tensor.baddbmm_
Tensor.baddbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor
In-place version of "baddbmm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.baddbmm_.html | pytorch docs |
torch.fft.irfft2
torch.fft.irfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor
Computes the inverse of "rfft2()". Equivalent to "irfftn()" but
IFFTs only the last two dimensions by default.
"input" is interpreted as a one-sided Hermitian signal in the
Fourier domain, as produced by "rfft2()". By the Hermitian
property, the output will be real-valued.
Note:
Some input frequencies must be real-valued to satisfy the
Hermitian property. In these cases the imaginary component will
be ignored. For example, any imaginary component in the zero-
frequency term cannot be represented in a real output and so will
always be ignored.
Note:
The correct interpretation of the Hermitian input depends on the
length of the original data, as given by "s". This is because
each input shape could correspond to either an odd or even length
signal. By default, the signal is assumed to be even length and
| https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html | pytorch docs |
odd signals will not round-trip properly. So, it is recommended
to always pass the signal shape "s".
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimensions. With default arguments,
the size of last dimension should be (2^n + 1) as argument *s*
defaults to even output size = 2 * (last_dim_size - 1)
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the real FFT. If a length "-1" is specified, no
padding is done in that dimension. Defaults to even output in
the last dimension: "s[-1] = 2*(input.size(dim[-1]) - 1)".
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. The last dimension must be the half-Hermitian
compressed dimension. Default: last two dimensions.
* **norm** (*str**, **optional*) --
Normalization mode. For the backward transform ("irfft2()"),
these correspond to:
* ""forward"" - no normalization
* ""backward"" - normalize by "1/n"
* ""ortho"" - normalize by "1/sqrt(n)" (making the real IFFT
orthonormal)
Where "n = prod(s)" is the logical IFFT size. Calling the
forward transform ("rfft2()") with the same normalization mode
will apply an overall normalization of "1/n" between the two
transforms. This is required to make "irfft2()" the exact
inverse.
Default is ""backward"" (normalize by "1/n").
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
t = torch.rand(10, 9)
T = torch.fft.rfft2(t)
Without specifying the output length to "irfft2()", the output will
not round-trip properly because the input is odd-length in the last
dimension:
torch.fft.irfft2(T).size()
torch.Size([10, 8])
So, it is recommended to always pass the signal shape "s".
roundtrip = torch.fft.irfft2(T, t.size())
roundtrip.size()
torch.Size([10, 9])
torch.testing.assert_close(roundtrip, t, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html | pytorch docs |
torch._foreach_ceil
torch._foreach_ceil(self: List[Tensor]) -> List[Tensor]
Apply "torch.ceil()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_ceil.html | pytorch docs |
torch.var_mean
torch.var_mean(input, dim=None, *, correction=1, keepdim=False, out=None)
Calculates the variance and mean over the dimensions specified by
"dim". "dim" can be a single dimension, list of dimensions, or
"None" to reduce over all dimensions.
The variance (\sigma^2) is calculated as
\sigma^2 = \frac{1}{N - \delta N}\sum_{i=0}^{N-1}(x_i-\bar{x})^2
where x is the sample set of elements, \bar{x} is the sample mean,
N is the number of samples and \delta N is the "correction".
If "keepdim" is "True", the output tensor is of the same size as
"input" except in the dimension(s) "dim" where it is of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 (or "len(dim)") fewer dimension(s).
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int** or **tuple of ints**, **optional*) -- the
      dimension or dimensions to reduce. If "None", all dimensions
are reduced.
Keyword Arguments:
* correction (int) --
difference between the sample size and sample degrees of
freedom. Defaults to Bessel's correction, "correction=1".
Changed in version 2.0: Previously this argument was called
"unbiased" and was a boolean with "True" corresponding to
"correction=1" and "False" being "correction=0".
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
* **out** (*Tensor**, **optional*) -- the output tensor.
Returns:
A tuple (var, mean) containing the variance and mean.
-[ Example ]-
a = torch.tensor(
... [[ 0.2035, 1.2959, 1.8101, -0.4644],
... [ 1.5027, -0.3270, 0.5905, 0.6538],
... [-1.5745, 1.3330, -0.5596, -0.6548],
... [ 0.1264, -0.5080, 1.6420, 0.1992]])
torch.var_mean(a, dim=0, keepdim=True)
(tensor([[1.5926, 1.0056, 1.2005, 0.3646]]),
tensor([[ 0.0645, 0.4485, 0.8707, -0.0665]]))
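Passing "correction=0" instead divides by N, giving the population
variance (a minimal sketch reusing "a" from above):
var0, mean0 = torch.var_mean(a, dim=0, correction=0)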
| https://pytorch.org/docs/stable/generated/torch.var_mean.html | pytorch docs |
torch.Tensor.resize_
Tensor.resize_(*sizes, memory_format=torch.contiguous_format) -> Tensor
Resizes "self" tensor to the specified size. If the number of
elements is larger than the current storage size, then the
underlying storage is resized to fit the new number of elements. If
the number of elements is smaller, the underlying storage is not
changed. Existing elements are preserved but any new memory is
uninitialized.
Warning:
This is a low-level method. The storage is reinterpreted as
C-contiguous, ignoring the current strides (unless the target
size equals the current size, in which case the tensor is left
unchanged). For most purposes, you will instead want to use
"view()", which checks for contiguity, or "reshape()", which
copies data if needed. To change the size in-place with custom
strides, see "set_()".
Parameters:
    * sizes (torch.Size or int...) -- the desired size
    * **memory_format** ("torch.memory_format", optional) -- the
desired memory format of Tensor. Default:
"torch.contiguous_format". Note that memory format of "self"
is going to be unaffected if "self.size()" matches "sizes".
Example:
>>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])
>>> x.resize_(2, 2)
tensor([[ 1, 2],
[ 3, 4]])
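Growing a tensor preserves the existing elements, but any newly
allocated memory is uninitialized, so the extra values are arbitrary (a
minimal sketch; output omitted because the uninitialized values vary):
    >>> y = torch.tensor([1, 2, 3])
    >>> y.resize_(2, 3)  # first row keeps 1, 2, 3; the second row is uninitialized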
| https://pytorch.org/docs/stable/generated/torch.Tensor.resize_.html | pytorch docs |
torch.nn.functional.lp_pool2d
torch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False)
Applies a 2D power-average pooling over an input signal composed of
several input planes. If the sum of all inputs to the power of p
is zero, the gradient is set to zero as well.
See "LPPool2d" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.lp_pool2d.html | pytorch docs |
torch.sparse_bsr_tensor
torch.sparse_bsr_tensor(crow_indices, col_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor
Constructs a sparse tensor in BSR (Block Compressed Sparse Row)
format, with specified 2-dimensional blocks at the given
"crow_indices" and "col_indices". Sparse matrix multiplication
operations in BSR format are typically faster than those for sparse
tensors in COO format. Make sure you have a look at the note on the
data type of the indices.
Note:
If the "device" argument is not specified the device of the given
"values" and indices tensor(s) must match. If, however, the
argument is specified the input Tensors will be converted to the
given device and in turn determine the device of the constructed
sparse tensor.
Parameters:
* crow_indices (array_like) -- (B+1)-dimensional array of
size "(*batchsize, nrowblocks + 1)". The last element of each | https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html | pytorch docs |
batch is the number of non-zeros. This tensor encodes the
block index in values and col_indices depending on where the
given row block starts. Each successive number in the tensor
subtracted by the number before it denotes the number of
blocks in a given row.
* **col_indices** (*array_like*) -- Column block co-ordinates of
each block in values. (B+1)-dimensional tensor with the same
length as values.
    * **values** (*array_like*) -- Initial values for the tensor.
Can be a list, tuple, NumPy "ndarray", scalar, and other types
that represents a (1 + 2 + K)-dimensional tensor where "K" is
the number of dense dimensions.
* **size** (list, tuple, "torch.Size", optional) -- Size of the
sparse tensor: "(*batchsize, nrows * blocksize[0], ncols *
blocksize[1], *densesize)" where "blocksize ==
      values.shape[1:3]". If not provided, the size will be inferred
as the minimum size big enough to hold all non-zero blocks.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if None, infers data type from
"values".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **check_invariants** (*bool**, **optional*) -- If sparse
tensor invariants are checked. Default: as returned by
"torch.sparse.check_sparse_tensor_invariants.is_enabled()",
initially False.
Example::
>>> crow_indices = [0, 1, 2]
>>> col_indices = [0, 1]
>>> values = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
>>> torch.sparse_bsr_tensor(torch.tensor(crow_indices, dtype=torch.int64),
... torch.tensor(col_indices, dtype=torch.int64),
... torch.tensor(values), dtype=torch.double)
tensor(crow_indices=tensor([0, 1, 2]),
col_indices=tensor([0, 1]),
values=tensor([[[1., 2.],
[3., 4.]],
[[5., 6.],
[7., 8.]]]), size=(2, 2), nnz=2, dtype=torch.float64,
layout=torch.sparse_bsr) | https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html | pytorch docs |
LogSoftmax
class torch.nn.LogSoftmax(dim=None)
Applies the \log(\text{Softmax}(x)) function to an n-dimensional
input Tensor. The LogSoftmax formulation can be simplified as:
\text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j
\exp(x_j)} \right)
Shape:
    * Input: (*) where * means any number of additional
dimensions
* Output: (*), same shape as the input
Parameters:
dim (int) -- A dimension along which LogSoftmax will be
computed.
Returns:
a Tensor of the same dimension and shape as the input with
values in the range [-inf, 0)
Return type:
None
Examples:
>>> m = nn.LogSoftmax(dim=1)
>>> input = torch.randn(2, 3)
>>> output = m(input)
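    >>> # Hedged sanity check: LogSoftmax should agree with log(softmax(x))
    >>> torch.testing.assert_close(output, torch.log(torch.softmax(input, dim=1)))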
| https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html | pytorch docs |
torch.func.jvp
torch.func.jvp(func, primals, tangents, *, strict=False, has_aux=False)
Standing for the Jacobian-vector product, returns a tuple
containing the output of "func(primals)" and the Jacobian of "func"
evaluated at "primals" times "tangents". This is also known as
forward-mode autodiff.
Parameters:
* func (function) -- A Python function that takes one or
more arguments, one of which must be a Tensor, and returns one
or more Tensors
* **primals** (*Tensors*) -- Positional arguments to "func" that
must all be Tensors. The returned function will also be
computing the derivative with respect to these arguments
* **tangents** (*Tensors*) -- The "vector" for which Jacobian-
vector-product is computed. Must be the same structure and
sizes as the inputs to "func".
    * **has_aux** (*bool*) -- Flag indicating that "func" returns a
"(output, aux)" tuple where the first element is the output of
the function to be differentiated and the second element is
other auxiliary objects that will not be differentiated.
Default: False.
Returns:
Returns a "(output, jvp_out)" tuple containing the output of
"func" evaluated at "primals" and the Jacobian-vector product.
If "has_aux is True", then instead returns a "(output, jvp_out,
aux)" tuple.
Note:
You may see this API error out with "forward-mode AD not
implemented for operator X". If so, please file a bug report and
we will prioritize it.
jvp is useful when you wish to compute gradients of a function R^1
-> R^N
from torch.func import jvp
x = torch.randn([])
f = lambda x: x * torch.tensor([1., 2., 3])
value, grad = jvp(f, (x,), (torch.tensor(1.),))
assert torch.allclose(value, f(x))
assert torch.allclose(grad, torch.tensor([1., 2, 3]))
"jvp()" can support functions with multiple inputs by passing in
the tangents for each of the inputs
from torch.func import jvp
x = torch.randn(5)
y = torch.randn(5)
f = lambda x, y: (x * y)
_, output = jvp(f, (x, y), (torch.ones(5), torch.ones(5)))
assert torch.allclose(output, x + y)
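With "has_aux=True", "func" may return an "(output, aux)" pair whose
second element is passed through without being differentiated (a minimal
sketch; the auxiliary value here is purely illustrative)
from torch.func import jvp
x = torch.randn(5)
f = lambda x: (x.sin(), x)  # second element is auxiliary and not differentiated
value, jvp_out, aux = jvp(f, (x,), (torch.ones(5),), has_aux=True)
assert torch.allclose(jvp_out, x.cos())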
| https://pytorch.org/docs/stable/generated/torch.func.jvp.html | pytorch docs |
torch.angle
torch.angle(input, *, out=None) -> Tensor
Computes the element-wise angle (in radians) of the given "input"
tensor.
\text{out}_{i} = angle(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Note:
Starting in PyTorch 1.8, angle returns pi for negative real
numbers, zero for non-negative real numbers, and propagates NaNs.
Previously the function would return zero for all real numbers
and not propagate floating-point NaNs.
Example:
>>> torch.angle(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))*180/3.14159
    tensor([ 135., 135., -45.])
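The note above can also be seen on real inputs (a brief sketch):
    >>> torch.angle(torch.tensor([-1.0, 0.0, 1.0]))
    tensor([3.1416, 0.0000, 0.0000])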
| https://pytorch.org/docs/stable/generated/torch.angle.html | pytorch docs |
torch.flipud
torch.flipud(input) -> Tensor
Flip tensor in the up/down direction, returning a new tensor.
Flip the entries in each column in the up/down direction. Rows are
preserved, but appear in a different order than before.
Note:
Requires the tensor to be at least 1-D.
Note:
*torch.flipud* makes a copy of "input"'s data. This is different
from NumPy's *np.flipud*, which returns a view in constant time.
Since copying a tensor's data is more work than viewing that
data, *torch.flipud* is expected to be slower than *np.flipud*.
Parameters:
input (Tensor) -- Must be at least 1-dimensional.
Example:
>>> x = torch.arange(4).view(2, 2)
>>> x
tensor([[0, 1],
[2, 3]])
>>> torch.flipud(x)
tensor([[2, 3],
[0, 1]])
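A 1-D tensor is also accepted and is simply reversed along dimension 0
(a brief sketch):
    >>> torch.flipud(torch.arange(3))
    tensor([2, 1, 0])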
| https://pytorch.org/docs/stable/generated/torch.flipud.html | pytorch docs |
torch._foreach_abs_
torch._foreach_abs_(self: List[Tensor]) -> None
Apply "torch.abs()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_abs_.html | pytorch docs |
torch.cartesian_prod
torch.cartesian_prod(*tensors)
Computes the Cartesian product of the given sequence of tensors. The
behavior is similar to Python's itertools.product.
Parameters:
    *tensors (Tensor) -- any number of 1-dimensional tensors.
Returns:
    A tensor equivalent to converting all the input tensors into
    lists, performing itertools.product on these lists, and finally
    converting the resulting list into a tensor.
Return type:
Tensor
Example:
>>> import itertools
>>> a = [1, 2, 3]
>>> b = [4, 5]
>>> list(itertools.product(a, b))
[(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
>>> tensor_a = torch.tensor(a)
>>> tensor_b = torch.tensor(b)
>>> torch.cartesian_prod(tensor_a, tensor_b)
tensor([[1, 4],
[1, 5],
[2, 4],
[2, 5],
[3, 4],
[3, 5]])
| https://pytorch.org/docs/stable/generated/torch.cartesian_prod.html | pytorch docs |
BNReLU2d
class torch.ao.nn.intrinsic.BNReLU2d(batch_norm, relu)
This is a sequential container which calls the BatchNorm2d and
ReLU modules. During quantization this will be replaced with the
corresponding fused module. | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.BNReLU2d.html | pytorch docs |
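A minimal usage sketch for "BNReLU2d" (module and input sizes are
illustrative):
    >>> import torch
    >>> from torch import nn
    >>> import torch.ao.nn.intrinsic as nni
    >>> m = nni.BNReLU2d(nn.BatchNorm2d(16), nn.ReLU())
    >>> output = m(torch.randn(4, 16, 8, 8))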