torch.gather
torch.gather(input, dim, index, *, sparse_grad=False, out=None) -> Tensor
Gathers values along an axis specified by dim.
For a 3-D tensor the output is specified by:
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0
out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1
out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
"input" and "index" must have the same number of dimensions. It is
also required that "index.size(d) <= input.size(d)" for all
dimensions "d != dim". "out" will have the same shape as "index".
Note that "input" and "index" do not broadcast against each other.
Parameters:
* input (Tensor) -- the source tensor
* **dim** (*int*) -- the axis along which to index
* **index** (*LongTensor*) -- the indices of elements to gather
Keyword Arguments:
* sparse_grad (bool, optional) -- If "True", gradient
w.r.t. "input" will be a sparse tensor. | https://pytorch.org/docs/stable/generated/torch.gather.html | pytorch docs |
* **out** (*Tensor**, **optional*) -- the destination tensor
Example:
>>> t = torch.tensor([[1, 2], [3, 4]])
>>> torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]]))
tensor([[ 1, 1],
[ 4, 3]])
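    An additional illustrative sketch (not from the source page) showing the same tensor gathered along "dim=0", following the dim == 0 formula above:
    >>> # out[i][j] = t[index[i][j]][j] when dim == 0
    >>> torch.gather(t, 0, torch.tensor([[0, 1], [1, 0]]))
    tensor([[1, 4],
            [3, 2]])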
| https://pytorch.org/docs/stable/generated/torch.gather.html | pytorch docs |
torch.quantize_per_channel
torch.quantize_per_channel(input, scales, zero_points, axis, dtype) -> Tensor
Converts a float tensor to a per-channel quantized tensor with
given scales and zero points.
Parameters:
* input (Tensor) -- float tensor to quantize
* **scales** (*Tensor*) -- float 1D tensor of scales to use,
size should match "input.size(axis)"
* **zero_points** (*Tensor*) -- integer 1D tensor of offset to use,
size should match "input.size(axis)"
* **axis** (*int*) -- dimension on which apply per-channel
quantization
* **dtype** ("torch.dtype") -- the desired data type of returned
tensor. Has to be one of the quantized dtypes: "torch.quint8",
"torch.qint8", "torch.qint32"
Returns:
A newly quantized tensor
Return type:
Tensor
Example:
>>> x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])
| https://pytorch.org/docs/stable/generated/torch.quantize_per_channel.html | pytorch docs |
>>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8)
tensor([[-1., 0.],
[ 1., 2.]], size=(2, 2), dtype=torch.quint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([0.1000, 0.0100], dtype=torch.float64),
zero_point=tensor([10, 0]), axis=0)
>>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8).int_repr()
tensor([[ 0, 10],
[100, 200]], dtype=torch.uint8)
| https://pytorch.org/docs/stable/generated/torch.quantize_per_channel.html | pytorch docs |
torch.signal.windows.blackman
torch.signal.windows.blackman(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)
Computes the Blackman window.
The Blackman window is defined as follows:
w_n = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{M - 1} \right) +
0.08 \cos \left( \frac{4 \pi n}{M - 1} \right)
The window is normalized to 1 (maximum value is 1). However, the 1
doesn't appear if "M" is even and "sym" is True.
Parameters:
M (int) -- the length of the window. In other words, the
number of points of the returned window.
Keyword Arguments:
* sym (bool, optional) -- If False, returns a
periodic window suitable for use in spectral analysis. If
True, returns a symmetric window suitable for use in filter
design. Default: True.
* **dtype** ("torch.dtype", optional) -- the desired data type
| https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html | pytorch docs |
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
Return type:
Tensor
Examples:
>>> # Generates a symmetric Blackman window.
>>> torch.signal.windows.blackman(5)
tensor([-1.4901e-08, 3.4000e-01, 1.0000e+00, 3.4000e-01, -1.4901e-08])
>>> # Generates a periodic Blackman window.
| https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html | pytorch docs |
>>> torch.signal.windows.blackman(5, sym=False)
tensor([-1.4901e-08, 2.0077e-01, 8.4923e-01, 8.4923e-01, 2.0077e-01])
| https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html | pytorch docs |
torch.nn.functional.softsign
torch.nn.functional.softsign(input) -> Tensor
Applies element-wise, the function \text{SoftSign}(x) = \frac{x}{1
+ |x|}
See "Softsign" for more details. | https://pytorch.org/docs/stable/generated/torch.nn.functional.softsign.html | pytorch docs |
Event
class torch.cuda.Event(enable_timing=False, blocking=False, interprocess=False)
Wrapper around a CUDA event.
CUDA events are synchronization markers that can be used to monitor
the device's progress, to accurately measure timing, and to
synchronize CUDA streams.
The underlying CUDA events are lazily initialized when the event is
first recorded or exported to another process. After creation, only
streams on the same device may record the event. However, streams
on any device can wait on the event.
Parameters:
* enable_timing (bool, optional) -- indicates if the
event should measure time (default: "False")
* **blocking** (*bool**, **optional*) -- if "True", "wait()"
will be blocking (default: "False")
* **interprocess** (*bool*) -- if "True", the event can be
shared between processes (default: "False")
elapsed_time(end_event)
Returns the time elapsed in milliseconds after the event was
| https://pytorch.org/docs/stable/generated/torch.cuda.Event.html | pytorch docs |
recorded and before the end_event was recorded.
classmethod from_ipc_handle(device, handle)
Reconstruct an event from an IPC handle on the given device.
ipc_handle()
Returns an IPC handle of this event. If not recorded yet, the
event will use the current device.
query()
Checks if all work currently captured by event has completed.
Returns:
A boolean indicating if all work currently captured by event
has completed.
record(stream=None)
Records the event in a given stream.
Uses "torch.cuda.current_stream()" if no stream is specified.
The stream's device must match the event's device.
synchronize()
Waits for the event to complete.
Waits until the completion of all work currently captured in
this event. This prevents the CPU thread from proceeding until
the event completes.
Note:
This is a wrapper around "cudaEventSynchronize()": see CUDA
| https://pytorch.org/docs/stable/generated/torch.cuda.Event.html | pytorch docs |
Event documentation for more info.
wait(stream=None)
Makes all future work submitted to the given stream wait for
this event.
Use "torch.cuda.current_stream()" if no stream is specified.
Note:
This is a wrapper around "cudaStreamWaitEvent()": see CUDA
Event documentation for more info.
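A minimal timing sketch (illustrative only; assumes a CUDA device is available):
    >>> start = torch.cuda.Event(enable_timing=True)
    >>> end = torch.cuda.Event(enable_timing=True)
    >>> a = torch.randn(1024, 1024, device="cuda")
    >>> start.record()
    >>> b = a @ a
    >>> end.record()
    >>> torch.cuda.synchronize()  # wait until the recorded events complete
    >>> elapsed_ms = start.elapsed_time(end)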
| https://pytorch.org/docs/stable/generated/torch.cuda.Event.html | pytorch docs |
torch.argsort
torch.argsort(input, dim=-1, descending=False, stable=False) -> Tensor
Returns the indices that sort a tensor along a given dimension in
ascending order by value.
This is the second value returned by "torch.sort()". See its
documentation for the exact semantics of this method.
If "stable" is "True" then the sorting routine becomes stable,
preserving the order of equivalent elements. If "False", the
relative order of values which compare equal is not guaranteed.
"True" is slower.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int**, **optional*) -- the dimension to sort along
* **descending** (*bool**, **optional*) -- controls the sorting
order (ascending or descending)
* **stable** (*bool**, **optional*) -- controls the relative
order of equivalent elements
Example:
>>> a = torch.randn(4, 4)
>>> a
tensor([[ 0.0785, 1.5267, -0.8521, 0.4065],
| https://pytorch.org/docs/stable/generated/torch.argsort.html | pytorch docs |
[ 0.1598, 0.0788, -0.0745, -1.2700],
[ 1.2208, 1.0722, -0.7064, 1.2564],
[ 0.0669, -0.2318, -0.8229, -0.9280]])
>>> torch.argsort(a, dim=1)
tensor([[2, 0, 3, 1],
[3, 2, 1, 0],
[2, 1, 0, 3],
[3, 2, 1, 0]])
| https://pytorch.org/docs/stable/generated/torch.argsort.html | pytorch docs |
torch.is_grad_enabled
torch.is_grad_enabled()
Returns True if grad mode is currently enabled. | https://pytorch.org/docs/stable/generated/torch.is_grad_enabled.html | pytorch docs |
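A small illustrative sketch (minimal example):
    >>> torch.is_grad_enabled()
    True
    >>> with torch.no_grad():
    ...     torch.is_grad_enabled()
    ...
    False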
CosineEmbeddingLoss
class torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
Creates a criterion that measures the loss given input tensors x_1,
x_2 and a Tensor label y with values 1 or -1. This is used for
measuring whether two inputs are similar or dissimilar, using the
cosine similarity, and is typically used for learning nonlinear
embeddings or semi-supervised learning.
The loss function for each sample is:
\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if
} y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if }
y = -1 \end{cases}
Parameters:
* margin (float, optional) -- Should be a number from
-1 to 1, 0 to 0.5 is suggested. If "margin" is missing, the
default value is 0.
* **size_average** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged over each
| https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html | pytorch docs |
loss element in the batch. Note that for some losses, there
are multiple elements per sample. If the field "size_average"
is set to "False", the losses are instead summed for each
minibatch. Ignored when "reduce" is "False". Default: "True"
* **reduce** (*bool**, **optional*) -- Deprecated (see
"reduction"). By default, the losses are averaged or summed
over observations for each minibatch depending on
"size_average". When "reduce" is "False", returns a loss per
batch element instead and ignores "size_average". Default:
"True"
* **reduction** (*str**, **optional*) -- Specifies the reduction
to apply to the output: "'none'" | "'mean'" | "'sum'".
"'none'": no reduction will be applied, "'mean'": the sum of
the output will be divided by the number of elements in the
output, "'sum'": the output will be summed. Note:
"size_average" and "reduce" are in the process of being
| https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html | pytorch docs |
deprecated, and in the meantime, specifying either of those
two args will override "reduction". Default: "'mean'"
Shape:
* Input1: (N, D) or (D), where N is the batch size and D is
the embedding dimension.
* Input2: (N, D) or (D), same shape as Input1.
* Target: (N) or ().
* Output: If "reduction" is "'none'", then (N), otherwise
scalar.
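An illustrative usage sketch (minimal example using the parameters described above):
    >>> loss = nn.CosineEmbeddingLoss(margin=0.5)
    >>> input1 = torch.randn(3, 5, requires_grad=True)
    >>> input2 = torch.randn(3, 5, requires_grad=True)
    >>> target = torch.tensor([1, -1, 1])
    >>> output = loss(input1, input2, target)
    >>> output.backward()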
| https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html | pytorch docs |
torch.arccosh
torch.arccosh(input, *, out=None) -> Tensor
Alias for "torch.acosh()". | https://pytorch.org/docs/stable/generated/torch.arccosh.html | pytorch docs |
ConvReLU2d
class torch.ao.nn.intrinsic.qat.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)
A ConvReLU2d module is a fused module of Conv2d and ReLU, attached
with FakeQuantize modules for weight for quantization aware
training.
We combined the interface of "Conv2d" and "ReLU".
Variables:
weight_fake_quant -- fake quant module for weight | https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvReLU2d.html | pytorch docs |
torch.vstack
torch.vstack(tensors, *, out=None) -> Tensor
Stack tensors in sequence vertically (row wise).
This is equivalent to concatenation along the first axis after all
1-D tensors have been reshaped by "torch.atleast_2d()".
Parameters:
tensors (sequence of Tensors) -- sequence of tensors to
concatenate
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor([1, 2, 3])
>>> b = torch.tensor([4, 5, 6])
>>> torch.vstack((a,b))
tensor([[1, 2, 3],
[4, 5, 6]])
>>> a = torch.tensor([[1],[2],[3]])
>>> b = torch.tensor([[4],[5],[6]])
>>> torch.vstack((a,b))
tensor([[1],
[2],
[3],
[4],
[5],
[6]])
| https://pytorch.org/docs/stable/generated/torch.vstack.html | pytorch docs |
torch.flip
torch.flip(input, dims) -> Tensor
Reverse the order of an n-D tensor along given axis in dims.
Note:
*torch.flip* makes a copy of "input"'s data. This is different
from NumPy's *np.flip*, which returns a view in constant time.
Since copying a tensor's data is more work than viewing that
data, *torch.flip* is expected to be slower than *np.flip*.
Parameters:
* input (Tensor) -- the input tensor.
* **dims** (*a list** or **tuple*) -- axis to flip on
Example:
>>> x = torch.arange(8).view(2, 2, 2)
>>> x
tensor([[[ 0, 1],
[ 2, 3]],
[[ 4, 5],
[ 6, 7]]])
>>> torch.flip(x, [0, 1])
tensor([[[ 6, 7],
[ 4, 5]],
[[ 2, 3],
[ 0, 1]]])
| https://pytorch.org/docs/stable/generated/torch.flip.html | pytorch docs |
torch.Tensor.frexp
Tensor.frexp() -> (Tensor mantissa, Tensor exponent)
See "torch.frexp()" | https://pytorch.org/docs/stable/generated/torch.Tensor.frexp.html | pytorch docs |
torch.median
torch.median(input) -> Tensor
Returns the median of the values in "input".
Note:
The median is not unique for "input" tensors with an even number
of elements. In this case the lower of the two medians is
returned. To compute the mean of both medians, use
"torch.quantile()" with "q=0.5" instead.
Warning:
This function produces deterministic (sub)gradients unlike
"median(dim=0)"
Parameters:
input (Tensor) -- the input tensor.
Example:
>>> a = torch.randn(1, 3)
>>> a
tensor([[ 1.5219, -1.5212, 0.2202]])
>>> torch.median(a)
tensor(0.2202)
torch.median(input, dim=-1, keepdim=False, *, out=None)
Returns a namedtuple "(values, indices)" where "values" contains
the median of each row of "input" in the dimension "dim", and
"indices" contains the index of the median values found in the
dimension "dim".
By default, "dim" is the last dimension of the "input" tensor. | https://pytorch.org/docs/stable/generated/torch.median.html | pytorch docs |
If "keepdim" is "True", the output tensors are of the same size as
"input" except in the dimension "dim" where they are of size 1.
Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the outputs tensor having 1 fewer dimension than "input".
Note:
The median is not unique for "input" tensors with an even number
of elements in the dimension "dim". In this case the lower of the
two medians is returned. To compute the mean of both medians in
"input", use "torch.quantile()" with "q=0.5" instead.
Warning:
"indices" does not necessarily contain the first occurrence of
each median value found, unless it is unique. The exact
implementation details are device-specific. Do not expect the
same result when run on CPU and GPU in general. For the same
reason do not expect the gradients to be deterministic.
Parameters:
* input (Tensor) -- the input tensor.
* **dim** (*int*) -- the dimension to reduce.
| https://pytorch.org/docs/stable/generated/torch.median.html | pytorch docs |
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
out ((Tensor, Tensor), optional) -- The first
tensor will be populated with the median values and the second
tensor, which must have dtype long, with their indices in the
dimension "dim" of "input".
Example:
>>> a = torch.randn(4, 5)
>>> a
tensor([[ 0.2505, -0.3982, -0.9948, 0.3518, -1.3131],
[ 0.3180, -0.6993, 1.0436, 0.0438, 0.2270],
[-0.2751, 0.7303, 0.2192, 0.3321, 0.2488],
[ 1.0778, -1.9510, 0.7048, 0.4742, -0.7125]])
>>> torch.median(a, 1)
torch.return_types.median(values=tensor([-0.3982, 0.2270, 0.2488, 0.4742]), indices=tensor([1, 4, 4, 3]))
| https://pytorch.org/docs/stable/generated/torch.median.html | pytorch docs |
ConstantPad1d
class torch.nn.ConstantPad1d(padding, value)
Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use "torch.nn.functional.pad()".
Parameters:
padding (int, tuple) -- the size of the padding. If it is an
int, uses the same padding in both boundaries. If a 2-tuple,
uses (\text{padding_left}, \text{padding_right})
Shape:
* Input: (C, W_{in}) or (N, C, W_{in}).
* Output: (C, W_{out}) or (N, C, W_{out}), where
W_{out} = W_{in} + \text{padding\_left} +
\text{padding\_right}
Examples:
>>> m = nn.ConstantPad1d(2, 3.5)
>>> input = torch.randn(1, 2, 4)
>>> input
tensor([[[-1.0491, -0.7152, -0.0749, 0.8530],
[-1.3287, 1.8966, 0.1466, -0.2771]]])
>>> m(input)
tensor([[[ 3.5000, 3.5000, -1.0491, -0.7152, -0.0749, 0.8530, 3.5000,
3.5000],
| https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad1d.html | pytorch docs |
[ 3.5000, 3.5000, -1.3287, 1.8966, 0.1466, -0.2771, 3.5000,
3.5000]]])
>>> m = nn.ConstantPad1d(2, 3.5)
>>> input = torch.randn(1, 2, 3)
>>> input
tensor([[[ 1.6616, 1.4523, -1.1255],
[-3.6372, 0.1182, -1.8652]]])
>>> m(input)
tensor([[[ 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000, 3.5000],
[ 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000, 3.5000]]])
>>> # using different paddings for different sides
>>> m = nn.ConstantPad1d((3, 1), 3.5)
>>> m(input)
tensor([[[ 3.5000, 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000],
[ 3.5000, 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000]]]) | https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad1d.html | pytorch docs |
ZeroPad2d
class torch.nn.ZeroPad2d(padding)
Pads the input tensor boundaries with zero.
For N-dimensional padding, use "torch.nn.functional.pad()".
Parameters:
padding (int, tuple) -- the size of the padding. If it is an
int, uses the same padding in all boundaries. If a 4-tuple,
uses (\text{padding_left}, \text{padding_right},
\text{padding_top}, \text{padding_bottom})
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where
H_{out} = H_{in} + \text{padding\_top} +
\text{padding\_bottom}
W_{out} = W_{in} + \text{padding\_left} +
\text{padding\_right}
Examples:
>>> m = nn.ZeroPad2d(2)
>>> input = torch.randn(1, 1, 3, 3)
>>> input
tensor([[[[-0.1678, -0.4418, 1.9466],
[ 0.9604, -0.4219, -0.5241],
[-0.9162, -0.5436, -0.6446]]]])
>>> m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.ZeroPad2d.html | pytorch docs |
tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.1678, -0.4418, 1.9466, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.9604, -0.4219, -0.5241, 0.0000, 0.0000],
[ 0.0000, 0.0000, -0.9162, -0.5436, -0.6446, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])
>>> # using different paddings for different sides
>>> m = nn.ZeroPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.0000, -0.1678, -0.4418, 1.9466, 0.0000],
[ 0.0000, 0.9604, -0.4219, -0.5241, 0.0000],
[ 0.0000, -0.9162, -0.5436, -0.6446, 0.0000]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.ZeroPad2d.html | pytorch docs |
torch.Tensor.copysign
Tensor.copysign(other) -> Tensor
See "torch.copysign()" | https://pytorch.org/docs/stable/generated/torch.Tensor.copysign.html | pytorch docs |
torch.true_divide
torch.true_divide(dividend, divisor, *, out) -> Tensor
Alias for "torch.div()" with "rounding_mode=None". | https://pytorch.org/docs/stable/generated/torch.true_divide.html | pytorch docs |
torch.Tensor.scatter_reduce
Tensor.scatter_reduce(dim, index, src, reduce, *, include_self=True) -> Tensor
Out-of-place version of "torch.Tensor.scatter_reduce_()" | https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce.html | pytorch docs |
torch.linalg.ldl_solve
torch.linalg.ldl_solve(LD, pivots, B, *, hermitian=False, out=None) -> Tensor
Computes the solution of a system of linear equations using the LDL
factorization.
"LD" and "pivots" are the compact representation of the LDL
factorization and are expected to be computed by
"torch.linalg.ldl_factor_ex()". "hermitian" argument to this
function should be the same as the corresponding arguments in
"torch.linalg.ldl_factor_ex()".
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
Warning:
This function is "experimental" and it may change in a future
PyTorch release.
Parameters:
* LD (Tensor) -- the n \times n matrix or the batch of such
matrices of size (*, n, n) where * is one or more batch
dimensions. | https://pytorch.org/docs/stable/generated/torch.linalg.ldl_solve.html | pytorch docs |
* **pivots** (*Tensor*) -- the pivots corresponding to the LDL
factorization of "LD".
* **B** (*Tensor*) -- right-hand side tensor of shape (*, n, k).
Keyword Arguments:
* hermitian (bool, optional) -- whether to consider
the decomposed matrix to be Hermitian or symmetric. For real-
valued matrices, this switch has no effect. Default: False.
* **out** (*Tensor**, **optional*) -- output tensor. *B* may be
passed as *out* and the result is computed in-place on *B*.
Ignored if *None*. Default: *None*.
Examples:
>>> A = torch.randn(2, 3, 3)
>>> A = A @ A.mT # make symmetric
>>> LD, pivots, info = torch.linalg.ldl_factor_ex(A)
>>> B = torch.randn(2, 3, 4)
>>> X = torch.linalg.ldl_solve(LD, pivots, B)
>>> torch.linalg.norm(A @ X - B)
tensor(0.0001)
| https://pytorch.org/docs/stable/generated/torch.linalg.ldl_solve.html | pytorch docs |
torch.tan
torch.tan(input, *, out=None) -> Tensor
Returns a new tensor with the tangent of the elements of "input".
\text{out}_{i} = \tan(\text{input}_{i})
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.randn(4)
>>> a
tensor([-1.2027, -1.7687, 0.4412, -1.3856])
>>> torch.tan(a)
tensor([-2.5930, 4.9859, 0.4722, -5.3366])
| https://pytorch.org/docs/stable/generated/torch.tan.html | pytorch docs |
torch.Tensor.greater_equal_
Tensor.greater_equal_(other) -> Tensor
In-place version of "greater_equal()". | https://pytorch.org/docs/stable/generated/torch.Tensor.greater_equal_.html | pytorch docs |
default_fused_per_channel_wt_fake_quant
torch.quantization.fake_quantize.default_fused_per_channel_wt_fake_quant
alias of functools.partial(, observer=,
quant_min=-128, quant_max=127, dtype=torch.qint8,
qscheme=torch.per_channel_symmetric){} | https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_per_channel_wt_fake_quant.html | pytorch docs |
torch.optim.Optimizer.state_dict
Optimizer.state_dict()
Returns the state of the optimizer as a "dict".
It contains two entries:
state - a dict holding current optimization state. Its content
differs between optimizer classes.
param_groups - a list containing all parameter groups where each
parameter group is a dict
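An illustrative sketch (minimal example; "model" is a hypothetical "torch.nn.Module"):
    >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    >>> sd = optimizer.state_dict()
    >>> sorted(sd.keys())
    ['param_groups', 'state']
    >>> # the same dict can later be restored with load_state_dict()
    >>> optimizer.load_state_dict(sd)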
| https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.state_dict.html | pytorch docs |
leaky_relu
class torch.ao.nn.quantized.functional.leaky_relu(input, negative_slope=0.01, inplace=False, scale=None, zero_point=None)
Quantized version of leaky_relu(input, negative_slope=0.01,
inplace=False, scale, zero_point) -> Tensor
Applies element-wise, \text{LeakyReLU}(x) = \max(0, x) +
\text{negative_slope} * \min(0, x)
Parameters:
* input (Tensor) -- Quantized input
* **negative_slope** (*float*) -- The slope of the negative
input
* **inplace** (*bool*) -- Inplace modification of the input
tensor
* **scale** (*Optional**[**float**]*) -- Scale of the output
tensor.
* **zero_point** (*Optional**[**int**]*) -- Zero point of the
output tensor.
See "LeakyReLU" for more details. | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.leaky_relu.html | pytorch docs |
torch.func.jacfwd
torch.func.jacfwd(func, argnums=0, has_aux=False, *, randomness='error')
Computes the Jacobian of "func" with respect to the arg(s) at index
"argnum" using forward-mode autodiff
Parameters:
* func (function) -- A Python function that takes one or
more arguments, one of which must be a Tensor, and returns one
or more Tensors
* **argnums** (*int** or **Tuple**[**int**]*) -- Optional,
integer or tuple of integers, saying which arguments to get
the Jacobian with respect to. Default: 0.
* **has_aux** (*bool*) -- Flag indicating that "func" returns a
"(output, aux)" tuple where the first element is the output of
the function to be differentiated and the second element is
auxiliary objects that will not be differentiated. Default:
False.
* **randomness** (*str*) -- Flag indicating what type of
randomness to use. See "vmap()" for more detail. Allowed:
| https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html | pytorch docs |
"different", "same", "error". Default: "error"
Returns:
Returns a function that takes in the same inputs as "func" and
returns the Jacobian of "func" with respect to the arg(s) at
"argnums". If "has_aux is True", then the returned function
instead returns a "(jacobian, aux)" tuple where "jacobian" is
the Jacobian and "aux" is auxiliary objects returned by "func".
Note:
You may see this API error out with "forward-mode AD not
implemented for operator X". If so, please file a bug report and
we will prioritize it. An alternative is to use "jacrev()", which
has better operator coverage.
A basic usage with a pointwise, unary operation will give a
diagonal array as the Jacobian
from torch.func import jacfwd
x = torch.randn(5)
jacobian = jacfwd(torch.sin)(x)
expected = torch.diag(torch.cos(x))
assert torch.allclose(jacobian, expected)
"jacfwd()" can be composed with vmap to produce batched Jacobians: | https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html | pytorch docs |
from torch.func import jacfwd, vmap
x = torch.randn(64, 5)
jacobian = vmap(jacfwd(torch.sin))(x)
assert jacobian.shape == (64, 5, 5)
If you would like to compute the output of the function as well as
the jacobian of the function, use the "has_aux" flag to return the
output as an auxiliary object:
from torch.func import jacfwd
x = torch.randn(5)
def f(x):
return x.sin()
def g(x):
result = f(x)
return result, result
jacobian_f, f_x = jacfwd(g, has_aux=True)(x)
assert torch.allclose(f_x, f(x))
Additionally, "jacrev()" can be composed with itself or "jacrev()"
to produce Hessians
from torch.func import jacfwd, jacrev
def f(x):
return x.sin().sum()
x = torch.randn(5)
hessian = jacfwd(jacrev(f))(x)
assert torch.allclose(hessian, torch.diag(-x.sin()))
By default, "jacfwd()" computes the Jacobian with respect to the | https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html | pytorch docs |
first input. However, it can compute the Jacobian with respect to a
different argument by using "argnums":
from torch.func import jacfwd
def f(x, y):
return x + y ** 2
x, y = torch.randn(5), torch.randn(5)
jacobian = jacfwd(f, argnums=1)(x, y)
expected = torch.diag(2 * y)
assert torch.allclose(jacobian, expected)
Additionally, passing a tuple to "argnums" will compute the
Jacobian with respect to multiple arguments
from torch.func import jacfwd
def f(x, y):
return x + y ** 2
x, y = torch.randn(5), torch.randn(5)
jacobian = jacfwd(f, argnums=(0, 1))(x, y)
expectedX = torch.diag(torch.ones_like(x))
expectedY = torch.diag(2 * y)
assert torch.allclose(jacobian[0], expectedX)
assert torch.allclose(jacobian[1], expectedY)
| https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html | pytorch docs |
torch._foreach_exp
torch._foreach_exp(self: List[Tensor]) -> List[Tensor]
Apply "torch.exp()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_exp.html | pytorch docs |
torch.linalg.solve_ex
torch.linalg.solve_ex(A, B, *, left=True, check_errors=False, out=None)
A version of "solve()" that does not perform error checks unless
"check_errors"= True. It also returns the "info" tensor returned
by LAPACK's getrf.
Note:
When the inputs are on a CUDA device, this function synchronizes
only when "check_errors"= True.
Warning:
This function is "experimental" and it may change in a future
PyTorch release.
Parameters:
A (Tensor) -- tensor of shape (*, n, n) where * is zero or more
batch dimensions.
Keyword Arguments:
* left (bool, optional) -- whether to solve the system
AX=B or XA = B. Default: True.
* **check_errors** (*bool**, **optional*) -- controls whether to
check the content of "infos" and raise an error if it is non-
zero. Default: *False*.
* **out** (*tuple**, **optional*) -- tuple of two tensors to
| https://pytorch.org/docs/stable/generated/torch.linalg.solve_ex.html | pytorch docs |
write the output to. Ignored if None. Default: None.
Returns:
A named tuple (result, info).
Examples:
>>> A = torch.randn(3, 3)
>>> Ainv, info = torch.linalg.solve_ex(A)
>>> torch.dist(torch.linalg.inv(A), Ainv)
tensor(0.)
>>> info
tensor(0, dtype=torch.int32)
| https://pytorch.org/docs/stable/generated/torch.linalg.solve_ex.html | pytorch docs |
torch.nn.functional.smooth_l1_loss
torch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0)
Function that uses a squared term if the absolute element-wise
error falls below beta and an L1 term otherwise.
See "SmoothL1Loss" for details.
Return type:
Tensor | https://pytorch.org/docs/stable/generated/torch.nn.functional.smooth_l1_loss.html | pytorch docs |
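An illustrative sketch (minimal example):
    >>> import torch.nn.functional as F
    >>> input = torch.randn(3, 5, requires_grad=True)
    >>> target = torch.randn(3, 5)
    >>> loss = F.smooth_l1_loss(input, target, beta=1.0)
    >>> loss.backward()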
MaxPool2d
class torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
Applies a 2D max pooling over an input signal composed of several
input planes.
In the simplest case, the output value of the layer with input size
(N, C, H, W), output (N, C, H_{out}, W_{out}) and "kernel_size"
(kH, kW) can be precisely described as:
\begin{aligned} out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots,
kH-1} \max_{n=0, \ldots, kW-1} \\ &
\text{input}(N_i, C_j, \text{stride[0]} \times h + m,
\text{stride[1]} \times w + n) \end{aligned}
If "padding" is non-zero, then the input is implicitly padded with
negative infinity on both sides for "padding" number of points.
"dilation" controls the spacing between the kernel points. It is
harder to describe, but this link has a nice visualization of what
"dilation" does.
| https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html | pytorch docs |
Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds
if they start within the left padding or the input. Sliding
windows that would start in the right padded region are ignored.
The parameters "kernel_size", "stride", "padding", "dilation" can
either be:
* a single "int" -- in which case the same value is used for the
height and width dimension
* a "tuple" of two ints -- in which case, the first *int* is
used for the height dimension, and the second *int* for the
width dimension
Parameters:
* kernel_size (Union[int, Tuple[int,
int]]) -- the size of the window to take a max over
* **stride** (*Union**[**int**, **Tuple**[**int**, **int**]**]*)
-- the stride of the window. Default value is "kernel_size"
* **padding** (*Union**[**int**, **Tuple**[**int**,
**int**]**]*) -- Implicit negative infinity padding to be
| https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html | pytorch docs |
added on both sides
* **dilation** (*Union**[**int**, **Tuple**[**int**,
**int**]**]*) -- a parameter that controls the stride of
elements in the window
* **return_indices** (*bool*) -- if "True", will return the max
indices along with the outputs. Useful for
"torch.nn.MaxUnpool2d" later
* **ceil_mode** (*bool*) -- when True, will use *ceil* instead
of *floor* to compute the output shape
Shape:
* Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in})
* Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),
where
H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding[0]}
- \text{dilation[0]} \times (\text{kernel\_size[0]} -
1) - 1}{\text{stride[0]}} + 1\right\rfloor
W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding[1]}
- \text{dilation[1]} \times (\text{kernel\_size[1]} -
1) - 1}{\text{stride[1]}} + 1\right\rfloor
| https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html | pytorch docs |
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.MaxPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.MaxPool2d((3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html | pytorch docs |
torch.jit.fork
torch.jit.fork(func, *args, **kwargs)
Creates an asynchronous task executing func and a reference to
the value of the result of this execution. fork will return
immediately, so the return value of func may not have been
computed yet. To force completion of the task and access the return
value invoke torch.jit.wait on the Future. fork invoked with a
func which returns T is typed as torch.jit.Future[T]. fork
calls can be arbitrarily nested, and may be invoked with positional
and keyword arguments. Asynchronous execution will only occur when
run in TorchScript. If run in pure python, fork will not execute
in parallel. fork will also not execute in parallel when invoked
while tracing, however the fork and wait calls will be captured
in the exported IR Graph.
Warning:
*fork* tasks will execute non-deterministically. We recommend
only spawning parallel fork tasks for pure functions that do not
| https://pytorch.org/docs/stable/generated/torch.jit.fork.html | pytorch docs |
modify their inputs, module attributes, or global state.
Parameters:
* func (callable or torch.nn.Module) -- A Python
function or torch.nn.Module that will be invoked. If
executed in TorchScript, it will execute asynchronously,
otherwise it will not. Traced invocations of fork will be
captured in the IR.
* ***args** -- positional arguments to invoke *func* with.
* ****kwargs** -- keyword arguments to invoke *func* with.
Returns:
a reference to the execution of func. The value T can only
be accessed by forcing completion of func through
torch.jit.wait.
Return type:
torch.jit.Future[T]
Example (fork a free function):
import torch
from torch import Tensor
def foo(a : Tensor, b : int) -> Tensor:
return a + b
def bar(a):
fut : torch.jit.Future[Tensor] = torch.jit.fork(foo, a, b=2)
return torch.jit.wait(fut)
script_bar = torch.jit.script(bar)
| https://pytorch.org/docs/stable/generated/torch.jit.fork.html | pytorch docs |
input = torch.tensor(2)
# only the scripted version executes asynchronously
assert script_bar(input) == bar(input)
# trace is not run asynchronously, but fork is captured in IR
graph = torch.jit.trace(bar, (input,)).graph
assert "fork" in str(graph)
Example (fork a module method):
import torch
from torch import Tensor
class AddMod(torch.nn.Module):
def forward(self, a: Tensor, b : int):
return a + b
class Mod(torch.nn.Module):
def __init__(self):
super().__init__()
self.mod = AddMod()
def forward(self, input):
fut = torch.jit.fork(self.mod, input, b=2)
return torch.jit.wait(fut)
input = torch.tensor(2)
mod = Mod()
assert mod(input) == torch.jit.script(mod).forward(input)
| https://pytorch.org/docs/stable/generated/torch.jit.fork.html | pytorch docs |
torch.Tensor.conj
Tensor.conj() -> Tensor
See "torch.conj()" | https://pytorch.org/docs/stable/generated/torch.Tensor.conj.html | pytorch docs |
torch.nn.functional.logsigmoid
torch.nn.functional.logsigmoid(input) -> Tensor
Applies element-wise \text{LogSigmoid}(x_i) = \log \left(\frac{1}{1
+ \exp(-x_i)}\right)
See "LogSigmoid" for more details. | https://pytorch.org/docs/stable/generated/torch.nn.functional.logsigmoid.html | pytorch docs |
Parameter
class torch.nn.parameter.Parameter(data=None, requires_grad=True)
A kind of Tensor that is to be considered a module parameter.
Parameters are "Tensor" subclasses, that have a very special
property when used with "Module" s - when they're assigned as
Module attributes they are automatically added to the list of its
parameters, and will appear e.g. in "parameters()" iterator.
Assigning a Tensor doesn't have such effect. This is because one
might want to cache some temporary state, like last hidden state of
the RNN, in the model. If there was no such class as "Parameter",
these temporaries would get registered too.
Parameters:
* data (Tensor) -- parameter tensor.
* **requires_grad** (*bool**, **optional*) -- if the parameter
requires gradient. See Locally disabling gradient computation
for more details. Default: *True*
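An illustrative sketch (minimal example) contrasting a registered "Parameter" with a plain tensor attribute:
    >>> class MyModule(torch.nn.Module):
    ...     def __init__(self):
    ...         super().__init__()
    ...         self.scale = torch.nn.Parameter(torch.ones(3))  # registered as a parameter
    ...         self.cache = torch.ones(3)                      # plain tensor, not registered
    ...
    >>> [name for name, _ in MyModule().named_parameters()]
    ['scale']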
| https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html | pytorch docs |
torch._foreach_lgamma_
torch._foreach_lgamma_(self: List[Tensor]) -> None
Apply "torch.lgamma()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_lgamma_.html | pytorch docs |
torch.Tensor.q_zero_point
Tensor.q_zero_point() -> int
Given a Tensor quantized by linear(affine) quantization, returns
the zero_point of the underlying quantizer(). | https://pytorch.org/docs/stable/generated/torch.Tensor.q_zero_point.html | pytorch docs |
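An illustrative sketch (minimal example):
    >>> qx = torch.quantize_per_tensor(torch.randn(2, 2), scale=0.1, zero_point=5, dtype=torch.quint8)
    >>> qx.q_zero_point()
    5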
torch.Tensor.dim
Tensor.dim() -> int
Returns the number of dimensions of "self" tensor. | https://pytorch.org/docs/stable/generated/torch.Tensor.dim.html | pytorch docs |
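An illustrative sketch (minimal example):
    >>> torch.zeros(2, 3, 4).dim()
    3
    >>> torch.tensor(5).dim()  # a 0-dimensional (scalar) tensor
    0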
PlaceholderObserver
class torch.quantization.observer.PlaceholderObserver(dtype=torch.float32, custom_op_name='', compute_dtype=None, quant_min=None, quant_max=None, is_dynamic=False)
Observer that doesn't do anything and just passes its configuration
to the quantized module's ".from_float()".
Can be used for quantization to float16 which doesn't require
determining ranges.
Parameters:
* dtype -- dtype argument to the quantize node needed to
implement the reference model spec.
* **quant_min** -- minimum value in quantized domain (TODO:
align behavior with other observers)
* **quant_max** -- maximum value in quantized domain
* **custom_op_name** -- (temporary) specify this observer for an
operator that doesn't require any observation (Can be used in
Graph Mode Passes for special case ops).
* **compute_dtype** (*deprecated*) -- if set, marks the future
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.PlaceholderObserver.html | pytorch docs |
quantize function to use dynamic quantization instead of
static quantization. This field is deprecated, use
is_dynamic=True instead.
* **is_dynamic** -- if True, the *quantize* function in the
reference model representation taking stats from this observer
instance will use dynamic quantization.
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.PlaceholderObserver.html | pytorch docs |
torch.Tensor.element_size
Tensor.element_size() -> int
Returns the size in bytes of an individual element.
Example:
>>> torch.tensor([]).element_size()
4
>>> torch.tensor([], dtype=torch.uint8).element_size()
1
| https://pytorch.org/docs/stable/generated/torch.Tensor.element_size.html | pytorch docs |
torch.Tensor.sin_
Tensor.sin_() -> Tensor
In-place version of "sin()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sin_.html | pytorch docs |
torch.Tensor.lcm
Tensor.lcm(other) -> Tensor
See "torch.lcm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.lcm.html | pytorch docs |
torch.nn.utils.parametrize.is_parametrized
torch.nn.utils.parametrize.is_parametrized(module, tensor_name=None)
Returns "True" if module has an active parametrization.
If the argument "tensor_name" is specified, returns "True" if
"module[tensor_name]" is parametrized.
Parameters:
* module (nn.Module) -- module to query
* **tensor_name** (*str**, **optional*) -- attribute in the
module to query. Default: "None"
Return type:
bool | https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.is_parametrized.html | pytorch docs |
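An illustrative sketch (minimal example; the "Symmetric" parametrization is a hypothetical module written for this illustration):
    >>> import torch.nn.utils.parametrize as parametrize
    >>> class Symmetric(torch.nn.Module):
    ...     def forward(self, X):
    ...         return X.triu() + X.triu(1).transpose(-1, -2)
    ...
    >>> linear = torch.nn.Linear(3, 3)
    >>> parametrize.is_parametrized(linear)
    False
    >>> linear = parametrize.register_parametrization(linear, "weight", Symmetric())
    >>> parametrize.is_parametrized(linear, "weight")
    True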
torch.Tensor.scatter_reduce_
Tensor.scatter_reduce_(dim, index, src, reduce, *, include_self=True) -> Tensor
Reduces all values from the "src" tensor to the indices specified
in the "index" tensor in the "self" tensor using the applied
reduction defined via the "reduce" argument ("sum", "prod",
"mean", "amax", "amin"). For each value in "src", it is
reduced to an index in "self" which is specified by its index in
"src" for "dimension != dim" and by the corresponding value in
"index" for "dimension = dim". If "include_self=True", the values
in the "self" tensor are included in the reduction.
"self", "index" and "src" should all have the same number of
dimensions. It is also required that "index.size(d) <= src.size(d)"
for all dimensions "d", and that "index.size(d) <= self.size(d)"
for all dimensions "d != dim". Note that "index" and "src" do not
broadcast.
For a 3-D tensor with "reduce="sum"" and "include_self=True" the | https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html | pytorch docs |
output is given as:
self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0
self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1
self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2
Note:
This operation may behave nondeterministically when given tensors
on a CUDA device. See Reproducibility for more information.
Note:
The backward pass is implemented only for "src.shape ==
index.shape".
Warning:
This function is in beta and may change in the near future.
Parameters:
* dim (int) -- the axis along which to index
* **index** (*LongTensor*) -- the indices of elements to scatter
and reduce.
* **src** (*Tensor*) -- the source elements to scatter and
reduce
* **reduce** (*str*) -- the reduction operation to apply for
non-unique indices ("sum", "prod", "mean", "amax",
"amin")
* **include_self** (*bool*) -- whether elements from the "self"
| https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html | pytorch docs |
tensor are included in the reduction
Example:
>>> src = torch.tensor([1., 2., 3., 4., 5., 6.])
>>> index = torch.tensor([0, 1, 0, 1, 2, 1])
>>> input = torch.tensor([1., 2., 3., 4.])
>>> input.scatter_reduce(0, index, src, reduce="sum")
tensor([5., 14., 8., 4.])
>>> input.scatter_reduce(0, index, src, reduce="sum", include_self=False)
tensor([4., 12., 5., 4.])
>>> input2 = torch.tensor([5., 4., 3., 2.])
>>> input2.scatter_reduce(0, index, src, reduce="amax")
tensor([5., 6., 5., 2.])
>>> input2.scatter_reduce(0, index, src, reduce="amax", include_self=False)
tensor([3., 6., 5., 2.])
| https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html | pytorch docs |
torch._foreach_sinh
torch._foreach_sinh(self: List[Tensor]) -> List[Tensor]
Apply "torch.sinh()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_sinh.html | pytorch docs |
torch.negative
torch.negative(input, *, out=None) -> Tensor
Alias for "torch.neg()" | https://pytorch.org/docs/stable/generated/torch.negative.html | pytorch docs |
ReflectionPad3d
class torch.nn.ReflectionPad3d(padding)
Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use "torch.nn.functional.pad()".
Parameters:
padding (int, tuple) -- the size of the padding. If it is an
int, uses the same padding in all boundaries. If a 6-tuple,
uses (\text{padding_left}, \text{padding_right},
\text{padding_top}, \text{padding_bottom},
\text{padding_front}, \text{padding_back})
Shape:
* Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},
W_{in}).
* Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},
H_{out}, W_{out}), where
D_{out} = D_{in} + \text{padding\_front} +
\text{padding\_back}
H_{out} = H_{in} + \text{padding\_top} +
\text{padding\_bottom}
W_{out} = W_{in} + \text{padding\_left} +
\text{padding\_right}
Examples:
>>> m = nn.ReflectionPad3d(1)
| https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad3d.html | pytorch docs |
>>> input = torch.arange(8, dtype=torch.float).reshape(1, 1, 2, 2, 2)
>>> m(input)
tensor([[[[[7., 6., 7., 6.],
[5., 4., 5., 4.],
[7., 6., 7., 6.],
[5., 4., 5., 4.]],
[[3., 2., 3., 2.],
[1., 0., 1., 0.],
[3., 2., 3., 2.],
[1., 0., 1., 0.]],
[[7., 6., 7., 6.],
[5., 4., 5., 4.],
[7., 6., 7., 6.],
[5., 4., 5., 4.]],
[[3., 2., 3., 2.],
[1., 0., 1., 0.],
[3., 2., 3., 2.],
[1., 0., 1., 0.]]]]])
| https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad3d.html | pytorch docs |
torch.nn.functional.grid_sample
torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None)
Given an "input" and a flow-field "grid", computes the "output"
using "input" values and pixel locations from "grid".
Currently, only spatial (4-D) and volumetric (5-D) "input" are
supported.
In the spatial (4-D) case, for "input" with shape (N, C,
H_\text{in}, W_\text{in}) and "grid" with shape (N, H_\text{out},
W_\text{out}, 2), the output will have shape (N, C, H_\text{out},
W_\text{out}).
For each output location "output[n, :, h, w]", the size-2 vector
"grid[n, h, w]" specifies "input" pixel locations "x" and "y",
which are used to interpolate the output value "output[n, :, h,
w]". In the case of 5D inputs, "grid[n, d, h, w]" specifies the
"x", "y", "z" pixel locations for interpolating "output[n, :, d, h,
w]". "mode" argument specifies "nearest" or "bilinear" | https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html | pytorch docs |
interpolation method to sample the input pixels.
"grid" specifies the sampling pixel locations normalized by the
"input" spatial dimensions. Therefore, it should have most values
in the range of "[-1, 1]". For example, values "x = -1, y = -1" is
the left-top pixel of "input", and values "x = 1, y = 1" is the
right-bottom pixel of "input".
If "grid" has values outside the range of "[-1, 1]", the
corresponding outputs are handled as defined by "padding_mode".
Options are
* "padding_mode="zeros"": use "0" for out-of-bound grid
locations,
* "padding_mode="border"": use border values for out-of-bound
grid locations,
* "padding_mode="reflection"": use values at locations reflected
by the border for out-of-bound grid locations. For location
far away from the border, it will keep being reflected until
becoming in bound, e.g., (normalized) pixel location "x =
-3.5" reflects by border "-1" and becomes "x' = 1.5", then
| https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html | pytorch docs |
reflects by border "1" and becomes "x'' = -0.5".
Note:
This function is often used in conjunction with "affine_grid()"
to build Spatial Transformer Networks .
Note:
When using the CUDA backend, this operation may induce
nondeterministic behaviour in its backward pass that is not
easily switched off. Please see the notes on Reproducibility for
background.
Note:
NaN values in "grid" would be interpreted as "-1".
Parameters:
* input (Tensor) -- input of shape (N, C, H_\text{in},
W_\text{in}) (4-D case) or (N, C, D_\text{in}, H_\text{in},
W_\text{in}) (5-D case)
* **grid** (*Tensor*) -- flow-field of shape (N, H_\text{out},
W_\text{out}, 2) (4-D case) or (N, D_\text{out}, H_\text{out},
W_\text{out}, 3) (5-D case)
* **mode** (*str*) -- interpolation mode to calculate output
values "'bilinear'" | "'nearest'" | "'bicubic'". Default:
| https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html | pytorch docs |
"'bilinear'" Note: "mode='bicubic'" supports only 4-D input.
When "mode='bilinear'" and the input is 5-D, the interpolation
mode used internally will actually be trilinear. However, when
the input is 4-D, the interpolation mode will legitimately be
bilinear.
* **padding_mode** (*str*) -- padding mode for outside grid
values "'zeros'" | "'border'" | "'reflection'". Default:
"'zeros'"
* **align_corners** (*bool**, **optional*) -- Geometrically, we
consider the pixels of the input as squares rather than
points. If set to "True", the extrema ("-1" and "1") are
considered as referring to the center points of the input's
corner pixels. If set to "False", they are instead considered
as referring to the corner points of the input's corner
pixels, making the sampling more resolution agnostic. This
option parallels the "align_corners" option in
| https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html | pytorch docs |
"interpolate()", and so whichever option is used here should
also be used there to resize the input image before grid
sampling. Default: "False"
Returns:
output Tensor
Return type:
output (Tensor)
Warning:
When "align_corners = True", the grid positions depend on the
pixel size relative to the input image size, and so the locations
sampled by "grid_sample()" will differ for the same input given
at different resolutions (that is, after being upsampled or
downsampled). The default behavior up to version 1.2.0 was
"align_corners = True". Since then, the default behavior has been
changed to "align_corners = False", in order to bring it in line
with the default for "interpolate()".
Note:
"mode='bicubic'" is implemented using the cubic convolution
algorithm with \alpha=-0.75. The constant \alpha might be
different from packages to packages. For example, PIL and OpenCV
| https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html | pytorch docs |
use -0.5 and -0.75 respectively. This algorithm may "overshoot"
the range of values it's interpolating. For example, it may
produce negative values or values greater than 255 when
interpolating input in [0, 255]. Clamp the results with
"torch.clamp()" to ensure they are within the valid range. | https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html | pytorch docs |
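An illustrative sketch (minimal example) that samples with an identity grid built by "affine_grid()", which should reproduce the input:
    >>> import torch.nn.functional as F
    >>> input = torch.arange(16, dtype=torch.float).reshape(1, 1, 4, 4)
    >>> theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])  # identity affine transform
    >>> grid = F.affine_grid(theta, size=(1, 1, 4, 4), align_corners=False)
    >>> out = F.grid_sample(input, grid, mode='bilinear', align_corners=False)
    >>> torch.allclose(out, input)
    True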
torch.isin
torch.isin(elements, test_elements, *, assume_unique=False, invert=False) -> Tensor
Tests if each element of "elements" is in "test_elements". Returns
a boolean tensor of the same shape as "elements" that is True for
elements in "test_elements" and False otherwise.
Note:
One of "elements" or "test_elements" can be a scalar, but not
both.
Parameters:
* elements (Tensor or Scalar) -- Input elements
* **test_elements** (*Tensor** or **Scalar*) -- Values against
which to test for each input element
* **assume_unique** (*bool**, **optional*) -- If True, assumes
both "elements" and "test_elements" contain unique elements,
which can speed up the calculation. Default: False
* **invert** (*bool**, **optional*) -- If True, inverts the
boolean return tensor, resulting in True values for elements
*not* in "test_elements". Default: False
| https://pytorch.org/docs/stable/generated/torch.isin.html | pytorch docs |
Returns:
A boolean tensor of the same shape as "elements" that is True
for elements in "test_elements" and False otherwise
Example:
>>> torch.isin(torch.tensor([[1, 2], [3, 4]]), torch.tensor([2, 3]))
tensor([[False, True],
[ True, False]])
| https://pytorch.org/docs/stable/generated/torch.isin.html | pytorch docs |
BackendPatternConfig
class torch.ao.quantization.backend_config.BackendPatternConfig(pattern=None)
Config object that specifies quantization behavior for a given
operator pattern. For a detailed example usage, see
"BackendConfig".
add_dtype_config(dtype_config)
Add a set of supported data types passed as arguments to
quantize ops in the reference model spec.
Return type:
*BackendPatternConfig*
classmethod from_dict(backend_pattern_config_dict)
Create a "BackendPatternConfig" from a dictionary with the
following items:
"pattern": the pattern being configured "observation_type":
the "ObservationType" that specifies how observers should be
inserted for this pattern "dtype_configs": a list of
dictionaries that represents "DTypeConfig" s "root_module": a
"torch.nn.Module" that represents the root for this pattern
"qat_module": a "torch.nn.Module" that represents the QAT
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html | pytorch docs |
"reference_quantized_module": a "torch.nn.Module" that represents
the reference quantized implementation for this pattern's root
module
"fused_module": a "torch.nn.Module" that represents the fused
implementation for this pattern
"fuser_method": a function that specifies how to fuse the pattern
for this pattern
"pattern_complex_format": the pattern specified in the reversed
nested tuple format (deprecated)
Return type:
*BackendPatternConfig*
set_dtype_configs(dtype_configs)
Set the supported data types passed as arguments to quantize ops
in the reference model spec, overriding all previously
registered data types.
Return type:
*BackendPatternConfig*
set_fused_module(fused_module)
Set the module that represents the fused implementation for this
pattern.
Return type:
*BackendPatternConfig*
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html | pytorch docs |
set_fuser_method(fuser_method)
Set the function that specifies how to fuse this
BackendPatternConfig's pattern.
The first argument of this function should be *is_qat*, and the
rest of the arguments should be the items in the tuple pattern.
The return value of this function should be the resulting fused
module.
For example, the fuser method for the pattern *(torch.nn.Linear,
torch.nn.ReLU)* can be:
def fuse_linear_relu(is_qat, linear, relu):
return torch.ao.nn.intrinsic.LinearReLU(linear, relu)
For a more complicated example, see
https://gist.github.com/jerryzh168/8bea7180a8ba3c279f2c9b050f2a69a6.
Return type:
*BackendPatternConfig*
set_observation_type(observation_type)
Set how observers should be inserted in the graph for this
pattern.
Observation type here refers to how observers (or quant-dequant
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html | pytorch docs |
ops) will be placed in the graph. This is used to produce the
desired reference patterns understood by the backend. Weighted
ops such as linear and conv require different observers (or
quantization parameters passed to quantize ops in the reference
model) for the input and the output.
There are two observation types:
*OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT* (default): the
output observer instance will be different from the input.
This is the most common observation type.
*OUTPUT_SHARE_OBSERVER_WITH_INPUT*: the output observer
instance will be the same as the input. This is useful for
operators like *cat*.
Note: This will be renamed in the near future, since we will
soon insert QuantDeQuantStubs with observers (and fake
quantizes) attached instead of observers themselves.
Return type:
*BackendPatternConfig*
set_pattern(pattern)
Set the pattern to configure.
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html | pytorch docs |
The pattern can be a float module, functional operator, pytorch
operator, or a tuple combination of the above. Tuple patterns
are treated as sequential patterns, and currently only tuples of
2 or 3 elements are supported.
Return type:
*BackendPatternConfig*
set_qat_module(qat_module)
Set the module that represents the QAT implementation for this
pattern.
Return type:
*BackendPatternConfig*
set_reference_quantized_module(reference_quantized_module)
Set the module that represents the reference quantized
implementation for this pattern's root module.
For more detail, see "set_root_module()".
Return type:
*BackendPatternConfig*
set_root_module(root_module)
Set the module that represents the root for this pattern.
When we construct the reference quantized model during the
convert phase, the root modules (e.g. torch.nn.Linear for
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html | pytorch docs |
torch.ao.nn.intrinsic.LinearReLU) will be swapped to the
corresponding reference quantized modules (e.g.
torch.ao.nn.reference.quantized.Linear). This allows custom
backends to specify custom reference quantized module
implementations to match the numerics of their lowered
operators. Since this is a one-to-one mapping, both the root
module and the reference quantized module must be specified in
the same BackendPatternConfig in order for the conversion to
take place.
Return type:
*BackendPatternConfig*
to_dict()
Convert this "BackendPatternConfig" to a dictionary with the
items described in "from_dict()".
Return type:
*Dict*[str, *Any*]
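Taken together, a hedged sketch of a full configuration for the *(torch.nn.Linear, torch.nn.ReLU)* pattern could look as follows. The module paths used here (*torch.ao.nn.intrinsic.LinearReLU*, *torch.ao.nn.qat.Linear*, *torch.ao.nn.quantized.reference.Linear*) are the stock PyTorch implementations and are assumptions of this sketch; a custom backend would substitute its own modules and would normally also attach dtype configs:

    import torch
    from torch.ao.quantization.backend_config import (
        BackendPatternConfig,
        ObservationType,
    )

    def fuse_linear_relu(is_qat, linear, relu):
        # Fuse the matched (Linear, ReLU) pair into a single module.
        return torch.ao.nn.intrinsic.LinearReLU(linear, relu)

    linear_relu_config = (
        BackendPatternConfig((torch.nn.Linear, torch.nn.ReLU))
        .set_observation_type(
            ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
        .set_fuser_method(fuse_linear_relu)
        .set_root_module(torch.nn.Linear)
        .set_qat_module(torch.ao.nn.qat.Linear)
        .set_reference_quantized_module(
            torch.ao.nn.quantized.reference.Linear)
    )

    # Inspect the resulting configuration as a plain dictionary.
    print(linear_relu_config.to_dict())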
| https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html | pytorch docs |
torch.randn
torch.randn(*size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor
Returns a tensor filled with random numbers from a normal
distribution with mean 0 and variance 1 (also called the
standard normal distribution).
\text{out}_{i} \sim \mathcal{N}(0, 1)
The shape of the tensor is defined by the variable argument "size".
Parameters:
size (int...) -- a sequence of integers defining the
shape of the output tensor. Can be a variable number of
arguments or a collection like a list or tuple.
Keyword Arguments:
* generator ("torch.Generator", optional) -- a pseudorandom
number generator for sampling
* **out** (*Tensor**, **optional*) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
| https://pytorch.org/docs/stable/generated/torch.randn.html | pytorch docs |
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **pin_memory** (*bool**, **optional*) -- If set, returned
tensor would be allocated in the pinned memory. Works only for
CPU tensors. Default: "False".
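For illustration, a brief sketch of how the keyword arguments combine (the GPU line is guarded in case no CUDA device is available):

    import torch

    g = torch.Generator().manual_seed(0)       # reproducible sampling
    x = torch.randn(2, 3, generator=g, dtype=torch.float64)

    w = torch.randn(4, 4, requires_grad=True)  # autograd will record ops on w

    if torch.cuda.is_available():
        y = torch.randn(2, 3, device="cuda")   # sample directly on the GPU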
Example:
>>> torch.randn(4)
tensor([-2.1436, 0.9966, 2.3426, -0.6366])
>>> torch.randn(2, 3)
tensor([[ 1.5954, 2.8929, -1.0923],
| https://pytorch.org/docs/stable/generated/torch.randn.html | pytorch docs |
[ 1.1719, -0.4709, -0.1996]]) | https://pytorch.org/docs/stable/generated/torch.randn.html | pytorch docs |
torch.linalg.lu_solve
torch.linalg.lu_solve(LU, pivots, B, *, left=True, adjoint=False, out=None) -> Tensor
Computes the solution of a square system of linear equations with a
unique solution given an LU decomposition.
Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, this function
computes the solution X \in \mathbb{K}^{n \times k} of the linear
system associated to A \in \mathbb{K}^{n \times n}, B \in
\mathbb{K}^{n \times k}, which is defined as
AX = B
where A is given factorized as returned by "lu_factor()".
If "left"= False, this function returns the matrix X \in
\mathbb{K}^{n \times k} that solves the system
XA = B\mathrlap{\qquad A \in \mathbb{K}^{k \times k}, B \in
\mathbb{K}^{n \times k}.}
If "adjoint"= True (and "left"= True), given an LU
factorization of A, this function returns the X \in
\mathbb{K}^{n \times k} that solves the system | https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html | pytorch docs |
A^{\text{H}}X = B\mathrlap{\qquad A \in \mathbb{K}^{k \times k},
B \in \mathbb{K}^{n \times k}.}
where A^{\text{H}} is the conjugate transpose when A is complex,
and the transpose when A is real-valued. The "left"= False case
is analogous.
Supports inputs of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if the inputs are batches of
matrices then the output has the same batch dimensions.
Parameters:
* LU (Tensor) -- tensor of shape (*, n, n) (or (*, k, k)
if "left"= True) where * is zero or more batch
dimensions as returned by "lu_factor()".
* **pivots** (*Tensor*) -- tensor of shape *(*, n)* (or *(*, k)*
if "left"*= True*) where *** is zero or more batch dimensions
as returned by "lu_factor()".
* **B** (*Tensor*) -- right-hand side tensor of shape *(*, n,
k)*.
Keyword Arguments: | https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html | pytorch docs |
* left (bool, optional) -- whether to solve the system
AX=B or XA = B. Default: True.
* **adjoint** (*bool**, **optional*) -- whether to solve the
system AX=B or A^{\text{H}}X = B. Default: *False*.
* **out** (*Tensor**, **optional*) -- output tensor. Ignored if
*None*. Default: *None*.
Examples:
>>> A = torch.randn(3, 3)
>>> LU, pivots = torch.linalg.lu_factor(A)
>>> B = torch.randn(3, 2)
>>> X = torch.linalg.lu_solve(LU, pivots, B)
>>> torch.allclose(A @ X, B)
True
>>> B = torch.randn(3, 3, 2) # Broadcasting rules apply: A is broadcasted
>>> X = torch.linalg.lu_solve(LU, pivots, B)
>>> torch.allclose(A @ X, B)
True
>>> B = torch.randn(3, 5, 3)
>>> X = torch.linalg.lu_solve(LU, pivots, B, left=False)
>>> torch.allclose(X @ A, B)
True
>>> B = torch.randn(3, 3, 4) # Now solve for A^T
| https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html | pytorch docs |
>>> X = torch.linalg.lu_solve(LU, pivots, B, adjoint=True)
>>> torch.allclose(A.mT @ X, B)
True
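The main practical benefit is that a single factorization can be reused across many right-hand sides; a brief sketch:

    >>> A = torch.randn(4, 4)
    >>> LU, pivots = torch.linalg.lu_factor(A)          # factor once
    >>> B1, B2 = torch.randn(4, 2), torch.randn(4, 7)
    >>> X1 = torch.linalg.lu_solve(LU, pivots, B1)      # reuse the factorization
    >>> X2 = torch.linalg.lu_solve(LU, pivots, B2)
    >>> torch.allclose(A @ X1, B1) and torch.allclose(A @ X2, B2)
    True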
| https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html | pytorch docs |
torch.sgn
torch.sgn(input, *, out=None) -> Tensor
This function is an extension of torch.sign() to complex tensors.
It computes a new tensor whose elements have the same angles as the
corresponding elements of "input" and absolute values (i.e.
magnitudes) of one for complex tensors and is equivalent to
torch.sign() for non-complex tensors.
\text{out}_{i} = \begin{cases} 0 & |\text{input}_i| = 0 \\ \frac{\text{input}_i}{|\text{input}_i|} & \text{otherwise} \end{cases}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> t = torch.tensor([3+4j, 7-24j, 0, 1+2j])
>>> t.sgn()
tensor([0.6000+0.8000j, 0.2800-0.9600j, 0.0000+0.0000j, 0.4472+0.8944j])
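As a quick check of the claims above, sgn matches sign for real input and has unit magnitude for nonzero complex input:

    >>> r = torch.tensor([-2.5, 0.0, 3.1])
    >>> torch.equal(torch.sgn(r), torch.sign(r))
    True
    >>> z = torch.tensor([3+4j, 1-1j])
    >>> torch.allclose(torch.sgn(z).abs(), torch.ones(2))
    True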
| https://pytorch.org/docs/stable/generated/torch.sgn.html | pytorch docs |
torch.matrix_power
torch.matrix_power(input, n, *, out=None) -> Tensor
Alias for "torch.linalg.matrix_power()" | https://pytorch.org/docs/stable/generated/torch.matrix_power.html | pytorch docs |
torch.Tensor.storage_type
Tensor.storage_type() -> type
Returns the type of the underlying storage. | https://pytorch.org/docs/stable/generated/torch.Tensor.storage_type.html | pytorch docs |
torch.cuda.OutOfMemoryError
exception torch.cuda.OutOfMemoryError
Exception raised when CUDA is out of memory | https://pytorch.org/docs/stable/generated/torch.cuda.OutOfMemoryError.html | pytorch docs |
torch.as_tensor
torch.as_tensor(data, dtype=None, device=None) -> Tensor
Converts "data" into a tensor, sharing data and preserving autograd
history if possible.
If "data" is already a tensor with the requested dtype and device
then "data" itself is returned, but if "data" is a tensor with a
different dtype or device then it's copied as if using
data.to(dtype=dtype, device=device).
If "data" is a NumPy array (an ndarray) with the same dtype and
device then a tensor is constructed using "torch.from_numpy()".
See also:
"torch.tensor()" never shares its data and creates a new "leaf
tensor" (see Autograd mechanics).
Parameters:
* data (array_like) -- Initial data for the tensor. Can be
a list, tuple, NumPy "ndarray", scalar, and other types.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", infers data type from
"data".
| https://pytorch.org/docs/stable/generated/torch.as_tensor.html | pytorch docs |
"data".
* **device** ("torch.device", optional) -- the device of the
constructed tensor. If None and data is a tensor then the
device of data is used. If None and data is not a tensor then
the result tensor is constructed on the CPU.
Example:
>>> a = numpy.array([1, 2, 3])
>>> t = torch.as_tensor(a)
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([-1, 2, 3])
>>> a = numpy.array([1, 2, 3])
>>> t = torch.as_tensor(a, device=torch.device('cuda'))
>>> t
tensor([ 1, 2, 3])
>>> t[0] = -1
>>> a
array([1, 2, 3])
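When the requested dtype differs from the source, the data is copied rather than shared, so later writes do not propagate back:

    >>> a = numpy.array([1, 2, 3])
    >>> t = torch.as_tensor(a, dtype=torch.float32)  # dtype differs, so a copy is made
    >>> t[0] = -1
    >>> a
    array([1, 2, 3])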
| https://pytorch.org/docs/stable/generated/torch.as_tensor.html | pytorch docs |