text | source | category
---|---|---|
embedding dimension "embed_dim".
* **attn_output_weights** - Only returned when
"need_weights=True". If "average_attn_weights=True",
returns attention weights averaged across heads of shape
(L, S) when input is unbatched or (N, L, S), where N is the
batch size, L is the target sequence length, and S is the
source sequence length. If "average_attn_weights=False",
returns attention weights per head of shape
(\text{num\_heads}, L, S) when input is unbatched or (N,
\text{num\_heads}, L, S).
Note:
*batch_first* argument is ignored for unbatched inputs.
merge_masks(attn_mask, key_padding_mask, query)
    Determine the mask type and combine masks if necessary. If only one
    mask is provided, that mask and the corresponding mask type will
    be returned. If both masks are provided, they will both be
    expanded to shape "(batch_size, num_heads, seq_len, seq_len)",
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
    combined with logical "or", and mask type 2 will be returned.
    Parameters:
        * **attn_mask** -- attention mask of shape "(seq_len, seq_len)", mask type 0
        * **key_padding_mask** -- padding mask of shape "(batch_size, seq_len)", mask type 1
        * **query** -- query embeddings of shape "(batch_size, seq_len, embed_dim)"
    Returns:
        merged_mask: merged mask
        mask_type: merged mask type (0, 1, or 2)
    Return type:
        merged_mask
| https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html | pytorch docs |
torch.bitwise_xor
torch.bitwise_xor(input, other, *, out=None) -> Tensor
Computes the bitwise XOR of "input" and "other". The input tensor
must be of integral or Boolean types. For bool tensors, it computes
the logical XOR.
Parameters:
* input -- the first input tensor
* **other** -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.bitwise_xor(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))
tensor([-2, -2, 0], dtype=torch.int8)
>>> torch.bitwise_xor(torch.tensor([True, True, False]), torch.tensor([False, True, False]))
tensor([ True, False, False])
| https://pytorch.org/docs/stable/generated/torch.bitwise_xor.html | pytorch docs |
torch.cuda.list_gpu_processes
torch.cuda.list_gpu_processes(device=None)
Returns a human-readable printout of the running processes and
their GPU memory use for a given device.
This can be useful to display periodically during training, or when
handling out-of-memory exceptions.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns printout for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
str | https://pytorch.org/docs/stable/generated/torch.cuda.list_gpu_processes.html | pytorch docs |
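A minimal usage sketch (assuming a CUDA-capable machine; the exact printout depends on what is currently running):
    >>> # one line per process with its GPU memory use; illustrative only
    >>> print(torch.cuda.list_gpu_processes())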
torch.full_like
torch.full_like(input, fill_value, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor
Returns a tensor with the same size as "input" filled with
"fill_value". "torch.full_like(input, fill_value)" is equivalent to
"torch.full(input.size(), fill_value, dtype=input.dtype,
layout=input.layout, device=input.device)".
Parameters:
    * **input** (*Tensor*) -- the size of "input" will determine the
      size of the output tensor.
* **fill_value** -- the number to fill the output tensor with.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned Tensor. Default: if "None", defaults to the dtype
of "input".
* **layout** ("torch.layout", optional) -- the desired layout of
returned tensor. Default: if "None", defaults to the layout of
"input".
| https://pytorch.org/docs/stable/generated/torch.full_like.html | pytorch docs |
"input".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", defaults to the device of
"input".
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **memory_format** ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
| https://pytorch.org/docs/stable/generated/torch.full_like.html | pytorch docs |
ConvTranspose2d
class torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
Applies a 2D transposed convolution operator over an input image
composed of several input planes.
This module can be seen as the gradient of Conv2d with respect to
its input. It is also known as a fractionally-strided convolution
or a deconvolution (although it is not an actual deconvolution
operation as it does not compute a true inverse of convolution).
For more information, see the visualizations here and the
Deconvolutional Networks paper.
This module supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
"stride" controls the stride for the cross-correlation.
"padding" controls the amount of implicit zero padding on both
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html | pytorch docs |
sides for "dilation * (kernel_size - 1) - padding" number of
points. See note below for details.
"output_padding" controls the additional size added to one side
of the output shape. See note below for details.
"dilation" controls the spacing between the kernel points; also
known as the à trous algorithm. It is harder to describe, but the
link here has a nice visualization of what "dilation" does.
"groups" controls the connections between inputs and outputs.
"in_channels" and "out_channels" must both be divisible by
"groups". For example,
* At groups=1, all inputs are convolved to all outputs.
* At groups=2, the operation becomes equivalent to having two
conv layers side by side, each seeing half the input
channels and producing half the output channels, and both
subsequently concatenated.
* At groups= "in_channels", each input channel is convolved
with its own set of filters (of size
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html | pytorch docs |
\frac{\text{out_channels}}{\text{in_channels}}).
The parameters "kernel_size", "stride", "padding", "output_padding"
can either be:
* a single "int" -- in which case the same value is used for the
height and width dimensions
* a "tuple" of two ints -- in which case, the first *int* is
used for the height dimension, and the second *int* for the
width dimension
Note:
The "padding" argument effectively adds "dilation * (kernel_size
- 1) - padding" amount of zero padding to both sizes of the
input. This is set so that when a "Conv2d" and a
"ConvTranspose2d" are initialized with same parameters, they are
inverses of each other in regard to the input and output shapes.
However, when "stride > 1", "Conv2d" maps multiple input shapes
to the same output shape. "output_padding" is provided to resolve
this ambiguity by effectively increasing the calculated output
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html | pytorch docs |
shape on one side. Note that "output_padding" is only used to
find output shape, but does not actually add zero-padding to
output.
Note:
In some circumstances when given tensors on a CUDA device and
using CuDNN, this operator may select a nondeterministic
algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a
performance cost) by setting "torch.backends.cudnn.deterministic
= True". See Reproducibility for more information.
Parameters:
* in_channels (int) -- Number of channels in the input
image
* **out_channels** (*int*) -- Number of channels produced by the
convolution
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- "dilation *
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html | pytorch docs |
(kernel_size - 1) - padding" zero-padding will be added to
both sides of each dimension in the input. Default: 0
* **output_padding** (*int** or **tuple**, **optional*) --
Additional size added to one side of each dimension in the
output shape. Default: 0
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
Shape:
* Input: (N, C_{in}, H_{in}, W_{in}) or (C_{in}, H_{in}, W_{in})
* Output: (N, C_{out}, H_{out}, W_{out}) or (C_{out}, H_{out},
W_{out}), where
          H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html | pytorch docs |
          W_{out} = (W_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1
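       As a worked check of these formulas (a sketch using the
       square-kernel module from the Examples section below:
       kernel_size=3, stride=2, and default padding, dilation, and
       output_padding), H_out = (50 - 1)*2 + (3 - 1) + 1 = 101 and
       W_out = (100 - 1)*2 + (3 - 1) + 1 = 201:
           >>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
           >>> x = torch.randn(20, 16, 50, 100)
           >>> m(x).shape
           torch.Size([20, 33, 101, 201])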
Variables:
* weight (Tensor) -- the learnable weights of the module
of shape (\text{in_channels},
\frac{\text{out_channels}}{\text{groups}},
\text{kernel_size[0]}, \text{kernel_size[1]}). The values of
these weights are sampled from \mathcal{U}(-\sqrt{k},
\sqrt{k}) where k = \frac{groups}{C_\text{out} *
\prod_{i=0}^{1}\text{kernel_size}[i]}
* **bias** (*Tensor*) -- the learnable bias of the module of
shape (out_channels) If "bias" is "True", then the values of
these weights are sampled from \mathcal{U}(-\sqrt{k},
\sqrt{k}) where k = \frac{groups}{C_\text{out} *
\prod_{i=0}^{1}\text{kernel\_size}[i]}
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html | pytorch docs |
Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12, 12)
>>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html | pytorch docs |
torch.cuda.comm.reduce_add
torch.cuda.comm.reduce_add(inputs, destination=None)
Sums tensors from multiple GPUs.
All inputs should have matching shapes, dtype, and layout. The
output tensor will be of the same shape, dtype, and layout.
Parameters:
* inputs (Iterable[Tensor]) -- an iterable of
tensors to add.
* **destination** (*int**, **optional*) -- a device on which the
output will be placed (default: current device).
Returns:
A tensor containing an elementwise sum of all inputs, placed on
the "destination" device. | https://pytorch.org/docs/stable/generated/torch.cuda.comm.reduce_add.html | pytorch docs |
torch.Tensor.negative
Tensor.negative() -> Tensor
See "torch.negative()" | https://pytorch.org/docs/stable/generated/torch.Tensor.negative.html | pytorch docs |
torch.Tensor.t
Tensor.t() -> Tensor
See "torch.t()" | https://pytorch.org/docs/stable/generated/torch.Tensor.t.html | pytorch docs |
torch.Tensor.cauchy_
Tensor.cauchy_(median=0, sigma=1, *, generator=None) -> Tensor
Fills the tensor with numbers drawn from the Cauchy distribution:
f(x) = \dfrac{1}{\pi} \dfrac{\sigma}{(x - \text{median})^2 +
\sigma^2}
| https://pytorch.org/docs/stable/generated/torch.Tensor.cauchy_.html | pytorch docs |
torch.autograd.functional.hvp
torch.autograd.functional.hvp(func, inputs, v=None, create_graph=False, strict=False)
Function that computes the dot product between the Hessian of a
given scalar function and a vector "v" at the point given by the
inputs.
Parameters:
* func (function) -- a Python function that takes Tensor
inputs and returns a Tensor with a single element.
* **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the
function "func".
* **v** (*tuple of Tensors** or **Tensor*) -- The vector for
which the Hessian vector product is computed. Must be the same
size as the input of "func". This argument is optional when
"func"'s input contains a single element and (if it is not
provided) will be set as a Tensor containing a single "1".
* **create_graph** (*bool**, **optional*) -- If "True", both the
output and result will be computed in a differentiable way.
| https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html | pytorch docs |
Note that when "strict" is "False", the result can not require
gradients or be disconnected from the inputs. Defaults to
"False".
* **strict** (*bool**, **optional*) -- If "True", an error will
be raised when we detect that there exists an input such that
all the outputs are independent of it. If "False", we return a
Tensor of zeros as the hvp for said inputs, which is the
expected mathematical value. Defaults to "False".
Returns:
tuple with:
func_output (tuple of Tensors or Tensor): output of
"func(inputs)"
hvp (tuple of Tensors or Tensor): result of the dot product
with the same shape as the inputs.
Return type:
output (tuple)
-[ Example ]-
    >>> def pow_reducer(x):
    ...     return x.pow(3).sum()
    >>> inputs = torch.rand(2, 2)
    >>> v = torch.ones(2, 2)
    >>> hvp(pow_reducer, inputs, v)
    (tensor(0.1448),
     tensor([[2.0239, 1.6456],
             [2.4988, 1.4310]]))
| https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html | pytorch docs |
    >>> hvp(pow_reducer, inputs, v, create_graph=True)
    (tensor(0.1448, grad_fn=<...>),
     tensor([[2.0239, 1.6456],
             [2.4988, 1.4310]], grad_fn=<...>))
    >>> def pow_adder_reducer(x, y):
    ...     return (2 * x.pow(2) + 3 * y.pow(2)).sum()
    >>> inputs = (torch.rand(2), torch.rand(2))
    >>> v = (torch.zeros(2), torch.ones(2))
    >>> hvp(pow_adder_reducer, inputs, v)
    (tensor(2.3030),
     (tensor([0., 0.]),
      tensor([6., 6.])))
Note:
    This function is significantly slower than *vhp* due to backward
    mode AD constraints. If your function is twice continuously
    differentiable, then hvp = vhp.t(). So if you know that your
    function satisfies this condition, you should use *vhp* instead,
    which is much faster with the current implementation.
| https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html | pytorch docs |
torch.Tensor.tril
Tensor.tril(diagonal=0) -> Tensor
See "torch.tril()" | https://pytorch.org/docs/stable/generated/torch.Tensor.tril.html | pytorch docs |
torch.Tensor.lt
Tensor.lt(other) -> Tensor
See "torch.lt()". | https://pytorch.org/docs/stable/generated/torch.Tensor.lt.html | pytorch docs |
torch.Tensor.exp
Tensor.exp() -> Tensor
See "torch.exp()" | https://pytorch.org/docs/stable/generated/torch.Tensor.exp.html | pytorch docs |
torch.nn.functional.conv2d
torch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor
Applies a 2D convolution over an input image composed of several
input planes.
This operator supports TensorFloat32.
See "Conv2d" for details and output shape.
Note:
In some circumstances when given tensors on a CUDA device and
using CuDNN, this operator may select a nondeterministic
algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a
performance cost) by setting "torch.backends.cudnn.deterministic
= True". See Reproducibility for more information.
Note:
This operator supports complex data types i.e. "complex32,
complex64, complex128".
Parameters:
* input -- input tensor of shape (\text{minibatch} ,
\text{in_channels} , iH , iW)
* **weight** -- filters of shape (\text{out\_channels} ,
| https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html | pytorch docs |
\frac{\text{in_channels}}{\text{groups}} , kH , kW)
* **bias** -- optional bias tensor of shape
(\text{out\_channels}). Default: "None"
* **stride** -- the stride of the convolving kernel. Can be a
single number or a tuple *(sH, sW)*. Default: 1
* **padding** --
implicit paddings on both sides of the input. Can be a string
{'valid', 'same'}, single number or a tuple *(padH, padW)*.
Default: 0 "padding='valid'" is the same as no padding.
"padding='same'" pads the input so the output has the same
shape as the input. However, this mode doesn't support any
stride values other than 1.
      Warning:
        For "padding='same'", if the "weight" is even-length and
        "dilation" is odd in any dimension, a full "pad()" operation
        may be needed internally, lowering performance.
* **dilation** -- the spacing between kernel elements. Can be a
single number or a tuple *(dH, dW)*. Default: 1
| https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html | pytorch docs |
    * **groups** -- split input into groups, \text{in_channels}
      should be divisible by the number of groups. Default: 1
Examples:
>>> # With square kernels and equal stride
>>> filters = torch.randn(8, 4, 3, 3)
>>> inputs = torch.randn(1, 4, 5, 5)
>>> F.conv2d(inputs, filters, padding=1)
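    For the shapes above, the result has 8 output channels and,
    because "padding=1" is used with a 3x3 kernel, keeps the 5x5
    spatial size (a quick check added here, not part of the original
    example):
        >>> F.conv2d(inputs, filters, padding=1).shape
        torch.Size([1, 8, 5, 5])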
| https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html | pytorch docs |
torch.func.vmap
torch.func.vmap(func, in_dims=0, out_dims=0, randomness='error', *, chunk_size=None)
vmap is the vectorizing map; "vmap(func)" returns a new function
that maps "func" over some dimension of the inputs. Semantically,
vmap pushes the map into PyTorch operations called by "func",
effectively vectorizing those operations.
vmap is useful for handling batch dimensions: one can write a
function "func" that runs on examples and then lift it to a
function that can take batches of examples with "vmap(func)". vmap
can also be used to compute batched gradients when composed with
autograd.
Note:
"torch.vmap()" is aliased to "torch.func.vmap()" for convenience.
Use whichever one you'd like.
Parameters:
* func (function) -- A Python function that takes one or
more arguments. Must return one or more Tensors.
* **in_dims** (*int** or **nested structure*) -- Specifies which
| https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
dimension of the inputs should be mapped over. "in_dims"
should have a structure like the inputs. If the "in_dim" for a
particular input is None, then that indicates there is no map
dimension. Default: 0.
* **out_dims** (*int** or **Tuple**[**int**]*) -- Specifies
where the mapped dimension should appear in the outputs. If
"out_dims" is a Tuple, then it should have one element per
output. Default: 0.
* **randomness** (*str*) -- Specifies whether the randomness in
this vmap should be the same or different across batches. If
'different', the randomness for each batch will be different.
If 'same', the randomness will be the same across batches. If
'error', any calls to random functions will error. Default:
'error'. WARNING: this flag only applies to random PyTorch
operations and does not apply to Python's random module or
numpy randomness.
| https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
numpy randomness.
* **chunk_size** (*None** or **int*) -- If None (default), apply
a single vmap over inputs. If not None, then compute the vmap
"chunk_size" samples at a time. Note that "chunk_size=1" is
equivalent to computing the vmap with a for-loop. If you run
into memory issues computing the vmap, please try a non-None
chunk_size.
Returns:
Returns a new "batched" function. It takes the same inputs as
"func", except each input has an extra dimension at the index
    specified by "in_dims". It returns the same outputs as
"func", except each output has an extra dimension at the index
specified by "out_dims".
Return type:
Callable
One example of using "vmap()" is to compute batched dot products.
PyTorch doesn't provide a batched "torch.dot" API; instead of
unsuccessfully rummaging through docs, use "vmap()" to construct a
new function. | https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
new function.
    >>> torch.dot                            # [D], [D] -> []
    >>> batched_dot = torch.func.vmap(torch.dot)  # [N, D], [N, D] -> [N]
    >>> x, y = torch.randn(2, 5), torch.randn(2, 5)
    >>> batched_dot(x, y)
"vmap()" can be helpful in hiding batch dimensions, leading to a
simpler model authoring experience.
    >>> batch_size, feature_size = 3, 5
    >>> weights = torch.randn(feature_size, requires_grad=True)
    >>> def model(feature_vec):
    ...     # Very simple linear model with activation
    ...     return feature_vec.dot(weights).relu()
    >>> examples = torch.randn(batch_size, feature_size)
    >>> result = torch.vmap(model)(examples)
"vmap()" can also help vectorize computations that were previously
difficult or impossible to batch. One example is higher-order
gradient computation. The PyTorch autograd engine computes vjps
(vector-Jacobian products). Computing a full Jacobian matrix for
some function f: R^N -> R^N usually requires N calls to | https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
"autograd.grad", one per Jacobian row. Using "vmap()", we can
vectorize the whole computation, computing the Jacobian in a single
call to "autograd.grad".
    >>> # Setup
    >>> N = 5
    >>> f = lambda x: x ** 2
    >>> x = torch.randn(N, requires_grad=True)
    >>> y = f(x)
    >>> I_N = torch.eye(N)
    >>> # Sequential approach
    >>> jacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]
    ...                  for v in I_N.unbind()]
    >>> jacobian = torch.stack(jacobian_rows)
    >>> # Vectorized gradient computation
    >>> def get_vjp(v):
    ...     return torch.autograd.grad(y, x, v)
    >>> jacobian = torch.vmap(get_vjp)(I_N)
"vmap()" can also be nested, producing an output with multiple
batched dimensions
    >>> torch.dot                            # [D], [D] -> []
    >>> batched_dot = torch.vmap(torch.vmap(torch.dot))  # [N1, N0, D], [N1, N0, D] -> [N1, N0]
    >>> x, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5)
    >>> batched_dot(x, y)                    # tensor of size [2, 3]
| https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
  If the inputs are not batched along the first dimension, "in_dims"
  specifies the dimension that each input is batched along as
    >>> torch.dot                            # [N], [N] -> []
    >>> batched_dot = torch.vmap(torch.dot, in_dims=1)   # [N, D], [N, D] -> [D]
    >>> x, y = torch.randn(2, 5), torch.randn(2, 5)
    >>> batched_dot(x, y)   # output is [5] instead of [2] if batched along the 0th dimension
If there are multiple inputs each of which is batched along
different dimensions, "in_dims" must be a tuple with the batch
dimension for each input as
    >>> torch.dot                            # [D], [D] -> []
    >>> batched_dot = torch.vmap(torch.dot, in_dims=(0, None))  # [N, D], [D] -> [N]
    >>> x, y = torch.randn(2, 5), torch.randn(5)
    >>> batched_dot(x, y)   # second arg doesn't have a batch dim because in_dim[1] was None
If the input is a Python struct, "in_dims" must be a tuple | https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
containing a struct matching the shape of the input:
    >>> f = lambda dict: torch.dot(dict['x'], dict['y'])
    >>> x, y = torch.randn(2, 5), torch.randn(5)
    >>> input = {'x': x, 'y': y}
    >>> batched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},))
    >>> batched_dot(input)
By default, the output is batched along the first dimension.
However, it can be batched along any dimension by using "out_dims"
    >>> f = lambda x: x ** 2
    >>> x = torch.randn(2, 5)
    >>> batched_pow = torch.vmap(f, out_dims=1)
    >>> batched_pow(x)                       # [5, 2]
For any function that uses kwargs, the returned function will not
batch the kwargs but will accept kwargs
    >>> x = torch.randn([2, 5])
    >>> def fn(x, scale=4.):
    ...     return x * scale
    >>> batched_pow = torch.vmap(fn)
    >>> assert torch.allclose(batched_pow(x), x * 4)
    >>> batched_pow(x, scale=x)   # scale is not batched, output has shape [2, 2, 5]
Note:
vmap does not provide general autobatching or handle variable-
| https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
length sequences out of the box. | https://pytorch.org/docs/stable/generated/torch.func.vmap.html | pytorch docs |
torch.Tensor.bernoulli_
Tensor.bernoulli_(p=0.5, *, generator=None) -> Tensor
Fills each location of "self" with an independent sample from
\text{Bernoulli}(\texttt{p}). "self" can have integral "dtype".
"p" should either be a scalar or tensor containing probabilities to
be used for drawing the binary random number.
If it is a tensor, the \text{i}^{th} element of "self" tensor will
be set to a value sampled from
\text{Bernoulli}(\texttt{p_tensor[i]}). In this case p must have
floating point "dtype".
See also "bernoulli()" and "torch.bernoulli()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bernoulli_.html | pytorch docs |
torch.Tensor.is_meta
Tensor.is_meta
Is "True" if the Tensor is a meta tensor, "False" otherwise. Meta
tensors are like normal tensors, but they carry no data. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_meta.html | pytorch docs |
torch.jit.onednn_fusion_enabled
torch.jit.onednn_fusion_enabled()
Returns whether onednn JIT fusion is enabled | https://pytorch.org/docs/stable/generated/torch.jit.onednn_fusion_enabled.html | pytorch docs |
torch.Tensor.absolute_
Tensor.absolute_() -> Tensor
In-place version of "absolute()". Alias for "abs_()".
torch.logaddexp2
torch.logaddexp2(input, other, *, out=None) -> Tensor
Logarithm of the sum of exponentiations of the inputs in base-2.
Calculates pointwise \log_2\left(2^x + 2^y\right). See
"torch.logaddexp()" for more details.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor. | https://pytorch.org/docs/stable/generated/torch.logaddexp2.html | pytorch docs |
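No example appears in this excerpt; a small one, since \log_2(2^1 + 2^1) = 2 and \log_2(2^0 + 2^0) = 1:
    >>> torch.logaddexp2(torch.tensor([1.0, 0.0]), torch.tensor([1.0, 0.0]))
    tensor([2., 1.])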
torch.cuda.memory_snapshot
torch.cuda.memory_snapshot()
Returns a snapshot of the CUDA memory allocator state across all
devices.
Interpreting the output of this function requires familiarity with
the memory allocator internals.
Note:
See Memory management for more details about GPU memory
management.
| https://pytorch.org/docs/stable/generated/torch.cuda.memory_snapshot.html | pytorch docs |
torch.Tensor.sigmoid
Tensor.sigmoid() -> Tensor
See "torch.sigmoid()" | https://pytorch.org/docs/stable/generated/torch.Tensor.sigmoid.html | pytorch docs |
LazyInstanceNorm2d
class torch.nn.LazyInstanceNorm2d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
A "torch.nn.InstanceNorm2d" module with lazy initialization of the
"num_features" argument of the "InstanceNorm2d" that is inferred
from the "input.size(1)". The attributes that will be lazily
initialized are weight, bias, running_mean and running_var.
Check the "torch.nn.modules.lazy.LazyModuleMixin" for further
documentation on lazy modules and their limitations.
Parameters:
* num_features -- C from an expected input of size (N, C, H,
W) or (C, H, W)
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
| https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm2d.html | pytorch docs |
"True", this module has learnable affine parameters,
initialized the same way as done for batch normalization.
Default: "False".
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics and always uses batch statistics in both
training and eval modes. Default: "False"
Shape:
* Input: (N, C, H, W) or (C, H, W)
* Output: (N, C, H, W) or (C, H, W) (same shape as input)
cls_to_become
alias of "InstanceNorm2d"
| https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm2d.html | pytorch docs |
torch.isreal
torch.isreal(input) -> Tensor
Returns a new tensor with boolean elements representing if each
element of "input" is real-valued or not. All real-valued types are
considered real. Complex values are considered real when their
imaginary part is 0.
Parameters:
input (Tensor) -- the input tensor.
Returns:
A boolean tensor that is True where "input" is real and False
elsewhere
Example:
>>> torch.isreal(torch.tensor([1, 1+1j, 2+0j]))
tensor([True, False, True])
| https://pytorch.org/docs/stable/generated/torch.isreal.html | pytorch docs |
TransformerEncoderLayer
class torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=<function relu>, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)
TransformerEncoderLayer is made up of self-attn and feedforward
network. This standard encoder layer is based on the paper
"Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki
Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser,
and Illia Polosukhin. 2017. Attention is all you need. In Advances
in Neural Information Processing Systems, pages 6000-6010. Users
may modify or implement in a different way during application.
Parameters:
* d_model (int) -- the number of expected features in the
input (required).
* **nhead** (*int*) -- the number of heads in the
multiheadattention models (required).
* **dim_feedforward** (*int*) -- the dimension of the
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html | pytorch docs |
feedforward network model (default=2048).
* **dropout** (*float*) -- the dropout value (default=0.1).
* **activation** (*Union**[**str**,
**Callable**[**[**Tensor**]**, **Tensor**]**]*) -- the
activation function of the intermediate layer, can be a string
("relu" or "gelu") or a unary callable. Default: relu
* **layer_norm_eps** (*float*) -- the eps value in layer
normalization components (default=1e-5).
* **batch_first** (*bool*) -- If "True", then the input and
output tensors are provided as (batch, seq, feature). Default:
"False" (seq, batch, feature).
* **norm_first** (*bool*) -- if "True", layer norm is done prior
to attention and feedforward operations, respectively.
Otherwise it's done after. Default: "False" (after).
Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
>>> src = torch.rand(10, 32, 512)
>>> out = encoder_layer(src) | https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html | pytorch docs |
Alternatively, when "batch_first" is "True":
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
>>> src = torch.rand(32, 10, 512)
>>> out = encoder_layer(src)
Fast path:
forward() will use a special optimized implementation if all of
the following conditions are met:
* Either autograd is disabled (using "torch.inference_mode" or
"torch.no_grad") or no tensor argument "requires_grad"
* training is disabled (using ".eval()")
* batch_first is "True" and the input is batched (i.e.,
"src.dim() == 3")
    * activation is one of: "relu", "gelu",
      "torch.functional.relu", or "torch.functional.gelu"
* at most one of "src_mask" and "src_key_padding_mask" is passed
* if src is a NestedTensor, neither "src_mask" nor
"src_key_padding_mask" is passed
* the two "LayerNorm" instances have a consistent "eps" value
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html | pytorch docs |
(this will naturally be the case unless the caller has
manually modified one without modifying the other)
If the optimized implementation is in use, a NestedTensor can be
passed for "src" to represent padding more efficiently than
using a padding mask. In this case, a NestedTensor will be
returned, and an additional speedup proportional to the fraction
of the input that is padding can be expected.
forward(src, src_mask=None, src_key_padding_mask=None, is_causal=False)
Pass the input through the encoder layer.
Parameters:
* **src** (*Tensor*) -- the sequence to the encoder layer
(required).
* **src_mask** (*Optional**[**Tensor**]*) -- the mask for the
src sequence (optional).
* **is_causal** (*bool*) -- If specified, applies a causal
mask as src_mask. Mutually exclusive with providing
src_mask. Default: "False".
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html | pytorch docs |
* **src_key_padding_mask** (*Optional**[**Tensor**]*) -- the
mask for the src keys per batch (optional).
Return type:
*Tensor*
Shape:
see the docs in Transformer class.
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html | pytorch docs |
MaxUnpool3d
class torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0)
Computes a partial inverse of "MaxPool3d".
"MaxPool3d" is not fully invertible, since the non-maximal values
are lost. "MaxUnpool3d" takes in as input the output of "MaxPool3d"
including the indices of the maximal values and computes a partial
inverse in which all non-maximal values are set to zero.
Note:
"MaxPool3d" can map several input sizes to the same output sizes.
Hence, the inversion process can get ambiguous. To accommodate
this, you can provide the needed output size as an additional
argument "output_size" in the forward call. See the Inputs
section below.
Parameters:
* kernel_size (int or tuple) -- Size of the max
pooling window.
* **stride** (*int** or **tuple*) -- Stride of the max pooling
window. It is set to "kernel_size" by default.
* **padding** (*int** or **tuple*) -- Padding that was added to
| https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html | pytorch docs |
the input
Inputs:
* input: the input Tensor to invert
* *indices*: the indices given out by "MaxPool3d"
* *output_size* (optional): the targeted output size
Shape:
* Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},
W_{in}).
* Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},
H_{out}, W_{out}), where
D_{out} = (D_{in} - 1) \times \text{stride[0]} - 2 \times
\text{padding[0]} + \text{kernel\_size[0]}
H_{out} = (H_{in} - 1) \times \text{stride[1]} - 2 \times
\text{padding[1]} + \text{kernel\_size[1]}
W_{out} = (W_{in} - 1) \times \text{stride[2]} - 2 \times
\text{padding[2]} + \text{kernel\_size[2]}
or as given by "output_size" in the call operator
Example:
>>> # pool of square window of size=3, stride=2
>>> pool = nn.MaxPool3d(3, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool3d(3, stride=2)
| https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html | pytorch docs |
>>> output, indices = pool(torch.randn(20, 16, 51, 33, 15))
>>> unpooled_output = unpool(output, indices)
>>> unpooled_output.size()
torch.Size([20, 16, 51, 33, 15])
| https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html | pytorch docs |
torch.Tensor.is_leaf
Tensor.is_leaf
All Tensors that have "requires_grad" which is "False" will be leaf
Tensors by convention.
For Tensors that have "requires_grad" which is "True", they will be
leaf Tensors if they were created by the user. This means that they
are not the result of an operation and so "grad_fn" is None.
Only leaf Tensors will have their "grad" populated during a call to
"backward()". To get "grad" populated for non-leaf Tensors, you can
use "retain_grad()".
Example:
>>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
| https://pytorch.org/docs/stable/generated/torch.Tensor.is_leaf.html | pytorch docs |
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it
| https://pytorch.org/docs/stable/generated/torch.Tensor.is_leaf.html | pytorch docs |
torch.jit.wait
torch.jit.wait(future)
Forces completion of a torch.jit.Future[T] asynchronous task,
returning the result of the task. See "fork()" for docs and
examples.
Parameters:
    future (torch.jit.Future[T]) -- an asynchronous task reference,
    created through torch.jit.fork
Returns:
    the return value of the completed task
Return type:
    T | https://pytorch.org/docs/stable/generated/torch.jit.wait.html | pytorch docs |
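A minimal sketch of the fork/wait pattern (the function name here is hypothetical):
    >>> def add_one(x):
    ...     return x + 1
    >>> fut = torch.jit.fork(add_one, torch.ones(2))  # schedule the task
    >>> torch.jit.wait(fut)                           # block until it completes
    tensor([2., 2.])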
torch.Tensor.scatter_add
Tensor.scatter_add(dim, index, src) -> Tensor
Out-of-place version of "torch.Tensor.scatter_add_()" | https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add.html | pytorch docs |
torch.Tensor.reshape
Tensor.reshape(*shape) -> Tensor
Returns a tensor with the same data and number of elements as
"self" but with the specified shape. This method returns a view if
"shape" is compatible with the current shape. See
"torch.Tensor.view()" on when it is possible to return a view.
See "torch.reshape()"
Parameters:
shape (tuple of ints or int...) -- the desired shape | https://pytorch.org/docs/stable/generated/torch.Tensor.reshape.html | pytorch docs |
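A small illustrative example:
    >>> t = torch.arange(6)
    >>> t.reshape(2, 3)
    tensor([[0, 1, 2],
            [3, 4, 5]])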
ObserverBase
class torch.quantization.observer.ObserverBase(dtype)
Base observer Module. Any observer implementation should derive
from this class.
Concrete observers should follow the same API. In forward, they
will update the statistics of the observed Tensor. And they should
provide a calculate_qparams function that computes the
quantization parameters given the collected statistics.
Parameters:
dtype -- dtype argument to the quantize node needed to
implement the reference model spec.
classmethod with_args(**kwargs)
Wrapper that allows creation of class factories.
This can be useful when there is a need to create classes with
the same constructor arguments, but different instances. Can be
used in conjunction with _callable_args
Example:
>>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)
>>> foo_instance1 = foo_builder()
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.ObserverBase.html | pytorch docs |
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1) == id(foo_instance2)
False
classmethod with_callable_args(**kwargs)
Wrapper that allows creation of class factories args that need
to be called at construction time.
This can be useful when there is a need to create classes with
the same constructor arguments, but different instances and
those arguments should only be calculated at construction time.
Can be used in conjunction with _with_args
Example:
>>> Foo.with_callable_args = classmethod(_with_callable_args)
>>> Foo.with_args = classmethod(_with_args)
>>> foo_builder = Foo.with_callable_args(cur_time=get_time_func).with_args(name="dan")
>>> foo_instance1 = foo_builder()
>>> # wait 50
>>> foo_instance2 = foo_builder()
>>> id(foo_instance1.creation_time) == id(foo_instance2.creation_time)
False
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.ObserverBase.html | pytorch docs |
torch.Tensor.igamma_
Tensor.igamma_(other) -> Tensor
In-place version of "igamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.igamma_.html | pytorch docs |
torch.Tensor.log10
Tensor.log10() -> Tensor
See "torch.log10()" | https://pytorch.org/docs/stable/generated/torch.Tensor.log10.html | pytorch docs |
torch.cuda.can_device_access_peer
torch.cuda.can_device_access_peer(device, peer_device)
Checks if peer access between two devices is possible.
Return type:
bool | https://pytorch.org/docs/stable/generated/torch.cuda.can_device_access_peer.html | pytorch docs |
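A minimal sketch (assumes at least two CUDA devices; the result depends on the hardware topology, so no output is shown):
    >>> torch.cuda.can_device_access_peer(0, 1)  # True if device 0 can directly access device 1's memory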
torch.linalg.det
torch.linalg.det(A, *, out=None) -> Tensor
Computes the determinant of a square matrix.
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if "A" is a batch of matrices
then the output has the same batch dimensions.
See also:
"torch.linalg.slogdet()" computes the sign and natural logarithm
of the absolute value of the determinant of square matrices.
Parameters:
    A (*Tensor*) -- tensor of shape (*, n, n) where * is zero or
    more batch dimensions.
Keyword Arguments:
out (Tensor, optional) -- output tensor. Ignored if
None. Default: None.
Examples:
>>> A = torch.randn(3, 3)
>>> torch.linalg.det(A)
tensor(0.0934)
>>> A = torch.randn(3, 2, 2)
>>> torch.linalg.det(A)
tensor([1.1990, 0.4099, 0.7386])
| https://pytorch.org/docs/stable/generated/torch.linalg.det.html | pytorch docs |
TripletMarginWithDistanceLoss
class torch.nn.TripletMarginWithDistanceLoss(*, distance_function=None, margin=1.0, swap=False, reduction='mean')
Creates a criterion that measures the triplet loss given input
tensors a, p, and n (representing anchor, positive, and negative
examples, respectively), and a nonnegative, real-valued function
("distance function") used to compute the relationship between the
anchor and positive example ("positive distance") and the anchor
and negative example ("negative distance").
The unreduced loss (i.e., with "reduction" set to "'none'") can be
described as:
\ell(a, p, n) = L = \{l_1,\dots,l_N\}^\top, \quad l_i = \max
\{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}
where N is the batch size; d is a nonnegative, real-valued function
quantifying the closeness of two tensors, referred to as the
"distance_function"; and margin is a nonnegative margin | https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html | pytorch docs |
representing the minimum difference between the positive and
negative distances that is required for the loss to be 0. The
input tensors have N elements each and can be of any shape that the
distance function can handle.
If "reduction" is not "'none'" (default "'mean'"), then:
    \ell(x, y) = \begin{cases}
        \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\
        \operatorname{sum}(L),  & \text{if reduction} = \text{'sum'.}
    \end{cases}
See also "TripletMarginLoss", which computes the triplet loss for
input tensors using the l_p distance as the distance function.
Parameters:
* distance_function (Callable, optional) -- A
nonnegative, real-valued function that quantifies the
closeness of two tensors. If not specified,
nn.PairwiseDistance will be used. Default: "None"
* **margin** (*float**, **optional*) -- A nonnegative margin
representing the minimum difference between the positive and
| https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html | pytorch docs |
negative distances required for the loss to be 0. Larger
margins penalize cases where the negative examples are not
distant enough from the anchors, relative to the positives.
Default: 1.
* **swap** (*bool**, **optional*) -- Whether to use the distance
swap described in the paper *Learning shallow convolutional
feature descriptors with triplet losses* by V. Balntas, E.
Riba et al. If True, and if the positive example is closer to
the negative example than the anchor is, swaps the positive
example and the anchor in the loss computation. Default:
"False".
* **reduction** (*str**, **optional*) -- Specifies the
(optional) reduction to apply to the output: "'none'" |
"'mean'" | "'sum'". "'none'": no reduction will be applied,
"'mean'": the sum of the output will be divided by the number
of elements in the output, "'sum'": the output will be summed.
Default: "'mean'"
Shape: | https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html | pytorch docs |
Default: "'mean'"
Shape:
* Input: (N, *) where * represents any number of additional
dimensions as supported by the distance function.
* Output: A Tensor of shape (N) if "reduction" is "'none'", or a
scalar otherwise.
Examples:
>>> # Initialize embeddings
>>> embedding = nn.Embedding(1000, 128)
>>> anchor_ids = torch.randint(0, 1000, (1,))
>>> positive_ids = torch.randint(0, 1000, (1,))
>>> negative_ids = torch.randint(0, 1000, (1,))
>>> anchor = embedding(anchor_ids)
>>> positive = embedding(positive_ids)
>>> negative = embedding(negative_ids)
>>>
>>> # Built-in Distance Function
>>> triplet_loss = \
>>> nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance())
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function
>>> def l_infinity(x1, x2):
| https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html | pytorch docs |
>>> return torch.max(torch.abs(x1 - x2), dim=1).values
>>>
>>> triplet_loss = (
>>> nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5))
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function (Lambda)
>>> triplet_loss = (
>>> nn.TripletMarginWithDistanceLoss(
>>> distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y)))
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
Reference:
V. Balntas, et al.: Learning shallow convolutional feature
descriptors with triplet losses:
http://www.bmva.org/bmvc/2016/papers/paper119/index.html | https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html | pytorch docs |
torch.rot90
torch.rot90(input, k=1, dims=[0, 1]) -> Tensor
Rotate an n-D tensor by 90 degrees in the plane specified by dims
axis. Rotation direction is from the first towards the second axis
if k > 0, and from the second towards the first for k < 0.
Parameters:
* input (Tensor) -- the input tensor.
* **k** (*int*) -- number of times to rotate. Default value is 1
* **dims** (*a list** or **tuple*) -- axis to rotate. Default
value is [0, 1]
Example:
>>> x = torch.arange(4).view(2, 2)
>>> x
tensor([[0, 1],
[2, 3]])
>>> torch.rot90(x, 1, [0, 1])
tensor([[1, 3],
[0, 2]])
>>> x = torch.arange(8).view(2, 2, 2)
>>> x
tensor([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
>>> torch.rot90(x, 1, [1, 2])
tensor([[[1, 3],
[0, 2]],
[[5, 7],
[4, 6]]])
| https://pytorch.org/docs/stable/generated/torch.rot90.html | pytorch docs |
torch.nn.utils.prune.global_unstructured
torch.nn.utils.prune.global_unstructured(parameters, pruning_method, importance_scores=None, **kwargs)
Globally prunes tensors corresponding to all parameters in
"parameters" by applying the specified "pruning_method". Modifies
modules in place by:
    1. adding a named buffer called "name+'_mask'" corresponding to
       the binary mask applied to the parameter "name" by the pruning
       method.
    2. replacing the parameter "name" by its pruned version, while the
       original (unpruned) parameter is stored in a new parameter named
       "name+'_orig'".
Parameters:
* parameters (Iterable of (module, name)
tuples) -- parameters of the model to prune in a global
fashion, i.e. by aggregating all weights prior to deciding
which ones to prune. module must be of type "nn.Module", and
name must be a string. | https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html | pytorch docs |
name must be a string.
* **pruning_method** (*function*) -- a valid pruning function
from this module, or a custom one implemented by the user that
satisfies the implementation guidelines and has
"PRUNING_TYPE='unstructured'".
* **importance_scores** (*dict*) -- a dictionary mapping
(module, name) tuples to the corresponding parameter's
importance scores tensor. The tensor should be the same shape
as the parameter, and is used for computing mask for pruning.
If unspecified or None, the parameter will be used in place of
its importance scores.
* **kwargs** -- other keyword arguments such as: amount (int or
float): quantity of parameters to prune across the specified
parameters. If "float", should be between 0.0 and 1.0 and
represent the fraction of parameters to prune. If "int", it
represents the absolute number of parameters to prune.
Raises: | https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html | pytorch docs |
Raises:
TypeError -- if "PRUNING_TYPE != 'unstructured'"
Note:
Since global structured pruning doesn't make much sense unless
the norm is normalized by the size of the parameter, we now limit
the scope of global pruning to unstructured methods.
-[ Examples ]-
    >>> from torch.nn.utils import prune
    >>> from collections import OrderedDict
    >>> net = nn.Sequential(OrderedDict([
    ...     ('first', nn.Linear(10, 4)),
    ...     ('second', nn.Linear(4, 1)),
    ... ]))
    >>> parameters_to_prune = (
    ...     (net.first, 'weight'),
    ...     (net.second, 'weight'),
    ... )
    >>> prune.global_unstructured(
    ...     parameters_to_prune,
    ...     pruning_method=prune.L1Unstructured,
    ...     amount=10,
    ... )
    >>> print(sum(torch.nn.utils.parameters_to_vector(net.buffers()) == 0))
    tensor(10)
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html | pytorch docs |
TransformerDecoderLayer
class torch.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=<function relu>, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)
TransformerDecoderLayer is made up of self-attn, multi-head-attn
and feedforward network. This standard decoder layer is based on
the paper "Attention Is All You Need". Ashish Vaswani, Noam
Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you
need. In Advances in Neural Information Processing Systems, pages
6000-6010. Users may modify or implement in a different way during
application.
Parameters:
* d_model (int) -- the number of expected features in the
input (required).
* **nhead** (*int*) -- the number of heads in the
multiheadattention models (required).
* **dim_feedforward** (*int*) -- the dimension of the
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html | pytorch docs |
feedforward network model (default=2048).
* **dropout** (*float*) -- the dropout value (default=0.1).
* **activation** (*Union**[**str**,
**Callable**[**[**Tensor**]**, **Tensor**]**]*) -- the
activation function of the intermediate layer, can be a string
("relu" or "gelu") or a unary callable. Default: relu
* **layer_norm_eps** (*float*) -- the eps value in layer
normalization components (default=1e-5).
* **batch_first** (*bool*) -- If "True", then the input and
output tensors are provided as (batch, seq, feature). Default:
"False" (seq, batch, feature).
* **norm_first** (*bool*) -- if "True", layer norm is done prior
to self attention, multihead attention and feedforward
operations, respectively. Otherwise it's done after. Default:
"False" (after).
Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> memory = torch.rand(10, 32, 512) | https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html | pytorch docs |
>>> tgt = torch.rand(20, 32, 512)
>>> out = decoder_layer(tgt, memory)
Alternatively, when "batch_first" is "True":
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)
>>> memory = torch.rand(32, 10, 512)
>>> tgt = torch.rand(32, 20, 512)
>>> out = decoder_layer(tgt, memory)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, tgt_is_causal=False, memory_is_causal=False)
Pass the inputs (and mask) through the decoder layer.
Parameters:
* **tgt** (*Tensor*) -- the sequence to the decoder layer
(required).
* **memory** (*Tensor*) -- the sequence from the last layer
of the encoder (required).
* **tgt_mask** (*Optional**[**Tensor**]*) -- the mask for the
tgt sequence (optional).
* **memory_mask** (*Optional**[**Tensor**]*) -- the mask for
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html | pytorch docs |
the memory sequence (optional).
* **tgt_key_padding_mask** (*Optional**[**Tensor**]*) -- the
mask for the tgt keys per batch (optional).
* **memory_key_padding_mask** (*Optional**[**Tensor**]*) --
the mask for the memory keys per batch (optional).
* **tgt_is_causal** (*bool*) -- If specified, applies a
causal mask as tgt mask. Mutually exclusive with providing
tgt_mask. Default: "False".
* **memory_is_causal** (*bool*) -- If specified, applies a
causal mask as tgt mask. Mutually exclusive with providing
memory_mask. Default: "False".
Return type:
*Tensor*
Shape:
see the docs in Transformer class.
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html | pytorch docs |
torch.randint_like
torch.randint_like(input, low=0, high, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor
Returns a tensor with the same shape as Tensor "input" filled with
random integers generated uniformly between "low" (inclusive) and
"high" (exclusive).
Parameters:
    * **input** (*Tensor*) -- the size of "input" will determine the
      size of the output tensor.
* **low** (*int**, **optional*) -- Lowest integer to be drawn
from the distribution. Default: 0.
* **high** (*int*) -- One above the highest integer to be drawn
from the distribution.
Keyword Arguments:
* dtype ("torch.dtype", optional) -- the desired data type
of returned Tensor. Default: if "None", defaults to the dtype
of "input".
* **layout** ("torch.layout", optional) -- the desired layout of
returned tensor. Default: if "None", defaults to the layout of
| https://pytorch.org/docs/stable/generated/torch.randint_like.html | pytorch docs |
"input".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", defaults to the device of
"input".
* **requires_grad** (*bool**, **optional*) -- If autograd should
record operations on the returned tensor. Default: "False".
* **memory_format** ("torch.memory_format", optional) -- the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
| https://pytorch.org/docs/stable/generated/torch.randint_like.html | pytorch docs |
torch.Tensor.masked_select
Tensor.masked_select(mask) -> Tensor
See "torch.masked_select()" | https://pytorch.org/docs/stable/generated/torch.Tensor.masked_select.html | pytorch docs |
torch.Tensor.bernoulli
Tensor.bernoulli(*, generator=None) -> Tensor
Returns a result tensor where each \texttt{result[i]} is
independently sampled from \text{Bernoulli}(\texttt{self[i]}).
"self" must have floating point "dtype", and the result will have
the same "dtype".
See "torch.bernoulli()" | https://pytorch.org/docs/stable/generated/torch.Tensor.bernoulli.html | pytorch docs |
torch.fft.fftfreq
torch.fft.fftfreq(n, d=1.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor
Computes the discrete Fourier Transform sample frequencies for a
signal of size "n".
Note:
By convention, "fft()" returns positive frequency terms first,
followed by the negative frequencies in reverse order, so that
"f[-i]" for all 0 < i \leq n/2` in Python gives the negative
frequency terms. For an FFT of length "n" and with inputs spaced
in length unit "d", the frequencies are:
f = [0, 1, ..., (n - 1) // 2, -(n // 2), ..., -1] / (d * n)
Note:
For even lengths, the Nyquist frequency at "f[n/2]" can be
thought of as either negative or positive. "fftfreq()" follows
NumPy's convention of taking it to be negative.
Parameters:
* n (int) -- the FFT length
* **d** (*float**, **optional*) -- The sampling length scale.
| https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html | pytorch docs |
The spacing between individual samples of the FFT input. The
default assumes unit spacing, dividing that result by the
actual spacing gives the result in physical frequency units.
Keyword Arguments:
* out (Tensor, optional) -- the output tensor.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. Default: if "None", uses a global default
(see "torch.set_default_tensor_type()").
* **layout** ("torch.layout", optional) -- the desired layout of
returned Tensor. Default: "torch.strided".
* **device** ("torch.device", optional) -- the desired device of
returned tensor. Default: if "None", uses the current device
for the default tensor type (see
"torch.set_default_tensor_type()"). "device" will be the CPU
for CPU tensor types and the current CUDA device for CUDA
tensor types.
* **requires_grad** (*bool**, **optional*) -- If autograd should
| https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html | pytorch docs |
record operations on the returned tensor. Default: "False".
-[ Example ]-
torch.fft.fftfreq(5)
tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])
For even input, we can see the Nyquist frequency at "f[2]" is given
as negative:
>>> torch.fft.fftfreq(4)
tensor([ 0.0000, 0.2500, -0.5000, -0.2500])
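Building on the sampling spacing "d" described above, a sketch in
physical units (assuming samples taken every 1e-3 seconds, i.e. a
1 kHz rate):
>>> torch.fft.fftfreq(4, d=1e-3)
tensor([   0.,  250., -500., -250.])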
| https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html | pytorch docs |
torch.Tensor.broadcast_to
Tensor.broadcast_to(shape) -> Tensor
See "torch.broadcast_to()". | https://pytorch.org/docs/stable/generated/torch.Tensor.broadcast_to.html | pytorch docs |
torch.cuda.nvtx.range_push
torch.cuda.nvtx.range_push(msg)
Pushes a range onto a stack of nested range spans. Returns the
zero-based depth of the range that is started.
Parameters:
msg (str) -- ASCII message to associate with range | https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.range_push.html | pytorch docs |
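A short sketch of pairing it with "torch.cuda.nvtx.range_pop()"
(the depths shown assume a CUDA build and no enclosing ranges):
>>> torch.cuda.nvtx.range_push("forward_pass")
0
>>> torch.cuda.nvtx.range_pop()
0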
GELU
class torch.nn.GELU(approximate='none')
Applies the Gaussian Error Linear Units function:
\text{GELU}(x) = x * \Phi(x)
where \Phi(x) is the Cumulative Distribution Function for Gaussian
Distribution.
When the approximate argument is 'tanh', GELU is estimated with:
\text{GELU}(x) = 0.5 * x * (1 + \text{Tanh}(\sqrt{2 / \pi} * (x
+ 0.044715 * x^3)))
Parameters:
approximate (str, optional) -- the gelu approximation
algorithm to use: "'none'" | "'tanh'". Default: "'none'"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.GELU()
>>> input = torch.randn(2)
>>> output = m(input)
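The 'tanh' approximation described above is selected the same way
(a small sketch; the input is random):
>>> m = nn.GELU(approximate='tanh')
>>> output = m(torch.randn(2))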
| https://pytorch.org/docs/stable/generated/torch.nn.GELU.html | pytorch docs |
torch.func.functionalize
torch.func.functionalize(func, *, remove='mutations')
functionalize is a transform that can be used to remove
(intermediate) mutations and aliasing from a function, while
preserving the function's semantics.
"functionalize(func)" returns a new function with the same
semantics as "func", but with all intermediate mutations removed.
Every inplace operation performed on an intermediate tensor:
"intermediate.foo_()" gets replaced by its out-of-place equivalent:
"intermediate_updated = intermediate.foo()".
functionalize is useful for shipping a pytorch program off to
backends or compilers that aren't able to easily represent
mutations or aliasing operators.
Parameters:
* func (Callable) -- A Python function that takes one or
more arguments.
* **remove** (*str*) -- An optional string argument, that takes
on either the value 'mutations' or 'mutations_and_views'. If
| https://pytorch.org/docs/stable/generated/torch.func.functionalize.html | pytorch docs |
'mutations' is passed in then all mutating operators will be
replaced with their non-mutating equivalents. If
'mutations_and_views' is passed in, then additionally, all
aliasing operators will be replaced with their non-aliasing
equivalents. Default: 'mutations'.
Returns:
Returns a new "functionalized" function. It takes the same
inputs as "func", and has the same behavior, but any mutations
(and optionally aliasing) performed on intermediate tensors in
the function will be removed.
Return type:
Callable
functionalize will also remove mutations (and views) that were
performed on function inputs. However to preserve semantics,
functionalize will "fix up" the mutations after the transform has
finished running, by detecting if any tensor inputs "should have"
been mutated, and copying the new data back to the inputs if
necessary.
Example:
>>> import torch
| https://pytorch.org/docs/stable/generated/torch.func.functionalize.html | pytorch docs |
>>> from torch.fx.experimental.proxy_tensor import make_fx
>>> from torch.func import functionalize
>>>
>>> # A function that uses mutations and views, but only on intermediate tensors.
>>> def f(a):
... b = a + 1
... c = b.view(-1)
... c.add_(1)
... return b
...
>>> inpt = torch.randn(2)
>>>
>>> out1 = f(inpt)
>>> out2 = functionalize(f)(inpt)
>>>
>>> # semantics are the same (outputs are equivalent)
>>> print(torch.allclose(out1, out2))
True
>>>
>>> f_traced = make_fx(f)(inpt)
>>> f_no_mutations_traced = make_fx(functionalize(f))(inpt)
>>> f_no_mutations_and_views_traced = make_fx(functionalize(f, remove='mutations_and_views'))(inpt)
>>>
>>> print(f_traced.code)
def forward(self, a_1):
add = torch.ops.aten.add(a_1, 1); a_1 = None
view = torch.ops.aten.view(add, [-1])
| https://pytorch.org/docs/stable/generated/torch.func.functionalize.html | pytorch docs |
add_ = torch.ops.aten.add_(view, 1); view = None
return add
>>> print(f_no_mutations_traced.code)
def forward(self, a_1):
add = torch.ops.aten.add(a_1, 1); a_1 = None
view = torch.ops.aten.view(add, [-1]); add = None
add_1 = torch.ops.aten.add(view, 1); view = None
view_1 = torch.ops.aten.view(add_1, [2]); add_1 = None
return view_1
>>> print(f_no_mutations_and_views_traced.code)
def forward(self, a_1):
add = torch.ops.aten.add(a_1, 1); a_1 = None
view_copy = torch.ops.aten.view_copy(add, [-1]); add = None
add_1 = torch.ops.aten.add(view_copy, 1); view_copy = None
view_copy_1 = torch.ops.aten.view_copy(add_1, [2]); add_1 = None
return view_copy_1
>>> # A function that mutates its input tensor
>>> def f(a):
... b = a.view(-1)
... b.add_(1)
... return a
...
| https://pytorch.org/docs/stable/generated/torch.func.functionalize.html | pytorch docs |
>>> f_no_mutations_and_views_traced = make_fx(functionalize(f, remove='mutations_and_views'))(inpt)
>>> #
>>> # All mutations and views have been removed,
>>> # but there is an extra copy_ in the graph to correctly apply the mutation to the input
>>> # after the function has completed.
>>> print(f_no_mutations_and_views_traced.code)
def forward(self, a_1):
view_copy = torch.ops.aten.view_copy(a_1, [-1])
add = torch.ops.aten.add(view_copy, 1); view_copy = None
view_copy_1 = torch.ops.aten.view_copy(add, [2]); add = None
copy_ = torch.ops.aten.copy_(a_1, view_copy_1); a_1 = None
return view_copy_1
There are a few "failure modes" for functionalize that are worth
calling out:
1. Like other torch.func transforms, functionalize() doesn't
work with functions that directly use .backward(). The same | https://pytorch.org/docs/stable/generated/torch.func.functionalize.html | pytorch docs |
is true for torch.autograd.grad. If you want to use autograd,
you can compute gradients directly with
functionalize(grad(f)); see the sketch after this list.
2. Like other torch.func transforms, *functionalize()* doesn't
work with global state. If you call *functionalize(f)* on a
function that takes views / mutations of non-local state,
functionalization will simply no-op and pass the
view/mutation calls directly to the backend. One way to work
around this is to ensure that any non-local state creation
is wrapped into a larger function, which you then call
functionalize on.
3. *resize_()* has some limitations: functionalize will only
work on programs that use *resize_()* as long as the tensor
being resized is not a view.
4. *as_strided()* has some limitations: functionalize will not
work on *as_strided()* calls that result in tensors with
overlapping memory.
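As referenced in failure mode 1 above, a minimal sketch of
composing with grad (the function f here is made up for
illustration):
>>> from torch.func import functionalize, grad
>>> def f(x):
...     y = x.clone()
...     y.mul_(2)
...     return y.sum()
...
>>> x = torch.randn(3)
>>> functionalize(grad(f))(x)
tensor([2., 2., 2.])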
| https://pytorch.org/docs/stable/generated/torch.func.functionalize.html | pytorch docs |
Finally, a helpful mental model for understanding functionalization
is that most user pytorch programs are written with the public
torch API. When executed, torch operators are generally decomposed
into our internal C++ "ATen" API. The logic for functionalization
happens entirely at the level of ATen. Functionalization knows how
to take every aliasing operator in ATen, and map it to its non-
aliasing equivalent (e.g. "tensor.view({-1})" ->
"at::view_copy(tensor, {-1})"), and how to take every mutating
operator in ATen, and map it to its non-mutating equivalent (e.g.
"tensor.add_(1)" -> "at::add(tensor, -1)"), while tracking aliases
and mutations out-of-line to know when to fix things up.
Information about which ATen operators are aliasing or mutating all
comes from https://github.com/pytorch/pytorch/blob/master/aten/src
/ATen/native/native_functions.yaml. | https://pytorch.org/docs/stable/generated/torch.func.functionalize.html | pytorch docs |
torch.bernoulli
torch.bernoulli(input, *, generator=None, out=None) -> Tensor
Draws binary random numbers (0 or 1) from a Bernoulli distribution.
The "input" tensor should be a tensor containing probabilities to
be used for drawing the binary random number. Hence, all values in
"input" have to be in the range: 0 \leq \text{input}_i \leq 1.
The \text{i}^{th} element of the output tensor will draw a value 1
according to the \text{i}^{th} probability value given in "input".
\text{out}_{i} \sim \mathrm{Bernoulli}(p = \text{input}_{i})
The returned "out" tensor only has values 0 or 1 and is of the same
shape as "input".
"out" can have integral "dtype", but "input" must have floating
point "dtype".
Parameters:
input (Tensor) -- the input tensor of probability values
for the Bernoulli distribution
Keyword Arguments:
* generator ("torch.Generator", optional) -- a pseudorandom
number generator for sampling | https://pytorch.org/docs/stable/generated/torch.bernoulli.html | pytorch docs |
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> a = torch.empty(3, 3).uniform_(0, 1) # generate a uniform random matrix with range [0, 1]
>>> a
tensor([[ 0.1737, 0.0950, 0.3609],
[ 0.7148, 0.0289, 0.2676],
[ 0.9456, 0.8937, 0.7202]])
>>> torch.bernoulli(a)
tensor([[ 1., 0., 0.],
[ 0., 0., 0.],
[ 1., 1., 1.]])
>>> a = torch.ones(3, 3) # probability of drawing "1" is 1
>>> torch.bernoulli(a)
tensor([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]])
>>> a = torch.zeros(3, 3) # probability of drawing "1" is 0
>>> torch.bernoulli(a)
tensor([[ 0., 0., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
| https://pytorch.org/docs/stable/generated/torch.bernoulli.html | pytorch docs |
torch.minimum
torch.minimum(input, other, *, out=None) -> Tensor
Computes the element-wise minimum of "input" and "other".
Note:
If one of the elements being compared is a NaN, then that element
is returned. "minimum()" is not supported for tensors with
complex dtypes.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the second input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.tensor((1, 2, -1))
>>> b = torch.tensor((3, 0, 4))
>>> torch.minimum(a, b)
tensor([1, 0, -1])
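To illustrate the note above about NaN propagation:
>>> torch.minimum(torch.tensor([1., float('nan')]), torch.tensor([2., 0.]))
tensor([1., nan])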
| https://pytorch.org/docs/stable/generated/torch.minimum.html | pytorch docs |
torch.logical_and
torch.logical_and(input, other, *, out=None) -> Tensor
Computes the element-wise logical AND of the given input tensors.
Zeros are treated as "False" and nonzeros are treated as "True".
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor*) -- the tensor to compute AND with
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.logical_and(torch.tensor([True, False, True]), torch.tensor([True, False, False]))
tensor([ True, False, False])
>>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)
>>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)
>>> torch.logical_and(a, b)
tensor([False, False, True, False])
>>> torch.logical_and(a.double(), b.double())
tensor([False, False, True, False])
>>> torch.logical_and(a.double(), b)
tensor([False, False, True, False])
| https://pytorch.org/docs/stable/generated/torch.logical_and.html | pytorch docs |
>>> torch.logical_and(a, b, out=torch.empty(4, dtype=torch.bool))
tensor([False, False, True, False]) | https://pytorch.org/docs/stable/generated/torch.logical_and.html | pytorch docs |
CELU
class torch.nn.CELU(alpha=1.0, inplace=False)
Applies the element-wise function:
\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) -
1))
More details can be found in the paper Continuously Differentiable
Exponential Linear Units.
Parameters:
* alpha (float) -- the \alpha value for the CELU
formulation. Default: 1.0
* **inplace** (*bool*) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.CELU()
>>> input = torch.randn(2)
>>> output = m(input)
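A variant with a non-default \alpha (sketch; the input is random):
>>> m = nn.CELU(alpha=0.5)
>>> output = m(torch.randn(2))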
| https://pytorch.org/docs/stable/generated/torch.nn.CELU.html | pytorch docs |
TransformerDecoder
class torch.nn.TransformerDecoder(decoder_layer, num_layers, norm=None)
TransformerDecoder is a stack of N decoder layers.
Parameters:
* decoder_layer -- an instance of the
TransformerDecoderLayer() class (required).
* **num_layers** -- the number of sub-decoder-layers in the
decoder (required).
* **norm** -- the layer normalization component (optional).
Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
>>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
>>> memory = torch.rand(10, 32, 512)
>>> tgt = torch.rand(20, 32, 512)
>>> out = transformer_decoder(tgt, memory)
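Building on the example above, a brief sketch of passing a causal
target mask (using the standard
nn.Transformer.generate_square_subsequent_mask helper):
>>> tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(0))
>>> out = transformer_decoder(tgt, memory, tgt_mask=tgt_mask)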
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)
Pass the inputs (and mask) through the decoder layer in turn.
Parameters:
* **tgt** (*Tensor*) -- the sequence to the decoder
| https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html | pytorch docs |