text | source | category
---|---|---
torch.Tensor.t_
Tensor.t_() -> Tensor
In-place version of "t()" | https://pytorch.org/docs/stable/generated/torch.Tensor.t_.html | pytorch docs |
torch.Tensor.cholesky
Tensor.cholesky(upper=False) -> Tensor
See "torch.cholesky()" | https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky.html | pytorch docs |
LSTMCell
class torch.nn.LSTMCell(input_size, hidden_size, bias=True, device=None, dtype=None)
A long short-term memory (LSTM) cell.
\begin{array}{ll}
i = \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\
f = \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\
g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\
o = \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\
c' = f * c + i * g \\
h' = o * \tanh(c')
\end{array}
where \sigma is the sigmoid function, and * is the Hadamard
product.
Parameters:
* input_size (int) -- The number of expected features in
the input x
* **hidden_size** (*int*) -- The number of features in the
hidden state *h*
* **bias** (*bool*) -- If "False", then the layer does not use
bias weights *b_ih* and *b_hh*. Default: "True"
Inputs: input, (h_0, c_0)
* input of shape (batch, input_size) or (input_size):
tensor containing input features | https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html | pytorch docs |
* **h_0** of shape *(batch, hidden_size)* or *(hidden_size)*:
tensor containing the initial hidden state
* **c_0** of shape *(batch, hidden_size)* or *(hidden_size)*:
tensor containing the initial cell state
If *(h_0, c_0)* is not provided, both **h_0** and **c_0**
default to zero.
Outputs: (h_1, c_1)
* h_1 of shape (batch, hidden_size) or (hidden_size):
tensor containing the next hidden state
* **c_1** of shape *(batch, hidden_size)* or *(hidden_size)*:
tensor containing the next cell state
Variables:
* **weight_ih** (*torch.Tensor*) -- the learnable input-hidden
weights, of shape *(4*hidden_size, input_size)*
* **weight_hh** (*torch.Tensor*) -- the learnable hidden-hidden
weights, of shape *(4*hidden_size, hidden_size)*
* **bias_ih** -- the learnable input-hidden bias, of shape
*(4*hidden_size)*
| https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html | pytorch docs |
* **bias_hh** -- the learnable hidden-hidden bias, of shape
*(4*hidden_size)*
Note:
All the weights and biases are initialized from
\mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{1}{\text{hidden\_size}}
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward.
Examples:
>>> rnn = nn.LSTMCell(10, 20) # (input_size, hidden_size)
>>> input = torch.randn(2, 3, 10) # (time_steps, batch, input_size)
>>> hx = torch.randn(3, 20) # (batch, hidden_size)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(input.size()[0]):
... hx, cx = rnn(input[i], (hx, cx))
... output.append(hx)
>>> output = torch.stack(output, dim=0)
| https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html | pytorch docs |
conv3d
class torch.ao.nn.quantized.functional.conv3d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)
Applies a 3D convolution over a quantized 3D input composed of
several input planes.
See "Conv3d" for details and output shape.
Parameters:
* input -- quantized input tensor of shape (\text{minibatch}
, \text{in_channels} , iD , iH , iW)
* **weight** -- quantized filters of shape (\text{out\_channels}
, \frac{\text{in\_channels}}{\text{groups}} , kD , kH , kW)
* **bias** -- **non-quantized** bias tensor of shape
(\text{out\_channels}). The tensor type must be *torch.float*.
* **stride** -- the stride of the convolving kernel. Can be a
single number or a tuple *(sD, sH, sW)*. Default: 1
* **padding** -- implicit paddings on both sides of the input.
Can be a single number or a tuple *(padD, padH, padW)*.
Default: 0
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html | pytorch docs |
* **dilation** -- the spacing between kernel elements. Can be a
single number or a tuple *(dD, dH, dW)*. Default: 1
* **groups** -- split input into groups, \text{in\_channels}
should be divisible by the number of groups. Default: 1
* **padding_mode** -- the padding mode to use. Only "zeros" is
supported for quantized convolution at the moment. Default:
"zeros"
* **scale** -- quantization scale for the output. Default: 1.0
* **zero_point** -- quantization zero_point for the output.
Default: 0
* **dtype** -- quantization data type to use. Default:
"torch.quint8"
Examples:
>>> from torch.ao.nn.quantized import functional as qF
>>> filters = torch.randn(8, 4, 3, 3, 3, dtype=torch.float)
>>> inputs = torch.randn(1, 4, 5, 5, 5, dtype=torch.float)
>>> bias = torch.randn(8, dtype=torch.float)
>>>
>>> scale, zero_point = 1.0, 0
>>> dtype_inputs = torch.quint8
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html | pytorch docs |
>>> dtype_filters = torch.qint8
>>>
>>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
>>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
>>> qF.conv3d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html | pytorch docs |
torch.nn.utils.prune.is_pruned
torch.nn.utils.prune.is_pruned(module)
Check whether "module" is pruned by looking for "forward_pre_hooks"
in its modules that inherit from the "BasePruningMethod".
Parameters:
module (nn.Module) -- object that is either pruned or
unpruned
Returns:
binary answer to whether "module" is pruned.
-[ Examples ]-
from torch.nn.utils import prune
m = nn.Linear(5, 7)
print(prune.is_pruned(m))
False
prune.random_unstructured(m, name='weight', amount=0.2)
print(prune.is_pruned(m))
True
| https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.is_pruned.html | pytorch docs |
torch.Tensor.ndim
Tensor.ndim
Alias for "dim()" | https://pytorch.org/docs/stable/generated/torch.Tensor.ndim.html | pytorch docs |
max_pool1d
class torch.ao.nn.quantized.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
Applies a 1D max pooling over a quantized input signal composed of
several quantized input planes.
Note:
The input quantization parameters are propagated to the output.
See "MaxPool1d" for details. | https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.max_pool1d.html | pytorch docs |
torch.cuda.ipc_collect
torch.cuda.ipc_collect()
Force collects GPU memory after it has been released by CUDA IPC.
Note:
Checks if any sent CUDA tensors could be cleaned from memory.
Force closes the shared memory file used for reference counting if
there are no active counters. Useful when the producer process has
stopped actively sending tensors and wants to release unused
memory.
| https://pytorch.org/docs/stable/generated/torch.cuda.ipc_collect.html | pytorch docs |
torch.Tensor.conj_physical_
Tensor.conj_physical_() -> Tensor
In-place version of "conj_physical()" | https://pytorch.org/docs/stable/generated/torch.Tensor.conj_physical_.html | pytorch docs |
torch.Tensor.view_as
Tensor.view_as(other) -> Tensor
View this tensor as the same size as "other". "self.view_as(other)"
is equivalent to "self.view(other.size())".
Please see "view()" for more information about "view".
Parameters:
other ("torch.Tensor") -- The result tensor has the same
size as "other". | https://pytorch.org/docs/stable/generated/torch.Tensor.view_as.html | pytorch docs |
torch.Tensor.mvlgamma_
Tensor.mvlgamma_(p) -> Tensor
In-place version of "mvlgamma()" | https://pytorch.org/docs/stable/generated/torch.Tensor.mvlgamma_.html | pytorch docs |
torch.add
torch.add(input, other, *, alpha=1, out=None) -> Tensor
Adds "other", scaled by "alpha", to "input".
\text{out}_i = \text{input}_i + \text{alpha} \times
\text{other}_i
Supports broadcasting to a common shape, type promotion, and
integer, float, and complex inputs.
Parameters:
* input (Tensor) -- the input tensor.
* **other** (*Tensor** or **Number*) -- the tensor or number to
add to "input".
Keyword Arguments:
* alpha (Number) -- the multiplier for "other".
* **out** (*Tensor**, **optional*) -- the output tensor.
Examples:
>>> a = torch.randn(4)
>>> a
tensor([ 0.0202, 1.0985, 1.3506, -0.6056])
>>> torch.add(a, 20)
tensor([ 20.0202, 21.0985, 21.3506, 19.3944])
>>> b = torch.randn(4)
>>> b
tensor([-0.9732, -0.3497, 0.6245, 0.4022])
>>> c = torch.randn(4, 1)
>>> c
tensor([[ 0.3743],
[-1.7724],
| https://pytorch.org/docs/stable/generated/torch.add.html | pytorch docs |
[-0.5811],
[-0.8017]])
>>> torch.add(b, c, alpha=10)
tensor([[ 2.7695, 3.3930, 4.3672, 4.1450],
[-18.6971, -18.0736, -17.0994, -17.3216],
[ -6.7845, -6.1610, -5.1868, -5.4090],
[ -8.9902, -8.3667, -7.3925, -7.6147]]) | https://pytorch.org/docs/stable/generated/torch.add.html | pytorch docs |
torch.cuda.get_sync_debug_mode
torch.cuda.get_sync_debug_mode()
Returns the current value of the debug mode for CUDA synchronizing
operations.
Return type:
int | https://pytorch.org/docs/stable/generated/torch.cuda.get_sync_debug_mode.html | pytorch docs |
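A small added sketch, assuming the companion setter "torch.cuda.set_sync_debug_mode()" (0 corresponds to "default", 1 to "warn", 2 to "error"):
    >>> torch.cuda.get_sync_debug_mode()       # 0 = "default": no checking
    0
    >>> torch.cuda.set_sync_debug_mode("warn")
    >>> torch.cuda.get_sync_debug_mode()       # 1 = "warn" on synchronizing calls
    1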
torch.cat
torch.cat(tensors, dim=0, *, out=None) -> Tensor
Concatenates the given sequence of tensors along the given
dimension. All tensors must either have the same shape (except in
the concatenating dimension) or be empty.
"torch.cat()" can be seen as an inverse operation for
"torch.split()" and "torch.chunk()".
"torch.cat()" can be best understood via examples.
Parameters:
* tensors (sequence of Tensors) -- any python sequence of
tensors of the same type. Non-empty tensors provided must have
the same shape, except in the cat dimension.
* **dim** (*int**, **optional*) -- the dimension over which the
tensors are concatenated
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.randn(2, 3)
>>> x
tensor([[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 0)
tensor([[ 0.6580, -1.0969, -0.4614],
| https://pytorch.org/docs/stable/generated/torch.cat.html | pytorch docs |
[-0.1034, -0.5790, 0.1497],
[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497],
[ 0.6580, -1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497]])
>>> torch.cat((x, x, x), 1)
tensor([[ 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614, 0.6580,
-1.0969, -0.4614],
[-0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497, -0.1034,
-0.5790, 0.1497]]) | https://pytorch.org/docs/stable/generated/torch.cat.html | pytorch docs |
torch.load
torch.load(f, map_location=None, pickle_module=pickle, *, weights_only=False, **pickle_load_args)
Loads an object saved with "torch.save()" from a file.
"torch.load()" uses Python's unpickling facilities but treats
storages, which underlie tensors, specially. They are first
deserialized on the CPU and are then moved to the device they were
saved from. If this fails (e.g. because the run time system doesn't
have certain devices), an exception is raised. However, storages
can be dynamically remapped to an alternative set of devices using
the "map_location" argument.
If "map_location" is a callable, it will be called once for each
serialized storage with two arguments: storage and location. The
storage argument will be the initial deserialization of the
storage, residing on the CPU. Each serialized storage has a
location tag associated with it which identifies the device it was
saved from, and this tag is the second argument passed to | https://pytorch.org/docs/stable/generated/torch.load.html | pytorch docs |
"map_location". The builtin location tags are "'cpu'" for CPU
tensors and "'cuda:device_id'" (e.g. "'cuda:2'") for CUDA tensors.
"map_location" should return either "None" or a storage. If
"map_location" returns a storage, it will be used as the final
deserialized object, already moved to the right device. Otherwise,
"torch.load()" will fall back to the default behavior, as if
"map_location" wasn't specified.
If "map_location" is a "torch.device" object or a string containing
a device tag, it indicates the location where all tensors should be
loaded.
Otherwise, if "map_location" is a dict, it will be used to remap
location tags appearing in the file (keys), to ones that specify
where to put the storages (values).
User extensions can register their own location tags and tagging
and deserialization methods using
"torch.serialization.register_package()".
Parameters:
* f (Union[str, PathLike, BinaryIO*, | https://pytorch.org/docs/stable/generated/torch.load.html | pytorch docs |
IO[bytes]*]) -- a file-like object (has to implement
"read()", "readline()", "tell()", and "seek()"), or a string
or os.PathLike object containing a file name
* **map_location**
(*Optional**[**Union**[**Callable**[**[**Tensor**, **str**]**,
**Tensor**]**, **device**, **str**, **Dict**[**str**,
**str**]**]**]*) -- a function, "torch.device", string or a
dict specifying how to remap storage locations
* **pickle_module** (*Optional**[**Any**]*) -- module used for
unpickling metadata and objects (has to match the
"pickle_module" used to serialize file)
* **weights_only** (*bool*) -- Indicates whether unpickler
should be restricted to loading only tensors, primitive types
and dictionaries
* **pickle_load_args** (*Any*) -- (Python 3 only) optional
keyword arguments passed over to "pickle_module.load()" and
"pickle_module.Unpickler()", e.g., "errors=...".
Return type: | https://pytorch.org/docs/stable/generated/torch.load.html | pytorch docs |
Any
Warning:
"torch.load()" unless *weights_only* parameter is set to *True*,
uses "pickle" module implicitly, which is known to be insecure.
It is possible to construct malicious pickle data which will
execute arbitrary code during unpickling. Never load data that
could have come from an untrusted source in an unsafe mode, or
that could have been tampered with. **Only load data you trust**.
Note:
When you call "torch.load()" on a file which contains GPU
tensors, those tensors will be loaded to GPU by default. You can
call "torch.load(.., map_location='cpu')" and then
"load_state_dict()" to avoid GPU RAM surge when loading a model
checkpoint.
Note:
By default, we decode byte strings as "utf-8". This is to avoid
a common error case "UnicodeDecodeError: 'ascii' codec can't
decode byte 0x..." when loading files saved by Python 2 in Python 3.
| https://pytorch.org/docs/stable/generated/torch.load.html | pytorch docs |
If this default is incorrect, you may use an extra "encoding"
keyword argument to specify how these objects should be loaded,
e.g., "encoding='latin1'" decodes them to strings using "latin1"
encoding, and "encoding='bytes'" keeps them as byte arrays which
can be decoded later with "byte_array.decode(...)".
-[ Example ]-
torch.load('tensors.pt')
# Load all tensors onto the CPU
torch.load('tensors.pt', map_location=torch.device('cpu'))
# Load all tensors onto the CPU, using a function
torch.load('tensors.pt', map_location=lambda storage, loc: storage)
# Load all tensors onto GPU 1
torch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))
# Map tensors from GPU 1 to GPU 0
torch.load('tensors.pt', map_location={'cuda:1': 'cuda:0'})
# Load tensor from io.BytesIO object
with open('tensor.pt', 'rb') as f:
... buffer = io.BytesIO(f.read())
torch.load(buffer)
| https://pytorch.org/docs/stable/generated/torch.load.html | pytorch docs |
# Load a module with 'ascii' encoding for unpickling
torch.load('module.pt', encoding='ascii')
| https://pytorch.org/docs/stable/generated/torch.load.html | pytorch docs |
torch.Tensor.unflatten
Tensor.unflatten(dim, sizes) -> Tensor
See "torch.unflatten()". | https://pytorch.org/docs/stable/generated/torch.Tensor.unflatten.html | pytorch docs |
torch.quantile
torch.quantile(input, q, dim=None, keepdim=False, *, interpolation='linear', out=None) -> Tensor
Computes the q-th quantiles of each row of the "input" tensor along
the dimension "dim".
To compute the quantile, we map q in [0, 1] to the range of indices
[0, n] to find the location of the quantile in the sorted input. If
the quantile lies between two data points "a < b" with indices "i"
and "j" in the sorted order, result is computed according to the
given "interpolation" method as follows:
"linear": "a + (b - a) * fraction", where "fraction" is the
fractional part of the computed quantile index.
"lower": "a".
"higher": "b".
"nearest": "a" or "b", whichever's index is closer to the
computed quantile index (rounding down for .5 fractions).
"midpoint": "(a + b) / 2".
If "q" is a 1D tensor, the first dimension of the output represents
the quantiles and has size equal to the size of "q", the remaining | https://pytorch.org/docs/stable/generated/torch.quantile.html | pytorch docs |
dimensions are what remains from the reduction.
Note:
By default "dim" is "None" resulting in the "input" tensor being
flattened before computation.
Parameters:
* input (Tensor) -- the input tensor.
* **q** (*float** or **Tensor*) -- a scalar or 1D tensor of
values in the range [0, 1].
* **dim** (*int*) -- the dimension to reduce.
* **keepdim** (*bool*) -- whether the output tensor has "dim"
retained or not.
Keyword Arguments:
* interpolation (str) -- interpolation method to use when
the desired quantile lies between two data points. Can be
"linear", "lower", "higher", "midpoint" and "nearest". Default
is "linear".
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> a = torch.randn(2, 3)
>>> a
tensor([[ 0.0795, -1.2117, 0.9765],
[ 1.1707, 0.6706, 0.4884]])
>>> q = torch.tensor([0.25, 0.5, 0.75])
| https://pytorch.org/docs/stable/generated/torch.quantile.html | pytorch docs |
>>> torch.quantile(a, q, dim=1, keepdim=True)
tensor([[[-0.5661],
[ 0.5795]],
[[ 0.0795],
[ 0.6706]],
[[ 0.5280],
[ 0.9206]]])
>>> torch.quantile(a, q, dim=1, keepdim=True).shape
torch.Size([3, 2, 1])
>>> a = torch.arange(4.)
>>> a
tensor([0., 1., 2., 3.])
>>> torch.quantile(a, 0.6, interpolation='linear')
tensor(1.8000)
>>> torch.quantile(a, 0.6, interpolation='lower')
tensor(1.)
>>> torch.quantile(a, 0.6, interpolation='higher')
tensor(2.)
>>> torch.quantile(a, 0.6, interpolation='midpoint')
tensor(1.5000)
>>> torch.quantile(a, 0.6, interpolation='nearest')
tensor(2.)
>>> torch.quantile(a, 0.4, interpolation='nearest')
tensor(1.)
| https://pytorch.org/docs/stable/generated/torch.quantile.html | pytorch docs |
torch.Tensor.to
Tensor.to(*args, **kwargs) -> Tensor
Performs Tensor dtype and/or device conversion. A "torch.dtype" and
"torch.device" are inferred from the arguments of "self.to(*args,
**kwargs)".
Note:
If the "self" Tensor already has the correct "torch.dtype" and
"torch.device", then "self" is returned. Otherwise, the returned
tensor is a copy of "self" with the desired "torch.dtype" and
"torch.device".
Here are the ways to call "to":
to(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor
Returns a Tensor with the specified "dtype"
Args:
memory_format ("torch.memory_format", optional): the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor
Returns a Tensor with the specified "device" and (optional)
| https://pytorch.org/docs/stable/generated/torch.Tensor.to.html | pytorch docs |
"dtype". If "dtype" is "None" it is inferred to be
"self.dtype". When "non_blocking", tries to convert
asynchronously with respect to the host if possible, e.g.,
converting a CPU Tensor with pinned memory to a CUDA Tensor.
When "copy" is set, a new Tensor is created even when the
Tensor already matches the desired conversion.
Args:
memory_format ("torch.memory_format", optional): the
desired memory format of returned Tensor. Default:
"torch.preserve_format".
to(other, non_blocking=False, copy=False) -> Tensor
Returns a Tensor with same "torch.dtype" and "torch.device"
as the Tensor "other". When "non_blocking", tries to convert
asynchronously with respect to the host if possible, e.g.,
converting a CPU Tensor with pinned memory to a CUDA Tensor.
When "copy" is set, a new Tensor is created even when the
| https://pytorch.org/docs/stable/generated/torch.Tensor.to.html | pytorch docs |
Tensor already matches the desired conversion.
Example:
>>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64)
>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], device='cuda:0')
>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044, 0.0005],
[ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
| https://pytorch.org/docs/stable/generated/torch.Tensor.to.html | pytorch docs |
torch.Tensor.gcd
Tensor.gcd(other) -> Tensor
See "torch.gcd()" | https://pytorch.org/docs/stable/generated/torch.Tensor.gcd.html | pytorch docs |
torch.Tensor.baddbmm
Tensor.baddbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor
See "torch.baddbmm()" | https://pytorch.org/docs/stable/generated/torch.Tensor.baddbmm.html | pytorch docs |
add_quant_dequant
class torch.quantization.add_quant_dequant(module)
Wrap the leaf child module in QuantWrapper if it has a valid
qconfig. Note that this function will modify the children of module
in place and it can return a new module which wraps the input module
as well.
Parameters:
module -- input module with qconfig attributes for all the leaf
modules that we want to quantize
Returns:
Either the inplace modified module with submodules wrapped in
QuantWrapper based on qconfig or a new QuantWrapper module
which wraps the input module, the latter case only happens when
the input module is a leaf module and we want to quantize it. | https://pytorch.org/docs/stable/generated/torch.quantization.add_quant_dequant.html | pytorch docs |
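A minimal added sketch of the leaf-module case, assuming the default FBGEMM qconfig:
    >>> import torch
    >>> from torch.quantization import add_quant_dequant, get_default_qconfig, QuantWrapper
    >>> m = torch.nn.Linear(4, 4)
    >>> m.qconfig = get_default_qconfig('fbgemm')   # leaf module with a valid qconfig
    >>> wrapped = add_quant_dequant(m)              # returns a new QuantWrapper around m
    >>> isinstance(wrapped, QuantWrapper)
    True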
RecordingObserver
class torch.quantization.observer.RecordingObserver(dtype=torch.quint8, **kwargs)
The module is mainly for debug and records the tensor values during
runtime.
Parameters:
* dtype -- Quantized data type
* **qscheme** -- Quantization scheme to be used
* **reduce_range** -- Reduces the range of the quantized data
type by 1 bit
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.RecordingObserver.html | pytorch docs |
torch.Tensor.is_signed
Tensor.is_signed() -> bool
Returns True if the data type of "self" is a signed data type. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_signed.html | pytorch docs |
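A short added illustration:
    >>> torch.tensor([1, 2, 3]).is_signed()                      # default int64 is signed
    True
    >>> torch.tensor([1, 2, 3], dtype=torch.uint8).is_signed()   # uint8 is unsigned
    False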
torch.broadcast_to
torch.broadcast_to(input, shape) -> Tensor
Broadcasts "input" to the shape "shape". Equivalent to calling
"input.expand(shape)". See "expand()" for details.
Parameters:
* input (Tensor) -- the input tensor.
* **shape** (list, tuple, or "torch.Size") -- the new shape.
Example:
>>> x = torch.tensor([1, 2, 3])
>>> torch.broadcast_to(x, (3, 3))
tensor([[1, 2, 3],
[1, 2, 3],
[1, 2, 3]])
| https://pytorch.org/docs/stable/generated/torch.broadcast_to.html | pytorch docs |
Hardswish
class torch.nn.Hardswish(inplace=False)
Applies the Hardswish function, element-wise, as described in the
paper: Searching for MobileNetV3.
Hardswish is defined as:
\text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3,
\\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3) /6 &
\text{otherwise} \end{cases}
Parameters:
inplace (bool) -- can optionally do the operation in-
place. Default: "False"
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.Hardswish()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Hardswish.html | pytorch docs |
torch.Tensor.greater_
Tensor.greater_(other) -> Tensor
In-place version of "greater()". | https://pytorch.org/docs/stable/generated/torch.Tensor.greater_.html | pytorch docs |
ReduceLROnPlateau
class torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)
Reduce learning rate when a metric has stopped improving. Models
often benefit from reducing the learning rate by a factor of 2-10
once learning stagnates. This scheduler reads a metrics quantity
and if no improvement is seen for a 'patience' number of epochs,
the learning rate is reduced.
Parameters:
* optimizer (Optimizer) -- Wrapped optimizer.
* **mode** (*str*) -- One of *min*, *max*. In *min* mode, lr
will be reduced when the quantity monitored has stopped
decreasing; in *max* mode it will be reduced when the quantity
monitored has stopped increasing. Default: 'min'.
* **factor** (*float*) -- Factor by which the learning rate will
be reduced. new_lr = lr * factor. Default: 0.1.
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html | pytorch docs |
* **patience** (*int*) -- Number of epochs with no improvement
after which learning rate will be reduced. For example, if
patience = 2, then we will ignore the first 2 epochs with no
improvement, and will only decrease the LR after the 3rd epoch
if the loss still hasn't improved then. Default: 10.
* **threshold** (*float*) -- Threshold for measuring the new
optimum, to only focus on significant changes. Default: 1e-4.
* **threshold_mode** (*str*) -- One of *rel*, *abs*. In *rel*
mode, dynamic_threshold = best * ( 1 + threshold ) in 'max'
mode or best * ( 1 - threshold ) in min mode. In abs mode,
dynamic_threshold = best + threshold in max mode or best -
threshold in min mode. Default: 'rel'.
* **cooldown** (*int*) -- Number of epochs to wait before
resuming normal operation after lr has been reduced. Default:
0.
* **min_lr** (*float** or **list*) -- A scalar or a list of
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html | pytorch docs |
scalars. A lower bound on the learning rate of all param
groups or each group respectively. Default: 0.
* **eps** (*float*) -- Minimal decay applied to lr. If the
difference between new and old lr is smaller than eps, the
update is ignored. Default: 1e-8.
* **verbose** (*bool*) -- If "True", prints a message to stdout
for each update. Default: "False".
-[ Example ]-
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = ReduceLROnPlateau(optimizer, 'min')
for epoch in range(10):
train(...)
val_loss = validate(...)
# Note that step should be called after validate()
scheduler.step(val_loss)
| https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html | pytorch docs |
UninitializedBuffer
class torch.nn.parameter.UninitializedBuffer(requires_grad=False, device=None, dtype=None)
A buffer that is not initialized.
Uninitialized Buffer is a special case of "torch.Tensor" where
the shape of the data is still unknown.
Unlike a "torch.Tensor", uninitialized buffers hold no data, and
attempting to access some properties, like their shape, will throw
a runtime error. The only operations that can be performed on an
uninitialized buffer are changing its datatype, moving it to a
different device and converting it to a regular "torch.Tensor".
The default device or dtype to use when the buffer is materialized
can be set during construction using e.g. "device='cuda'". | https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedBuffer.html | pytorch docs |
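A minimal added sketch, assuming the "materialize()" helper that lazy modules use to allocate the buffer once its shape becomes known:
    >>> buf = torch.nn.parameter.UninitializedBuffer(device='cpu', dtype=torch.float32)
    >>> buf.materialize((2, 3))   # allocate storage now that the shape is known
    >>> buf.shape                 # behaves like a regular buffer from here on
    torch.Size([2, 3])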
ConvTranspose3d
class torch.ao.nn.quantized.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
Applies a 3D transposed convolution operator over an input image
composed of several input planes. For details on input arguments,
parameters, and implementation see "ConvTranspose3d".
Note:
Currently only the FBGEMM engine is implemented. Please set
*torch.backends.quantized.engine = 'fbgemm'*.
For special notes, please see "Conv3d".
Variables:
* weight (Tensor) -- packed tensor derived from the
learnable weight parameter.
* **scale** (*Tensor*) -- scalar for the output scale
* **zero_point** (*Tensor*) -- scalar for the output zero point
See "ConvTranspose3d" for other attributes.
Examples:
>>> torch.backends.quantized.engine = 'fbgemm'
>>> from torch.nn import quantized as nnq
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html | pytorch docs |
>>> # With cubic kernels and equal stride
>>> m = nnq.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-cubic kernels and unequal stride and with padding
>>> m = nnq.ConvTranspose3d(16, 33, (3, 3, 5), stride=(2, 1, 1), padding=(4, 2, 2))
>>> input = torch.randn(20, 16, 50, 100, 100)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # exact output size can be also specified as an argument
>>> input = torch.randn(1, 16, 12, 12, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv3d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose3d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html | pytorch docs |
torch.Size([1, 16, 12, 12, 12])
| https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html | pytorch docs |
torch.cuda.is_initialized
torch.cuda.is_initialized()
Returns whether PyTorch's CUDA state has been initialized. | https://pytorch.org/docs/stable/generated/torch.cuda.is_initialized.html | pytorch docs |
torch.autograd.function.FunctionCtx.mark_dirty
FunctionCtx.mark_dirty(*args)
Marks given tensors as modified in an in-place operation.
This should be called at most once, only from inside the
"forward()" method, and all arguments should be inputs.
Every tensor that's been modified in-place in a call to "forward()"
should be given to this function, to ensure correctness of our
checks. It doesn't matter whether the function is called before or
after modification.
Examples::
>>> class Inplace(Function):
>>> @staticmethod
>>> def forward(ctx, x):
>>> x_npy = x.numpy() # x_npy shares storage with x
>>> x_npy += 1
>>> ctx.mark_dirty(x)
>>> return x
>>>
>>> @staticmethod
>>> @once_differentiable
>>> def backward(ctx, grad_output):
>>> return grad_output
>>> | https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_dirty.html | pytorch docs |
>>> a = torch.tensor(1., requires_grad=True, dtype=torch.double).clone()
>>> b = a * a
>>> Inplace.apply(a) # This would lead to wrong gradients!
>>> # but the engine would not know unless we mark_dirty
>>> b.backward() # RuntimeError: one of the variables needed for gradient
>>> # computation has been modified by an inplace operation
| https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_dirty.html | pytorch docs |
torch.Tensor.index_put
Tensor.index_put(indices, values, accumulate=False) -> Tensor
Out-of-place version of "index_put_()".
torch.quantize_per_tensor
torch.quantize_per_tensor(input, scale, zero_point, dtype) -> Tensor
Converts a float tensor to a quantized tensor with given scale and
zero point.
Parameters:
* input (Tensor) -- float tensor or list of tensors to
quantize
* **scale** (*float** or **Tensor*) -- scale to apply in
quantization formula
* **zero_point** (*int** or **Tensor*) -- offset in integer
value that maps to float zero
* **dtype** ("torch.dtype") -- the desired data type of returned
tensor. Has to be one of the quantized dtypes: "torch.quint8",
"torch.qint8", "torch.qint32"
Returns:
A newly quantized tensor or list of quantized tensors.
Return type:
Tensor
Example:
>>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8)
tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,
| https://pytorch.org/docs/stable/generated/torch.quantize_per_tensor.html | pytorch docs |
quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)
>>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8).int_repr()
tensor([ 0, 10, 20, 30], dtype=torch.uint8)
>>> torch.quantize_per_tensor([torch.tensor([-1.0, 0.0]), torch.tensor([-2.0, 2.0])],
...     torch.tensor([0.1, 0.2]), torch.tensor([10, 20]), torch.quint8)
(tensor([-1., 0.], size=(2,), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10),
tensor([-2., 2.], size=(2,), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.2, zero_point=20))
>>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), torch.tensor(0.1), torch.tensor(10), torch.quint8)
tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,
quantization_scheme=torch.per_tensor_affine, scale=0.10, zero_point=10) | https://pytorch.org/docs/stable/generated/torch.quantize_per_tensor.html | pytorch docs |
torch.svd_lowrank
torch.svd_lowrank(A, q=6, niter=2, M=None)
Return the singular value decomposition "(U, S, V)" of a matrix,
batches of matrices, or a sparse matrix A such that A \approx U
diag(S) V^T. In case M is given, then SVD is computed for the
matrix A - M.
Note:
The implementation is based on the Algorithm 5.1 from Halko et
al, 2009.
Note:
To obtain repeatable results, reset the seed for the pseudorandom
number generator
Note:
The input is assumed to be a low-rank matrix.
Note:
In general, use the full-rank SVD implementation
"torch.linalg.svd()" for dense matrices due to its 10-fold higher
performance characteristics. The low-rank SVD will be useful for
huge sparse matrices that "torch.linalg.svd()" cannot handle.
Args::
A (Tensor): the input tensor of size (*, m, n)
q (int, optional): a slightly overestimated rank of A.
niter (int, optional): the number of subspace iterations to
| https://pytorch.org/docs/stable/generated/torch.svd_lowrank.html | pytorch docs |
conduct; niter must be a nonnegative integer, and defaults to
2
M (Tensor, optional): the input tensor's mean of size
(*, 1, n).
References::
* Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding
structure with randomness: probabilistic algorithms for
constructing approximate matrix decompositions,
arXiv:0909.4061 [math.NA; math.PR], 2009 (available at arXiv).
Return type:
Tuple[Tensor, Tensor, Tensor] | https://pytorch.org/docs/stable/generated/torch.svd_lowrank.html | pytorch docs |
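A small added sketch: with q equal to min(m, n), the returned factors reconstruct A up to numerical error.
    >>> A = torch.randn(5, 3)
    >>> U, S, V = torch.svd_lowrank(A, q=3)
    >>> torch.dist(A, U @ torch.diag(S) @ V.mT)   # reconstruction error, close to zero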
torch.allclose
torch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> bool
This function checks if "input" and "other" satisfy the condition:
\lvert \text{input} - \text{other} \rvert \leq \texttt{atol} +
\texttt{rtol} \times \lvert \text{other} \rvert
elementwise, for all elements of "input" and "other". The behaviour
of this function is analogous to numpy.allclose
Parameters:
* input (Tensor) -- first tensor to compare
* **other** (*Tensor*) -- second tensor to compare
* **atol** (*float**, **optional*) -- absolute tolerance.
Default: 1e-08
* **rtol** (*float**, **optional*) -- relative tolerance.
Default: 1e-05
* **equal_nan** (*bool**, **optional*) -- if "True", then two
"NaN" s will be considered equal. Default: "False"
Example:
>>> torch.allclose(torch.tensor([10000., 1e-07]), torch.tensor([10000.1, 1e-08]))
False
| https://pytorch.org/docs/stable/generated/torch.allclose.html | pytorch docs |
>>> torch.allclose(torch.tensor([10000., 1e-08]), torch.tensor([10000.1, 1e-09]))
True
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]))
False
>>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]), equal_nan=True)
True | https://pytorch.org/docs/stable/generated/torch.allclose.html | pytorch docs |
FeatureAlphaDropout
class torch.nn.FeatureAlphaDropout(p=0.5, inplace=False)
Randomly masks out entire channels (a channel is a feature map,
e.g. the j-th channel of the i-th sample in the batch input is a
tensor \text{input}[i, j]) of the input tensor. Instead of setting
activations to zero, as in regular Dropout, the activations are set
to the negative saturation value of the SELU activation function.
More details can be found in the paper Self-Normalizing Neural
Networks .
Each element will be masked independently for each sample on every
forward call with probability "p" using samples from a Bernoulli
distribution. The elements to be masked are randomized on every
forward call, and scaled and shifted to maintain zero mean and unit
variance.
Usually the input comes from "nn.AlphaDropout" modules.
As described in the paper Efficient Object Localization Using
Convolutional Networks , if adjacent pixels within feature maps are | https://pytorch.org/docs/stable/generated/torch.nn.FeatureAlphaDropout.html | pytorch docs |
strongly correlated (as is normally the case in early convolution
layers) then i.i.d. dropout will not regularize the activations and
will otherwise just result in an effective learning rate decrease.
In this case, "nn.AlphaDropout()" will help promote independence
between feature maps and should be used instead.
Parameters:
* p (float, optional) -- probability of an element to
be zeroed. Default: 0.5
* **inplace** (*bool**, **optional*) -- If set to "True", will
do this operation in-place
Shape:
* Input: (N, C, D, H, W) or (C, D, H, W).
* Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input).
Examples:
>>> m = nn.FeatureAlphaDropout(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.FeatureAlphaDropout.html | pytorch docs |
torch.Tensor.vdot
Tensor.vdot(other) -> Tensor
See "torch.vdot()" | https://pytorch.org/docs/stable/generated/torch.Tensor.vdot.html | pytorch docs |
torch.cuda.memory_reserved
torch.cuda.memory_reserved(device=None)
Returns the current GPU memory managed by the caching allocator in
bytes for a given device.
Parameters:
device (torch.device or int, optional) -- selected
device. Returns statistic for the current device, given by
"current_device()", if "device" is "None" (default).
Return type:
int
Note:
See Memory management for more details about GPU memory
management.
| https://pytorch.org/docs/stable/generated/torch.cuda.memory_reserved.html | pytorch docs |
torch._foreach_acos_
torch._foreach_acos_(self: List[Tensor]) -> None
Apply "torch.acos()" to each Tensor of the input list. | https://pytorch.org/docs/stable/generated/torch._foreach_acos_.html | pytorch docs |
torch.sym_int
torch.sym_int(a)
SymInt-aware utility for int casting.
Parameters:
a (SymInt, SymFloat, or object) -- Object to cast | https://pytorch.org/docs/stable/generated/torch.sym_int.html | pytorch docs |
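An added illustration of the assumed behaviour for plain Python numbers (the cast truncates toward zero, like "int()"); the symbolic SymInt/SymFloat path only matters under tracing:
    >>> torch.sym_int(5)      # ints pass through unchanged
    5
    >>> torch.sym_int(5.7)    # floats are truncated toward zero
    5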
torch.fft.ifft2
torch.fft.ifft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor
Computes the 2 dimensional inverse discrete Fourier transform of
"input". Equivalent to "ifftn()" but IFFTs only the last two
dimensions by default.
Note:
Supports torch.half and torch.chalf on CUDA with GPU Architecture
SM53 or greater. However it only supports powers of 2 signal
length in every transformed dimensions.
Parameters:
* input (Tensor) -- the input tensor
* **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the
transformed dimensions. If given, each dimension "dim[i]" will
either be zero-padded or trimmed to the length "s[i]" before
computing the IFFT. If a length "-1" is specified, no padding
is done in that dimension. Default: "s = [input.size(d) for d
in dim]"
* **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be
transformed. Default: last two dimensions.
| https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html | pytorch docs |
* **norm** (*str**, **optional*) -- Normalization mode. For the backward transform ("ifft2()"),
these correspond to:
* ""forward"" - no normalization
* ""backward"" - normalize by "1/n"
* ""ortho"" - normalize by "1/sqrt(n)" (making the IFFT
orthonormal)
Where "n = prod(s)" is the logical IFFT size. Calling the
forward transform ("fft2()") with the same normalization mode
will apply an overall normalization of "1/n" between the two
transforms. This is required to make "ifft2()" the exact
inverse.
Default is ""backward"" (normalize by "1/n").
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
-[ Example ]-
x = torch.rand(10, 10, dtype=torch.complex64)
ifft2 = torch.fft.ifft2(x)
The discrete Fourier transform is separable, so "ifft2()" here is
equivalent to two one-dimensional "ifft()" calls: | https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html | pytorch docs |
two_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)
torch.testing.assert_close(ifft2, two_iffts, check_stride=False)
| https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html | pytorch docs |
torch.nn.functional.leaky_relu_
torch.nn.functional.leaky_relu_(input, negative_slope=0.01) -> Tensor
In-place version of "leaky_relu()". | https://pytorch.org/docs/stable/generated/torch.nn.functional.leaky_relu_.html | pytorch docs |
torch.Tensor.masked_fill_
Tensor.masked_fill_(mask, value)
Fills elements of "self" tensor with "value" where "mask" is True.
The shape of "mask" must be broadcastable with the shape of the
underlying tensor.
Parameters:
* mask (BoolTensor) -- the boolean mask
* **value** (*float*) -- the value to fill in with
| https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill_.html | pytorch docs |
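A short added illustration:
    >>> t = torch.zeros(2, 3)
    >>> mask = torch.tensor([[True, False, True], [False, True, False]])
    >>> t.masked_fill_(mask, 1.5)
    tensor([[1.5000, 0.0000, 1.5000],
            [0.0000, 1.5000, 0.0000]])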
default_weight_fake_quant
torch.quantization.fake_quantize.default_weight_fake_quant
alias of functools.partial(<class 'torch.ao.quantization.fake_quantize.FakeQuantize'>,
observer=<class 'torch.ao.quantization.observer.MinMaxObserver'>,
quant_min=-128, quant_max=127, dtype=torch.qint8,
qscheme=torch.per_tensor_symmetric, reduce_range=False){} | https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_weight_fake_quant.html | pytorch docs |
ConvTranspose3d
class torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)
Applies a 3D transposed convolution operator over an input image
composed of several input planes. The transposed convolution
operator multiplies each input value element-wise by a learnable
kernel, and sums over the outputs from all input feature planes.
This module can be seen as the gradient of Conv3d with respect to
its input. It is also known as a fractionally-strided convolution
or a deconvolution (although it is not an actual deconvolution
operation as it does not compute a true inverse of convolution).
For more information, see the visualizations here and the
Deconvolutional Networks paper.
This module supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will
use different precision for backward. | https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html | pytorch docs |
"stride" controls the stride for the cross-correlation.
"padding" controls the amount of implicit zero padding on both
sides for "dilation * (kernel_size - 1) - padding" number of
points. See note below for details.
"output_padding" controls the additional size added to one side
of the output shape. See note below for details.
"dilation" controls the spacing between the kernel points; also
known as the à trous algorithm. It is harder to describe, but the
link here has a nice visualization of what "dilation" does.
"groups" controls the connections between inputs and outputs.
"in_channels" and "out_channels" must both be divisible by
"groups". For example,
* At groups=1, all inputs are convolved to all outputs.
* At groups=2, the operation becomes equivalent to having two
conv layers side by side, each seeing half the input
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html | pytorch docs |
channels and producing half the output channels, and both
subsequently concatenated.
* At groups= "in_channels", each input channel is convolved
with its own set of filters (of size
\frac{\text{out\_channels}}{\text{in\_channels}}).
The parameters "kernel_size", "stride", "padding", "output_padding"
can either be:
* a single "int" -- in which case the same value is used for the
depth, height and width dimensions
* a "tuple" of three ints -- in which case, the first *int* is
used for the depth dimension, the second *int* for the height
dimension and the third *int* for the width dimension
Note:
The "padding" argument effectively adds "dilation * (kernel_size
- 1) - padding" amount of zero padding to both sizes of the
input. This is set so that when a "Conv3d" and a
"ConvTranspose3d" are initialized with same parameters, they are
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html | pytorch docs |
inverses of each other in regard to the input and output shapes.
However, when "stride > 1", "Conv3d" maps multiple input shapes
to the same output shape. "output_padding" is provided to resolve
this ambiguity by effectively increasing the calculated output
shape on one side. Note that "output_padding" is only used to
find output shape, but does not actually add zero-padding to
output.
Note:
In some circumstances when given tensors on a CUDA device and
using CuDNN, this operator may select a nondeterministic
algorithm to increase performance. If this is undesirable, you
can try to make the operation deterministic (potentially at a
performance cost) by setting "torch.backends.cudnn.deterministic
= True". See Reproducibility for more information.
Parameters:
* in_channels (int) -- Number of channels in the input
image
* **out_channels** (*int*) -- Number of channels produced by the
convolution
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html | pytorch docs |
* **kernel_size** (*int** or **tuple*) -- Size of the convolving
kernel
* **stride** (*int** or **tuple**, **optional*) -- Stride of the
convolution. Default: 1
* **padding** (*int** or **tuple**, **optional*) -- "dilation *
(kernel_size - 1) - padding" zero-padding will be added to
both sides of each dimension in the input. Default: 0
* **output_padding** (*int** or **tuple**, **optional*) --
Additional size added to one side of each dimension in the
output shape. Default: 0
* **groups** (*int**, **optional*) -- Number of blocked
connections from input channels to output channels. Default: 1
* **bias** (*bool**, **optional*) -- If "True", adds a learnable
bias to the output. Default: "True"
* **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1
Shape:
* Input: (N, C_{in}, D_{in}, H_{in}, W_{in}) or (C_{in}, D_{in}, | https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html | pytorch docs |
H_{in}, W_{in})
* Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) or (C_{out},
D_{out}, H_{out}, W_{out}), where
D_{out} = (D_{in} - 1) \times \text{stride}[0] - 2 \times
\text{padding}[0] + \text{dilation}[0] \times
(\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1
H_{out} = (H_{in} - 1) \times \text{stride}[1] - 2 \times
\text{padding}[1] + \text{dilation}[1] \times
(\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1
W_{out} = (W_{in} - 1) \times \text{stride}[2] - 2 \times
\text{padding}[2] + \text{dilation}[2] \times
(\text{kernel\_size}[2] - 1) + \text{output\_padding}[2] + 1
Variables:
* weight (Tensor) -- the learnable weights of the module
of shape (\text{in_channels},
\frac{\text{out_channels}}{\text{groups}},
\text{kernel_size[0]}, \text{kernel_size[1]}, | https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html | pytorch docs |
\text{kernel_size[2]}). The values of these weights are
sampled from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k =
\frac{groups}{C_\text{out} *
\prod_{i=0}^{2}\text{kernel_size}[i]}
* **bias** (*Tensor*) -- the learnable bias of the module of
shape (out_channels) If "bias" is "True", then the values of
these weights are sampled from \mathcal{U}(-\sqrt{k},
\sqrt{k}) where k = \frac{groups}{C_\text{out} *
\prod_{i=0}^{2}\text{kernel\_size}[i]}
Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html | pytorch docs |
torch.cholesky_inverse
torch.cholesky_inverse(input, upper=False, *, out=None) -> Tensor
Computes the inverse of a symmetric positive-definite matrix A
using its Cholesky factor u: returns matrix "inv". The inverse is
computed using LAPACK routines "dpotri" and "spotri" (and the
corresponding MAGMA routines).
If "upper" is "False", u is lower triangular such that the returned
tensor is
inv = (u u^T)^{-1}
If "upper" is "True" or not provided, u is upper triangular such
that the returned tensor is
inv = (u^T u)^{-1}
Supports input of float, double, cfloat and cdouble dtypes. Also
supports batches of matrices, and if A is a batch of matrices then
the output has the same batch dimensions.
Parameters:
* input (Tensor) -- the input tensor A of size (*, n, n),
consisting of symmetric positive-definite matrices where * is
zero or more batch dimensions. | https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html | pytorch docs |
* **upper** (*bool**, **optional*) -- flag that indicates
whether to return a upper or lower triangular matrix. Default:
False
Keyword Arguments:
out (Tensor, optional) -- the output tensor for inv
Example:
>>> a = torch.randn(3, 3)
>>> a = torch.mm(a, a.t()) + 1e-05 * torch.eye(3) # make symmetric positive definite
>>> u = torch.linalg.cholesky(a)
>>> a
tensor([[ 0.9935, -0.6353, 1.5806],
[ -0.6353, 0.8769, -1.7183],
[ 1.5806, -1.7183, 10.6618]])
>>> torch.cholesky_inverse(u)
tensor([[ 1.9314, 1.2251, -0.0889],
[ 1.2251, 2.4439, 0.2122],
[-0.0889, 0.2122, 0.1412]])
>>> a.inverse()
tensor([[ 1.9314, 1.2251, -0.0889],
[ 1.2251, 2.4439, 0.2122],
[-0.0889, 0.2122, 0.1412]])
>>> a = torch.randn(3, 2, 2) # Example for batched input
| https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html | pytorch docs |
>>> a = a @ a.mT + 1e-03 # make symmetric positive-definite
>>> l = torch.linalg.cholesky(a)
>>> z = l @ l.mT
>>> torch.dist(z, a)
tensor(3.5894e-07)
| https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html | pytorch docs |
torch.Tensor.isfinite
Tensor.isfinite() -> Tensor
See "torch.isfinite()" | https://pytorch.org/docs/stable/generated/torch.Tensor.isfinite.html | pytorch docs |
GroupNorm
class torch.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True, device=None, dtype=None)
Applies Group Normalization over a mini-batch of inputs as
described in the paper Group Normalization
y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
The input channels are separated into "num_groups" groups, each
containing "num_channels / num_groups" channels. "num_channels"
must be divisible by "num_groups". The mean and standard-deviation
are calculated separately over each group. \gamma and \beta are
learnable per-channel affine transform parameter vectors of size
"num_channels" if "affine" is "True". The standard-deviation is
calculated via the biased estimator, equivalent to
torch.var(input, unbiased=False).
This layer uses statistics computed from input data in both
training and evaluation modes.
Parameters:
* num_groups (int) -- number of groups to separate the | https://pytorch.org/docs/stable/generated/torch.nn.GroupNorm.html | pytorch docs |
channels into
* **num_channels** (*int*) -- number of channels expected in
input
* **eps** (*float*) -- a value added to the denominator for
numerical stability. Default: 1e-5
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable per-channel affine
parameters initialized to ones (for weights) and zeros (for
biases). Default: "True".
Shape:
* Input: (N, C, *) where C=\text{num_channels}
* Output: (N, C, *) (same shape as input)
Examples:
>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent with LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.GroupNorm.html | pytorch docs |
torch.cuda.caching_allocator_alloc
torch.cuda.caching_allocator_alloc(size, device=None, stream=None)
Performs a memory allocation using the CUDA memory allocator.
Memory is allocated for a given device and a stream, this function
is intended to be used for interoperability with other frameworks.
Allocated memory is released through "caching_allocator_delete()".
Parameters:
* size (int) -- number of bytes to be allocated.
* **device** (*torch.device** or **int**, **optional*) --
selected device. If it is "None" the default CUDA device is
used.
* **stream** (*torch.cuda.Stream** or **int**, **optional*) --
selected stream. If it is "None", then the default stream for the
selected device is used.
Note:
See Memory management for more details about GPU memory
management.
| https://pytorch.org/docs/stable/generated/torch.cuda.caching_allocator_alloc.html | pytorch docs |
BatchNorm1d
class torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)
Applies Batch Normalization over a 2D or 3D input as described in
the paper Batch Normalization: Accelerating Deep Network Training
by Reducing Internal Covariate Shift .
y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}}
* \gamma + \beta
The mean and standard-deviation are calculated per-dimension over
the mini-batches and \gamma and \beta are learnable parameter
vectors of size C (where C is the number of features or
channels of the input). By default, the elements of \gamma are set
to 1 and the elements of \beta are set to 0. The standard-deviation
is calculated via the biased estimator, equivalent to
torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates
of its computed mean and variance, which are then used for | https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html | pytorch docs |
normalization during evaluation. The running estimates are kept
with a default "momentum" of 0.1.
If "track_running_stats" is set to "False", this layer then does
not keep running estimates, and batch statistics are instead used
during evaluation time as well.
Note:
This "momentum" argument is different from one used in optimizer
classes and the conventional notion of momentum. Mathematically,
the update rule for running statistics here is \hat{x}_\text{new}
= (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times
x_t, where \hat{x} is the estimated statistic and x_t is the new
observed value.
Because the Batch Normalization is done over the C dimension,
computing statistics on (N, L) slices, it's common terminology to
call this Temporal Batch Normalization.
Parameters:
* num_features (int) -- number of features or channels C
of the input
* **eps** (*float*) -- a value added to the denominator for
| https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html | pytorch docs |
numerical stability. Default: 1e-5
* **momentum** (*float*) -- the value used for the running_mean
and running_var computation. Can be set to "None" for
cumulative moving average (i.e. simple average). Default: 0.1
* **affine** (*bool*) -- a boolean value that when set to
"True", this module has learnable affine parameters. Default:
"True"
* **track_running_stats** (*bool*) -- a boolean value that when
set to "True", this module tracks the running mean and
variance, and when set to "False", this module does not track
such statistics, and initializes statistics buffers
"running_mean" and "running_var" as "None". When these buffers
are "None", this module always uses batch statistics. in both
training and eval modes. Default: "True"
Shape:
* Input: (N, C) or (N, C, L), where N is the batch size, C is
the number of features or channels, and L is the sequence
length | https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html | pytorch docs |
* Output: (N, C) or (N, C, L) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm1d(100, affine=False)
>>> input = torch.randn(20, 100)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html | pytorch docs |
torch.nn.functional.hardsigmoid
torch.nn.functional.hardsigmoid(input, inplace=False)
Applies the element-wise function
\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if~} x \le
-3, \\ 1 & \text{if~} x \ge +3, \\ x / 6 + 1 / 2 &
\text{otherwise} \end{cases}
Parameters:
inplace (bool) -- If set to "True", will do this operation
in-place. Default: "False"
Return type:
Tensor
See "Hardsigmoid" for more details. | https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html | pytorch docs |
torch.cuda.synchronize
torch.cuda.synchronize(device=None)
Waits for all kernels in all streams on a CUDA device to complete.
Parameters:
device (torch.device or int, optional) -- device
for which to synchronize. It uses the current device, given by
"current_device()", if "device" is "None" (default). | https://pytorch.org/docs/stable/generated/torch.cuda.synchronize.html | pytorch docs |
torch.Tensor.logical_xor_
Tensor.logical_xor_() -> Tensor
In-place version of "logical_xor()" | https://pytorch.org/docs/stable/generated/torch.Tensor.logical_xor_.html | pytorch docs |
torch.addcmul
torch.addcmul(input, tensor1, tensor2, *, value=1, out=None) -> Tensor
Performs the element-wise multiplication of "tensor1" by "tensor2",
multiplies the result by the scalar "value" and adds it to "input".
\text{out}_i = \text{input}_i + \text{value} \times
\text{tensor1}_i \times \text{tensor2}_i
The shapes of "tensor", "tensor1", and "tensor2" must be
broadcastable.
For inputs of type FloatTensor or DoubleTensor, "value" must be
a real number, otherwise an integer.
Parameters:
* input (Tensor) -- the tensor to be added
* **tensor1** (*Tensor*) -- the tensor to be multiplied
* **tensor2** (*Tensor*) -- the tensor to be multiplied
Keyword Arguments:
* value (Number, optional) -- multiplier for tensor1
.* tensor2
* **out** (*Tensor**, **optional*) -- the output tensor.
Example:
>>> t = torch.randn(1, 3)
>>> t1 = torch.randn(3, 1)
>>> t2 = torch.randn(1, 3)
| https://pytorch.org/docs/stable/generated/torch.addcmul.html | pytorch docs |
>>> torch.addcmul(t, t1, t2, value=0.1)
tensor([[-0.8635, -0.6391, 1.6174],
[-0.7617, -0.5879, 1.7388],
[-0.8353, -0.6249, 1.6511]])
| https://pytorch.org/docs/stable/generated/torch.addcmul.html | pytorch docs |
torch.Tensor.multiply
Tensor.multiply(value) -> Tensor
See "torch.multiply()". | https://pytorch.org/docs/stable/generated/torch.Tensor.multiply.html | pytorch docs |
MovingAverageMinMaxObserver
class torch.quantization.observer.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, eps=1.1920928955078125e-07, **kwargs)
Observer module for computing the quantization parameters based on
the moving average of the min and max values.
This observer computes the quantization parameters based on the
moving averages of minimums and maximums of the incoming tensors.
The module records the average minimum and maximum of incoming
tensors, and uses this statistic to compute the quantization
parameters.
Parameters:
* averaging_constant -- Averaging constant for min/max.
* **dtype** -- dtype argument to the *quantize* node needed to
implement the reference model spec.
* **qscheme** -- Quantization scheme to be used
* **reduce_range** -- Reduces the range of the quantized data
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html | pytorch docs |
type by 1 bit
* **quant_min** -- Minimum quantization value. If unspecified,
it will follow the 8-bit setup.
* **quant_max** -- Maximum quantization value. If unspecified,
it will follow the 8-bit setup.
* **eps** (*Tensor*) -- Epsilon value for float32, Defaults to
*torch.finfo(torch.float32).eps*.
The moving average min/max is computed as follows
\begin{array}{ll}
x_\text{min} = \begin{cases}
    \min(X) & \text{if~} x_\text{min} = \text{None} \\
    (1 - c) x_\text{min} + c \min(X) & \text{otherwise}
\end{cases} \\
x_\text{max} = \begin{cases}
    \max(X) & \text{if~} x_\text{max} = \text{None} \\
    (1 - c) x_\text{max} + c \max(X) & \text{otherwise}
\end{cases}
\end{array}
where x_\text{min/max} is the running average min/max, X is the
incoming tensor, and c is the "averaging_constant".
The scale and zero point are then computed as in "MinMaxObserver".
Note:
Only works with "torch.per_tensor_affine" quantization scheme.
Note:
If the running minimum equals to the running maximum, the scale
and zero_point are set to 1.0 and 0.
| https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html | pytorch docs |
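A minimal added sketch: feed a few tensors through the observer, then read back the derived quantization parameters:
    >>> from torch.quantization.observer import MovingAverageMinMaxObserver
    >>> obs = MovingAverageMinMaxObserver(averaging_constant=0.01)
    >>> for _ in range(3):
    ...     _ = obs(torch.randn(16, 16))        # updates the running min/max
    >>> scale, zero_point = obs.calculate_qparams()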
torch.cuda.get_gencode_flags
torch.cuda.get_gencode_flags()
Returns NVCC gencode flags this library was compiled with.
Return type:
str | https://pytorch.org/docs/stable/generated/torch.cuda.get_gencode_flags.html | pytorch docs |
Softsign
class torch.nn.Softsign
Applies the element-wise function:
\text{SoftSign}(x) = \frac{x}{ 1 + |x|}
Shape:
* Input: (*), where * means any number of dimensions.
* Output: (*), same shape as the input.
[image]
Examples:
>>> m = nn.Softsign()
>>> input = torch.randn(2)
>>> output = m(input)
| https://pytorch.org/docs/stable/generated/torch.nn.Softsign.html | pytorch docs |
torch.Tensor.is_shared
Tensor.is_shared()
Checks if tensor is in shared memory.
This is always "True" for CUDA tensors. | https://pytorch.org/docs/stable/generated/torch.Tensor.is_shared.html | pytorch docs |