torch.jit.script

"torch.jit.script" can be used as a function for modules, functions, dictionaries and lists, and as a decorator "@torch.jit.script" for TorchScript Classes and functions.

Parameters:
    * **obj** (*Callable**, **class**, or **nn.Module*) -- The "nn.Module", function, class type, dictionary, or list to compile.

    * **example_inputs** (*Union[List[Tuple], Dict[Callable, List[Tuple]], None]*) -- Provide example inputs to annotate the arguments for a function or "nn.Module".

Returns:
    If "obj" is "nn.Module", "script" returns a "ScriptModule" object. The returned "ScriptModule" will have the same set of sub-modules and parameters as the original "nn.Module". If "obj" is a standalone function, a "ScriptFunction" will be returned. If "obj" is a "dict", then "script" returns an instance of torch._C.ScriptDict. If "obj" is a "list", then "script" returns an instance of torch._C.ScriptList.

Scripting a function

    The "@torch.jit.script" decorator will construct a
"ScriptFunction" by compiling the body of the function. Example (scripting a function): import torch @torch.jit.script def foo(x, y): if x.max() > y.max(): r = x else: r = y return r print(type(foo)) # torch.jit.ScriptFunction # See the compiled graph as Python code print(foo.code) # Call the function using the TorchScript interpreter foo(torch.ones(2, 2), torch.ones(2, 2)) **Scripting a function using example_inputs Example inputs can be used to annotate a function arguments. Example (annotating a function before scripting): import torch def test_sum(a, b): return a + b # Annotate the arguments to be int scripted_fn = torch.jit.script(test_sum, example_inputs=[(3, 4)]) print(type(scripted_fn)) # torch.jit.ScriptFunction
        # See the compiled graph as Python code
        print(scripted_fn.code)

        # Call the function using the TorchScript interpreter
        scripted_fn(20, 100)

Scripting an nn.Module

    Scripting an "nn.Module" by default will compile the "forward" method and recursively compile any methods, submodules, and functions called by "forward". If an "nn.Module" only uses features supported in TorchScript, no changes to the original module code should be necessary. "script" will construct a "ScriptModule" that has copies of the attributes, parameters, and methods of the original module.

    Example (scripting a simple module with a Parameter):

        import torch

        class MyModule(torch.nn.Module):
            def __init__(self, N, M):
                super(MyModule, self).__init__()
                # This parameter will be copied to the new ScriptModule
                self.weight = torch.nn.Parameter(torch.rand(N, M))
                # When this submodule is used, it will be compiled
                self.linear = torch.nn.Linear(N, M)

            def forward(self, input):
                output = self.weight.mv(input)

                # This calls the `forward` method of the `nn.Linear` module, which will
                # cause the `self.linear` submodule to be compiled to a `ScriptModule` here
                output = self.linear(output)
                return output

        scripted_module = torch.jit.script(MyModule(2, 3))

    Example (scripting a module with traced submodules):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MyModule(nn.Module):
            def __init__(self):
                super(MyModule, self).__init__()
                # torch.jit.trace produces ScriptModules for conv1 and conv2
                self.conv1 = torch.jit.trace(nn.Conv2d(1, 20, 5), torch.rand(1, 1, 16, 16))
                self.conv2 = torch.jit.trace(nn.Conv2d(20, 20, 5), torch.rand(1, 20, 16, 16))

            def forward(self, input):
                input = F.relu(self.conv1(input))
                input = F.relu(self.conv2(input))
                return input

        scripted_module = torch.jit.script(MyModule())

    To compile a method other than "forward" (and recursively compile anything it calls), add the "@torch.jit.export" decorator to the method. To opt out of compilation use "@torch.jit.ignore" or "@torch.jit.unused".

    Example (an exported and ignored method in a module):

        import torch
        import torch.nn as nn

        class MyModule(nn.Module):
            def __init__(self):
                super(MyModule, self).__init__()

            @torch.jit.export
            def some_entry_point(self, input):
                return input + 10

            @torch.jit.ignore
            def python_only_fn(self, input):
                # This function won't be compiled, so any
                # Python APIs can be used
                import pdb
                pdb.set_trace()

            def forward(self, input):
                if self.training:
                    self.python_only_fn(input)
                return input * 99

        scripted_module = torch.jit.script(MyModule())
        print(scripted_module.some_entry_point(torch.randn(2, 2)))
        print(scripted_module(torch.randn(2, 2)))

    Example (annotating forward of nn.Module using example_inputs):

        import torch
        import torch.nn as nn
        from typing import List, NamedTuple

        class MyModule(NamedTuple):
            result: List[int]

        class TestNNModule(torch.nn.Module):
            def forward(self, a) -> MyModule:
                result = MyModule(result=a)
                return result

        pdt_model = TestNNModule()
        # Runs the pdt_model in eager mode with the inputs provided and annotates the arguments of forward
        scripted_model = torch.jit.script(pdt_model, example_inputs={pdt_model: [([10, 20, ], ), ], })

        # Run the scripted_model with actual inputs
        print(scripted_model([20]))
https://pytorch.org/docs/stable/generated/torch.jit.script.html
pytorch docs
POE0001:node-missing-onnx-shape-inference Node is missing ONNX shape inference. This usually happens when the node is not valid under standard ONNX operator spec.
https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0001:node-missing-onnx-shape-inference.html
pytorch docs
POE0004:operator-supported-in-newer-opset-version Operator is supported in newer opset version. Example: torch.onnx.export(model, args, ..., opset_version=9)
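The usual remedy is to request a newer opset in the export call. A minimal sketch, assuming a module whose operator only gained ONNX support in a later opset (the module choice here is an illustrative assumption, not from the diagnostic):

    import torch

    model = torch.nn.Hardswish()  # hypothetical: an op assumed unsupported at opset 9
    x = torch.randn(1, 3)

    # Re-export with a newer opset instead of opset_version=9
    torch.onnx.export(model, x, "model.onnx", opset_version=14)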
https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0004:operator-supported-in-newer-opset-version.html
pytorch docs
POE0003:missing-standard-symbolic-function Missing symbolic function for standard PyTorch operator, cannot translate node to ONNX.
https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0003:missing-standard-symbolic-function.html
pytorch docs
POE0002:missing-custom-symbolic-function Missing symbolic function for custom PyTorch operator, cannot translate node to ONNX.
https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0002:missing-custom-symbolic-function.html
pytorch docs
ConvReLU3d class torch.ao.nn.intrinsic.qat.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None) A ConvReLU3d module is a fused module of Conv3d and ReLU, attached with FakeQuantize modules for weight for quantization aware training. It combines the interface of "Conv3d" and "ReLU". Variables: weight_fake_quant -- fake quant module for weight
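A minimal construction sketch, assuming the standard QAT qconfig for the "fbgemm" backend; any QConfig that provides a weight fake-quant would do:

    import torch
    from torch.ao.nn.intrinsic.qat import ConvReLU3d
    from torch.ao.quantization import get_default_qat_qconfig

    qconfig = get_default_qat_qconfig("fbgemm")
    m = ConvReLU3d(16, 33, kernel_size=3, stride=2, qconfig=qconfig)
    x = torch.randn(1, 16, 8, 8, 8)
    y = m(x)  # Conv3d followed by ReLU, with fake-quantized weights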
https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvReLU3d.html
pytorch docs
torch.nn.functional.softmin torch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None) Applies a softmin function. Note that \text{Softmin}(x) = \text{Softmax}(-x). See softmax definition for mathematical formula. See "Softmin" for more details. Parameters: * input (Tensor) -- input * **dim** (*int*) -- A dimension along which softmin will be computed (so every slice along dim will sum to 1). * **dtype** ("torch.dtype", optional) -- the desired data type of returned tensor. If specified, the input tensor is cast to "dtype" before the operation is performed. This is useful for preventing data type overflows. Default: None. Return type: Tensor
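A quick check of the \text{Softmin}(x) = \text{Softmax}(-x) identity above (example mine, not from the docs):

    import torch
    import torch.nn.functional as F

    x = torch.randn(2, 3)
    out = F.softmin(x, dim=1)         # every row sums to 1
    same = F.softmax(-x, dim=1)       # Softmin(x) == Softmax(-x)
    print(torch.allclose(out, same))  # True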
https://pytorch.org/docs/stable/generated/torch.nn.functional.softmin.html
pytorch docs
prepare class torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None) Prepares a copy of the model for quantization calibration or quantization-aware training. Quantization configuration should be assigned preemptively to individual submodules in .qconfig attribute. The model will be attached with observer or fake quant modules, and qconfig will be propagated. Parameters: * model -- input model to be modified in-place * **inplace** -- carry out model transformations in-place, the original module is mutated * **allow_list** -- list of quantizable modules * **observer_non_leaf_module_list** -- list of non-leaf modules we want to add observer * **prepare_custom_config_dict** -- customization configuration dictionary for prepare function # Example of prepare_custom_config_dict: prepare_custom_config_dict = {
        # user will manually define the corresponding observed
        # module class which has a from_float class method that converts
        # float custom module to observed custom module
        "float_to_observed_custom_module_class": {
            CustomModule: ObservedCustomModule
        }
    }
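For context, a minimal eager-mode calibration sketch around "prepare" (the toy model and calibration loop are assumptions; a real model would wrap inputs/outputs in QuantStub/DeQuantStub before running quantized inference):

    import torch
    from torch.quantization import get_default_qconfig, prepare, convert

    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
    model.qconfig = get_default_qconfig("fbgemm")

    prepared = prepare(model)                # attaches observers and propagates qconfig
    for _ in range(8):                       # calibration with representative data
        prepared(torch.randn(1, 3, 32, 32))
    quantized = convert(prepared)            # swaps observed modules for quantized ones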
https://pytorch.org/docs/stable/generated/torch.quantization.prepare.html
pytorch docs
GaussianNLLLoss class torch.nn.GaussianNLLLoss(*, full=False, eps=1e-06, reduction='mean') Gaussian negative log likelihood loss. The targets are treated as samples from Gaussian distributions with expectations and variances predicted by the neural network. For a "target" tensor modelled as having Gaussian distribution with a tensor of expectations "input" and a tensor of positive variances "var" the loss is: \text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var}, \ \text{eps}\right)\right) + \frac{\left(\text{input} - \text{target}\right)^2} {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.} where "eps" is used for stability. By default, the constant term of the loss function is omitted unless "full" is "True". If "var" is not the same size as "input" (due to a homoscedastic assumption), it must either have a final dimension of 1 or have one fewer
dimension (with all other sizes being the same) for correct broadcasting.

Parameters:
    * **full** (*bool**, **optional*) -- include the constant term in the loss calculation. Default: "False".

    * **eps** (*float**, **optional*) -- value used to clamp "var" (see note below), for stability. Default: 1e-6.

    * **reduction** (*str**, **optional*) -- specifies the reduction to apply to the output: "'none'" | "'mean'" | "'sum'". "'none'": no reduction will be applied, "'mean'": the output is the average of all batch member losses, "'sum'": the output is the sum of all batch member losses. Default: "'mean'".

Shape:
    * Input: (N, *) or (*) where * means any number of additional dimensions

    * Target: (N, *) or (*), same shape as the input, or same shape as the input but with one dimension equal to 1 (to allow for broadcasting)

    * Var: (N, *) or (*), same shape as the input, or same shape as
the input but with one dimension equal to 1, or same shape as the input but with one fewer dimension (to allow for broadcasting)

    * Output: scalar if "reduction" is "'mean'" (default) or "'sum'". If "reduction" is "'none'", then (N, *), same shape as the input

Examples:

    >>> loss = nn.GaussianNLLLoss()
    >>> input = torch.randn(5, 2, requires_grad=True)
    >>> target = torch.randn(5, 2)
    >>> var = torch.ones(5, 2, requires_grad=True)  # heteroscedastic
    >>> output = loss(input, target, var)
    >>> output.backward()

    >>> loss = nn.GaussianNLLLoss()
    >>> input = torch.randn(5, 2, requires_grad=True)
    >>> target = torch.randn(5, 2)
    >>> var = torch.ones(5, 1, requires_grad=True)  # homoscedastic
    >>> output = loss(input, target, var)
    >>> output.backward()

Note:
    The clamping of "var" is ignored with respect to autograd, and so the gradients are unaffected by it.
Reference: Nix, D. A. and Weigend, A. S., "Estimating the mean and variance of the target probability distribution", Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi: 10.1109/ICNN.1994.374138.
https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html
pytorch docs
convert_fx class torch.quantization.quantize_fx.convert_fx(graph_module, convert_custom_config=None, _remove_qconfig=True, qconfig_mapping=None, backend_config=None) Convert a calibrated or trained model to a quantized model Parameters: * graph_module (***) -- A prepared and calibrated/trained model (GraphModule) * **convert_custom_config** (***) -- custom configurations for convert function. See "ConvertCustomConfig" for more details * **_remove_qconfig** (***) -- Option to remove the qconfig attributes in the model after convert. * **qconfig_mapping** (***) -- config for specifying how to convert a model for quantization. The keys must include the ones in the qconfig_mapping passed to *prepare_fx* or *prepare_qat_fx*, with the same values or *None*. Additional keys can be specified with values set to *None*.
For each entry whose value is set to *None*, we skip quantizing that entry in the model:

    qconfig_mapping = QConfigMapping
        .set_global(qconfig_from_prepare)
        .set_object_type(torch.nn.functional.add, None)  # skip quantizing torch.nn.functional.add
        .set_object_type(torch.nn.functional.linear, qconfig_from_prepare)
        .set_module_name("foo.bar", None)  # skip quantizing module "foo.bar"

    * **backend_config** (*BackendConfig*) -- A configuration for the backend which describes how operators should be quantized in the backend; this includes quantization mode support (static/dynamic/weight_only), dtype support (quint8/qint8, etc.), and observer placement for each operator and fused operators. See "BackendConfig" for more details.

Returns:
    A quantized model (torch.nn.Module)

Return type:
    Module

Example:

    # prepared_model: the model after prepare_fx/prepare_qat_fx and calibration/training
    # convert_fx converts a calibrated/trained model to a quantized model for the
    # target hardware, this includes converting the model first to a reference
    # quantized model, and then lower the reference quantized model to a backend
    # Currently, the supported backends are fbgemm (onednn), qnnpack (xnnpack) and
    # they share the same set of quantized operators, so we are using the same
    # lowering procedure
    #
    # backend_config defines the corresponding reference quantized module for
    # the weighted modules in the model, e.g. nn.Linear
    # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack
    # e.g. backend_config = get_default_backend_config("fbgemm")
    quantized_model = convert_fx(prepared_model)
https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html
pytorch docs
torch.Tensor.addmv_ Tensor.addmv_(mat, vec, *, beta=1, alpha=1) -> Tensor In-place version of "addmv()"
https://pytorch.org/docs/stable/generated/torch.Tensor.addmv_.html
pytorch docs
torch.gcd torch.gcd(input, other, *, out=None) -> Tensor Computes the element-wise greatest common divisor (GCD) of "input" and "other". Both "input" and "other" must have integer types. Note: This defines gcd(0, 0) = 0. Parameters: * input (Tensor) -- the input tensor. * **other** (*Tensor*) -- the second input tensor Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> a = torch.tensor([5, 10, 15]) >>> b = torch.tensor([3, 4, 5]) >>> torch.gcd(a, b) tensor([1, 2, 5]) >>> c = torch.tensor([3]) >>> torch.gcd(a, c) tensor([1, 1, 3])
https://pytorch.org/docs/stable/generated/torch.gcd.html
pytorch docs
torch.Tensor.arctan2 Tensor.arctan2(other) -> Tensor See "torch.arctan2()"
https://pytorch.org/docs/stable/generated/torch.Tensor.arctan2.html
pytorch docs
torch.arctan torch.arctan(input, *, out=None) -> Tensor Alias for "torch.atan()".
https://pytorch.org/docs/stable/generated/torch.arctan.html
pytorch docs
torch.Tensor.log_normal_ Tensor.log_normal_(mean=1, std=2, *, generator=None) Fills "self" tensor with numbers sampled from the log-normal distribution parameterized by the given mean \mu and standard deviation \sigma. Note that "mean" and "std" are the mean and standard deviation of the underlying normal distribution, and not of the returned distribution: f(x) = \dfrac{1}{x \sigma \sqrt{2\pi}}\ e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}
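A small usage example (mine, not from the docs):

    import torch

    t = torch.empty(1000).log_normal_(mean=0.0, std=0.25)
    # samples are strictly positive; log(t) is approximately N(0.0, 0.25^2)
    print(t.min() > 0, t.log().mean(), t.log().std())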
https://pytorch.org/docs/stable/generated/torch.Tensor.log_normal_.html
pytorch docs
ConvBnReLU3d class torch.ao.nn.intrinsic.ConvBnReLU3d(conv, bn, relu) This is a sequential container which calls the Conv 3d, Batch Norm 3d, and ReLU modules. During quantization this will be replaced with the corresponding fused module.
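A construction sketch from unfused modules; the channel sizes are illustrative assumptions:

    import torch
    import torch.nn as nn
    from torch.ao.nn.intrinsic import ConvBnReLU3d

    conv = nn.Conv3d(16, 33, kernel_size=3)
    bn = nn.BatchNorm3d(33)
    relu = nn.ReLU()

    fused = ConvBnReLU3d(conv, bn, relu)  # sequential container: conv -> bn -> relu
    y = fused(torch.randn(1, 16, 8, 8, 8))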
https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU3d.html
pytorch docs
torch.func.functional_call

torch.func.functional_call(module, parameter_and_buffer_dicts, args, kwargs=None, *, tie_weights=True)

Performs a functional call on the module by replacing the module parameters and buffers with the provided ones.

Note:
    If the module has active parametrizations, passing a value in the "parameters_and_buffers" argument with the name set to the regular parameter name will completely disable the parametrization. If you want to apply the parametrization function to the value passed please set the key as "{submodule_name}.parametrizations.{parameter_name}.original".

Note:
    If the module performs in-place operations on parameters/buffers, these will be reflected in the "parameters_and_buffers" input.

    Example:

        >>> a = {'foo': torch.zeros(())}
        >>> mod = Foo()  # does self.foo = self.foo + 1
        >>> print(mod.foo)  # tensor(0.)
        >>> functional_call(mod, a, torch.ones(()))
        >>> print(mod.foo)  # tensor(0.)
        >>> print(a['foo'])  # tensor(1.)

Note:
    If the module has tied weights, whether or not functional_call respects the tying is determined by the tie_weights flag.

    Example:

        >>> a = {'foo': torch.zeros(())}
        >>> mod = Foo()  # has both self.foo and self.foo_tied which are tied. Returns x + self.foo + self.foo_tied
        >>> print(mod.foo)  # tensor(1.)
        >>> mod(torch.zeros(()))  # tensor(2.)
        >>> functional_call(mod, a, torch.zeros(()))  # tensor(0.) since it will change self.foo_tied too
        >>> functional_call(mod, a, torch.zeros(()), tie_weights=False)  # tensor(1.)--self.foo_tied is not updated
        >>> new_a = {'foo': torch.zeros(()), 'foo_tied': torch.zeros(())}
        >>> functional_call(mod, new_a, torch.zeros(()))  # tensor(0.)

An example of passing multiple dictionaries:
    a = ({'weight': torch.ones(1, 1)}, {'buffer': torch.zeros(1)})  # two separate dictionaries
    mod = nn.Bar(1, 1)  # return self.weight @ x + self.buffer
    print(mod.weight)  # tensor(...)
    print(mod.buffer)  # tensor(...)
    x = torch.randn((1, 1))
    print(x)
    functional_call(mod, a, x)  # same as x
    print(mod.weight)  # same as before functional_call

And here is an example of applying the grad transform over the parameters of a model.

    import torch
    import torch.nn as nn
    from torch.func import functional_call, grad

    x = torch.randn(4, 3)
    t = torch.randn(4, 3)
    model = nn.Linear(3, 3)

    def compute_loss(params, x, t):
        y = functional_call(model, params, x)
        return nn.functional.mse_loss(y, t)

    grad_weights = grad(compute_loss)(dict(model.named_parameters()), x, t)

Note:
    If the user does not need grad tracking outside of grad
transforms, they can detach all of the parameters for better performance and memory usage.

    Example:

        >>> detached_params = {k: v.detach() for k, v in model.named_parameters()}
        >>> grad_weights = grad(compute_loss)(detached_params, x, t)
        >>> grad_weights.grad_fn  # None--it's not tracking gradients outside of grad

    This means that the user cannot call "grad_weight.backward()". However, if they don't need autograd tracking outside of the transforms, this will result in less memory usage and faster speeds.

Parameters:
    * **module** (*torch.nn.Module*) -- the module to call

    * **parameters_and_buffers** (*Dict[str, Tensor] or tuple of Dict[str, Tensor]*) -- the parameters that will be used in the module call. If given a tuple of dictionaries, they must have distinct keys so that all dictionaries can be used together

    * **args** (*Any** or **tuple*) -- arguments to be passed to the
module call. If not a tuple, considered a single argument.

    * **kwargs** (*dict*) -- keyword arguments to be passed to the module call

    * **tie_weights** (*bool**, **optional*) -- If True, then parameters and buffers tied in the original model will be treated as tied in the reparameterized version. Therefore, if True and different values are passed for the tied parameters and buffers, it will error. If False, it will not respect the originally tied parameters and buffers unless the values passed for both weights are the same. Default: True.

Returns:
    the result of calling "module".

Return type:
    Any
https://pytorch.org/docs/stable/generated/torch.func.functional_call.html
pytorch docs
torch.linalg.eig torch.linalg.eig(A, *, out=None) Computes the eigenvalue decomposition of a square matrix if it exists. Letting \mathbb{K} be \mathbb{R} or \mathbb{C}, the eigenvalue decomposition of a square matrix A \in \mathbb{K}^{n \times n} (if it exists) is defined as A = V \operatorname{diag}(\Lambda) V^{-1}\mathrlap{\qquad V \in \mathbb{C}^{n \times n}, \Lambda \in \mathbb{C}^n} This decomposition exists if and only if A is diagonalizable. This is the case when all its eigenvalues are different. Supports input of float, double, cfloat and cdouble dtypes. Also supports batches of matrices, and if "A" is a batch of matrices then the output has the same batch dimensions. Note: The eigenvalues and eigenvectors of a real matrix may be complex. Note: When inputs are on a CUDA device, this function synchronizes that device with the CPU. Warning:
This function assumes that "A" is diagonalizable (for example, when all the eigenvalues are different). If it is not diagonalizable, the returned eigenvalues will be correct but A \neq V \operatorname{diag}(\Lambda)V^{-1}.

Warning:
    The returned eigenvectors are normalized to have norm *1*. Even then, the eigenvectors of a matrix are not unique, nor are they continuous with respect to "A". Due to this lack of uniqueness, different hardware and software may compute different eigenvectors. This non-uniqueness is caused by the fact that multiplying an eigenvector by e^{i \phi}, \phi \in \mathbb{R} produces another set of valid eigenvectors of the matrix. For this reason, the loss function shall not depend on the phase of the eigenvectors, as this quantity is not well-defined. This is checked when computing the gradients of this function. As such,
when inputs are on a CUDA device, the computation of the gradients of this function synchronizes that device with the CPU.

Warning:
    Gradients computed using the *eigenvectors* tensor will only be finite when "A" has distinct eigenvalues. Furthermore, if the distance between any two eigenvalues is close to zero, the gradient will be numerically unstable, as it depends on the eigenvalues \lambda_i through the computation of \frac{1}{\min_{i \neq j} \lambda_i - \lambda_j}.

See also:
    "torch.linalg.eigvals()" computes only the eigenvalues. Unlike "torch.linalg.eig()", the gradients of "eigvals()" are always numerically stable.

    "torch.linalg.eigh()" for a (faster) function that computes the
eigenvalue decomposition for Hermitian and symmetric matrices.

    "torch.linalg.svd()" for a function that computes another type of spectral decomposition that works on matrices of any shape.

    "torch.linalg.qr()" for another (much faster) decomposition that works on matrices of any shape.

Parameters:
    **A** (*Tensor*) -- tensor of shape (*, n, n) where * is zero or more batch dimensions consisting of diagonalizable matrices.

Keyword Arguments:
    **out** (*tuple**, **optional*) -- output tuple of two tensors. Ignored if *None*. Default: *None*.

Returns:
    A named tuple *(eigenvalues, eigenvectors)* which corresponds to \Lambda and V above.

    *eigenvalues* and *eigenvectors* will always be complex-valued, even when "A" is real. The eigenvectors will be given by the columns of *eigenvectors*.

Examples:

    >>> A = torch.randn(2, 2, dtype=torch.complex128)
    >>> A
    tensor([[ 0.9828+0.3889j, -0.4617+0.3010j],
            [ 0.1662-0.7435j, -0.6139+0.0562j]], dtype=torch.complex128)
    >>> L, V = torch.linalg.eig(A)
    >>> L
    tensor([ 1.1226+0.5738j, -0.7537-0.1286j], dtype=torch.complex128)
    >>> V
    tensor([[ 0.9218+0.0000j,  0.1882-0.2220j],
            [-0.0270-0.3867j,  0.9567+0.0000j]], dtype=torch.complex128)
    >>> torch.dist(V @ torch.diag(L) @ torch.linalg.inv(V), A)
    tensor(7.7119e-16, dtype=torch.float64)

    >>> A = torch.randn(3, 2, 2, dtype=torch.float64)
    >>> L, V = torch.linalg.eig(A)
    >>> torch.dist(V @ torch.diag_embed(L) @ torch.linalg.inv(V), A)
    tensor(3.2841e-16, dtype=torch.float64)
https://pytorch.org/docs/stable/generated/torch.linalg.eig.html
pytorch docs
torch.nn.functional.hinge_embedding_loss torch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') -> Tensor See "HingeEmbeddingLoss" for details. Return type: Tensor
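A quick usage sketch (mine); targets are +1/-1 labels as described in "HingeEmbeddingLoss":

    import torch
    import torch.nn.functional as F

    input = torch.randn(4)                     # e.g. pairwise distances
    target = torch.tensor([1., -1., 1., -1.])  # labels in {1, -1}
    loss = F.hinge_embedding_loss(input, target, margin=1.0)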
https://pytorch.org/docs/stable/generated/torch.nn.functional.hinge_embedding_loss.html
pytorch docs
DataParallel class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0) Implements data parallelism at the module level. This container parallelizes the application of the given "module" by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module. The batch size should be larger than the number of GPUs used. Warning: It is recommended to use "DistributedDataParallel", instead of this class, to do multi-GPU training, even if there is only a single node. See: Use nn.parallel.DistributedDataParallel instead of multiprocessing or nn.DataParallel and Distributed Data Parallel. Arbitrary positional and keyword inputs are allowed to be passed
into DataParallel but some types are specially handled. tensors will be scattered on dim specified (default 0). tuple, list and dict types will be shallow copied. The other types will be shared among different threads and can be corrupted if written to in the model's forward pass. The parallelized "module" must have its parameters and buffers on "device_ids[0]" before running this "DataParallel" module. Warning: In each forward, "module" is **replicated** on each device, so any updates to the running module in "forward" will be lost. For example, if "module" has a counter attribute that is incremented in each "forward", it will always stay at the initial value because the update is done on the replicas which are destroyed after "forward". However, "DataParallel" guarantees that the replica on "device[0]" will have its parameters and buffers sharing storage with the base parallelized "module". So **in-
place** updates to the parameters or buffers on "device[0]" will be recorded. E.g., "BatchNorm2d" and "spectral_norm()" rely on this behavior to update the buffers. Warning: Forward and backward hooks defined on "module" and its submodules will be invoked "len(device_ids)" times, each with inputs located on a particular device. Particularly, the hooks are only guaranteed to be executed in correct order with respect to operations on corresponding devices. For example, it is not guaranteed that hooks set via "register_forward_pre_hook()" be executed before *all* "len(device_ids)" "forward()" calls, but that each such hook be executed before the corresponding "forward()" call of that device. Warning: When "module" returns a scalar (i.e., 0-dimensional tensor) in "forward()", this wrapper will return a vector of length equal to number of devices used in data parallelism, containing the result from each device. Note:
There is a subtlety in using the "pack sequence -> recurrent network -> unpack sequence" pattern in a "Module" wrapped in "DataParallel". See My recurrent network doesn't work with data parallelism section in FAQ for details.

Parameters:
    * **module** (*Module*) -- module to be parallelized

    * **device_ids** (*list of python:int** or **torch.device*) -- CUDA devices (default: all devices)

    * **output_device** (*int** or **torch.device*) -- device location of output (default: device_ids[0])

Variables:
    **module** (*Module*) -- the module to be parallelized

Example:

    >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
    >>> output = net(input_var)  # input_var can be on any device, including CPU
https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html
pytorch docs
GLU

class torch.nn.GLU(dim=-1)

Applies the gated linear unit function \text{GLU}(a, b) = a \otimes \sigma(b) where a is the first half of the input matrices and b is the second half.

Parameters:
    **dim** (*int*) -- the dimension on which to split the input. Default: -1

Shape:
    * Input: (\ast_1, N, \ast_2) where * means any number of additional dimensions

    * Output: (\ast_1, M, \ast_2) where M = N/2

Examples:

    >>> m = nn.GLU()
    >>> input = torch.randn(4, 2)
    >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.GLU.html
pytorch docs
torch.Tensor.diagflat Tensor.diagflat(offset=0) -> Tensor See "torch.diagflat()"
https://pytorch.org/docs/stable/generated/torch.Tensor.diagflat.html
pytorch docs
ReflectionPad1d

class torch.nn.ReflectionPad1d(padding)

Pads the input tensor using the reflection of the input boundary.

For N-dimensional padding, use "torch.nn.functional.pad()".

Parameters:
    **padding** (*int**, **tuple*) -- the size of the padding. If is *int*, uses the same padding in all boundaries. If a 2-*tuple*, uses (\text{padding\_left}, \text{padding\_right})

Shape:
    * Input: (C, W_{in}) or (N, C, W_{in}).

    * Output: (C, W_{out}) or (N, C, W_{out}), where

      W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}

Examples:

    >>> m = nn.ReflectionPad1d(2)
    >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
    >>> input
    tensor([[[0., 1., 2., 3.],
             [4., 5., 6., 7.]]])
    >>> m(input)
    tensor([[[2., 1., 0., 1., 2., 3., 2., 1.],
             [6., 5., 4., 5., 6., 7., 6., 5.]]])
    >>> # using different paddings for different sides
    >>> m = nn.ReflectionPad1d((3, 1))
    >>> m(input)
    tensor([[[3., 2., 1., 0., 1., 2., 3., 2.],
             [7., 6., 5., 4., 5., 6., 7., 6.]]])
https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad1d.html
pytorch docs
conv1d class torch.ao.nn.quantized.functional.conv1d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) Applies a 1D convolution over a quantized 1D input composed of several input planes. See "Conv1d" for details and output shape. Parameters: * input -- quantized input tensor of shape (\text{minibatch} , \text{in_channels} , iW) * **weight** -- quantized filters of shape (\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , iW) * **bias** -- **non-quantized** bias tensor of shape (\text{out\_channels}). The tensor type must be *torch.float*. * **stride** -- the stride of the convolving kernel. Can be a single number or a tuple *(sW,)*. Default: 1 * **padding** -- implicit paddings on both sides of the input. Can be a single number or a tuple *(padW,)*. Default: 0
dilation -- the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1 groups -- split input into groups, \text{in_channels} should be divisible by the number of groups. Default: 1 padding_mode -- the padding mode to use. Only "zeros" is supported for quantized convolution at the moment. Default: "zeros" scale -- quantization scale for the output. Default: 1.0 zero_point -- quantization zero_point for the output. Default: 0 dtype -- quantization data type to use. Default: "torch.quint8" Examples: >>> from torch.ao.nn.quantized import functional as qF >>> filters = torch.randn(33, 16, 3, dtype=torch.float) >>> inputs = torch.randn(20, 16, 50, dtype=torch.float) >>> bias = torch.randn(33, dtype=torch.float) >>> >>> scale, zero_point = 1.0, 0 >>> dtype_inputs = torch.quint8
    >>> dtype_filters = torch.qint8
    >>>
    >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
    >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
    >>> qF.conv1d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv1d.html
pytorch docs
conv2d class torch.ao.nn.quantized.functional.conv2d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8) Applies a 2D convolution over a quantized 2D input composed of several input planes. See "Conv2d" for details and output shape. Parameters: * input -- quantized input tensor of shape (\text{minibatch} , \text{in_channels} , iH , iW) * **weight** -- quantized filters of shape (\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kH , kW) * **bias** -- **non-quantized** bias tensor of shape (\text{out\_channels}). The tensor type must be *torch.float*. * **stride** -- the stride of the convolving kernel. Can be a single number or a tuple *(sH, sW)*. Default: 1 * **padding** -- implicit paddings on both sides of the input. Can be a single number or a tuple *(padH, padW)*. Default: 0
dilation -- the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1 groups -- split input into groups, \text{in_channels} should be divisible by the number of groups. Default: 1 padding_mode -- the padding mode to use. Only "zeros" is supported for quantized convolution at the moment. Default: "zeros" scale -- quantization scale for the output. Default: 1.0 zero_point -- quantization zero_point for the output. Default: 0 dtype -- quantization data type to use. Default: "torch.quint8" Examples: >>> from torch.ao.nn.quantized import functional as qF >>> filters = torch.randn(8, 4, 3, 3, dtype=torch.float) >>> inputs = torch.randn(1, 4, 5, 5, dtype=torch.float) >>> bias = torch.randn(8, dtype=torch.float) >>> >>> scale, zero_point = 1.0, 0 >>> dtype_inputs = torch.quint8
    >>> dtype_filters = torch.qint8
    >>>
    >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)
    >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)
    >>> qF.conv2d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv2d.html
pytorch docs
torch.Tensor.exp_ Tensor.exp_() -> Tensor In-place version of "exp()"
https://pytorch.org/docs/stable/generated/torch.Tensor.exp_.html
pytorch docs
torch.manual_seed torch.manual_seed(seed) Sets the seed for generating random numbers. Returns a torch.Generator object. Parameters: seed (int) -- The desired seed. Value must be within the inclusive range [-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised. Negative inputs are remapped to positive values with the formula 0xffff_ffff_ffff_ffff + seed. Return type: Generator
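A short reproducibility check (example mine):

    import torch

    torch.manual_seed(42)
    a = torch.rand(3)
    torch.manual_seed(42)
    b = torch.rand(3)
    print(torch.equal(a, b))  # True: same seed, same random sequence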
https://pytorch.org/docs/stable/generated/torch.manual_seed.html
pytorch docs
torch.Tensor.register_hook Tensor.register_hook(hook) Registers a backward hook. The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature: hook(grad) -> Tensor or None The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of "grad". This function returns a handle with a method "handle.remove()" that removes the hook from the module. Note: See Backward Hooks execution for more information on how when this hook is executed, and how its execution is ordered relative to other hooks. Example: >>> v = torch.tensor([0., 0., 0.], requires_grad=True) >>> h = v.register_hook(lambda grad: grad * 2) # double the gradient >>> v.backward(torch.tensor([1., 2., 3.])) >>> v.grad 2 4 6 [torch.FloatTensor of size (3,)] >>> h.remove() # removes the hook
https://pytorch.org/docs/stable/generated/torch.Tensor.register_hook.html
pytorch docs
torch.index_copy torch.index_copy(input, dim, index, source, *, out=None) -> Tensor Out-of-place version of "torch.Tensor.index_copy_()"; see that method for a function description.
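A small example (mine), mirroring the in-place variant's docs:

    import torch

    x = torch.zeros(5, 3)
    t = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
    index = torch.tensor([0, 4, 2])
    out = torch.index_copy(x, 0, index, t)  # rows 0, 4 and 2 of x are replaced by rows of t
    print(out)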
https://pytorch.org/docs/stable/generated/torch.index_copy.html
pytorch docs
torch.Tensor.atan2 Tensor.atan2(other) -> Tensor See "torch.atan2()"
https://pytorch.org/docs/stable/generated/torch.Tensor.atan2.html
pytorch docs
torch.set_warn_always torch.set_warn_always(b) When this flag is False (default) then some PyTorch warnings may only appear once per process. This helps avoid excessive warning information. Setting it to True causes these warnings to always appear, which may be helpful when debugging. Parameters: b ("bool") -- If True, force warnings to always be emitted. If False, restore the default behaviour, under which some warnings may appear only once per process.
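Usage is a single call, typically around the code being debugged (example mine):

    import torch

    torch.set_warn_always(True)   # surface every occurrence of repeatable warnings
    # ... run the code under investigation ...
    torch.set_warn_always(False)  # restore default de-duplicated warnings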
https://pytorch.org/docs/stable/generated/torch.set_warn_always.html
pytorch docs
torch.nn.functional.pixel_unshuffle torch.nn.functional.pixel_unshuffle(input, downscale_factor) -> Tensor Reverses the "PixelShuffle" operation by rearranging elements in a tensor of shape (*, C, H \times r, W \times r) to a tensor of shape (*, C \times r^2, H, W), where r is the "downscale_factor". See "PixelUnshuffle" for details. Parameters: * input (Tensor) -- the input tensor * **downscale_factor** (*int*) -- factor by which to decrease spatial resolution Examples: >>> input = torch.randn(1, 1, 12, 12) >>> output = torch.nn.functional.pixel_unshuffle(input, 3) >>> print(output.size()) torch.Size([1, 9, 4, 4])
https://pytorch.org/docs/stable/generated/torch.nn.functional.pixel_unshuffle.html
pytorch docs
torch.nn.functional.sigmoid torch.nn.functional.sigmoid(input) -> Tensor Applies the element-wise function \text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)} See "Sigmoid" for more details.
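A tiny worked example (mine); the printed values follow directly from the formula, and torch.sigmoid is the preferred non-deprecated spelling of the same operation:

    import torch
    import torch.nn.functional as F

    x = torch.tensor([-1.0, 0.0, 1.0])
    print(F.sigmoid(x))  # tensor([0.2689, 0.5000, 0.7311])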
https://pytorch.org/docs/stable/generated/torch.nn.functional.sigmoid.html
pytorch docs
Conv2d class torch.ao.nn.quantized.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None) Applies a 2D convolution over a quantized input signal composed of several quantized input planes. For details on input arguments, parameters, and implementation see "Conv2d". Note: Only *zeros* is supported for the "padding_mode" argument. Note: Only *torch.quint8* is supported for the input data type. Variables: * weight (Tensor) -- packed tensor derived from the learnable weight parameter. * **scale** (*Tensor*) -- scalar for the output scale * **zero_point** (*Tensor*) -- scalar for the output zero point See "Conv2d" for other attributes. Examples: >>> # With square kernels and equal stride >>> m = nn.quantized.Conv2d(16, 33, 3, stride=2) >>> # non-square kernels and unequal stride and with padding
    >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
    >>> # non-square kernels and unequal stride and with padding and dilation
    >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
    >>> input = torch.randn(20, 16, 50, 100)
    >>> # quantize input to quint8
    >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
    >>> output = m(q_input)

classmethod from_float(mod)

    Creates a quantized module from a float module or qparams_dict.

    Parameters:
        **mod** (*Module*) -- a float module, either produced by torch.ao.quantization utilities or provided by the user
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv2d.html
pytorch docs
torch.bitwise_or torch.bitwise_or(input, other, *, out=None) -> Tensor Computes the bitwise OR of "input" and "other". The input tensor must be of integral or Boolean types. For bool tensors, it computes the logical OR. Parameters: * input -- the first input tensor * **other** -- the second input tensor Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> torch.bitwise_or(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8)) tensor([-1, -2, 3], dtype=torch.int8) >>> torch.bitwise_or(torch.tensor([True, True, False]), torch.tensor([False, True, False])) tensor([ True, True, False])
https://pytorch.org/docs/stable/generated/torch.bitwise_or.html
pytorch docs
torch.unsqueeze torch.unsqueeze(input, dim) -> Tensor Returns a new tensor with a dimension of size one inserted at the specified position. The returned tensor shares the same underlying data with this tensor. A "dim" value within the range "[-input.dim() - 1, input.dim() + 1)" can be used. Negative "dim" will correspond to "unsqueeze()" applied at "dim" = "dim + input.dim() + 1". Parameters: * input (Tensor) -- the input tensor. * **dim** (*int*) -- the index at which to insert the singleton dimension Example: >>> x = torch.tensor([1, 2, 3, 4]) >>> torch.unsqueeze(x, 0) tensor([[ 1, 2, 3, 4]]) >>> torch.unsqueeze(x, 1) tensor([[ 1], [ 2], [ 3], [ 4]])
https://pytorch.org/docs/stable/generated/torch.unsqueeze.html
pytorch docs
torch.set_num_threads torch.set_num_threads(int) Sets the number of threads used for intraop parallelism on CPU. Warning: To ensure that the correct number of threads is used, set_num_threads must be called before running eager, JIT or autograd code.
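A usage sketch (mine); the call must precede the parallel work it should affect:

    import torch

    torch.set_num_threads(4)  # use 4 intraop threads for subsequent CPU ops
    a = torch.randn(1024, 1024)
    b = a @ a                 # runs with the configured thread count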
https://pytorch.org/docs/stable/generated/torch.set_num_threads.html
pytorch docs
torch.square torch.square(input, *, out=None) -> Tensor Returns a new tensor with the square of the elements of "input". Parameters: input (Tensor) -- the input tensor. Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> a = torch.randn(4) >>> a tensor([-2.0755, 1.0226, 0.0831, 0.4806]) >>> torch.square(a) tensor([ 4.3077, 1.0457, 0.0069, 0.2310])
https://pytorch.org/docs/stable/generated/torch.square.html
pytorch docs
torch.Tensor.double Tensor.double(memory_format=torch.preserve_format) -> Tensor "self.double()" is equivalent to "self.to(torch.float64)". See "to()". Parameters: memory_format ("torch.memory_format", optional) -- the desired memory format of returned Tensor. Default: "torch.preserve_format".
https://pytorch.org/docs/stable/generated/torch.Tensor.double.html
pytorch docs
torch.Tensor.i0_ Tensor.i0_() -> Tensor In-place version of "i0()"
https://pytorch.org/docs/stable/generated/torch.Tensor.i0_.html
pytorch docs
torch.all torch.all(input) -> Tensor Tests if all elements in "input" evaluate to True. Note: This function matches the behaviour of NumPy in returning output of dtype *bool* for all supported dtypes except *uint8*. For *uint8* the dtype of output is *uint8* itself. Example: >>> a = torch.rand(1, 2).bool() >>> a tensor([[False, True]], dtype=torch.bool) >>> torch.all(a) tensor(False, dtype=torch.bool) >>> a = torch.arange(0, 3) >>> a tensor([0, 1, 2]) >>> torch.all(a) tensor(False) torch.all(input, dim, keepdim=False, *, out=None) -> Tensor For each row of "input" in the given dimension "dim", returns True if all elements in the row evaluate to True and False otherwise. If "keepdim" is "True", the output tensor is of the same size as "input" except in the dimension "dim" where it is of size 1. Otherwise, "dim" is squeezed (see "torch.squeeze()"), resulting in
the output tensor having 1 fewer dimension than "input". Parameters: * input (Tensor) -- the input tensor. * **dim** (*int*) -- the dimension to reduce. * **keepdim** (*bool*) -- whether the output tensor has "dim" retained or not. Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> a = torch.rand(4, 2).bool() >>> a tensor([[True, True], [True, False], [True, True], [True, True]], dtype=torch.bool) >>> torch.all(a, dim=1) tensor([ True, False, True, True], dtype=torch.bool) >>> torch.all(a, dim=0) tensor([ True, False], dtype=torch.bool)
https://pytorch.org/docs/stable/generated/torch.all.html
pytorch docs
torch.Tensor.prod Tensor.prod(dim=None, keepdim=False, dtype=None) -> Tensor See "torch.prod()"
https://pytorch.org/docs/stable/generated/torch.Tensor.prod.html
pytorch docs
torch.lu_solve torch.lu_solve(b, LU_data, LU_pivots, *, out=None) -> Tensor Returns the LU solve of the linear system Ax = b using the partially pivoted LU factorization of A from "lu_factor()". This function supports "float", "double", "cfloat" and "cdouble" dtypes for "input". Warning: "torch.lu_solve()" is deprecated in favor of "torch.linalg.lu_solve()". "torch.lu_solve()" will be removed in a future PyTorch release. "X = torch.lu_solve(B, LU, pivots)" should be replaced with X = linalg.lu_solve(LU, pivots, B) Parameters: * b (Tensor) -- the RHS tensor of size (*, m, k), where * is zero or more batch dimensions. * **LU_data** (*Tensor*) -- the pivoted LU factorization of A from "lu_factor()" of size (*, m, m), where * is zero or more batch dimensions. * **LU_pivots** (*IntTensor*) -- the pivots of the LU factorization from "lu_factor()" of size (*, m), where * is
zero or more batch dimensions. The batch dimensions of "LU_pivots" must be equal to the batch dimensions of "LU_data". Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> A = torch.randn(2, 3, 3) >>> b = torch.randn(2, 3, 1) >>> LU, pivots = torch.linalg.lu_factor(A) >>> x = torch.lu_solve(b, LU, pivots) >>> torch.dist(A @ x, b) tensor(1.00000e-07 * 2.8312)
https://pytorch.org/docs/stable/generated/torch.lu_solve.html
pytorch docs
torch.cuda.comm.broadcast torch.cuda.comm.broadcast(tensor, devices=None, *, out=None) Broadcasts a tensor to specified GPU devices. Parameters: * tensor (Tensor) -- tensor to broadcast. Can be on CPU or GPU. * **devices** (*Iterable**[**torch.device**, **str** or **int**]**, **optional*) -- an iterable of GPU devices, among which to broadcast. * **out** (*Sequence**[**Tensor**]**, **optional**, **keyword- only*) -- the GPU tensors to store output results. Note: Exactly one of "devices" and "out" must be specified. Returns: * If "devices" is specified, a tuple containing copies of "tensor", placed on "devices". * If "out" is specified, a tuple containing "out" tensors, each containing a copy of "tensor".
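A guarded sketch (mine); it needs at least two visible CUDA devices:

    import torch
    from torch.cuda import comm

    if torch.cuda.device_count() >= 2:
        t = torch.arange(4)                    # source tensor on CPU
        copies = comm.broadcast(t, devices=[0, 1])
        print([c.device for c in copies])      # [cuda:0, cuda:1]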
https://pytorch.org/docs/stable/generated/torch.cuda.comm.broadcast.html
pytorch docs
torch.Tensor.item Tensor.item() -> number Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see "tolist()". This operation is not differentiable. Example: >>> x = torch.tensor([1.0]) >>> x.item() 1.0
https://pytorch.org/docs/stable/generated/torch.Tensor.item.html
pytorch docs
torch.fmod torch.fmod(input, other, *, out=None) -> Tensor Applies C++'s std::fmod entrywise. The result has the same sign as the dividend "input" and its absolute value is less than that of "other". This function may be defined in terms of "torch.div()" as torch.fmod(a, b) == a - a.div(b, rounding_mode="trunc") * b Supports broadcasting to a common shape, type promotion, and integer and float inputs. Note: When the divisor is zero, returns "NaN" for floating point dtypes on both CPU and GPU; raises "RuntimeError" for integer division by zero on CPU; Integer division by zero on GPU may return any value. Note: Complex inputs are not supported. In some cases, it is not mathematically possible to satisfy the definition of a modulo operation with complex numbers. See also: "torch.remainder()" which implements Python's modulus operator. This one is defined using division rounding down the result. Parameters:
    * **input** (*Tensor*) -- the dividend

    * **other** (*Tensor** or **Scalar*) -- the divisor

Keyword Arguments:
    **out** (*Tensor**, **optional*) -- the output tensor.

Example:

    >>> torch.fmod(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)
    tensor([-1., -0., -1.,  1.,  0.,  1.])
    >>> torch.fmod(torch.tensor([1, 2, 3, 4, 5]), -1.5)
    tensor([1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
https://pytorch.org/docs/stable/generated/torch.fmod.html
pytorch docs
torch.argmin

torch.argmin(input, dim=None, keepdim=False) -> LongTensor

Returns the indices of the minimum value(s) of the flattened tensor or along a dimension.

This is the second value returned by "torch.min()". See its documentation for the exact semantics of this method.

Note:
    If there are multiple minimal values then the indices of the first minimal value are returned.

Parameters:
    * **input** (*Tensor*) -- the input tensor.

    * **dim** (*int*) -- the dimension to reduce. If "None", the argmin of the flattened input is returned.

    * **keepdim** (*bool*) -- whether the output tensor has "dim" retained or not.

Example:

    >>> a = torch.randn(4, 4)
    >>> a
    tensor([[ 0.1139,  0.2254, -0.1381,  0.3687],
            [ 1.0100, -1.1975, -0.0102, -0.4732],
            [-0.9240,  0.1207, -0.7506, -1.0213],
            [ 1.7809, -1.2960,  0.9384,  0.1438]])
    >>> torch.argmin(a)
    tensor(13)
    >>> torch.argmin(a, dim=1)
    tensor([ 2,  1,  3,  1])
    >>> torch.argmin(a, dim=1, keepdim=True)
    tensor([[2],
            [1],
            [3],
            [1]])
https://pytorch.org/docs/stable/generated/torch.argmin.html
pytorch docs
torch.Tensor.type_as Tensor.type_as(tensor) -> Tensor Returns this tensor cast to the type of the given tensor. This is a no-op if the tensor is already of the correct type. This is equivalent to "self.type(tensor.type())" Parameters: tensor (Tensor) -- the tensor which has the desired type
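A tiny example (mine):

    import torch

    a = torch.randn(3)                       # torch.float32
    b = torch.zeros(3, dtype=torch.float64)
    c = a.type_as(b)                         # equivalent to a.type(b.type())
    print(c.dtype)                           # torch.float64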
https://pytorch.org/docs/stable/generated/torch.Tensor.type_as.html
pytorch docs
Conv1d class torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None) Applies a 1D convolution over an input signal composed of several input planes. In the simplest case, the output value of the layer with input size (N, C_{\text{in}}, L) and output (N, C_{\text{out}}, L_{\text{out}}) can be precisely described as: \text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k) where \star is the valid cross-correlation operator, N is a batch size, C denotes a number of channels, L is a length of signal sequence. This module supports TensorFloat32. On certain ROCm devices, when using float16 inputs this module will use different precision for backward. "stride" controls the stride for the cross-correlation, a single
number or a one-element tuple. "padding" controls the amount of padding applied to the input. It can be either a string {'valid', 'same'} or a tuple of ints giving the amount of implicit padding applied on both sides. "dilation" controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what "dilation" does. "groups" controls the connections between inputs and outputs. "in_channels" and "out_channels" must both be divisible by "groups". For example, * At groups=1, all inputs are convolved to all outputs. * At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated. * At groups= "in_channels", each input channel is convolved with its own set of filters (of size
\frac{\text{out\_channels}}{\text{in\_channels}}).

Note:
    When *groups == in_channels* and *out_channels == K * in_channels*, where *K* is a positive integer, this operation is also known as a "depthwise convolution". In other words, for an input of size (N, C_{in}, L_{in}), a depthwise convolution with a depthwise multiplier *K* can be performed with the arguments (C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in}).

Note:
    In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting "torch.backends.cudnn.deterministic = True". See Reproducibility for more information.
Note: "padding='valid'" is the same as no padding. "padding='same'" pads the input so the output has the shape as the input. However, this mode doesn't support any stride values other than 1. Note: This module supports complex data types i.e. "complex32, complex64, complex128". Parameters: * in_channels (int) -- Number of channels in the input image * **out_channels** (*int*) -- Number of channels produced by the convolution * **kernel_size** (*int** or **tuple*) -- Size of the convolving kernel * **stride** (*int** or **tuple**, **optional*) -- Stride of the convolution. Default: 1 * **padding** (*int**, **tuple** or **str**, **optional*) -- Padding added to both sides of the input. Default: 0 * **padding_mode** (*str**, **optional*) -- "'zeros'", "'reflect'", "'replicate'" or "'circular'". Default: "'zeros'" * **dilation** (*int** or **tuple**, **optional*) -- Spacing
between kernel elements. Default: 1 * **groups** (*int**, **optional*) -- Number of blocked connections from input channels to output channels. Default: 1 * **bias** (*bool**, **optional*) -- If "True", adds a learnable bias to the output. Default: "True" Shape: * Input: (N, C_{in}, L_{in}) or (C_{in}, L_{in}) * Output: (N, C_{out}, L_{out}) or (C_{out}, L_{out}), where L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor Variables: * weight (Tensor) -- the learnable weights of the module of shape (\text{out_channels}, \frac{\text{in_channels}}{\text{groups}}, \text{kernel_size}). The values of these weights are sampled from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{groups}{C_\text{in} * \text{kernel_size}}
bias (Tensor) -- the learnable bias of the module of shape (out_channels). If "bias" is "True", then the values of these weights are sampled from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{groups}{C_\text{in} * \text{kernel_size}} Examples: >>> m = nn.Conv1d(16, 33, 3, stride=2) >>> input = torch.randn(20, 16, 50) >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html
pytorch docs
JitScalarType

class torch.onnx.JitScalarType(value)

   Scalar types defined in torch.

   Use "JitScalarType" to convert from torch and JIT scalar types to
   ONNX scalar types.

   -[ Examples ]-

   >>> JitScalarType.from_value(torch.ones(1, 2)).onnx_type()
   TensorProtoDataType.FLOAT

   >>> JitScalarType.from_value(torch_c_value_with_type_float).onnx_type()
   TensorProtoDataType.FLOAT

   >>> JitScalarType.from_dtype(torch.get_default_dtype()).onnx_type()
   TensorProtoDataType.FLOAT

   dtype()

      Convert a JitScalarType to a torch dtype.

      Return type:
         *dtype*

   classmethod from_dtype(dtype)

      Convert a torch dtype to JitScalarType.

      Note:

         DO NOT USE this API when *dtype* comes from a
         *torch._C.Value.type()* call. A "RuntimeError: INTERNAL
         ASSERT FAILED at ../aten/src/ATen/core/jit_type_base.h" can
         be raised in several scenarios where shape info is not
         present. Instead, use the *from_value* API, which is safer.
https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html
pytorch docs
      Parameters:
         **dtype** (*Optional**[**dtype**]*) -- A torch.dtype to
         create a JitScalarType from

      Returns:
         JitScalarType

      Raises:
         **OnnxExporterError** -- if dtype is not a valid
         torch.dtype or if it is None.

      Return type:
         *JitScalarType*

   classmethod from_value(value, default=None)

      Create a JitScalarType from a value's scalar type.

      Parameters:
         * **value** (*Union**[**None**, **Value**, **Tensor**]*) --
           An object to fetch scalar type from.

         * **default** -- The JitScalarType to return if a valid
           scalar type cannot be fetched from value

      Returns:
         JitScalarType.

      Raises:
         * **OnnxExporterError** -- if value does not have a valid
           scalar type and default is None.

         * **SymbolicValueError** -- when value.type()'s info is
           empty and default is None
https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html
pytorch docs
Return type: JitScalarType onnx_compatible() Return whether this JitScalarType is compatible with ONNX. Return type: bool onnx_type() Convert a JitScalarType to an ONNX data type. Return type: *TensorProtoDataType* scalar_name() Convert a JitScalarType to a JIT scalar type name. Return type: *Literal*['Byte', 'Char', 'Double', 'Float', 'Half', 'Int', 'Long', 'Short', 'Bool', 'ComplexHalf', 'ComplexFloat', 'ComplexDouble', 'QInt8', 'QUInt8', 'QInt32', 'BFloat16', 'Undefined'] torch_name() Convert a JitScalarType to a torch type name. Return type: *Literal*['bool', 'uint8_t', 'int8_t', 'double', 'float', 'half', 'int', 'int64_t', 'int16_t', 'complex32', 'complex64', 'complex128', 'qint8', 'quint8', 'qint32', 'bfloat16']
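A short usage sketch tying these methods together (round-tripping
"torch.float16"; the printed values reflect the behavior expected
from recent PyTorch releases):

   import torch
   from torch.onnx import JitScalarType

   st = JitScalarType.from_dtype(torch.float16)
   print(st.dtype())            # torch.float16
   print(st.scalar_name())      # 'Half'
   print(st.torch_name())       # 'half'
   print(st.onnx_compatible())  # True
   print(st.onnx_type())        # TensorProtoDataType.FLOAT16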
https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html
pytorch docs
torch.Tensor.lu Tensor.lu(pivot=True, get_infos=False) See "torch.lu()"
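A brief illustrative call (note that newer code typically prefers
"torch.linalg.lu_factor()" over "Tensor.lu()"):

   import torch

   A = torch.randn(3, 3)
   LU, pivots = A.lu()                        # factorization data and pivots
   LU, pivots, infos = A.lu(get_infos=True)   # additionally return status codes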
https://pytorch.org/docs/stable/generated/torch.Tensor.lu.html
pytorch docs
torch._foreach_sin_

torch._foreach_sin_(self: List[Tensor]) -> None

Apply "torch.sin()" to each Tensor of the input list, in place.
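A minimal sketch of the in-place update:

   import math

   import torch

   xs = [torch.zeros(2), torch.full((3,), math.pi / 2)]
   torch._foreach_sin_(xs)   # mutates each tensor in the list
   print(xs[0])              # tensor([0., 0.])
   print(xs[1])              # tensor([1., 1., 1.])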
https://pytorch.org/docs/stable/generated/torch._foreach_sin_.html
pytorch docs
torch.cuda.comm.scatter

torch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, *, out=None)

   Scatters tensor across multiple GPUs.

   Parameters:
      * **tensor** (*Tensor*) -- tensor to scatter. Can be on CPU or
        GPU.

      * **devices** (*Iterable**[**torch.device**, **str** or
        **int**]**, **optional*) -- an iterable of GPU devices, among
        which to scatter.

      * **chunk_sizes** (*Iterable**[**int**]**, **optional*) --
        sizes of chunks to be placed on each device. It should match
        "devices" in length and sum to "tensor.size(dim)". If not
        specified, "tensor" will be divided into equal chunks.

      * **dim** (*int**, **optional*) -- A dimension along which to
        chunk "tensor". Default: "0".

      * **streams** (*Iterable**[**Stream**]**, **optional*) -- an
        iterable of Streams, among which to execute the scatter. If
        not specified, the default stream will be utilized.
https://pytorch.org/docs/stable/generated/torch.cuda.comm.scatter.html
pytorch docs
out (Sequence[Tensor], optional, keyword- only) -- the GPU tensors to store output results. Sizes of these tensors must match that of "tensor", except for "dim", where the total size must sum to "tensor.size(dim)". Note: Exactly one of "devices" and "out" must be specified. When "out" is specified, "chunk_sizes" must not be specified and will be inferred from sizes of "out". Returns: * If "devices" is specified, a tuple containing chunks of "tensor", placed on "devices". * If "out" is specified, a tuple containing "out" tensors, each containing a chunk of "tensor".
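A guarded sketch (requires at least two visible GPUs; the chunk
sizes here are illustrative):

   import torch
   import torch.cuda.comm as comm

   if torch.cuda.device_count() >= 2:
       t = torch.arange(10)  # CPU tensor to scatter along dim 0
       chunks = comm.scatter(t, devices=[0, 1], chunk_sizes=[6, 4])
       for c in chunks:
           # prints cuda:0 torch.Size([6]), then cuda:1 torch.Size([4])
           print(c.device, c.shape)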
https://pytorch.org/docs/stable/generated/torch.cuda.comm.scatter.html
pytorch docs
torch.sparse.log_softmax

torch.sparse.log_softmax(input, dim, *, dtype=None) -> Tensor

   Applies a softmax function followed by a logarithm.

   See "softmax" for more details.

   Parameters:
      * **input** (*Tensor*) -- input

      * **dim** (*int*) -- A dimension along which softmax will be
        computed.

      * **dtype** ("torch.dtype", optional) -- the desired data type
        of returned tensor. If specified, the input tensor is cast
        to "dtype" before the operation is performed. This is useful
        for preventing data type overflows. Default: None
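A small illustrative call (per the "torch.sparse.softmax"
documentation, unspecified elements are treated as negative
infinity, so the result keeps the input's sparsity pattern):

   import torch

   # Zeros dropped by to_sparse() are "unspecified", so each row's
   # log-softmax is computed only over its stored values.
   x = torch.tensor([[0.0, 1.0, 2.0],
                     [3.0, 0.0, 0.0]]).to_sparse()
   out = torch.sparse.log_softmax(x, dim=1)
   print(out)  # a sparse tensor with the same sparsity pattern as x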
https://pytorch.org/docs/stable/generated/torch.sparse.log_softmax.html
pytorch docs
torch.Tensor.atan2_ Tensor.atan2_(other) -> Tensor In-place version of "atan2()"
https://pytorch.org/docs/stable/generated/torch.Tensor.atan2_.html
pytorch docs
torch.Tensor.cos Tensor.cos() -> Tensor See "torch.cos()"
https://pytorch.org/docs/stable/generated/torch.Tensor.cos.html
pytorch docs
torch.inner

torch.inner(input, other, *, out=None) -> Tensor

   Computes the dot product for 1D tensors. For higher dimensions,
   sums the product of elements from "input" and "other" along their
   last dimension.

   Note:

     If either "input" or "other" is a scalar, the result is
     equivalent to *torch.mul(input, other)*. If both "input" and
     "other" are non-scalars, the size of their last dimension must
     match, and the result is equivalent to *torch.tensordot(input,
     other, dims=([-1], [-1]))*.

   Parameters:
      * **input** (*Tensor*) -- First input tensor

      * **other** (*Tensor*) -- Second input tensor

   Keyword Arguments:
      **out** (*Tensor**, **optional*) -- Optional output tensor to
      write result into. The output shape is *input.shape[:-1] +
      other.shape[:-1]*.

   Example:

      # Dot product
      >>> torch.inner(torch.tensor([1, 2, 3]), torch.tensor([0, 2, 1]))
      tensor(7)

      # Multidimensional input tensors
https://pytorch.org/docs/stable/generated/torch.inner.html
pytorch docs
>>> a = torch.randn(2, 3)
>>> a
tensor([[0.8173, 1.0874, 1.1784],
        [0.3279, 0.1234, 2.7894]])
>>> b = torch.randn(2, 4, 3)
>>> b
tensor([[[-0.4682, -0.7159,  0.1506],
         [ 0.4034, -0.3657,  1.0387],
         [ 0.9892, -0.6684,  0.1774],
         [ 0.9482,  1.3261,  0.3917]],

        [[ 0.4537,  0.7493,  1.1724],
         [ 0.2291,  0.5749, -0.2267],
         [-0.7920,  0.3607, -0.3701],
         [ 1.3666, -0.5850, -1.7242]]])
>>> torch.inner(a, b)
tensor([[[-0.9837,  1.1560,  0.2907,  2.6785],
         [ 2.5671,  0.5452, -0.6912, -1.5509]],

        [[ 0.1782,  2.9843,  0.7366,  1.5672],
         [ 3.5115, -0.4864, -1.2476, -4.4337]]])

# Scalar input
>>> torch.inner(a, torch.tensor(2))
tensor([[1.6347, 2.1748, 2.3567],
        [0.6558, 0.2469, 5.5787]])
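To make the note above concrete, a quick equivalence check (a
minimal sketch; the shapes are arbitrary):

   import torch

   a = torch.randn(2, 3, 5)
   b = torch.randn(4, 5)
   lhs = torch.inner(a, b)                         # sums over the last dim of each
   rhs = torch.tensordot(a, b, dims=([-1], [-1]))
   print(lhs.shape)                 # torch.Size([2, 3, 4])
   print(torch.allclose(lhs, rhs))  # True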
https://pytorch.org/docs/stable/generated/torch.inner.html
pytorch docs
torch.Tensor.cosh Tensor.cosh() -> Tensor See "torch.cosh()"
https://pytorch.org/docs/stable/generated/torch.Tensor.cosh.html
pytorch docs