torch.Tensor.addmm Tensor.addmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor See "torch.addmm()"
https://pytorch.org/docs/stable/generated/torch.Tensor.addmm.html
pytorch docs
torch.autograd.forward_ad.unpack_dual torch.autograd.forward_ad.unpack_dual(tensor, *, level=None) Unpacks a "dual tensor" to get both its Tensor value and its forward AD gradient. The result is a namedtuple "(primal, tangent)" where "primal" is a view of "tensor"'s primal and "tangent" is "tensor"'s tangent as-is. Neither of these tensors can be a dual tensor of level "level". This function is backward differentiable. Example: >>> with dual_level(): ... inp = make_dual(x, x_t) ... out = f(inp) ... y, jvp = unpack_dual(out) ... jvp = unpack_dual(out).tangent Please see the forward-mode AD tutorial for detailed steps on how to use this API.
https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.unpack_dual.html
pytorch docs
torch.nn.functional.normalize torch.nn.functional.normalize(input, p=2.0, dim=1, eps=1e-12, out=None) Performs L_p normalization of inputs over specified dimension. For a tensor "input" of sizes (n_0, ..., n_{dim}, ..., n_k), each n_{dim} -element vector v along dimension "dim" is transformed as v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}. With the default arguments it uses the Euclidean norm over vectors along dimension 1 for normalization. Parameters: * input (Tensor) -- input tensor of any shape * **p** (*float*) -- the exponent value in the norm formulation. Default: 2 * **dim** (*int*) -- the dimension to reduce. Default: 1 * **eps** (*float*) -- small value to avoid division by zero. Default: 1e-12 * **out** (*Tensor**, **optional*) -- the output tensor. If "out" is used, this operation won't be differentiable. Return type: Tensor
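A minimal usage sketch (added for illustration; the values are easy to verify by hand):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.tensor([[3.0, 4.0], [0.0, 5.0]])
>>> F.normalize(x, p=2.0, dim=1)  # each row scaled to unit L2 norm
tensor([[0.6000, 0.8000],
        [0.0000, 1.0000]])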
https://pytorch.org/docs/stable/generated/torch.nn.functional.normalize.html
pytorch docs
torch.nn.functional.conv3d torch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor Applies a 3D convolution over an input image composed of several input planes. This operator supports TensorFloat32. See "Conv3d" for details and output shape. Note: In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting "torch.backends.cudnn.deterministic = True". See Reproducibility for more information. Note: This operator supports complex data types i.e. "complex32, complex64, complex128". Parameters: * input -- input tensor of shape (\text{minibatch} , \text{in_channels} , iT , iH , iW)
https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html
pytorch docs
* **weight** -- filters of shape (\text{out\_channels} , \frac{\text{in\_channels}}{\text{groups}} , kT , kH , kW) * **bias** -- optional bias tensor of shape (\text{out\_channels}). Default: None * **stride** -- the stride of the convolving kernel. Can be a single number or a tuple *(sT, sH, sW)*. Default: 1 * **padding** -- implicit paddings on both sides of the input. Can be a string {'valid', 'same'}, single number or a tuple *(padT, padH, padW)*. Default: 0 "padding='valid'" is the same as no padding. "padding='same'" pads the input so the output has the same shape as the input. However, this mode doesn't support any stride values other than 1. Warning: For "padding='same'", if the "weight" is even-length and "dilation" is odd in any dimension, a full "pad()" operation may be needed internally, lowering performance.
https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html
pytorch docs
dilation -- the spacing between kernel elements. Can be a single number or a tuple (dT, dH, dW). Default: 1 groups -- split input into groups, \text{in_channels} should be divisible by the number of groups. Default: 1 Examples: >>> filters = torch.randn(33, 16, 3, 3, 3) >>> inputs = torch.randn(20, 16, 50, 10, 20) >>> F.conv3d(inputs, filters)
https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html
pytorch docs
torch.Tensor.is_sparse Tensor.is_sparse Is "True" if the Tensor uses sparse storage layout, "False" otherwise.
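A short illustrative check (not from the original page):
>>> d = torch.randn(2, 2)
>>> d.is_sparse
False
>>> d.to_sparse().is_sparse
True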
https://pytorch.org/docs/stable/generated/torch.Tensor.is_sparse.html
pytorch docs
ReplicationPad3d class torch.nn.ReplicationPad3d(padding) Pads the input tensor using replication of the input boundary. For N-dimensional padding, use "torch.nn.functional.pad()". Parameters: padding (int, tuple) -- the size of the padding. If an int, uses the same padding on all boundaries. If a 6-tuple, uses (\text{padding_left}, \text{padding_right}, \text{padding_top}, \text{padding_bottom}, \text{padding_front}, \text{padding_back}) Shape: * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in}, W_{in}). * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out}, H_{out}, W_{out}), where D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back} H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom} W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}
https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad3d.html
pytorch docs
Examples: >>> m = nn.ReplicationPad3d(3) >>> input = torch.randn(16, 3, 8, 320, 480) >>> output = m(input) >>> # using different paddings for different sides >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1)) >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad3d.html
pytorch docs
torch.nn.functional.gaussian_nll_loss torch.nn.functional.gaussian_nll_loss(input, target, var, full=False, eps=1e-06, reduction='mean') Gaussian negative log likelihood loss. See "GaussianNLLLoss" for details. Parameters: * input (Tensor) -- expectation of the Gaussian distribution. * **target** (*Tensor*) -- sample from the Gaussian distribution. * **var** (*Tensor*) -- tensor of positive variance(s), one for each of the expectations in the input (heteroscedastic), or a single one (homoscedastic). * **full** (*bool**, **optional*) -- include the constant term in the loss calculation. Default: "False". * **eps** (*float**, **optional*) -- value added to var, for stability. Default: 1e-6. * **reduction** (*str**, **optional*) -- specifies the reduction to apply to the output: "'none'" | "'mean'" | "'sum'".
https://pytorch.org/docs/stable/generated/torch.nn.functional.gaussian_nll_loss.html
pytorch docs
"'none'": no reduction will be applied, "'mean'": the output is the average of all batch member losses, "'sum'": the output is the sum of all batch member losses. Default: "'mean'". Return type: Tensor
https://pytorch.org/docs/stable/generated/torch.nn.functional.gaussian_nll_loss.html
pytorch docs
torch._foreach_round_ torch._foreach_round_(self: List[Tensor]) -> None Apply "torch.round()" to each Tensor of the input list.
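An illustrative call (note the trailing underscore: the operation is in-place and returns None; like "torch.round()", halves round to even):
>>> tensors = [torch.tensor([0.4, 1.6]), torch.tensor([-2.5])]
>>> torch._foreach_round_(tensors)
>>> tensors
[tensor([0., 2.]), tensor([-2.])]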
https://pytorch.org/docs/stable/generated/torch._foreach_round_.html
pytorch docs
torch.frac torch.frac(input, *, out=None) -> Tensor Computes the fractional portion of each element in "input". \text{out}_{i} = \text{input}_{i} - \left\lfloor |\text{input}_{i}| \right\rfloor * \operatorname{sgn}(\text{input}_{i}) Example: >>> torch.frac(torch.tensor([1, 2.5, -3.2])) tensor([ 0.0000, 0.5000, -0.2000])
https://pytorch.org/docs/stable/generated/torch.frac.html
pytorch docs
adaptive_avg_pool3d class torch.ao.nn.quantized.functional.adaptive_avg_pool3d(input, output_size) Applies a 3D adaptive average pooling over a quantized input signal composed of several quantized input planes. Note: The input quantization parameters propagate to the output. See "AdaptiveAvgPool3d" for details and output shape. Parameters: output_size (None) -- the target output size (single integer or double-integer tuple) Return type: Tensor
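A minimal sketch (illustrative; assumes a CPU quantized backend is available):
>>> import torch
>>> from torch.ao.nn.quantized import functional as qF
>>> x = torch.randn(1, 4, 4, 8, 8)
>>> qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
>>> qF.adaptive_avg_pool3d(qx, output_size=(2, 4, 4)).shape
torch.Size([1, 4, 2, 4, 4])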
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.adaptive_avg_pool3d.html
pytorch docs
torch.smm torch.smm(input, mat) -> Tensor Performs a matrix multiplication of the sparse matrix "input" with the dense matrix "mat". Parameters: * input (Tensor) -- a sparse matrix to be matrix multiplied * **mat** (*Tensor*) -- a dense matrix to be matrix multiplied
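A small sketch (illustrative; the result is sparse, so it is converted with to_dense() for display):
>>> indices = torch.tensor([[0, 1], [1, 0]])
>>> values = torch.tensor([3.0, 4.0])
>>> sparse = torch.sparse_coo_tensor(indices, values, (2, 2))
>>> dense = torch.eye(2)
>>> torch.smm(sparse, dense).to_dense()
tensor([[0., 3.],
        [4., 0.]])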
https://pytorch.org/docs/stable/generated/torch.smm.html
pytorch docs
torch.Tensor.erf Tensor.erf() -> Tensor See "torch.erf()"
https://pytorch.org/docs/stable/generated/torch.Tensor.erf.html
pytorch docs
torch.fft.rfftfreq torch.fft.rfftfreq(n, d=1.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor Computes the sample frequencies for "rfft()" with a signal of size "n". Note: "rfft()" returns Hermitian one-sided output, so only the positive frequency terms are returned. For a real FFT of length "n" and with inputs spaced in length unit "d", the frequencies are: f = torch.arange((n + 1) // 2) / (d * n) Note: For even lengths, the Nyquist frequency at "f[n/2]" can be thought of as either negative or positive. Unlike "fftfreq()", "rfftfreq()" always returns it as positive. Parameters: * n (int) -- the real FFT length * **d** (*float**, **optional*) -- The sampling length scale. The spacing between individual samples of the FFT input. The default assumes unit spacing, dividing that result by the
https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html
pytorch docs
actual spacing gives the result in physical frequency units. Keyword Arguments: * out (Tensor, optional) -- the output tensor. * **dtype** ("torch.dtype", optional) -- the desired data type of returned tensor. Default: if "None", uses a global default (see "torch.set_default_tensor_type()"). * **layout** ("torch.layout", optional) -- the desired layout of returned Tensor. Default: "torch.strided". * **device** ("torch.device", optional) -- the desired device of returned tensor. Default: if "None", uses the current device for the default tensor type (see "torch.set_default_tensor_type()"). "device" will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types. * **requires_grad** (*bool**, **optional*) -- If autograd should record operations on the returned tensor. Default: "False".
https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html
pytorch docs
-[ Example ]- >>> torch.fft.rfftfreq(5) tensor([0.0000, 0.2000, 0.4000]) >>> torch.fft.rfftfreq(4) tensor([0.0000, 0.2500, 0.5000]) Compared to the output from "fftfreq()", we see that the Nyquist frequency at "f[2]" has changed sign: >>> torch.fft.fftfreq(4) tensor([ 0.0000, 0.2500, -0.5000, -0.2500])
https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html
pytorch docs
GRU class torch.nn.GRU(*args, **kwargs) Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence. For each element in the input sequence, each layer computes the following function: \begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array} where h_t is the hidden state at time t, x_t is the input at time t, h_{(t-1)} is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and r_t, z_t, n_t are the reset, update, and new gates, respectively. \sigma is the sigmoid function, and * is the Hadamard product. In a multilayer GRU, the input x^{(l)}_t of the l -th layer (l >= 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied
https://pytorch.org/docs/stable/generated/torch.nn.GRU.html
pytorch docs
by dropout \delta^{(l-1)}_t where each \delta^{(l-1)}_t is a Bernoulli random variable which is 0 with probability "dropout". Parameters: * input_size -- The number of expected features in the input x * **hidden_size** -- The number of features in the hidden state *h* * **num_layers** -- Number of recurrent layers. E.g., setting "num_layers=2" would mean stacking two GRUs together to form a *stacked GRU*, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1 * **bias** -- If "False", then the layer does not use bias weights *b_ih* and *b_hh*. Default: "True" * **batch_first** -- If "True", then the input and output tensors are provided as *(batch, seq, feature)* instead of *(seq, batch, feature)*. Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details. Default: "False"
https://pytorch.org/docs/stable/generated/torch.nn.GRU.html
pytorch docs
* **dropout** -- If non-zero, introduces a *Dropout* layer on the outputs of each GRU layer except the last layer, with dropout probability equal to "dropout". Default: 0 * **bidirectional** -- If "True", becomes a bidirectional GRU. Default: "False" Inputs: input, h_0 * input: tensor of shape (L, H_{in}) for unbatched input, (L, N, H_{in}) when "batch_first=False" or (N, L, H_{in}) when "batch_first=True" containing the features of the input sequence. The input can also be a packed variable length sequence. See "torch.nn.utils.rnn.pack_padded_sequence()" or "torch.nn.utils.rnn.pack_sequence()" for details. * **h_0**: tensor of shape (D * \text{num\_layers}, H_{out}) or (D * \text{num\_layers}, N, H_{out}) containing the initial hidden state for the input sequence. Defaults to zeros if not provided. where:
https://pytorch.org/docs/stable/generated/torch.nn.GRU.html
pytorch docs
\begin{aligned} N ={} & \text{batch size} \\ L ={} & \text{sequence length} \\ D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\ H_{in} ={} & \text{input\_size} \\ H_{out} ={} & \text{hidden\_size} \end{aligned} Outputs: output, h_n * output: tensor of shape (L, D * H_{out}) for unbatched input, (L, N, D * H_{out}) when "batch_first=False" or (N, L, D * H_{out}) when "batch_first=True" containing the output features (h_t) from the last layer of the GRU, for each t. If a "torch.nn.utils.rnn.PackedSequence" has been given as the input, the output will also be a packed sequence. * **h_n**: tensor of shape (D * \text{num\_layers}, H_{out}) or (D * \text{num\_layers}, N, H_{out}) containing the final hidden state for the input sequence. Variables: * weight_ih_l[k] -- the learnable input-hidden weights of
https://pytorch.org/docs/stable/generated/torch.nn.GRU.html
pytorch docs
the \text{k}^{th} layer (W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size) * **weight_hh_l[k]** -- the learnable hidden-hidden weights of the \text{k}^{th} layer (W_hr|W_hz|W_hn), of shape *(3*hidden_size, hidden_size)* * **bias_ih_l[k]** -- the learnable input-hidden bias of the \text{k}^{th} layer (b_ir|b_iz|b_in), of shape *(3*hidden_size)* * **bias_hh_l[k]** -- the learnable hidden-hidden bias of the \text{k}^{th} layer (b_hr|b_hz|b_hn), of shape *(3*hidden_size)* Note: All the weights and biases are initialized from \mathcal{U}(-\sqrt{k}, \sqrt{k}) where k = \frac{1}{\text{hidden\_size}} Note: For bidirectional GRUs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when "batch_first=False": "output.view(seq_len, batch, num_directions,
https://pytorch.org/docs/stable/generated/torch.nn.GRU.html
pytorch docs
hidden_size)". Note: "batch_first" argument is ignored for unbatched inputs. Note: If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU 3) input data has dtype "torch.float16" 4) V100 GPU is used, 5) input data is not in "PackedSequence" format persistent algorithm can be selected to improve performance. Examples: >>> rnn = nn.GRU(10, 20, 2) >>> input = torch.randn(5, 3, 10) >>> h0 = torch.randn(2, 3, 20) >>> output, hn = rnn(input, h0)
https://pytorch.org/docs/stable/generated/torch.nn.GRU.html
pytorch docs
SequentialLR class torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=-1, verbose=False) Receives a list of schedulers that are expected to be called sequentially during the optimization process, and milestone points that give the exact intervals indicating which scheduler is supposed to be called at a given epoch. Parameters: * optimizer (Optimizer) -- Wrapped optimizer. * **schedulers** (*list*) -- List of chained schedulers. * **milestones** (*list*) -- List of integers that reflects milestone points. * **last_epoch** (*int*) -- The index of last epoch. Default: -1. * **verbose** (*bool*) -- Does nothing. -[ Example ]- >>> # Assuming optimizer uses lr = 1. for all groups >>> # lr = 0.1 if epoch == 0 >>> # lr = 0.1 if epoch == 1 >>> # lr = 0.9 if epoch == 2 >>> # lr = 0.81 if epoch == 3 >>> # lr = 0.729 if epoch == 4
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html
pytorch docs
>>> scheduler1 = ConstantLR(self.opt, factor=0.1, total_iters=2) >>> scheduler2 = ExponentialLR(self.opt, gamma=0.9) >>> scheduler = SequentialLR(self.opt, schedulers=[scheduler1, scheduler2], milestones=[2]) >>> for epoch in range(100): >>> train(...) >>> validate(...) >>> scheduler.step() get_last_lr() Return last computed learning rate by current scheduler. load_state_dict(state_dict) Loads the schedulers state. Parameters: **state_dict** (*dict*) -- scheduler state. Should be an object returned from a call to "state_dict()". print_lr(is_verbose, group, lr, epoch=None) Display the current learning rate. state_dict() Returns the state of the scheduler as a "dict". It contains an entry for every variable in self.__dict__ which is not the optimizer. The wrapped scheduler states will also be saved.
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html
pytorch docs
torch.Tensor.argwhere Tensor.argwhere() -> Tensor See "torch.argwhere()"
https://pytorch.org/docs/stable/generated/torch.Tensor.argwhere.html
pytorch docs
torch.Tensor.addcdiv Tensor.addcdiv(tensor1, tensor2, *, value=1) -> Tensor See "torch.addcdiv()"
https://pytorch.org/docs/stable/generated/torch.Tensor.addcdiv.html
pytorch docs
torch.floor_divide torch.floor_divide(input, other, *, out=None) -> Tensor Note: Before PyTorch 1.13 "torch.floor_divide()" incorrectly performed truncation division. To restore the previous behavior use "torch.div()" with "rounding_mode='trunc'". Computes "input" divided by "other", elementwise, and floors the result. \text{{out}}_i = \text{floor} \left( \frac{{\text{{input}}_i}}{{\text{{other}}_i}} \right) Supports broadcasting to a common shape, type promotion, and integer and float inputs. Parameters: * input (Tensor or Number) -- the dividend * **other** (*Tensor** or **Number*) -- the divisor Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> a = torch.tensor([4.0, 3.0]) >>> b = torch.tensor([2.0, 2.0]) >>> torch.floor_divide(a, b) tensor([2.0, 1.0]) >>> torch.floor_divide(a, 1.4) tensor([2.0, 2.0])
https://pytorch.org/docs/stable/generated/torch.floor_divide.html
pytorch docs
torch.get_float32_matmul_precision torch.get_float32_matmul_precision() Returns the current value of float32 matrix multiplication precision. Refer to "torch.set_float32_matmul_precision()" documentation for more details. Return type: str
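A short illustration (the default in a fresh process is "highest"):
>>> torch.get_float32_matmul_precision()
'highest'
>>> torch.set_float32_matmul_precision('high')
>>> torch.get_float32_matmul_precision()
'high'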
https://pytorch.org/docs/stable/generated/torch.get_float32_matmul_precision.html
pytorch docs
prepare_qat_fx class torch.quantization.quantize_fx.prepare_qat_fx(model, qconfig_mapping, example_inputs, prepare_custom_config=None, backend_config=None) Prepare a model for quantization aware training Parameters: * model -- torch.nn.Module model * **qconfig_mapping** -- see "prepare_fx()" * **example_inputs** -- see "prepare_fx()" * **prepare_custom_config** -- see "prepare_fx()" * **backend_config** -- see "prepare_fx()" Returns: A GraphModule with fake quant modules (configured by qconfig_mapping and backend_config), ready for quantization aware training Return type: ObservedGraphModule Example: import torch from torch.ao.quantization import get_default_qat_qconfig_mapping from torch.ao.quantization.quantize_fx import prepare_qat_fx class Submodule(torch.nn.Module): def __init__(self): super().__init__()
https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html
pytorch docs
self.linear = torch.nn.Linear(5, 5) def forward(self, x): x = self.linear(x) return x class M(torch.nn.Module): def __init__(self): super().__init__() self.linear = torch.nn.Linear(5, 5) self.sub = Submodule() def forward(self, x): x = self.linear(x) x = self.sub(x) + x return x # initialize a floating point model float_model = M().train() # (optional, but preferred) load the weights from pretrained model # float_model.load_weights(...) # define the training loop for quantization aware training def train_loop(model, train_data): model.train() for image, target in train_data: ... # qconfig is the configuration for how we insert observers for a particular # operator # qconfig = get_default_qconfig("fbgemm")
https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html
pytorch docs
# Example of customizing qconfig: # qconfig = torch.ao.quantization.QConfig( # activation=FakeQuantize.with_args(observer=MinMaxObserver.with_args(dtype=torch.qint8)), # weight=FakeQuantize.with_args(observer=MinMaxObserver.with_args(dtype=torch.qint8))) # `activation` and `weight` are constructors of observer module # qconfig_mapping is a collection of quantization configurations, user can # set the qconfig for each operator (torch op calls, functional calls, module calls) # in the model through qconfig_mapping # the following call will get the qconfig_mapping that works best for models # that target "fbgemm" backend qconfig_mapping = get_default_qat_qconfig_mapping("fbgemm") # We can customize qconfig_mapping in different ways, please take a look at # the docstring for :func:`~torch.ao.quantization.prepare_fx` for different ways # to configure this
https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html
pytorch docs
# example_inputs is a tuple of inputs, that is used to infer the type of the # outputs in the model # currently it's not used, but please make sure model(*example_inputs) runs example_inputs = (torch.randn(1, 3, 224, 224),) # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack # e.g. backend_config = get_default_backend_config("fbgemm") # `prepare_qat_fx` inserts observers in the model based on qconfig_mapping and # backend_config, if the configuration for an operator in qconfig_mapping # is supported in the backend_config (meaning it's supported by the target # hardware), we'll insert fake_quantize modules according to the qconfig_mapping # otherwise the configuration in qconfig_mapping will be ignored # see :func:`~torch.ao.quantization.prepare_fx` for a detailed explanation of # how qconfig_mapping interacts with backend_config
https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html
pytorch docs
prepared_model = prepare_qat_fx(float_model, qconfig_mapping, example_inputs) # Run training train_loop(prepared_model, train_data)
https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html
pytorch docs
torch.Tensor.std Tensor.std(dim=None, *, correction=1, keepdim=False) -> Tensor See "torch.std()"
https://pytorch.org/docs/stable/generated/torch.Tensor.std.html
pytorch docs
BNReLU3d class torch.ao.nn.intrinsic.quantized.BNReLU3d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None) A BNReLU3d module is a fused module of BatchNorm3d and ReLU. We adopt the same interface as "torch.ao.nn.quantized.BatchNorm3d". Variables: Same as "torch.ao.nn.quantized.BatchNorm3d".
https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.BNReLU3d.html
pytorch docs
torch.Tensor.sign_ Tensor.sign_() -> Tensor In-place version of "sign()"
https://pytorch.org/docs/stable/generated/torch.Tensor.sign_.html
pytorch docs
torch.Tensor.floor Tensor.floor() -> Tensor See "torch.floor()"
https://pytorch.org/docs/stable/generated/torch.Tensor.floor.html
pytorch docs
torch.normal torch.normal(mean, std, *, generator=None, out=None) -> Tensor Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given. The "mean" is a tensor with the mean of each output element's normal distribution The "std" is a tensor with the standard deviation of each output element's normal distribution The shapes of "mean" and "std" don't need to match, but the total number of elements in each tensor need to be the same. Note: When the shapes do not match, the shape of "mean" is used as the shape for the returned output tensor Note: When "std" is a CUDA tensor, this function synchronizes its device with the CPU. Parameters: * mean (Tensor) -- the tensor of per-element means * **std** (*Tensor*) -- the tensor of per-element standard deviations Keyword Arguments: * generator ("torch.Generator", optional) -- a pseudorandom
https://pytorch.org/docs/stable/generated/torch.normal.html
pytorch docs
number generator for sampling * **out** (*Tensor**, **optional*) -- the output tensor. Example: >>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1)) tensor([ 1.0425, 3.5672, 2.7969, 4.2925, 4.7229, 6.2134, 8.0505, 8.1408, 9.0563, 10.0566]) torch.normal(mean=0.0, std, *, out=None) -> Tensor Similar to the function above, but the means are shared among all drawn elements. Parameters: * mean (float, optional) -- the mean for all distributions * **std** (*Tensor*) -- the tensor of per-element standard deviations Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> torch.normal(mean=0.5, std=torch.arange(1., 6.)) tensor([-1.2793, -1.0732, -2.0687, 5.1177, -1.2303]) torch.normal(mean, std=1.0, *, out=None) -> Tensor Similar to the function above, but the standard deviations are
https://pytorch.org/docs/stable/generated/torch.normal.html
pytorch docs
shared among all drawn elements. Parameters: * mean (Tensor) -- the tensor of per-element means * **std** (*float**, **optional*) -- the standard deviation for all distributions Keyword Arguments: out (Tensor, optional) -- the output tensor Example: >>> torch.normal(mean=torch.arange(1., 6.)) tensor([ 1.1552, 2.6148, 2.6535, 5.8318, 4.2361]) torch.normal(mean, std, size, *, out=None) -> Tensor Similar to the function above, but the means and standard deviations are shared among all drawn elements. The resulting tensor has size given by "size". Parameters: * mean (float) -- the mean for all distributions * **std** (*float*) -- the standard deviation for all distributions * **size** (*int**...*) -- a sequence of integers defining the shape of the output tensor. Keyword Arguments: out (Tensor, optional) -- the output tensor. Example:
https://pytorch.org/docs/stable/generated/torch.normal.html
pytorch docs
>>> torch.normal(2, 3, size=(1, 4)) tensor([[-1.3987, -1.9544, 3.6048, 0.7909]])
https://pytorch.org/docs/stable/generated/torch.normal.html
pytorch docs
RNNCell class torch.ao.nn.quantized.dynamic.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', dtype=torch.qint8) An Elman RNN cell with tanh or ReLU non-linearity. A dynamic quantized RNNCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.RNNCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.RNNCell for documentation. Examples: >>> rnn = nn.RNNCell(10, 20) >>> input = torch.randn(6, 3, 10) >>> hx = torch.randn(3, 20) >>> output = [] >>> for i in range(6): ... hx = rnn(input[i], hx) ... output.append(hx)
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.RNNCell.html
pytorch docs
torch.Tensor.cpu Tensor.cpu(memory_format=torch.preserve_format) -> Tensor Returns a copy of this object in CPU memory. If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned. Parameters: memory_format ("torch.memory_format", optional) -- the desired memory format of returned Tensor. Default: "torch.preserve_format".
https://pytorch.org/docs/stable/generated/torch.Tensor.cpu.html
pytorch docs
torch.select torch.select(input, dim, index) -> Tensor Slices the "input" tensor along the selected dimension at the given index. This function returns a view of the original tensor with the given dimension removed. Note: If "input" is a sparse tensor and returning a view of the tensor is not possible, a RuntimeError exception is raised. If this is the case, consider using the "torch.select_copy()" function. Parameters: * input (Tensor) -- the input tensor. * **dim** (*int*) -- the dimension to slice * **index** (*int*) -- the index to select with Note: "select()" is equivalent to slicing. For example, "tensor.select(0, index)" is equivalent to "tensor[index]" and "tensor.select(2, index)" is equivalent to "tensor[:,:,index]".
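An illustrative comparison with plain indexing:
>>> x = torch.arange(6).reshape(2, 3)
>>> torch.select(x, 0, 1)  # same as x[1]
tensor([3, 4, 5])
>>> torch.select(x, 1, 2)  # same as x[:, 2]
tensor([2, 5])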
https://pytorch.org/docs/stable/generated/torch.select.html
pytorch docs
torch.cuda.current_blas_handle torch.cuda.current_blas_handle() Returns a cublasHandle_t pointer to the current cuBLAS handle.
https://pytorch.org/docs/stable/generated/torch.cuda.current_blas_handle.html
pytorch docs
torch.Tensor.int Tensor.int(memory_format=torch.preserve_format) -> Tensor "self.int()" is equivalent to "self.to(torch.int32)". See "to()". Parameters: memory_format ("torch.memory_format", optional) -- the desired memory format of returned Tensor. Default: "torch.preserve_format".
https://pytorch.org/docs/stable/generated/torch.Tensor.int.html
pytorch docs
torch.Tensor.erfc Tensor.erfc() -> Tensor See "torch.erfc()"
https://pytorch.org/docs/stable/generated/torch.Tensor.erfc.html
pytorch docs
torch.Tensor.abs Tensor.abs() -> Tensor See "torch.abs()"
https://pytorch.org/docs/stable/generated/torch.Tensor.abs.html
pytorch docs
torch.Tensor.scatter Tensor.scatter(dim, index, src) -> Tensor Out-of-place version of "torch.Tensor.scatter_()"
https://pytorch.org/docs/stable/generated/torch.Tensor.scatter.html
pytorch docs
torch.nn.functional.soft_margin_loss torch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor See "SoftMarginLoss" for details. Return type: Tensor
https://pytorch.org/docs/stable/generated/torch.nn.functional.soft_margin_loss.html
pytorch docs
torch.from_dlpack torch.from_dlpack(ext_tensor) -> Tensor Converts a tensor from an external library into a "torch.Tensor". The returned PyTorch tensor will share the memory with the input tensor (which may have come from another library). Note that in-place operations will therefore also affect the data of the input tensor. This may lead to unexpected issues (e.g., other libraries may have read-only flags or immutable data structures), so the user should only do this if they know for sure that this is fine. Parameters: ext_tensor (object with "__dlpack__" attribute, or a DLPack capsule) -- The tensor or DLPack capsule to convert. If "ext_tensor" is a tensor (or ndarray) object, it must support the "__dlpack__" protocol (i.e., have a "ext_tensor.__dlpack__" method). Otherwise "ext_tensor" may be a DLPack capsule, which is an opaque "PyCapsule" instance, typically produced by a "to_dlpack" function or method.
https://pytorch.org/docs/stable/generated/torch.from_dlpack.html
pytorch docs
"to_dlpack" function or method. Return type: Tensor Examples: >>> import torch.utils.dlpack >>> t = torch.arange(4) # Convert a tensor directly (supported in PyTorch >= 1.10) >>> t2 = torch.from_dlpack(t) >>> t2[:2] = -1 # show that memory is shared >>> t2 tensor([-1, -1, 2, 3]) >>> t tensor([-1, -1, 2, 3]) # The old-style DLPack usage, with an intermediate capsule object >>> capsule = torch.utils.dlpack.to_dlpack(t) >>> capsule <capsule object "dltensor" at ...> >>> t3 = torch.from_dlpack(capsule) >>> t3 tensor([-1, -1, 2, 3]) >>> t3[0] = -9 # now we're sharing memory between 3 tensors >>> t3 tensor([-9, -1, 2, 3]) >>> t2 tensor([-9, -1, 2, 3]) >>> t tensor([-9, -1, 2, 3])
https://pytorch.org/docs/stable/generated/torch.from_dlpack.html
pytorch docs
torch.Tensor.is_set_to Tensor.is_set_to(tensor) -> bool Returns True if both tensors are pointing to the exact same memory (same storage, offset, size and stride).
https://pytorch.org/docs/stable/generated/torch.Tensor.is_set_to.html
pytorch docs
DTypeWithConstraints class torch.ao.quantization.backend_config.DTypeWithConstraints(dtype=None, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None) Config for specifying additional constraints for a given dtype, such as quantization value ranges, scale value ranges, and fixed quantization params, to be used in "DTypeConfig". The constraints currently supported are: quant_min_lower_bound and quant_max_upper_bound: Lower and upper bounds for the minimum and maximum quantized values respectively. If the QConfig’s quant_min and quant_max fall outside this range, then the QConfig will be ignored. scale_min_lower_bound and scale_max_upper_bound: Lower and upper bounds for the minimum and maximum scale values respectively. If the QConfig’s minimum scale value (currently
https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeWithConstraints.html
pytorch docs
exposed as eps) falls below the lower bound, then the QConfig will be ignored. Note that the upper bound is currently not enforced. scale_exact_match and zero_point_exact_match: Exact match requirements for scale and zero point, to be used for operators with fixed quantization parameters such as sigmoid and tanh. If the observer specified in the QConfig is neither FixedQParamsObserver nor FixedQParamsFakeQuantize, or if the quantization parameters don't match, then the QConfig will be ignored.
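A construction sketch (the field values here are illustrative, not backend recommendations):
>>> import torch
>>> from torch.ao.quantization.backend_config import DTypeWithConstraints
>>> act_constraints = DTypeWithConstraints(
...     dtype=torch.quint8,
...     quant_min_lower_bound=0,
...     quant_max_upper_bound=255,
...     scale_min_lower_bound=2 ** -12,
... )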
https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeWithConstraints.html
pytorch docs
Hardtanh class torch.nn.Hardtanh(min_val=-1.0, max_val=1.0, inplace=False, min_value=None, max_value=None) Applies the HardTanh function element-wise. HardTanh is defined as: \text{HardTanh}(x) = \begin{cases} \text{max\_val} & \text{ if } x > \text{ max\_val } \\ \text{min\_val} & \text{ if } x < \text{ min\_val } \\ x & \text{ otherwise } \\ \end{cases} Parameters: * min_val (float) -- minimum value of the linear region range. Default: -1 * **max_val** (*float*) -- maximum value of the linear region range. Default: 1 * **inplace** (*bool*) -- can optionally do the operation in-place. Default: "False" Keyword arguments "min_value" and "max_value" have been deprecated in favor of "min_val" and "max_val". Shape: * Input: (*), where * means any number of dimensions. * Output: (*), same shape as the input.
https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html
pytorch docs
Examples: >>> m = nn.Hardtanh(-2, 2) >>> input = torch.randn(2) >>> output = m(input)
https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html
pytorch docs
ConvBn2d class torch.ao.nn.intrinsic.qat.ConvBn2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None) A ConvBn2d module is a module fused from Conv2d and BatchNorm2d, attached with FakeQuantize modules for weight, used in quantization aware training. We combined the interface of "torch.nn.Conv2d" and "torch.nn.BatchNorm2d". Similar to "torch.nn.Conv2d", with FakeQuantize modules initialized to default. Variables: * freeze_bn -- * **weight_fake_quant** -- fake quant module for weight
https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn2d.html
pytorch docs
torch.where torch.where(condition, input, other, *, out=None) -> Tensor Return a tensor of elements selected from either "input" or "other", depending on "condition". The operation is defined as: \text{out}_i = \begin{cases} \text{input}_i & \text{if } \text{condition}_i \\ \text{other}_i & \text{otherwise} \\ \end{cases} Note: The tensors "condition", "input", "other" must be broadcastable. Parameters: * condition (BoolTensor) -- When True (nonzero), yield input, otherwise yield other * **input** (*Tensor** or **Scalar*) -- value (if "input" is a scalar) or values selected at indices where "condition" is "True" * **other** (*Tensor** or **Scalar*) -- value (if "other" is a scalar) or values selected at indices where "condition" is "False" Keyword Arguments: out (Tensor, optional) -- the output tensor. Returns:
https://pytorch.org/docs/stable/generated/torch.where.html
pytorch docs
A tensor of shape equal to the broadcasted shape of "condition", "input", "other" Return type: Tensor Example: >>> x = torch.randn(3, 2) >>> y = torch.ones(3, 2) >>> x tensor([[-0.4620, 0.3139], [ 0.3898, -0.7197], [ 0.0478, -0.1657]]) >>> torch.where(x > 0, x, y) tensor([[ 1.0000, 0.3139], [ 0.3898, 1.0000], [ 0.0478, 1.0000]]) >>> x = torch.randn(2, 2, dtype=torch.double) >>> x tensor([[ 1.0779, 0.0383], [-0.8785, -1.1089]], dtype=torch.float64) >>> torch.where(x > 0, x, 0.) tensor([[1.0779, 0.0383], [0.0000, 0.0000]], dtype=torch.float64) torch.where(condition) -> tuple of LongTensor "torch.where(condition)" is identical to "torch.nonzero(condition, as_tuple=True)". Note: See also "torch.nonzero()".
https://pytorch.org/docs/stable/generated/torch.where.html
pytorch docs
torch.Tensor.clamp_ Tensor.clamp_(min=None, max=None) -> Tensor In-place version of "clamp()"
https://pytorch.org/docs/stable/generated/torch.Tensor.clamp_.html
pytorch docs
torch.Tensor.le Tensor.le(other) -> Tensor See "torch.le()".
https://pytorch.org/docs/stable/generated/torch.Tensor.le.html
pytorch docs
GRUCell class torch.ao.nn.quantized.dynamic.GRUCell(input_size, hidden_size, bias=True, dtype=torch.qint8) A gated recurrent unit (GRU) cell A dynamic quantized GRUCell module with floating point tensor as inputs and outputs. Weights are quantized to 8 bits. We adopt the same interface as torch.nn.GRUCell, please see https://pytorch.org/docs/stable/nn.html#torch.nn.GRUCell for documentation. Examples: >>> rnn = nn.GRUCell(10, 20) >>> input = torch.randn(6, 3, 10) >>> hx = torch.randn(3, 20) >>> output = [] >>> for i in range(6): ... hx = rnn(input[i], hx) ... output.append(hx)
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRUCell.html
pytorch docs
torch.Tensor.narrow Tensor.narrow(dimension, start, length) -> Tensor See "torch.narrow()".
https://pytorch.org/docs/stable/generated/torch.Tensor.narrow.html
pytorch docs
PerChannelMinMaxObserver class torch.quantization.observer.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07) Observer module for computing the quantization parameters based on the running per channel min and max values. This observer uses the tensor min/max statistics to compute the per channel quantization parameters. The module records the running minimum and maximum of incoming tensors, and uses this statistic to compute the quantization parameters. Parameters: * ch_axis -- Channel axis * **dtype** -- dtype argument to the *quantize* node needed to implement the reference model spec. * **qscheme** -- Quantization scheme to be used * **reduce_range** -- Reduces the range of the quantized data type by 1 bit
https://pytorch.org/docs/stable/generated/torch.quantization.observer.PerChannelMinMaxObserver.html
pytorch docs
* **quant_min** -- Minimum quantization value. If unspecified, it will follow the 8-bit setup. * **quant_max** -- Maximum quantization value. If unspecified, it will follow the 8-bit setup. * **eps** (*Tensor*) -- Epsilon value for float32, Defaults to *torch.finfo(torch.float32).eps*. The quantization parameters are computed the same way as in "MinMaxObserver", with the difference that the running min/max values are stored per channel. Scales and zero points are thus computed per channel as well. Note: If the running minimum equals the running maximum, the scales and zero_points are set to 1.0 and 0. reset_min_max_vals() Resets the min/max values.
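A brief usage sketch (illustrative):
>>> import torch
>>> from torch.ao.quantization.observer import PerChannelMinMaxObserver
>>> obs = PerChannelMinMaxObserver(ch_axis=0)
>>> _ = obs(torch.randn(3, 4))           # record per-channel min/max; forward is identity
>>> scales, zero_points = obs.calculate_qparams()
>>> scales.shape                         # one scale per channel along axis 0
torch.Size([3])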
https://pytorch.org/docs/stable/generated/torch.quantization.observer.PerChannelMinMaxObserver.html
pytorch docs
torch.blackman_window torch.blackman_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor Blackman window function. w[n] = 0.42 - 0.5 \cos \left( \frac{2 \pi n}{N - 1} \right) + 0.08 \cos \left( \frac{4 \pi n}{N - 1} \right) where N is the full window size. The input "window_length" is a positive integer controlling the returned window size. The "periodic" flag determines whether the returned window trims off the last duplicate value from the symmetric window and is ready to be used as a periodic window with functions like "torch.stft()". Therefore, if "periodic" is true, the N in the above formula is in fact \text{window_length} + 1. Also, we always have "torch.blackman_window(L, periodic=True)" equal to "torch.blackman_window(L + 1, periodic=False)[:-1]". Note: If "window_length" = 1, the returned window contains a single value 1. Parameters:
https://pytorch.org/docs/stable/generated/torch.blackman_window.html
pytorch docs
* window_length (int) -- the size of returned window * **periodic** (*bool**, **optional*) -- If True, returns a window to be used as periodic function. If False, return a symmetric window. Keyword Arguments: * dtype ("torch.dtype", optional) -- the desired data type of returned tensor. Default: if "None", uses a global default (see "torch.set_default_tensor_type()"). Only floating point types are supported. * **layout** ("torch.layout", optional) -- the desired layout of returned window tensor. Only "torch.strided" (dense layout) is supported. * **device** ("torch.device", optional) -- the desired device of returned tensor. Default: if "None", uses the current device for the default tensor type (see "torch.set_default_tensor_type()"). "device" will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
https://pytorch.org/docs/stable/generated/torch.blackman_window.html
pytorch docs
* **requires_grad** (*bool**, **optional*) -- If autograd should record operations on the returned tensor. Default: "False". Returns: A 1-D tensor of size (\text{window_length},) containing the window Return type: Tensor
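A quick check of the periodic/symmetric identity stated above (illustrative):
>>> w_periodic = torch.blackman_window(8, periodic=True)
>>> w_symmetric = torch.blackman_window(9, periodic=False)
>>> torch.allclose(w_periodic, w_symmetric[:-1])
True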
https://pytorch.org/docs/stable/generated/torch.blackman_window.html
pytorch docs
torch.Tensor.svd Tensor.svd(some=True, compute_uv=True) See "torch.svd()"
https://pytorch.org/docs/stable/generated/torch.Tensor.svd.html
pytorch docs
torch.cuda.stream torch.cuda.stream(stream) Wrapper around the Context-manager StreamContext that selects a given stream. Parameters: stream (Stream) -- selected stream. This manager is a no-op if it's "None". Return type: StreamContext Note: In eager mode stream is of type Stream class while in JIT it is an object of the custom class "torch.classes.cuda.Stream".
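A minimal sketch (requires a CUDA device; the tensor shapes are illustrative):
>>> s = torch.cuda.Stream()
>>> a = torch.randn(100, device='cuda')
>>> with torch.cuda.stream(s):
...     b = a * 2  # kernel is queued on stream s
>>> torch.cuda.current_stream().wait_stream(s)  # order later work after s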
https://pytorch.org/docs/stable/generated/torch.cuda.stream.html
pytorch docs
torch.Tensor.log_ Tensor.log_() -> Tensor In-place version of "log()"
https://pytorch.org/docs/stable/generated/torch.Tensor.log_.html
pytorch docs
device_of class torch.cuda.device_of(obj) Context-manager that changes the current device to that of given object. You can use both tensors and storages as arguments. If a given object is not allocated on a GPU, this is a no-op. Parameters: obj (Tensor or Storage) -- object allocated on the selected device.
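A minimal sketch (requires a CUDA device):
>>> x = torch.randn(2, device='cuda:0')
>>> with torch.cuda.device_of(x):
...     y = torch.zeros(2, device='cuda')  # allocated on x's device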
https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html
pytorch docs
torch.histogram torch.histogram(input, bins, *, range=None, weight=None, density=False, out=None) Computes a histogram of the values in a tensor. "bins" can be an integer or a 1D tensor. If "bins" is an int, it specifies the number of equal-width bins. By default, the lower and upper range of the bins is determined by the minimum and maximum elements of the input tensor. The "range" argument can be provided to specify a range for the bins. If "bins" is a 1D tensor, it specifies the sequence of bin edges including the rightmost edge. It should contain at least 2 elements and its elements should be increasing. Parameters: * input (Tensor) -- the input tensor. * **bins** -- int or 1D Tensor. If int, defines the number of equal-width bins. If tensor, defines the sequence of bin edges including the rightmost edge. Keyword Arguments: * range (tuple of python:float) -- Defines the range of the bins.
https://pytorch.org/docs/stable/generated/torch.histogram.html
pytorch docs
* **weight** (*Tensor*) -- If provided, weight should have the same shape as input. Each value in input contributes its associated weight towards its bin's result. * **density** (*bool*) -- If False, the result will contain the count (or total weight) in each bin. If True, the result is the value of the probability density function over the bins, normalized such that the integral over the range of the bins is 1. * **out** (*Tensor**, **optional*) -- the output tensor. (tuple, optional): The result tuple of two output tensors (hist, bin_edges). Returns: hist (Tensor): 1D Tensor containing the values of the histogram. bin_edges (Tensor): 1D Tensor containing the edges of the histogram bins. Return type: (hist, bin_edges) Example: >>> torch.histogram(torch.tensor([1., 2, 1]), bins=4, range=(0., 3.), weight=torch.tensor([1., 2., 4.]))
https://pytorch.org/docs/stable/generated/torch.histogram.html
pytorch docs
(tensor([ 0., 5., 2., 0.]), tensor([0., 0.75, 1.5, 2.25, 3.])) >>> torch.histogram(torch.tensor([1., 2, 1]), bins=4, range=(0., 3.), weight=torch.tensor([1., 2., 4.]), density=True) (tensor([ 0., 0.9524, 0.3810, 0.]), tensor([0., 0.75, 1.5, 2.25, 3.]))
https://pytorch.org/docs/stable/generated/torch.histogram.html
pytorch docs
torch.Tensor.arctan Tensor.arctan() -> Tensor See "torch.arctan()"
https://pytorch.org/docs/stable/generated/torch.Tensor.arctan.html
pytorch docs
torch.polygamma torch.polygamma(n, input, *, out=None) -> Tensor Alias for "torch.special.polygamma()".
https://pytorch.org/docs/stable/generated/torch.polygamma.html
pytorch docs
torch.cuda.comm.broadcast_coalesced torch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760) Broadcasts a sequence of tensors to the specified GPUs. Small tensors are first coalesced into a buffer to reduce the number of synchronizations. Parameters: * tensors (sequence) -- tensors to broadcast. Must be on the same device, either CPU or GPU. * **devices** (*Iterable**[**torch.device**, **str** or **int**]*) -- an iterable of GPU devices, among which to broadcast. * **buffer_size** (*int*) -- maximum size of the buffer used for coalescing Returns: A tuple containing copies of "tensors", placed on "devices".
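A minimal sketch (requires at least two CUDA devices; guarded accordingly):
>>> if torch.cuda.device_count() >= 2:
...     tensors = [torch.randn(5, device='cuda:0'), torch.ones(3, device='cuda:0')]
...     copies = torch.cuda.comm.broadcast_coalesced(tensors, devices=[0, 1])
...     # copies[1] holds the replicas placed on cuda:1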
https://pytorch.org/docs/stable/generated/torch.cuda.comm.broadcast_coalesced.html
pytorch docs
torch._foreach_abs torch._foreach_abs(self: List[Tensor]) -> List[Tensor] Apply "torch.abs()" to each Tensor of the input list.
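An illustrative call (out-of-place; the input list is left unchanged):
>>> tensors = [torch.tensor([-1.0, 2.0]), torch.tensor([-3.0])]
>>> torch._foreach_abs(tensors)
[tensor([1., 2.]), tensor([3.])]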
https://pytorch.org/docs/stable/generated/torch._foreach_abs.html
pytorch docs
torch.neg torch.neg(input, *, out=None) -> Tensor Returns a new tensor with the negative of the elements of "input". \text{out} = -1 \times \text{input} Parameters: input (Tensor) -- the input tensor. Keyword Arguments: out (Tensor, optional) -- the output tensor. Example: >>> a = torch.randn(5) >>> a tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940]) >>> torch.neg(a) tensor([-0.0090, 0.2262, 0.0682, 0.2866, -0.3940])
https://pytorch.org/docs/stable/generated/torch.neg.html
pytorch docs
torch.Tensor.floor_ Tensor.floor_() -> Tensor In-place version of "floor()"
https://pytorch.org/docs/stable/generated/torch.Tensor.floor_.html
pytorch docs
torch.Tensor.heaviside Tensor.heaviside(values) -> Tensor See "torch.heaviside()"
https://pytorch.org/docs/stable/generated/torch.Tensor.heaviside.html
pytorch docs
torch.nn.functional.max_unpool2d torch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None) Computes a partial inverse of "MaxPool2d". See "MaxUnpool2d" for details. Return type: Tensor
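A round-trip sketch pairing it with "max_pool2d" (illustrative):
>>> import torch
>>> import torch.nn.functional as F
>>> x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)
>>> pooled, indices = F.max_pool2d(x, kernel_size=2, return_indices=True)
>>> F.max_unpool2d(pooled, indices, kernel_size=2).shape
torch.Size([1, 1, 4, 4])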
https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool2d.html
pytorch docs
torch.Tensor.scatter_add_ Tensor.scatter_add_(dim, index, src) -> Tensor Adds all values from the tensor "src" into "self" at the indices specified in the "index" tensor in a similar fashion as "scatter_()". For each value in "src", it is added to an index in "self" which is specified by its index in "src" for "dimension != dim" and by the corresponding value in "index" for "dimension = dim". For a 3-D tensor, "self" is updated as: self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0 self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1 self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2 "self", "index" and "src" should have same number of dimensions. It is also required that "index.size(d) <= src.size(d)" for all dimensions "d", and that "index.size(d) <= self.size(d)" for all dimensions "d != dim". Note that "index" and "src" do not broadcast. Note:
https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html
pytorch docs
broadcast. Note: This operation may behave nondeterministically when given tensors on a CUDA device. See Reproducibility for more information. Note: The backward pass is implemented only for "src.shape == index.shape". Parameters: * dim (int) -- the axis along which to index * **index** (*LongTensor*) -- the indices of elements to scatter and add, can be either empty or of the same dimensionality as "src". When empty, the operation returns "self" unchanged. * **src** (*Tensor*) -- the source elements to scatter and add Example: >>> src = torch.ones((2, 5)) >>> index = torch.tensor([[0, 1, 2, 0, 0]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src) tensor([[1., 0., 0., 1., 1.], [0., 1., 0., 0., 0.], [0., 0., 1., 0., 0.]]) >>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]]) >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)
https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html
pytorch docs
tensor([[2., 0., 0., 1., 1.], [0., 2., 0., 0., 0.], [0., 0., 2., 1., 1.]])
https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html
pytorch docs
torch.jit.optimize_for_inference torch.jit.optimize_for_inference(mod, other_methods=None) Performs a set of optimization passes to optimize a model for the purposes of inference. If the model is not already frozen, optimize_for_inference will invoke torch.jit.freeze automatically. In addition to generic optimizations that should speed up your model regardless of environment, prepare for inference will also bake in build-specific settings such as the presence of CUDNN or MKLDNN, and may in the future make transformations which speed things up on one machine but slow things down on another. Accordingly, serialization is not implemented following invocation of optimize_for_inference and is not guaranteed. This is still in prototype, and may have the potential to slow down your model. Primary use cases that have been targeted so far have been vision models on CPU and, to a lesser extent, GPU.
https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html
pytorch docs
Example (optimizing a module with Conv->Batchnorm): import torch in_channels, out_channels = 3, 32 conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, bias=True) bn = torch.nn.BatchNorm2d(out_channels, eps=.001) mod = torch.nn.Sequential(conv, bn) frozen_mod = torch.jit.optimize_for_inference(torch.jit.script(mod.eval())) assert "batch_norm" not in str(frozen_mod.graph) # if built with MKLDNN, convolution will be run with MKLDNN weights assert "MKLDNN" in frozen_mod.graph Return type: ScriptModule
https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html
pytorch docs
torch.Tensor.addcmul Tensor.addcmul(tensor1, tensor2, *, value=1) -> Tensor See "torch.addcmul()"
https://pytorch.org/docs/stable/generated/torch.Tensor.addcmul.html
pytorch docs
torch.cuda.is_current_stream_capturing torch.cuda.is_current_stream_capturing() Returns True if CUDA graph capture is underway on the current CUDA stream, False otherwise. If a CUDA context does not exist on the current device, returns False without initializing the context.
https://pytorch.org/docs/stable/generated/torch.cuda.is_current_stream_capturing.html
pytorch docs
torch.Tensor.amin Tensor.amin(dim=None, keepdim=False) -> Tensor See "torch.amin()"
https://pytorch.org/docs/stable/generated/torch.Tensor.amin.html
pytorch docs
torch.is_warn_always_enabled torch.is_warn_always_enabled() Returns True if the global warn_always flag is turned on. Refer to "torch.set_warn_always()" documentation for more details.
https://pytorch.org/docs/stable/generated/torch.is_warn_always_enabled.html
pytorch docs
torch.Tensor.repeat_interleave Tensor.repeat_interleave(repeats, dim=None, *, output_size=None) -> Tensor See "torch.repeat_interleave()".
https://pytorch.org/docs/stable/generated/torch.Tensor.repeat_interleave.html
pytorch docs
upsample class torch.ao.nn.quantized.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None) Upsamples the input to either the given "size" or the given "scale_factor" Warning: This function is deprecated in favor of "torch.nn.quantized.functional.interpolate()". This is equivalent to "nn.quantized.functional.interpolate(...)". See "torch.nn.functional.interpolate()" for implementation details. The input dimensions are interpreted in the form: mini-batch x channels x [optional depth] x [optional height] x width. Note: The input quantization parameters propagate to the output. Note: Only 2D input is supported for quantized inputs Note: Only the following modes are supported for the quantized inputs: * *bilinear* * *nearest* Parameters: * input (Tensor) -- quantized input tensor * **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**,
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html
pytorch docs
int] or Tuple[int, int, int]*) -- output spatial size. * **scale_factor** (*float** or **Tuple**[**float**]*) -- multiplier for spatial size. Has to be an integer. * **mode** (*str*) -- algorithm used for upsampling: "'nearest'" | "'bilinear'" * **align_corners** (*bool**, **optional*) -- Geometrically, we consider the pixels of the input and output as squares rather than points. If set to "True", the input and output tensors are aligned by the center points of their corner pixels, preserving the values at the corner pixels. If set to "False", the input and output tensors are aligned by the corner points of their corner pixels, and the interpolation uses edge value padding for out-of-boundary values, making this operation *independent* of input size when "scale_factor" is kept the same. This only has an effect when "mode" is "'bilinear'". Default: "False"
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html
pytorch docs
Default: "False" Warning: With "align_corners = True", the linearly interpolating modes (*bilinear*) don't proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is "align_corners = False". See "Upsample" for concrete examples on how this affects the outputs.
https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html
pytorch docs