+----------------------+----------------------+
| "Tensor.bitwise_not | Keeps input names |
| ()", "torch.bitwise | |
| not()" | |
+----------------------+----------------------+
| "Tensor.bitwise_not | None |
| ()" | |
+----------------------+----------------------+
| "Tensor.bmm()", | Contracts away dims |
| "torch.bmm()" | |
+----------------------+----------------------+
| "Tensor.bool()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.byte()" | Keeps input names |
+----------------------+----------------------+
| "torch.cat()" | Unifies names from |
| | inputs |
+----------------------+----------------------+
| "Tensor.cauchy_()" | None |
+----------------------+----------------------+
| "Tensor.ceil()", | Keeps input names |
| "torch.ceil()" | | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.ceil()" | |
+----------------------+----------------------+
| "Tensor.ceil_()" | None |
+----------------------+----------------------+
| "Tensor.char()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.chunk()", | Keeps input names |
| "torch.chunk()" | |
+----------------------+----------------------+
| "Tensor.clamp()", | Keeps input names |
| "torch.clamp()" | |
+----------------------+----------------------+
| "Tensor.clamp_()" | None |
+----------------------+----------------------+
| "Tensor.copy_()" | out function and in- |
| | place variants |
+----------------------+----------------------+
| "Tensor.cos()", | Keeps input names |
| "torch.cos()" | |
+----------------------+----------------------+
| "Tensor.cos_()" | None | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.cos_()" | None |
+----------------------+----------------------+
| "Tensor.cosh()", | Keeps input names |
| "torch.cosh()" | |
+----------------------+----------------------+
| "Tensor.cosh_()" | None |
+----------------------+----------------------+
| "Tensor.acosh()", | Keeps input names |
| "torch.acosh()" | |
+----------------------+----------------------+
| "Tensor.acosh_()" | None |
+----------------------+----------------------+
| "Tensor.cpu()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.cuda()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.cumprod()", | Keeps input names |
| "torch.cumprod()" | |
+----------------------+----------------------+
| "Tensor.cumsum()", | Keeps input names |
| "torch.cumsum()" | | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.cumsum()" | |
+----------------------+----------------------+
| "Tensor.data_ptr()" | None |
+----------------------+----------------------+
| "Tensor.deg2rad()", | Keeps input names |
| "torch.deg2rad()" | |
+----------------------+----------------------+
| "Tensor.deg2rad_()" | None |
+----------------------+----------------------+
| "Tensor.detach()", | Keeps input names |
| "torch.detach()" | |
+----------------------+----------------------+
| "Tensor.detach_()" | None |
+----------------------+----------------------+
| "Tensor.device", | None |
| "torch.device()" | |
+----------------------+----------------------+
| "Tensor.digamma()", | Keeps input names |
| "torch.digamma()" | |
+----------------------+----------------------+
| "Tensor.digamma_()" | None | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.digamma_()" | None |
+----------------------+----------------------+
| "Tensor.dim()" | None |
+----------------------+----------------------+
| "Tensor.div()", | Unifies names from |
| "torch.div()" | inputs |
+----------------------+----------------------+
| "Tensor.div_()" | Unifies names from |
| | inputs |
+----------------------+----------------------+
| "Tensor.dot()", | None |
| "torch.dot()" | |
+----------------------+----------------------+
| "Tensor.double()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.element_siz | None |
| e()" | |
+----------------------+----------------------+
| "torch.empty()" | Factory functions |
+----------------------+----------------------+
| "torch.empty_like()" | Factory functions | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.empty_like()" | Factory functions |
+----------------------+----------------------+
| "Tensor.eq()", | Unifies names from |
| "torch.eq()" | inputs |
+----------------------+----------------------+
| "Tensor.erf()", | Keeps input names |
| "torch.erf()" | |
+----------------------+----------------------+
| "Tensor.erf_()" | None |
+----------------------+----------------------+
| "Tensor.erfc()", | Keeps input names |
| "torch.erfc()" | |
+----------------------+----------------------+
| "Tensor.erfc_()" | None |
+----------------------+----------------------+
| "Tensor.erfinv()", | Keeps input names |
| "torch.erfinv()" | |
+----------------------+----------------------+
| "Tensor.erfinv_()" | None |
+----------------------+----------------------+
| "Tensor.exp()", | Keeps input names | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.exp()", | Keeps input names |
| "torch.exp()" | |
+----------------------+----------------------+
| "Tensor.exp_()" | None |
+----------------------+----------------------+
| "Tensor.expand()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.expm1()", | Keeps input names |
| "torch.expm1()" | |
+----------------------+----------------------+
| "Tensor.expm1_()" | None |
+----------------------+----------------------+
| "Tensor.exponential | None |
| ()" | |
+----------------------+----------------------+
| "Tensor.fill()" | None |
+----------------------+----------------------+
| "Tensor.flatten()", | See documentation |
| "torch.flatten()" | |
+----------------------+----------------------+
| "Tensor.float()" | Keeps input names | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.float()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.floor()", | Keeps input names |
| "torch.floor()" | |
+----------------------+----------------------+
| "Tensor.floor_()" | None |
+----------------------+----------------------+
| "Tensor.frac()", | Keeps input names |
| "torch.frac()" | |
+----------------------+----------------------+
| "Tensor.frac_()" | None |
+----------------------+----------------------+
| "Tensor.ge()", | Unifies names from |
| "torch.ge()" | inputs |
+----------------------+----------------------+
| "Tensor.get_device( | None |
| )", | |
| "torch.get_device()" | |
+----------------------+----------------------+
| "Tensor.grad" | None |
+----------------------+----------------------+
| "Tensor.gt()", | Unifies names from |
| "torch.gt()" | inputs |
+----------------------+----------------------+
| "Tensor.half()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.has_names()" | See documentation |
+----------------------+----------------------+
| "Tensor.index_fill( | Keeps input names |
| )", | |
| "torch.index_fill()" | |
+----------------------+----------------------+
| "Tensor.index_fill_ | None |
| ()" | |
+----------------------+----------------------+
| "Tensor.int()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.is_contiguo | None |
| us()" | |
+----------------------+----------------------+
| "Tensor.is_cuda" | None | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.is_cuda" | None |
+----------------------+----------------------+
| "Tensor.is_floating | None |
| _point()", "torch.i | |
| s_floating_point()" | |
+----------------------+----------------------+
| "Tensor.is_leaf" | None |
+----------------------+----------------------+
| "Tensor.is_pinned()" | None |
+----------------------+----------------------+
| "Tensor.is_shared()" | None |
+----------------------+----------------------+
| "Tensor.is_signed() | None |
| ", | |
| "torch.is_signed()" | |
+----------------------+----------------------+
| "Tensor.is_sparse" | None |
+----------------------+----------------------+
| "Tensor.is_sparse_c | None |
| sr" | |
+----------------------+----------------------+
| "torch.is_tensor()" | None |
+----------------------+----------------------+
| "Tensor.item()" | None |
+----------------------+----------------------+
| "Tensor.kthvalue()", | Removes dimensions |
| "torch.kthvalue()" | |
+----------------------+----------------------+
| "Tensor.le()", | Unifies names from |
| "torch.le()" | inputs |
+----------------------+----------------------+
| "Tensor.log()", | Keeps input names |
| "torch.log()" | |
+----------------------+----------------------+
| "Tensor.log10()", | Keeps input names |
| "torch.log10()" | |
+----------------------+----------------------+
| "Tensor.log10_()" | None |
+----------------------+----------------------+
| "Tensor.log1p()", | Keeps input names |
| "torch.log1p()" | | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.log1p()" | |
+----------------------+----------------------+
| "Tensor.log1p_()" | None |
+----------------------+----------------------+
| "Tensor.log2()", | Keeps input names |
| "torch.log2()" | |
+----------------------+----------------------+
| "Tensor.log2_()" | None |
+----------------------+----------------------+
| "Tensor.log_()" | None |
+----------------------+----------------------+
| "Tensor.log_normal_ | None |
| ()" | |
+----------------------+----------------------+
| "Tensor.logical_not | Keeps input names |
| ()", "torch.logical | |
| not()" | |
+----------------------+----------------------+
| "Tensor.logical_not | None |
| ()" | |
+----------------------+----------------------+
| "Tensor.logsumexp() | Removes dimensions |
| ", | |
| "torch.logsumexp()" | |
+----------------------+----------------------+
| "Tensor.long()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.lt()", | Unifies names from |
| "torch.lt()" | inputs |
+----------------------+----------------------+
| "torch.manual_seed( | None |
| )" | |
+----------------------+----------------------+
| "Tensor.masked_fill | Keeps input names |
| ()", "torch.masked_ | |
| fill()" | |
+----------------------+----------------------+
| "Tensor.masked_fill | None |
| _()" | |
+----------------------+----------------------+
| "Tensor.masked_sele | Aligns mask up to | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.masked_sele | Aligns mask up to |
| ct()", "torch.maske | input and then unif |
| d_select()" | ies_names_from_inpu |
| | t_tensors |
+----------------------+----------------------+
| "Tensor.matmul()", | Contracts away dims |
| "torch.matmul()" | |
+----------------------+----------------------+
| "Tensor.mean()", | Removes dimensions |
| "torch.mean()" | |
+----------------------+----------------------+
| "Tensor.median()", | Removes dimensions |
| "torch.median()" | |
+----------------------+----------------------+
| "Tensor.nanmedian() | Removes dimensions |
| ", | |
| "torch.nanmedian()" | |
+----------------------+----------------------+
| "Tensor.mm()", | Contracts away dims |
| "torch.mm()" | |
+----------------------+----------------------+
| "Tensor.mode()", | Removes dimensions |
| "torch.mode()" | |
+----------------------+----------------------+
| "Tensor.mul()", | Unifies names from |
| "torch.mul()" | inputs |
+----------------------+----------------------+
| "Tensor.mul_()" | Unifies names from |
| | inputs |
+----------------------+----------------------+
| "Tensor.mv()", | Contracts away dims |
| "torch.mv()" | |
+----------------------+----------------------+
| "Tensor.names" | See documentation |
+----------------------+----------------------+
| "Tensor.narrow()", | Keeps input names |
| "torch.narrow()" | |
+----------------------+----------------------+
| "Tensor.ndim" | None |
+----------------------+----------------------+
| "Tensor.ndimension( | None | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.ndimension( | None |
| )" | |
+----------------------+----------------------+
| "Tensor.ne()", | Unifies names from |
| "torch.ne()" | inputs |
+----------------------+----------------------+
| "Tensor.neg()", | Keeps input names |
| "torch.neg()" | |
+----------------------+----------------------+
| "Tensor.neg_()" | None |
+----------------------+----------------------+
| "torch.normal()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.normal_()" | None |
+----------------------+----------------------+
| "Tensor.numel()", | None |
| "torch.numel()" | |
+----------------------+----------------------+
| "torch.ones()" | Factory functions |
+----------------------+----------------------+
| "Tensor.pow()", | Unifies names from | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.pow()", | Unifies names from |
| "torch.pow()" | inputs |
+----------------------+----------------------+
| "Tensor.pow_()" | None |
+----------------------+----------------------+
| "Tensor.prod()", | Removes dimensions |
| "torch.prod()" | |
+----------------------+----------------------+
| "Tensor.rad2deg()", | Keeps input names |
| "torch.rad2deg()" | |
+----------------------+----------------------+
| "Tensor.rad2deg_()" | None |
+----------------------+----------------------+
| "torch.rand()" | Factory functions |
+----------------------+----------------------+
| "torch.rand()" | Factory functions |
+----------------------+----------------------+
| "torch.randn()" | Factory functions |
+----------------------+----------------------+
| "torch.randn()" | Factory functions |
+----------------------+----------------------+
| "Tensor.random_()" | None |
+----------------------+----------------------+
| "Tensor.reciprocal( | Keeps input names |
| )", | |
| "torch.reciprocal()" | |
+----------------------+----------------------+
| "Tensor.reciprocal_ | None |
| ()" | |
+----------------------+----------------------+
| "Tensor.refine_name | See documentation |
| s()" | |
+----------------------+----------------------+
| "Tensor.register_ho | None |
| ok()" | |
+----------------------+----------------------+
| "Tensor.rename()" | See documentation |
+----------------------+----------------------+
| "Tensor.rename_()" | See documentation |
+----------------------+----------------------+
| "Tensor.requires_gr | None | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.requires_gr | None |
| ad" | |
+----------------------+----------------------+
| "Tensor.requires_gr | None |
| ad_()" | |
+----------------------+----------------------+
| "Tensor.resize_()" | Only allow resizes |
| | that do not change |
| | shape |
+----------------------+----------------------+
| "Tensor.resize_as_( | Only allow resizes |
| )" | that do not change |
| | shape |
+----------------------+----------------------+
| "Tensor.round()", | Keeps input names |
| "torch.round()" | |
+----------------------+----------------------+
| "Tensor.round_()" | None |
+----------------------+----------------------+
| "Tensor.rsqrt()", | Keeps input names |
| "torch.rsqrt()" | | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.rsqrt()" | |
+----------------------+----------------------+
| "Tensor.rsqrt_()" | None |
+----------------------+----------------------+
| "Tensor.select()", | Removes dimensions |
| "torch.select()" | |
+----------------------+----------------------+
| "Tensor.short()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.sigmoid()", | Keeps input names |
| "torch.sigmoid()" | |
+----------------------+----------------------+
| "Tensor.sigmoid_()" | None |
+----------------------+----------------------+
| "Tensor.sign()", | Keeps input names |
| "torch.sign()" | |
+----------------------+----------------------+
| "Tensor.sign_()" | None |
+----------------------+----------------------+
| "Tensor.sgn()", | Keeps input names |
| "torch.sgn()" | | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.sgn()" | |
+----------------------+----------------------+
| "Tensor.sgn_()" | None |
+----------------------+----------------------+
| "Tensor.sin()", | Keeps input names |
| "torch.sin()" | |
+----------------------+----------------------+
| "Tensor.sin_()" | None |
+----------------------+----------------------+
| "Tensor.sinh()", | Keeps input names |
| "torch.sinh()" | |
+----------------------+----------------------+
| "Tensor.sinh_()" | None |
+----------------------+----------------------+
| "Tensor.asinh()", | Keeps input names |
| "torch.asinh()" | |
+----------------------+----------------------+
| "Tensor.asinh_()" | None |
+----------------------+----------------------+
| "Tensor.size()" | None |
+----------------------+----------------------+
| "Tensor.softmax()", | Keeps input names |
| "torch.softmax()" | |
+----------------------+----------------------+
| "Tensor.split()", | Keeps input names |
| "torch.split()" | |
+----------------------+----------------------+
| "Tensor.sqrt()", | Keeps input names |
| "torch.sqrt()" | |
+----------------------+----------------------+
| "Tensor.sqrt_()" | None |
+----------------------+----------------------+
| "Tensor.squeeze()", | Removes dimensions |
| "torch.squeeze()" | |
+----------------------+----------------------+
| "Tensor.std()", | Removes dimensions |
| "torch.std()" | |
+----------------------+----------------------+
| "torch.std_mean()" | Removes dimensions |
+----------------------+----------------------+
| "Tensor.stride()" | None | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.stride()" | None |
+----------------------+----------------------+
| "Tensor.sub()", | Unifies names from |
| "torch.sub()" | inputs |
+----------------------+----------------------+
| "Tensor.sub_()" | Unifies names from |
| | inputs |
+----------------------+----------------------+
| "Tensor.sum()", | Removes dimensions |
| "torch.sum()" | |
+----------------------+----------------------+
| "Tensor.tan()", | Keeps input names |
| "torch.tan()" | |
+----------------------+----------------------+
| "Tensor.tan_()" | None |
+----------------------+----------------------+
| "Tensor.tanh()", | Keeps input names |
| "torch.tanh()" | |
+----------------------+----------------------+
| "Tensor.tanh_()" | None |
+----------------------+----------------------+
| "Tensor.atanh()", | Keeps input names |
| "torch.atanh()" | |
+----------------------+----------------------+
| "Tensor.atanh_()" | None |
+----------------------+----------------------+
| "torch.tensor()" | Factory functions |
+----------------------+----------------------+
| "Tensor.to()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.topk()", | Removes dimensions |
| "torch.topk()" | |
+----------------------+----------------------+
| "Tensor.transpose() | Permutes dimensions |
| ", | |
| "torch.transpose()" | |
+----------------------+----------------------+
| "Tensor.trunc()", | Keeps input names |
| "torch.trunc()" | |
+----------------------+----------------------+
| "Tensor.trunc_()" | None | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "Tensor.trunc_()" | None |
+----------------------+----------------------+
| "Tensor.type()" | None |
+----------------------+----------------------+
| "Tensor.type_as()" | Keeps input names |
+----------------------+----------------------+
| "Tensor.unbind()", | Removes dimensions |
| "torch.unbind()" | |
+----------------------+----------------------+
| "Tensor.unflatten()" | See documentation |
+----------------------+----------------------+
| "Tensor.uniform_()" | None |
+----------------------+----------------------+
| "Tensor.var()", | Removes dimensions |
| "torch.var()" | |
+----------------------+----------------------+
| "torch.var_mean()" | Removes dimensions |
+----------------------+----------------------+
| "Tensor.zero_()" | None |
+----------------------+----------------------+
| "torch.zeros()" | Factory functions | | https://pytorch.org/docs/stable/name_inference.html | pytorch docs |
| "torch.zeros()" | Factory functions |
+----------------------+----------------------+
Keeps input names
All pointwise unary functions follow this rule as well as some other
unary functions.
Check names: None
Propagate names: input tensor's names are propagated to the output.
x = torch.randn(3, 3, names=('N', 'C'))
x.abs().names
('N', 'C')
Removes dimensions
All reduction ops like "sum()" remove dimensions by reducing over the
desired dimensions. Other operations like "select()" and "squeeze()"
remove dimensions.
Wherever one can pass an integer dimension index to an operator, one
can also pass a dimension name. Functions that take lists of dimension
indices can also take in a list of dimension names.
Check names: If "dim" or "dims" is passed in as a list of names,
check that those names exist in "self".
Propagate names: If the dimensions of the input tensor specified by
"dim" or "dims" are not present in the output tensor, then the
corresponding names of those dimensions do not appear in
"output.names".
x = torch.randn(1, 3, 3, 3, names=('N', 'C', 'H', 'W'))
x.squeeze('N').names
('C', 'H', 'W')
x = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))
x.sum(['N', 'C']).names
('H', 'W')
# Reduction ops with keepdim=True don't actually remove dimensions.
x = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))
x.sum(['N', 'C'], keepdim=True).names
('N', 'C', 'H', 'W')
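As a further illustration (a minimal sketch; it assumes "select()" accepts a dimension name here, as the rule above states), selecting along a named dimension also drops that name from the output:
x = torch.randn(2, 3, names=('N', 'C'))
x.select('N', 0).names  # the 'N' dimension is removed
('C',)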
Unifies names from inputs
All binary arithmetic ops follow this rule. Operations that broadcast
still broadcast positionally from the right to preserve compatibility
with unnamed tensors. To perform explicit broadcasting by names, use
"Tensor.align_as()".
Check names: All names must match positionally from the right. i.e.,
in "tensor + other", "match(tensor.names[i], other.names[i])" must
be true for all "i" in "(-min(tensor.dim(), other.dim()) + 1, -1]".
Check names: Furthermore, all named dimensions must be aligned from
the right. During matching, if we match a named dimension "A" with
an unnamed dimension "None", then "A" must not appear in the tensor
with the unnamed dimension.
Propagate names: unify pairs of names from the right from both
tensors to produce output names.
For example,
# tensor: Tensor[ N, None]
# other: Tensor[None, C]
tensor = torch.randn(3, 3, names=('N', None))
other = torch.randn(3, 3, names=(None, 'C'))
(tensor + other).names
('N', 'C')
Check names:
"match(tensor.names[-1], other.names[-1])" is "True"
"match(tensor.names[-2], tensor.names[-2])" is "True"
Because we matched "None" in "tensor" with "'C'", check to make sure
"'C'" doesn't exist in "tensor" (it does not).
Check to make sure "'N'" doesn't exist in "other" (it does not).
Finally, the output names are computed with "[unify('N', None),
unify(None, 'C')] = ['N', 'C']"
More examples:
# Dimensions don't match from the right:
# tensor: Tensor[N, C]
# other: Tensor[ N]
tensor = torch.randn(3, 3, names=('N', 'C'))
other = torch.randn(3, names=('N',))
(tensor + other).names
RuntimeError: Error when attempting to broadcast dims ['N', 'C'] and dims
['N']: dim 'C' and dim 'N' are at the same position from the right but do
not match.
# Dimensions aren't aligned when matching tensor.names[-1] and other.names[-1]:
# tensor: Tensor[N, None]
# other: Tensor[ N]
tensor = torch.randn(3, 3, names=('N', None))
other = torch.randn(3, names=('N',))
(tensor + other).names
RuntimeError: Misaligned dims when attempting to broadcast dims ['N'] and
dims ['N', None]: dim 'N' appears in a different position from the right
across both lists.
Note:
In both of the last examples, it is possible to align the tensors by
names and then perform the addition. Use "Tensor.align_as()" to
align tensors by name or "Tensor.align_to()" to align tensors to a
custom dimension ordering.
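For illustration, a minimal sketch of that alignment for the first failing example above (the printed names are what the rules above imply; treat this as a sketch, not a quoted output):
tensor = torch.randn(3, 3, names=('N', 'C'))
other = torch.randn(3, names=('N',))
# align_as() moves 'N' into place and inserts a size-one 'C' dimension,
# after which the addition broadcasts positionally as usual.
aligned = other.align_as(tensor)
(tensor + aligned).names
('N', 'C')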
Permutes dimensions
Some operations, like "Tensor.t()", permute the order of dimensions.
Dimension names are attached to individual dimensions so they get
permuted as well.
If the operator takes in positional index "dim", it is also able to
take a dimension name as "dim".
Check names: If "dim" is passed as a name, check that it exists in
the tensor.
Propagate names: Permute dimension names in the same way as the
dimensions that are being permuted.
x = torch.randn(3, 3, names=('N', 'C'))
x.transpose('N', 'C').names
('C', 'N')
Contracts away dims
Matrix multiply functions follow some variant of this. Let's go
through "torch.mm()" first and then generalize the rule for batch
matrix multiplication.
For "torch.mm(tensor, other)":
Check names: None
Propagate names: result names are "(tensor.names[-2],
other.names[-1])".
x = torch.randn(3, 3, names=('N', 'D'))
y = torch.randn(3, 3, names=('in', 'out'))
x.mm(y).names
('N', 'out')
Inherently, a matrix multiplication performs a dot product over two
dimensions, collapsing them. When two tensors are matrix-multiplied,
the contracted dimensions disappear and do not show up in the output
tensor.
"torch.mv()", "torch.dot()" work in a similar way: name inference does
not check input names and removes the dimensions that are involved in
the dot product:
x = torch.randn(3, 3, names=('N', 'D'))
y = torch.randn(3, names=('something',))
x.mv(y).names
('N',)
Now, let's take a look at "torch.matmul(tensor, other)". Assume that
"tensor.dim() >= 2" and "other.dim() >= 2".
Check names: Check that the batch dimensions of the inputs are
aligned and broadcastable. See Unifies names from inputs for what it
means for the inputs to be aligned.
Propagate names: result names are obtained by unifying the batch
dimensions and removing the contracted dimensions:
"unify(tensor.names[:-2], other.names[:-2]) + (tensor.names[-2],
other.names[-1])".
Examples:
# Batch matrix multiply of matrices Tensor['C', 'D'] and Tensor['E', 'F'].
# 'A', 'B' are batch dimensions.
x = torch.randn(3, 3, 3, 3, names=('A', 'B', 'C', 'D'))
y = torch.randn(3, 3, 3, names=('B', 'E', 'F'))
torch.matmul(x, y).names
('A', 'B', 'C', 'F')
Finally, there are fused "add" versions of many matmul functions.
i.e., "addmm()" and "addmv()". These are treated as composing name
inference for i.e. "mm()" and name inference for "add()".
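For illustration, a rough sketch of that composition (the expected names follow from the "mm()" and "add()" rules above; they are an assumption of this sketch, not a quoted output):
x = torch.randn(3, 3, names=('N', 'D'))
y = torch.randn(3, 3, names=('in', 'out'))
bias = torch.randn(3, names=('out',))
# addmm(bias, x, y) composes mm()'s name inference with add()'s.
torch.addmm(bias, x, y).names
('N', 'out')
(bias + torch.mm(x, y)).names
('N', 'out')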
Factory functions
Factory functions now take a new "names" argument that associates a
name with each dimension.
torch.zeros(2, 3, names=('N', 'C'))
tensor([[0., 0., 0.],
[0., 0., 0.]], names=('N', 'C'))
out function and in-place variants
A tensor specified as an "out=" tensor has the following behavior:
If it has no named dimensions, then the names computed from the
operation get propagated to it.
If it has any named dimensions, then the names computed from the
operation must be exactly equal to the existing names. Otherwise,
the operation errors.
All in-place methods modify inputs to have names equal to the computed
names from name inference. For example:
x = torch.randn(3, 3)
y = torch.randn(3, 3, names=('N', 'C'))
x.names
(None, None)
x += y
x.names
('N', 'C')
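For illustration, a minimal sketch of the "out=" behavior described above (it assumes "torch.abs()", a pointwise op that keeps input names, is called with an "out=" tensor):
x = torch.randn(3, names=('N',))
out = torch.empty(3)          # unnamed, so the computed names are propagated
torch.abs(x, out=out)
out.names
('N',)
named_out = torch.empty(3, names=('N',))
torch.abs(x, out=named_out)   # allowed: the existing names match exactly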
torch.library
Python operator registration API provides capabilities for extending
PyTorch's core library of operators with user defined operators.
Currently, this can be done in two ways:
Creating new libraries
     Lets you register new operators and kernels for various
backends and functionalities by specifying the appropriate
dispatch keys. For example,
* Consider registering a new operator "add" in your newly
created namespace "foo". You can access this operator using
the "torch.ops" API and calling into by calling
"torch.ops.foo.add". You can also access specific registered
overloads by calling "torch.ops.foo.add.{overload_name}".
* If you registered a new kernel for the "CUDA" dispatch key
for this operator, then your custom defined function will be
called for CUDA tensor inputs.
     This can be done by creating Library class objects of "DEF"
kind.
Extending existing C++ libraries (e.g., aten)
Lets you register kernels for existing operators
corresponding to various backends and functionalities by
specifying the appropriate dispatch keys.
This may come in handy to fill up spotty operator support for a
     feature implemented through a dispatch key. For example,
* You can add operator support for Meta Tensors (by
registering function to the "Meta" dispatch key).
     This can be done by creating Library class objects of "IMPL"
kind.
A tutorial that walks you through some examples on how to use this API
is available on Google Colab.
Warning:
Dispatcher is a complicated PyTorch concept and having a sound
understanding of Dispatcher is crucial to be able to do anything
advanced with this API. This blog post is a good starting point to
learn about Dispatcher.
class torch.library.Library(ns, kind, dispatch_key='')
    A class to create libraries that can be used to register new
operators or override operators in existing libraries from Python.
A user can optionally pass in a dispatch keyname if they only want
to register kernels corresponding to only one specific dispatch
key.
To create a library to override operators in an existing library
(with name ns), set the kind to "IMPL". To create a new library
(with name ns) to register new operators, set the kind to "DEF".
    :param ns: library name
    :param kind: "DEF", "IMPL" (default: "IMPL")
    :param dispatch_key: PyTorch dispatch key (default: "")
define(schema, alias_analysis='')
Defines a new operator and its semantics in the ns namespace.
Parameters:
* **schema** -- function schema to define a new operator.
* **alias_analysis** (*optional*) -- Indicates if the
aliasing properties of the operator arguments can be
inferred from the schema (default behavior) or not
("CONSERVATIVE").
Returns:
name of the operator as inferred from the schema.
Example::
>>> my_lib = Library("foo", "DEF")
>>> my_lib.define("sum(Tensor self) -> Tensor")
impl(op_name, fn, dispatch_key='')
Registers the function implementation for an operator defined in
the library.
Parameters:
* **op_name** -- operator name (along with the overload) or
OpOverload object.
* **fn** -- function that's the operator implementation for
the input dispatch key.
* **dispatch_key** -- dispatch key that the input function
should be registered for. By default, it uses the dispatch
key that the library was created with.
Example::
>>> my_lib = Library("aten", "IMPL")
>>> def div_cpu(self, other):
>>> return self * (1 / other)
        >>> my_lib.impl("div.Tensor", div_cpu, "CPU")
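Putting "define()" and "impl()" together, a hypothetical end-to-end sketch (the namespace "foo", the operator name "myadd", and the kernel below are made-up illustrations, not part of PyTorch itself):
    >>> import torch
    >>> from torch.library import Library
    >>> my_lib = Library("foo", "DEF")
    >>> my_lib.define("myadd(Tensor self, Tensor other) -> Tensor")
    >>> def myadd_cpu(self, other):
    >>>     # plain eager implementation registered for the CPU dispatch key
    >>>     return self + other
    >>> my_lib.impl("myadd", myadd_cpu, "CPU")
    >>> torch.ops.foo.myadd(torch.randn(3), torch.randn(3))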
We have also added some function decorators to make it convenient to
register functions for operators:
"torch.library.impl()"
"torch.library.define()"
Named Tensors
Named Tensors allow users to give explicit names to tensor dimensions.
In most cases, operations that take dimension parameters will accept
dimension names, avoiding the need to track dimensions by position. In
addition, named tensors use names to automatically check that APIs are
being used correctly at runtime, providing extra safety. Names can
also be used to rearrange dimensions, for example, to support
"broadcasting by name" rather than "broadcasting by position".
Warning:
The named tensor API is a prototype feature and subject to change.
Creating named tensors
Factory functions now take a new "names" argument that associates a
name with each dimension.
torch.zeros(2, 3, names=('N', 'C'))
tensor([[0., 0., 0.],
[0., 0., 0.]], names=('N', 'C'))
Named dimensions, like regular Tensor dimensions, are ordered.
"tensor.names[i]" is the name of dimension "i" of "tensor".
The following factory functions support named tensors:
"torch.empty()"
"torch.rand()"
"torch.randn()"
"torch.ones()"
"torch.tensor()"
"torch.zeros()"
Named dimensions
See "names" for restrictions on tensor names.
Use "names" to access the dimension names of a tensor and "rename()"
to rename named dimensions.
imgs = torch.randn(1, 2, 2, 3 , names=('N', 'C', 'H', 'W'))
imgs.names
('N', 'C', 'H', 'W')
renamed_imgs = imgs.rename(H='height', W='width')
renamed_imgs.names
('N', 'C', 'height', 'width')
Named tensors can coexist with unnamed tensors; named tensors are
instances of "torch.Tensor". Unnamed tensors have "None"-named
dimensions. Named tensors do not require all dimensions to be named.
imgs = torch.randn(1, 2, 2, 3 , names=(None, 'C', 'H', 'W'))
imgs.names
(None, 'C', 'H', 'W')
Name propagation semantics
Named tensors use names to automatically check that APIs are being
called correctly at runtime. This occurs in a process called *name
inference*. More formally, name inference consists of the following
two steps:
Check names: an operator may perform automatic checks at runtime
that check that certain dimension names must match.
Propagate names: name inference propagates names to output
tensors.
All operations that support named tensors propagate names.
x = torch.randn(3, 3, names=('N', 'C'))
x.abs().names
('N', 'C')
match semantics
Two names match if they are equal (string equality) or if at least
one is "None". Nones are essentially a special "wildcard" name.
"unify(A, B)" determines which of the names "A" and "B" to propagate
to the outputs. It returns the more specific of the two names, if
they match. If the names do not match, then it errors.
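For illustration only, the two rules can be sketched as plain Python helpers (these are not part of the PyTorch API):
def match(a, b):
    # Two names match if they are equal or if at least one is None.
    return a is None or b is None or a == b

def unify(a, b):
    # Propagate the more specific of two matching names; error otherwise.
    if not match(a, b):
        raise RuntimeError(f"names {a!r} and {b!r} do not match")
    return a if a is not None else b

match('N', None)   # True
unify('N', None)   # 'N'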
Note:
In practice, when working with named tensors, one should avoid
having unnamed dimensions because their handling can be complicated.
It is recommended to lift all unnamed dimensions to be named
dimensions by using "refine_names()".
Basic name inference rules
Let's see how "match" and "unify" are used in name inference in the
case of adding two one-dim tensors with no broadcasting.
x = torch.randn(3, names=('X',))
y = torch.randn(3)
z = torch.randn(3, names=('Z',))
Check names: check that the names of the two tensors match.
For the following examples:
x + y # match('X', None) is True
x + z # match('X', 'Z') is False
x + x # match('X', 'X') is True
x + z
Error when attempting to broadcast dims ['X'] and dims ['Z']: dim 'X' and dim 'Z' are at the same position from the right but do not match.
Propagate names: unify the names to select which one to
propagate. In the case of "x + y", "unify('X', None) = 'X'" because
"'X'" is more specific than "None".
(x + y).names
('X',)
(x + x).names
('X',)
For a comprehensive list of name inference rules, see Named Tensors
operator coverage. Here are two common operations that may be useful
to go over:
Binary arithmetic ops: Unifies names from inputs
Matrix multiplication ops: Contracts away dims
Explicit alignment by names
Use "align_as()" or "align_to()" to align tensor dimensions by name to
a specified ordering. This is useful for performing "broadcasting by
names".
# This function is agnostic to the dimension ordering of input,
# as long as it has a C dimension somewhere.
def scale_channels(input, scale):
scale = scale.refine_names('C')
return input * scale.align_as(input)
num_channels = 3
scale = torch.randn(num_channels, names=('C',))
imgs = torch.rand(3, 3, 3, num_channels, names=('N', 'H', 'W', 'C'))
more_imgs = torch.rand(3, num_channels, 3, 3, names=('N', 'C', 'H', 'W'))
videos = torch.randn(3, num_channels, 3, 3, 3, names=('N', 'C', 'H', 'W', 'D'))
scale_channels(imgs, scale)
scale_channels(more_imgs, scale)
scale_channels(videos, scale)
Manipulating dimensions
Use "align_to()" to permute large amounts of dimensions without
mentioning all of them as required by "permute()".
tensor = torch.randn(2, 2, 2, 2, 2, 2)
named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')
# Move the F (dim 5) and E dimension (dim 4) to the front while keeping
# the rest in the same order
tensor.permute(5, 4, 0, 1, 2, 3)
named_tensor.align_to('F', 'E', ...)
Use "flatten()" and "unflatten()" to flatten and unflatten dimensions,
respectively. These methods are more verbose than "view()" and
"reshape()", but have more semantic meaning to someone reading the
code.
imgs = torch.randn(32, 3, 128, 128)
named_imgs = imgs.refine_names('N', 'C', 'H', 'W')
flat_imgs = imgs.view(32, -1)
named_flat_imgs = named_imgs.flatten(['C', 'H', 'W'], 'features')
named_flat_imgs.names
('N', 'features')
unflattened_imgs = imgs.view(32, 3, 128, 128)
unflattened_named_imgs = named_flat_imgs.unflatten(
'features', [('C', 3), ('H', 128), ('W', 128)])
Autograd support
Autograd currently supports named tensors in a limited manner:
autograd ignores names on all tensors. Gradient computation is still
correct but we lose the safety that names give us.
x = torch.randn(3, names=('D',))
weight = torch.randn(3, names=('D',), requires_grad=True)
loss = (x - weight).abs()
grad_loss = torch.randn(3)
loss.backward(grad_loss)
weight.grad # Unnamed for now. Will be named in the future
tensor([-1.8107, -0.6357, 0.0783])
weight.grad.zero_()
grad_loss = grad_loss.refine_names('C')
loss = (x - weight).abs()
# Ideally we'd check that the names of loss and grad_loss match but we don't yet.
loss.backward(grad_loss)
weight.grad
tensor([-1.8107, -0.6357, 0.0783])
Currently supported operations and subsystems
Operators
See Named Tensors operator coverage for a full list of the supported
torch and tensor operations. We do not yet support the following, which
is not covered by the link:
indexing, advanced indexing.
For "torch.nn.functional" operators, we support the following:
"torch.nn.functional.relu()"
"torch.nn.functional.softmax()"
"torch.nn.functional.log_softmax()"
"torch.nn.functional.tanh()"
"torch.nn.functional.sigmoid()"
"torch.nn.functional.dropout()"
Subsystems
Autograd is supported, see Autograd support. Because gradients are
currently unnamed, optimizers may work but are untested.
NN modules are currently unsupported. This can lead to the following
when calling modules with named tensor inputs:
NN module parameters are unnamed, so outputs may be partially named.
NN module forward passes have code that doesn't support named tensors
and will error out appropriately.
We also do not support the following subsystems, though some may work
out of the box:
distributions
serialization ("torch.load()", "torch.save()")
multiprocessing
JIT
distributed
ONNX
If any of these would help your use case, please search if an issue
has already been filed and if not, file one.
Named tensor API reference
In this section please find the documentation for named tensor
specific APIs. For a comprehensive reference for how names are
propagated through other PyTorch operators, see Named Tensors operator
coverage.
class torch.Tensor
names
Stores names for each of this tensor's dimensions.
"names[idx]" corresponds to the name of tensor dimension "idx".
Names are either a string if the dimension is named or "None" if
the dimension is unnamed.
Dimension names may contain characters or underscore.
Furthermore, a dimension name must be a valid Python variable
name (i.e., does not start with underscore).
Tensors may not have two named dimensions with the same name.
Warning:
The named tensor API is experimental and subject to change.
rename(names, *rename_map)
Renames dimension names of "self".
There are two main usages:
"self.rename(**rename_map)" returns a view on tensor that has
dims renamed as specified in the mapping "rename_map".
"self.rename(*names)" returns a view on tensor, renaming all
dimensions positionally using "names". Use "self.rename(None)"
to drop names on a tensor.
One cannot specify both positional args "names" and keyword args
"rename_map".
Examples:
>>> imgs = torch.rand(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))
>>> renamed_imgs = imgs.rename(N='batch', C='channels')
>>> renamed_imgs.names
('batch', 'channels', 'H', 'W')
>>> renamed_imgs = imgs.rename(None)
>>> renamed_imgs.names
(None, None, None, None)
>>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width')
>>> renamed_imgs.names
('batch', 'channel', 'height', 'width')
Warning:
The named tensor API is experimental and subject to change.
rename_(names, *rename_map)
In-place version of "rename()".
refine_names(*names)
Refines the dimension names of "self" according to "names".
Refining is a special case of renaming that "lifts" unnamed
dimensions. A "None" dim can be refined to have any name; a
named dim can only be refined to have the same name.
Because named tensors can coexist with unnamed tensors, refining
names gives a nice way to write named-tensor-aware code that
works with both named and unnamed tensors.
"names" may contain up to one Ellipsis ("..."). The Ellipsis is
expanded greedily; it is expanded in-place to fill "names" to
the same length as "self.dim()" using names from the
corresponding indices of "self.names".
Python 2 does not support Ellipsis but one may use a string
literal instead ("'...'").
Parameters:
**names** (*iterable of str*) -- The desired names of the
output tensor. May contain up to one Ellipsis.
Examples:
>>> imgs = torch.randn(32, 3, 128, 128)
>>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W')
>>> named_imgs.names
('N', 'C', 'H', 'W')
>>> tensor = torch.randn(2, 3, 5, 7, 11)
>>> tensor = tensor.refine_names('A', ..., 'B', 'C')
>>> tensor.names
('A', None, None, 'B', 'C')
Warning:
The named tensor API is experimental and subject to change.
align_as(other) -> Tensor
Permutes the dimensions of the "self" tensor to match the
dimension order in the "other" tensor, adding size-one dims for
any new names.
This operation is useful for explicit broadcasting by names (see
examples).
All of the dims of "self" must be named in order to use this
method. The resulting tensor is a view on the original tensor.
All dimension names of "self" must be present in "other.names".
"other" may contain named dimensions that are not in
"self.names"; the output tensor has a size-one dimension for
each of those new names.
To align a tensor to a specific order, use "align_to()".
Examples:
# Example 1: Applying a mask
>>> mask = torch.randint(2, [127, 128], dtype=torch.bool).refine_names('W', 'H')
>>> imgs = torch.randn(32, 128, 127, 3, names=('N', 'H', 'W', 'C'))
>>> imgs.masked_fill_(mask.align_as(imgs), 0)
# Example 2: Applying a per-channel-scale
>>> def scale_channels(input, scale):
>>> scale = scale.refine_names('C')
>>> return input * scale.align_as(input)
>>> num_channels = 3
>>> scale = torch.randn(num_channels, names=('C',))
>>> imgs = torch.rand(32, 128, 128, num_channels, names=('N', 'H', 'W', 'C'))
>>> more_imgs = torch.rand(32, num_channels, 128, 128, names=('N', 'C', 'H', 'W'))
>>> videos = torch.randn(3, num_channels, 128, 128, 128, names=('N', 'C', 'H', 'W', 'D'))
# scale_channels is agnostic to the dimension order of the input
>>> scale_channels(imgs, scale)
>>> scale_channels(more_imgs, scale)
>>> scale_channels(videos, scale)
Warning:
The named tensor API is experimental and subject to change.
align_to(*names)
Permutes the dimensions of the "self" tensor to match the order
specified in "names", adding size-one dims for any new names.
All of the dims of "self" must be named in order to use this
method. The resulting tensor is a view on the original tensor.
All dimension names of "self" must be present in "names".
"names" may contain additional names that are not in
"self.names"; the output tensor has a size-one dimension for
each of those new names.
"names" may contain up to one Ellipsis ("..."). The Ellipsis is
expanded to be equal to all dimension names of "self" that are
not mentioned in "names", in the order that they appear in
"self".
Python 2 does not support Ellipsis but one may use a string
literal instead ("'...'").
Parameters:
**names** (*iterable of str*) -- The desired dimension
ordering of the output tensor. May contain up to one Ellipsis
that is expanded to all unmentioned dim names of "self".
Examples:
>>> tensor = torch.randn(2, 2, 2, 2, 2, 2)
         >>> named_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')
# Move the F and E dims to the front while keeping the rest in order
>>> named_tensor.align_to('F', 'E', ...)
Warning:
The named tensor API is experimental and subject to change.
flatten(dims, out_dim) -> Tensor
Flattens "dims" into a single dimension with name "out_dim".
All of *dims* must be consecutive in order in the "self" tensor,
        but not necessarily contiguous in memory.
Examples:
>>> imgs = torch.randn(32, 3, 128, 128, names=('N', 'C', 'H', 'W'))
>>> flat_imgs = imgs.flatten(['C', 'H', 'W'], 'features')
>>> flat_imgs.names, flat_imgs.shape
(('N', 'features'), torch.Size([32, 49152]))
Warning:
The named tensor API is experimental and subject to change.
torch.futures
This package provides a "Future" type that encapsulates an
asynchronous execution and a set of utility functions to simplify
operations on "Future" objects. Currently, the "Future" type is
primarily used by the Distributed RPC Framework.
class torch.futures.Future(*, devices=None)
Wrapper around a "torch._C.Future" which encapsulates an
asynchronous execution of a callable, e.g. "rpc_async()". It also
exposes a set of APIs to add callback functions and set results.
Warning:
GPU support is a beta feature, subject to changes.
add_done_callback(callback)
Append the given callback function to this "Future", which will
be run when the "Future" is completed. Multiple callbacks can
be added to the same "Future", but the order in which they will
be executed cannot be guaranteed. The callback must take one
argument, which is the reference to this "Future". The callback
function can use the "value()" method to get the value. Note
that if this "Future" is already completed, the given callback
will be run inline.
We recommend that you use the "then()" method as it provides a
way to synchronize after your callback has completed.
"add_done_callback" can be cheaper if your callback does not
return anything. But both "then()" and "add_done_callback" use
the same callback registration API under the hood.
With respect to GPU tensors, this method behaves in the same way
as "then()".
Parameters:
**callback** ("Future") -- a "Callable" that takes in one
argument, which is the reference to this "Future".
Note:
Note that if the callback function throws, either through the
original future being completed with an exception and calling
"fut.wait()", or through other code in the callback, error
handling must be carefully taken care of. For example, if this
callback later completes additional futures, those futures are
not marked as completed with an error and the user is
responsible for handling completion/waiting on those futures
independently.
Example::
>>> def callback(fut):
... print("This will run after the future has finished.")
... print(fut.wait())
>>> fut = torch.futures.Future()
>>> fut.add_done_callback(callback)
>>> fut.set_result(5)
This will run after the future has finished.
5
done()
Return "True" if this "Future" is done. A "Future" is done if it
has a result or an exception.
If the value contains tensors that reside on GPUs,
"Future.done()" will return "True" even if the asynchronous
kernels that are populating those tensors haven't yet completed
running on the device, because at such stage the result is
already usable, provided one performs the appropriate
synchronizations (see "wait()").
Return type:
bool
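      For example (a small sketch of the completed/not-completed
      distinction):
         >>> fut = torch.futures.Future()
         >>> fut.done()
         False
         >>> fut.set_result(42)
         >>> fut.done()
         True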
set_exception(result)
Set an exception for this "Future", which will mark this
"Future" as completed with an error and trigger all attached
callbacks. Note that when calling wait()/value() on this
"Future", the exception set here will be raised inline.
Parameters:
**result** (*BaseException*) -- the exception for this
"Future".
Example::
>>> fut = torch.futures.Future()
>>> fut.set_exception(ValueError("foo"))
>>> fut.wait()
Traceback (most recent call last):
...
ValueError: foo
set_result(result)
Set the result for this "Future", which will mark this "Future"
as completed and trigger all attached callbacks. Note that a
"Future" cannot be marked completed twice.
If the result contains tensors that reside on GPUs, this method
can be called even if the asynchronous kernels that are
populating those tensors haven't yet completed running on the
device, provided that the streams on which those kernels were
enqueued are set as the current ones when this method is called.
Put simply, it's safe to call this method immediately after
launching those kernels, without any additional synchronization,
as long as one doesn't change streams in between. This method
will record events on all the relevant current streams and will
use them to ensure proper scheduling for all the consumers of
this "Future".
Parameters:
**result** (*object*) -- the result object of this "Future".
Example::
>>> import threading
>>> import time
>>> def slow_set_future(fut, value):
... time.sleep(0.5)
... fut.set_result(value)
>>> fut = torch.futures.Future()
>>> t = threading.Thread(
... target=slow_set_future,
... args=(fut, torch.ones(2) * 3)
... )
>>> t.start()
>>> print(fut.wait())
tensor([3., 3.])
>>> t.join()
then(callback)
Append the given callback function to this "Future", which will
be run when the "Future" is completed. Multiple callbacks can
be added to the same "Future", but the order in which they will
be executed cannot be guaranteed (to enforce a certain order
consider chaining: "fut.then(cb1).then(cb2)"). The callback must
take one argument, which is the reference to this "Future". The
callback function can use the "value()" method to get the value.
Note that if this "Future" is already completed, the given
callback will be run immediately inline.
If the "Future"'s value contains tensors that reside on GPUs,
the callback might be invoked while the async kernels that are
populating those tensors haven't yet finished executing on the
device. However, the callback will be invoked with some
dedicated streams set as current (fetched from a global pool)
which will be synchronized with those kernels. Hence any
operation performed by the callback on these tensors will be
scheduled on the device after the kernels complete. In other
words, as long as the callback doesn't switch streams, it can
safely manipulate the result without any additional
synchronization. This is similar to the non-blocking behavior of
"wait()".
Similarly, if the callback returns a value that contains tensors
that reside on a GPU, it can do so even if the kernels that are
producing these tensors are still running on the device, as long
as the callback didn't change streams during its execution. If
one wants to change streams, one must be careful to re-
synchronize them with the original streams, that is, those that
were current when the callback was invoked.
Parameters:
callback ("Callable") -- a "Callable" that takes this
"Future" as the only argument.
Returns:
A new "Future" object that holds the return value of the
"callback" and will be marked as completed when the given
"callback" finishes.
Return type:
*Future*[*S*]
Note:
Note that if the callback function throws, either through the
original future being completed with an exception and calling
"fut.wait()", or through other code in the callback, the
future returned by "then" will be marked appropriately with
the encountered error. However, if this callback later
completes additional futures, those futures are not marked as
completed with an error and the user is responsible for
handling completion/waiting on those futures independently.
Example::
>>> def callback(fut):
... print(f"RPC return value is {fut.wait()}.")
        >>> fut = torch.futures.Future()
>>> # The inserted callback will print the return value when
>>> # receiving the response from "worker1"
>>> cb_fut = fut.then(callback)
>>> chain_cb_fut = cb_fut.then(
... lambda x : print(f"Chained cb done. {x.wait()}")
... )
>>> fut.set_result(5)
RPC return value is 5.
Chained cb done. None
value()
Obtain the value of an already-completed future.
This method should only be called after a call to "wait()" has
completed, or inside a callback function passed to "then()". In
other cases this "Future" may not yet hold a value and calling
"value()" could fail.
If the value contains tensors that reside on GPUs, then this
method will *not* perform any additional synchronization. This
should be done beforehand, separately, through a call to
"wait()" (except within callbacks, for which it's already being
taken care of by "then()").
Returns:
The value held by this "Future". If the function (callback or
RPC) creating the value has thrown an error, this "value()"
method will also throw an error.
Return type:
*T*
wait()
Block until the value of this "Future" is ready.
If the value contains tensors that reside on GPUs, then an
additional synchronization is performed with the kernels
(executing on the device) which may be asynchronously populating
those tensors. Such sync is non-blocking, which means that
"wait()" will insert the necessary instructions in the current
streams to ensure that further operations enqueued on those
streams will be properly scheduled after the async kernels but,
once that is done, "wait()" will return, even if those kernels
are still running. No further synchronization is required when
accessing and using the values, as long as one doesn't change
streams.
Returns:
The value held by this "Future". If the function (callback or
RPC) creating the value has thrown an error, this "wait"
method will also throw an error.
Return type:
*T*
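   For illustration, a minimal sketch of the "wait()"/"value()" pattern
   described above; the background thread is a hypothetical stand-in
   for an RPC response or asynchronous kernel completing the future:
      import threading
      import torch

      fut = torch.futures.Future()

      # Hypothetical producer completing the future from another thread.
      threading.Thread(target=lambda: fut.set_result(torch.ones(2) * 3)).start()

      result = fut.wait()        # blocks until the result is ready
      same_result = fut.value()  # safe now: the future has already completed
      assert torch.equal(result, same_result)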
torch.futures.collect_all(futures)
Collects the provided "Future" objects into a single combined
"Future" that is completed when all of the sub-futures are
completed.
Parameters:
futures (list) -- a list of "Future" objects.
Returns:
Returns a "Future" object to a list of the passed in Futures.
Return type:
Future[List[Future]]
Example::
>>> fut0 = torch.futures.Future()
>>> fut1 = torch.futures.Future()
>>> fut = torch.futures.collect_all([fut0, fut1])
>>> fut0.set_result(0)
>>> fut1.set_result(1)
>>> fut_list = fut.wait()
>>> print(f"fut0 result = {fut_list[0].wait()}")
fut0 result = 0
>>> print(f"fut1 result = {fut_list[1].wait()}")
    fut1 result = 1
torch.futures.wait_all(futures)
Waits for all provided futures to be complete, and returns the list
of completed values. If any of the futures encounters an error, the
   method will exit early and report the error without waiting for the
   other futures to complete.
Parameters:
      futures (list) -- a list of "Future" objects.
Returns:
A list of the completed "Future" results. This method will throw
an error if "wait" on any "Future" throws.
Return type:
      List
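   Example::
      A minimal sketch mirroring the "collect_all()" example above;
      "wait_all()" returns the completed values directly rather than a
      list of futures.
      >>> fut0 = torch.futures.Future()
      >>> fut1 = torch.futures.Future()
      >>> fut0.set_result(0)
      >>> fut1.set_result(1)
      >>> torch.futures.wait_all([fut0, fut1])
      [0, 1]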
torch.config
torch.config.show()
Return a human-readable string with descriptions of the
configuration of PyTorch.
torch.config.parallel_info()
   Returns a detailed string with parallelization settings.
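For illustration, a quick sketch of printing both strings (note that in
the Python API this module is accessed as "torch.__config__"):
   >>> import torch
   >>> print(torch.__config__.show())
   >>> print(torch.__config__.parallel_info())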
torch.profiler
Overview
PyTorch Profiler is a tool that allows the collection of performance
metrics during training and inference. Profiler's context manager API
can be used to better understand what model operators are the most
expensive, examine their input shapes and stack traces, study device
kernel activity and visualize the execution trace.
Note:
  An earlier version of the API in the "torch.autograd" module is
  considered legacy and will be deprecated.
API Reference
class torch.profiler._KinetoProfile(*, activities=None, record_shapes=False, profile_memory=False, with_stack=False, with_flops=False, with_modules=False, experimental_config=None)
   Low-level profiler that wraps the autograd profiler.
Parameters:
       * **activities** (*iterable*) -- list of activity groups (CPU,
         CUDA) to use in profiling, supported values:
         "torch.profiler.ProfilerActivity.CPU",
         "torch.profiler.ProfilerActivity.CUDA". Default value:
ProfilerActivity.CPU and (when available)
ProfilerActivity.CUDA.
* **record_shapes** (*bool*) -- save information about
operator's input shapes.
* **profile_memory** (*bool*) -- track tensor memory
allocation/deallocation.
* **with_stack** (*bool*) -- record source information (file and
line number) for the ops.
       * **with_flops** (*bool*) -- use a formula to estimate the FLOPs
of specific operators (matrix multiplication and 2D
convolution).
* **with_modules** (*bool*) -- record module hierarchy
(including function names) corresponding to the callstack of
         the op. e.g., if module A's forward calls module B's forward,
         which contains an aten::add op, then aten::add's module
         hierarchy is A.B. Note that this support exists, at the
         moment, only for TorchScript models and not eager mode models.
* **experimental_config** (*_ExperimentalConfig*) -- A set of
experimental options used by profiler libraries like Kineto.
Note, backward compatibility is not guaranteed.
Note:
This API is experimental and subject to change in the
     future. Enabling shape and stack tracing results in additional
overhead. When record_shapes=True is specified, profiler will
temporarily hold references to the tensors; that may further
prevent certain optimizations that depend on the reference count
and introduce extra tensor copies.
add_metadata(key, value)
      Adds user-defined metadata with a string key and a string value
      into the trace file.
   add_metadata_json(key, value)
      Adds user-defined metadata with a string key and a valid JSON
      value into the trace file.
events()
Returns the list of unaggregated profiler events, to be used in
the trace callback or after the profiling is finished
export_chrome_trace(path)
Exports the collected trace in Chrome JSON format.
export_stacks(path, metric='self_cpu_time_total')
Save stack traces in a file in a format suitable for
visualization.
Parameters:
* **path** (*str*) -- save stacks file to this location;
* **metric** (*str*) -- metric to use: "self_cpu_time_total"
or "self_cuda_time_total"
Note:
Example of using FlameGraph tool:
* git clone https://github.com/brendangregg/FlameGraph
* cd FlameGraph
* ./flamegraph.pl --title "CPU time" --countname "us."
profiler.stacks > perf_viz.svg
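      For example, a minimal sketch that produces such a stacks file
      for the commands above; the file name "profiler.stacks" and the
      profiled matrix multiplication are arbitrary choices:
         import torch
         from torch.profiler import profile, ProfilerActivity

         with profile(activities=[ProfilerActivity.CPU], with_stack=True) as prof:
             torch.mm(torch.randn(256, 256), torch.randn(256, 256))

         # Writes self-CPU-time stacks in the text format consumed by flamegraph.pl.
         prof.export_stacks("profiler.stacks", "self_cpu_time_total")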
key_averages(group_by_input_shape=False, group_by_stack_n=0)
Averages events, grouping them by operator name and (optionally)
input shapes and stack.
Note:
To use shape/stack functionality make sure to set
         record_shapes/with_stack when creating the profiler context
         manager.
class torch.profiler.profile(*, activities=None, schedule=None, on_trace_ready=None, record_shapes=False, profile_memory=False, with_stack=False, with_flops=False, with_modules=False, experimental_config=None, use_cuda=None)
Profiler context manager.
Parameters:
       * **activities** (*iterable*) -- list of activity groups (CPU,
CUDA) to use in profiling, supported values:
"torch.profiler.ProfilerActivity.CPU",
"torch.profiler.ProfilerActivity.CUDA". Default value:
ProfilerActivity.CPU and (when available)
ProfilerActivity.CUDA.
* **schedule** (*Callable*) -- callable that takes step (int) as
a single parameter and returns "ProfilerAction" value that
specifies the profiler action to perform at each step.
* **on_trace_ready** (*Callable*) -- callable that is called at
each step when "schedule" returns
"ProfilerAction.RECORD_AND_SAVE" during the profiling.
       * **record_shapes** (*bool*) -- save information about
         operator's input shapes.
       * **profile_memory** (*bool*) -- track tensor memory
         allocation/deallocation.
       * **with_stack** (*bool*) -- record source information (file and
         line number) for the ops.
       * **with_flops** (*bool*) -- use a formula to estimate the FLOPs
         (floating point operations) of specific operators (matrix
         multiplication and 2D convolution).
       * **with_modules** (*bool*) -- record module hierarchy
         (including function names) corresponding to the callstack of
         the op. e.g., if module A's forward calls module B's forward,
         which contains an aten::add op, then aten::add's module
         hierarchy is A.B. Note that this support exists, at the
         moment, only for TorchScript models and not eager mode models.
       * **experimental_config** (*_ExperimentalConfig*) -- A set of
         experimental options used for Kineto library features. Note,
         backward compatibility is not guaranteed.
* **use_cuda** (*bool*) --
Deprecated since version 1.8.1: use "activities" instead.
Note:
Use "schedule()" to generate the callable schedule. Non-default
schedules are useful when profiling long training jobs and allow
     the user to obtain multiple traces at different iterations of
the training process. The default schedule simply records all the
events continuously for the duration of the context manager.
Note:
Use "tensorboard_trace_handler()" to generate result files for T
ensorBoard:"on_trace_ready=torch.profiler.tensorboard_trace_hand
ler(dir_name)"After profiling, result files can be found in the
specified directory. Use the command:"tensorboard --logdir
dir_name"to see the results in TensorBoard. For more information,
see PyTorch Profiler TensorBoard Plugin
Note:
Enabling shape and stack tracing results in additional overhead.
When record_shapes=True is specified, profiler will temporarily
hold references to the tensors; that may further prevent certain
optimizations that depend on the reference count and introduce
extra tensor copies.
Examples:
with torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
torch.profiler.ProfilerActivity.CUDA,
]
) as p:
code_to_profile()
print(p.key_averages().table(
sort_by="self_cuda_time_total", row_limit=-1))
Using the profiler's "schedule", "on_trace_ready" and "step"
functions:
# Non-default profiler schedule allows user to turn profiler on and off
# on different iterations of the training loop;
# trace_handler is called every time a new trace becomes available
def trace_handler(prof):
print(prof.key_averages().table(
sort_by="self_cuda_time_total", row_limit=-1))
prof.export_chrome_trace("/tmp/test_trace_" + str(prof.step_num) + ".json")
with torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
torch.profiler.ProfilerActivity.CUDA,
],
# In this example with wait=1, warmup=1, active=2,
# profiler will skip the first step/iteration,
# start warming up on the second, record
             # the third and the fourth iterations,
# after which the trace will become available
# and on_trace_ready (when set) is called;
# the cycle repeats starting with the next step
schedule=torch.profiler.schedule(
wait=1,
warmup=1,
active=2),
on_trace_ready=trace_handler
# on_trace_ready=torch.profiler.tensorboard_trace_handler('./log')
# used when outputting for tensorboard
) as p:
for iter in range(N):
code_iteration_to_profile(iter)
# send a signal to the profiler that the next iteration has started
p.step()
step()
Signals the profiler that the next profiling step has started.
class torch.profiler.ProfilerAction(value)
Profiler actions that can be taken at the specified intervals
class torch.profiler.ProfilerActivity
Members:
CPU
CUDA
property name
torch.profiler.schedule(*, wait, warmup, active, repeat=0, skip_first=0)
   Returns a callable that can be used as the profiler "schedule"
argument. The profiler will skip the first "skip_first" steps, then
wait for "wait" steps, then do the warmup for the next "warmup"
steps, then do the active recording for the next "active" steps and
then repeat the cycle starting with "wait" steps. The optional
   number of cycles is specified with the "repeat" parameter; the zero
value means that the cycles will continue until the profiling is
finished.
Return type:
Callable
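   For illustration, a small sketch (the argument values are arbitrary)
   of how the returned callable maps step numbers to "ProfilerAction"
   values:
      sched = torch.profiler.schedule(
          skip_first=1, wait=1, warmup=1, active=2, repeat=1)
      for step in range(6):
          print(step, sched(step))
      # With these arguments: step 0 is skipped, step 1 waits, step 2
      # warms up, steps 3-4 are recorded (the last one returning
      # ProfilerAction.RECORD_AND_SAVE), and the single requested cycle
      # is then over.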
torch.profiler.tensorboard_trace_handler(dir_name, worker_name=None, use_gzip=False)
   Outputs tracing files to the directory "dir_name", which can then be
   passed directly to TensorBoard as its logdir. "worker_name" should
   be unique for each worker in a distributed scenario; it will be set
   to '[hostname]_[pid]' by default.
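   For illustration, a hedged sketch of wiring the handler into
   "profile()"; the log directory "./log/run0" and the placeholder
   "train_step()" function are arbitrary choices:
      import torch

      def train_step():
          # Placeholder per-iteration work.
          torch.mm(torch.randn(64, 64), torch.randn(64, 64))

      with torch.profiler.profile(
          schedule=torch.profiler.schedule(wait=1, warmup=1, active=3),
          on_trace_ready=torch.profiler.tensorboard_trace_handler("./log/run0"),
      ) as p:
          for _ in range(8):
              train_step()
              p.step()
      # Then inspect with: tensorboard --logdir ./log/run0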
Intel Instrumentation and Tracing Technology APIs
torch.profiler.itt.is_available()
Check if ITT feature is available or not
torch.profiler.itt.mark(msg)
Describe an instantaneous event that occurred at some point.
Parameters:
msg (str) -- ASCII message to associate with the event.
torch.profiler.itt.range_push(msg)
   Pushes a range onto a stack of nested range spans. Returns the
   zero-based depth of the range that is started.
   Parameters:
      msg (str) -- ASCII message to associate with the range
torch.profiler.itt.range_pop()
Pops a range off of a stack of nested range spans. Returns the
   zero-based depth of the range that is ended.
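For illustration, a minimal sketch pairing "range_push()" and
"range_pop()" around a region of interest; the range name and the
profiled matrix multiplication are arbitrary, and the annotations only
become visible when the program runs under an ITT-aware tool such as
Intel VTune:
   import torch

   if torch.profiler.itt.is_available():
       torch.profiler.itt.range_push("matmul_region")
       torch.mm(torch.randn(128, 128), torch.randn(128, 128))
       torch.profiler.itt.range_pop()
       torch.profiler.itt.mark("matmul_region finished")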
Distributed RPC Framework
The distributed RPC framework provides mechanisms for multi-machine
model training through a set of primitives to allow for remote
communication, and a higher-level API to automatically differentiate
models split across several machines.
Warning:
APIs in the RPC package are stable. There are multiple ongoing work
items to improve performance and error handling, which will ship in
future releases.
Warning:
CUDA support was introduced in PyTorch 1.9 and is still a beta
feature. Not all features of the RPC package are yet compatible with
CUDA support and thus their use is discouraged. These unsupported
features include: RRefs, JIT compatibility, dist autograd and dist
optimizer, and profiling. These shortcomings will be addressed in
future releases.
Note:
Please refer to PyTorch Distributed Overview for a brief
introduction to all features related to distributed training.
Basics
The distributed RPC framework makes it easy to run functions remotely,
supports referencing remote objects without copying the real data
around, and provides autograd and optimizer APIs to transparently run
backward and update parameters across RPC boundaries. These features
can be categorized into four sets of APIs.
Remote Procedure Call (RPC) supports running a function on the
specified destination worker with the given arguments and getting
the return value back or creating a reference to the return value.
There are three main RPC APIs: "rpc_sync()" (synchronous),
"rpc_async()" (asynchronous), and "remote()" (asynchronous and
returns a reference to the remote return value). Use the
synchronous API if the user code cannot proceed without the return
value. Otherwise, use the asynchronous API to get a future, and
wait on the future when the return value is needed on the caller.
The "remote()" API is useful when the requirement is to create
| https://pytorch.org/docs/stable/rpc.html | pytorch docs |
something remotely but never need to fetch it to the caller.
   Imagine the case where a driver process is setting up a parameter
server and a trainer. The driver can create an embedding table on
the parameter server and then share the reference to the embedding
table with the trainer, but itself will never use the embedding
table locally. In this case, "rpc_sync()" and "rpc_async()" are no
longer appropriate, as they always imply that the return value will
be returned to the caller immediately or in the future.
Remote Reference (RRef) serves as a distributed shared pointer
to a local or remote object. It can be shared with other workers
and reference counting will be handled transparently. Each RRef
only has one owner and the object only lives on that owner. Non-
owner workers holding RRefs can get copies of the object from the
owner by explicitly requesting it. This is useful when a worker
   needs to access some data object, but is itself neither the creator
   (the caller of "remote()") nor the owner of the object. The
distributed optimizer, as we will discuss below, is one example of
such use cases.
Distributed Autograd stitches together local autograd engines
on all the workers involved in the forward pass, and automatically
   reaches out to them during the backward pass to compute gradients.
This is especially helpful if the forward pass needs to span
multiple machines when conducting, e.g., distributed model parallel
training, parameter-server training, etc. With this feature, user
code no longer needs to worry about how to send gradients across
   RPC boundaries and in which order the local autograd engines should
   be launched, which can become quite complicated when there are
nested and inter-dependent RPC calls in the forward pass.
   Distributed Optimizer's constructor takes an "Optimizer()"
   (e.g., "SGD()", "Adagrad()", etc.) and a list of parameter RRefs,
   creates an "Optimizer()" instance on each distinct RRef owner, and
updates parameters accordingly when running "step()". When you have
distributed forward and backward passes, parameters and gradients
will be scattered across multiple workers, and hence it requires an
optimizer on each of the involved workers. Distributed Optimizer
wraps all those local optimizers into one, and provides a concise
constructor and "step()" API.
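For illustration, a minimal sketch (assuming an RPC group of two
workers has already been initialized with "init_rpc()" and this code
runs on "worker0") of how the distributed autograd context, the
backward pass, and the distributed optimizer fit together; the remotely
created tensors are placeholders rather than real model parameters:
   import torch
   import torch.distributed.autograd as dist_autograd
   import torch.distributed.rpc as rpc
   from torch import optim
   from torch.distributed.optim import DistributedOptimizer

   with dist_autograd.context() as context_id:
       # Forward pass: create values remotely on "worker1" and fetch them.
       rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
       rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
       loss = rref1.to_here() + rref2.to_here()

       # Distributed backward pass through this autograd context.
       dist_autograd.backward(context_id, [loss.sum()])

       # One local optimizer is created on each distinct RRef owner; a
       # single step() call drives all of them.
       dist_optim = DistributedOptimizer(optim.SGD, [rref1, rref2], lr=0.05)
       dist_optim.step(context_id)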
RPC
Before using RPC and distributed autograd primitives, initialization
must take place. To initialize the RPC framework we need to use
"init_rpc()", which initializes the RPC framework, the RRef framework,
and distributed autograd.
torch.distributed.rpc.init_rpc(name, backend=None, rank=-1, world_size=None, rpc_backend_options=None)
Initializes RPC primitives such as the local RPC agent and
distributed autograd, which immediately makes the current process
ready to send and receive RPCs.
Parameters:
       * **name** (*str*) -- a globally unique name of this node.
         (e.g., "Trainer3", "ParameterServer2", "Master", "Worker1")
         Names may only contain numbers, letters, underscores, colons,
         and/or dashes, and must be shorter than 128 characters.
* **backend** (*BackendType**, **optional*) -- The type of RPC
         backend implementation. The only currently supported value is
         "BackendType.TENSORPIPE" (the default). See Backends for more
information.
* **rank** (*int*) -- a globally unique id/rank of this node.
* **world_size** (*int*) -- The number of workers in the group.
* **rpc_backend_options** (*RpcBackendOptions**, **optional*) --
The options passed to the RpcAgent constructor. It must be an
agent-specific subclass of "RpcBackendOptions" and contains
agent-specific initialization configurations. By default, for
all agents, it sets the default timeout to 60 seconds and
performs the rendezvous with an underlying process group
initialized using "init_method = "env://"", meaning that
environment variables "MASTER_ADDR" and "MASTER_PORT" need to
be set properly. See Backends for more information and find
which options are available.
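For illustration, a hedged sketch of initializing RPC with explicit
backend options instead of the default "env://" rendezvous; the
address, port, timeout, and thread count are placeholder values, and a
matching "worker1" process is assumed to exist:
   import torch.distributed.rpc as rpc

   options = rpc.TensorPipeRpcBackendOptions(
       num_worker_threads=16,
       rpc_timeout=20,  # seconds
       init_method="tcp://localhost:29500",
   )
   rpc.init_rpc("worker0", rank=0, world_size=2, rpc_backend_options=options)
   # ... issue RPCs ...
   rpc.shutdown()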
The following APIs allow users to remotely execute functions as well
as create references (RRefs) to remote data objects. In these APIs,
when passing a "Tensor" as an argument or a return value, the
destination worker will try to create a "Tensor" with the same meta
(i.e., shape, stride, etc.). We intentionally disallow transmitting
CUDA tensors because it might crash if the device lists on source and
destination workers do not match. In such cases, applications can
always explicitly move the input tensors to CPU on the caller and move
them to the desired devices on the callee if necessary.
Warning:
TorchScript support in RPC is a prototype feature and subject to | https://pytorch.org/docs/stable/rpc.html | pytorch docs |
change. Since v1.5.0, "torch.distributed.rpc" supports calling
TorchScript functions as RPC target functions, and this will help
improve parallelism on the callee side as executing TorchScript
functions does not require GIL.
torch.distributed.rpc.rpc_sync(to, func, args=None, kwargs=None, timeout=-1.0)
Make a blocking RPC call to run function "func" on worker "to". RPC
messages are sent and received in parallel to execution of Python
code. This method is thread-safe.
Parameters:
       * **to** (*str* or *WorkerInfo* or *int*) --
name/rank/"WorkerInfo" of the destination worker.
* **func** (*Callable*) -- a callable function, such as Python
callables, builtin operators (e.g. "add()") and annotated
TorchScript functions.
* **args** (*tuple*) -- the argument tuple for the "func"
invocation.
* **kwargs** (*dict*) -- is a dictionary of keyword arguments
for the "func" invocation.
* **timeout** (*float**, **optional*) -- timeout in seconds to
use for this RPC. If the RPC does not complete in this amount
of time, an exception indicating it has timed out will be
raised. A value of 0 indicates an infinite timeout, i.e. a
timeout error will never be raised. If not provided, the
default value set during initialization or with
"_set_rpc_timeout" is used.
Returns:
Returns the result of running "func" with "args" and "kwargs".
Example::
Make sure that "MASTER_ADDR" and "MASTER_PORT" are set properly
on both workers. Refer to "init_process_group()" API for more
details. For example,
      export MASTER_ADDR=localhost
      export MASTER_PORT=5678
Then run the following code in two different processes:
>>> # On worker 0:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
      >>> ret = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(2), 3))
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
Below is an example of running a TorchScript function using RPC.
>>> # On both workers:
>>> @torch.jit.script
>>> def my_script_add(t1, t2):
>>> return torch.add(t1, t2)
>>> # On worker 0:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> ret = rpc.rpc_sync("worker1", my_script_add, args=(torch.ones(2), 3))
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
torch.distributed.rpc.rpc_async(to, func, args=None, kwargs=None, timeout=-1.0)
   Make a non-blocking RPC call to run function "func" on worker "to".
RPC messages are sent and received in parallel to execution of
Python code. This method is thread-safe. This method will
immediately return a "Future" that can be awaited on.
Parameters:
       * **to** (*str* or *WorkerInfo* or *int*) --
name/rank/"WorkerInfo" of the destination worker.
* **func** (*Callable*) -- a callable function, such as Python
callables, builtin operators (e.g. "add()") and annotated
TorchScript functions.
* **args** (*tuple*) -- the argument tuple for the "func"
invocation.
* **kwargs** (*dict*) -- is a dictionary of keyword arguments
for the "func" invocation.
* **timeout** (*float**, **optional*) -- timeout in seconds to
use for this RPC. If the RPC does not complete in this amount
of time, an exception indicating it has timed out will be
raised. A value of 0 indicates an infinite timeout, i.e. a
timeout error will never be raised. If not provided, the
default value set during initialization or with
"_set_rpc_timeout" is used.
Returns:
Returns a "Future" object that can be waited on. When completed,
the return value of "func" on "args" and "kwargs" can be
retrieved from the "Future" object.
Warning:
Using GPU tensors as arguments or return values of "func" is not
supported since we don't support sending GPU tensors over the
wire. You need to explicitly copy GPU tensors to CPU before using
them as arguments or return values of "func".
Warning:
The "rpc_async" API does not copy storages of argument tensors
until sending them over the wire, which could be done by a
different thread depending on the RPC backend type. The caller
should make sure that the contents of those tensors stay intact
until the returned "Future" completes.
Example::
      Make sure that "MASTER_ADDR" and "MASTER_PORT" are set properly
on both workers. Refer to "init_process_group()" API for more
details. For example,
      export MASTER_ADDR=localhost
      export MASTER_PORT=5678
Then run the following code in two different processes:
>>> # On worker 0:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> fut1 = rpc.rpc_async("worker1", torch.add, args=(torch.ones(2), 3))
>>> fut2 = rpc.rpc_async("worker1", min, args=(1, 2))
>>> result = fut1.wait() + fut2.wait()
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
Below is an example of running a TorchScript function using RPC.
>>> # On both workers:
>>> @torch.jit.script
>>> def my_script_add(t1, t2):
>>> return torch.add(t1, t2)
>>> # On worker 0:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> fut = rpc.rpc_async("worker1", my_script_add, args=(torch.ones(2), 3))
>>> ret = fut.wait()
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
torch.distributed.rpc.remote(to, func, args=None, kwargs=None, timeout=-1.0)
Make a remote call to run "func" on worker "to" and return an
"RRef" to the result value immediately. Worker "to" will be the
owner of the returned "RRef", and the worker calling "remote" is a
user. The owner manages the global reference count of its "RRef",
and the owner "RRef" is only destructed when globally there are no
living references to it.
Parameters:
       * **to** (*str* or *WorkerInfo* or *int*) --
         name/rank/"WorkerInfo" of the destination worker.
       * **func** (*Callable*) -- a callable function, such as Python
         callables, builtin operators (e.g. "add()") and annotated
         TorchScript functions.
       * **args** (*tuple*) -- the argument tuple for the "func"
         invocation.
       * **kwargs** (*dict*) -- is a dictionary of keyword arguments
         for the "func" invocation.
       * **timeout** (*float**, **optional*) -- timeout in seconds for
         this remote call. If the creation of this "RRef" on worker
         "to" is not successfully processed on this worker within this
         timeout, then the next time there is an attempt to use the
         RRef (such as "to_here()"), a timeout will be raised
         indicating this failure. A value of 0 indicates an infinite
         timeout, i.e. a timeout error will never be raised. If not
         provided, the default value set during initialization or with
         "_set_rpc_timeout" is used.
Returns:
A user "RRef" instance to the result value. Use the blocking API | https://pytorch.org/docs/stable/rpc.html | pytorch docs |
"torch.distributed.rpc.RRef.to_here()" to retrieve the result
value locally.
Warning:
The "remote" API does not copy storages of argument tensors until
sending them over the wire, which could be done by a different
thread depending on the RPC backend type. The caller should make
sure that the contents of those tensors stay intact until the
returned RRef is confirmed by the owner, which can be checked
using the "torch.distributed.rpc.RRef.confirmed_by_owner()" API.
Warning:
Errors such as timeouts for the "remote" API are handled on a
     best-effort basis. When remote calls initiated by "remote" fail,
     such as with a timeout error, the errors are handled and set on
     the resulting RRef on an asynchronous basis.
If the RRef has not been used by the application before this
handling (such as "to_here" or fork call), then future uses of
the "RRef" will appropriately raise errors. However, it is
possible that the user application will use the "RRef" before the
errors are handled. In this case, errors may not be raised as
they have not yet been handled.
Example:
      Make sure that "MASTER_ADDR" and "MASTER_PORT" are set properly
      on both workers. Refer to "init_process_group()" API for more
      details. For example,
export MASTER_ADDR=localhost
export MASTER_PORT=5678
Then run the following code in two different processes:
>>> # On worker 0:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
>>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
>>> x = rref1.to_here() + rref2.to_here()
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
Below is an example of running a TorchScript function using RPC.
>>> # On both workers:
>>> @torch.jit.script
>>> def my_script_add(t1, t2):
>>> return torch.add(t1, t2)
>>> # On worker 0:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> rref = rpc.remote("worker1", my_script_add, args=(torch.ones(2), 3))
>>> rref.to_here()
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
torch.distributed.rpc.get_worker_info(worker_name=None)
Get "WorkerInfo" of a given worker name. Use this "WorkerInfo" to
avoid passing an expensive string on every invocation.
Parameters:
      worker_name (str) -- the string name of a worker. If "None",
      return the id of the current worker. (default "None")
Returns:
"WorkerInfo" instance for the given "worker_name" or
"WorkerInfo" of the current worker if "worker_name" is "None".
torch.distributed.rpc.shutdown(graceful=True, timeout=0)
Perform a shutdown of the RPC agent, and then destroy the RPC
agent. This stops the local agent from accepting outstanding
requests, and shuts down the RPC framework by terminating all RPC
threads. If "graceful=True", this will block until all local and
remote RPC processes reach this method and wait for all outstanding
work to complete. Otherwise, if "graceful=False", this is a local
shutdown, and it does not wait for other RPC processes to reach
this method.
Warning:
For "Future" objects returned by "rpc_async()", "future.wait()"
should not be called after "shutdown()".
Parameters:
      graceful (bool) -- Whether to do a graceful shutdown or
      not. If True, this will 1) wait until there are no pending system
messages for "UserRRefs" and delete them; 2) block until all
local and remote RPC processes have reached this method and wait
for all outstanding work to complete.
Example::
Make sure that "MASTER_ADDR" and "MASTER_PORT" are set properly
on both workers. Refer to "init_process_group()" API for more
details. For example,
      export MASTER_ADDR=localhost
      export MASTER_PORT=5678
Then run the following code in two different processes:
>>> # On worker 0:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> # do some work
>>> result = rpc.rpc_sync("worker1", torch.add, args=(torch.ones(1), 1))
>>> # ready to shutdown
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
      >>> rpc.shutdown()