>>> FSDP.set_state_dict_type(
>>> model,
>>> StateDictType.SHARDED_STATE_DICT,
>>> ShardedStateDictConfig(offload_to_cpu=True),
>>> )
>>> checkpoint = model.state_dict()
Parameters:
* **module** (*torch.nn.Module*) -- Root module.
* **state_dict_type** (*StateDictType*) -- the desired
"state_dict_type" to set.
* **state_dict_config** (*Optional**[**StateDictConfig**]*)
-- the configuration for the target "state_dict_type".
Return type:
*Tuple*[*StateDictType*, *StateDictConfig*]
static shard_full_optim_state_dict(full_optim_state_dict, model, optim_input=None, optim=None)
Shards the full optimizer state dict "full_optim_state_dict" by
remapping the state to flattened parameters instead of
unflattened parameters and restricting to only this rank's part
of the optimizer state. The first argument should be the return
value of "full_optim_state_dict()".
Example:
>>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
>>> model, optim = ...
>>> full_osd = FSDP.full_optim_state_dict(model, optim)
>>> torch.save(full_osd, PATH)
>>> # Define new model with possibly different world size
>>> new_model, new_optim = ...
>>> full_osd = torch.load(PATH)
>>> sharded_osd = FSDP.shard_full_optim_state_dict(full_osd, new_model)
>>> new_optim.load_state_dict(sharded_osd)
Note:
Both "shard_full_optim_state_dict()" and
"scatter_full_optim_state_dict()" may be used to get the
sharded optimizer state dict to load. Assuming that the full
optimizer state dict resides in CPU memory, the former
requires each rank to have the full dict in CPU memory, where
each rank individually shards the dict without any
communication, while the latter requires only rank 0 to have
the full dict in CPU memory, where rank 0 moves each shard to
GPU memory (for NCCL) and communicates it to ranks
appropriately. Hence, the former has higher aggregate CPU
memory cost, while the latter has higher communication cost.
Parameters:
* **full_optim_state_dict** (*Dict**[**str**, **Any**]*) --
Optimizer state dict corresponding to the unflattened
parameters and holding the full non-sharded optimizer
state.
* **model** (*torch.nn.Module*) -- Root module (which may or
may not be a "FullyShardedDataParallel" instance) whose
parameters correspond to the optimizer state in
"full_optim_state_dict".
* **optim_input**
(*Optional**[**Union**[**List**[**Dict**[**str**,
**Any**]**]**, **Iterable**[**torch.nn.Parameter**]**]**]*)
-- Input passed into the optimizer representing either a
"list" of parameter groups or an iterable of parameters; if
"None", then this method assumes the input was
"model.parameters()". This argument is deprecated, and
there is no need to pass it in anymore. (Default: "None")
* **optim** (*Optional**[**torch.optim.Optimizer**]*) --
Optimizer that will load the state dict returned by this
method. This is the preferred argument to use over
"optim_input". (Default: "None")
Returns:
The full optimizer state dict now remapped to flattened
parameters instead of unflattened parameters and restricted
to only include this rank's part of the optimizer state.
Return type:
Dict[str, Any]
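    For comparison with the note above, a minimal sketch (not part of the
    documented example) of the scatter-based loading flow might look like
    the following; "rank" is assumed to hold the global rank, and the model
    and optimizer construction are placeholders:
        >>> # Sketch only: rank 0 loads the full dict; non-zero ranks pass None
        >>> new_model, new_optim = ...
        >>> full_osd = torch.load(PATH) if rank == 0 else None
        >>> sharded_osd = FSDP.scatter_full_optim_state_dict(full_osd, new_model)
        >>> new_optim.load_state_dict(sharded_osd)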
static sharded_optim_state_dict(model, optim, group=None)
The API is similar to "full_optim_state_dict()" but this API
chunks all non-zero-dimension states to "ShardedTensor" to save
memory. This API should only be used when the model "state_dict"
is derived with the context manager "with
state_dict_type(SHARDED_STATE_DICT):".
For the detailed usage, refer to "full_optim_state_dict()".
Warning:
The returned state dict contains "ShardedTensor" and cannot be
directly used by the regular "optim.load_state_dict".
Return type:
*Dict*[str, *Any*]
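    A brief usage sketch, assuming "model" is an FSDP-wrapped module,
    "optim" is its optimizer, and "StateDictType" has been imported from
    "torch.distributed.fsdp":
        >>> with FSDP.state_dict_type(model, StateDictType.SHARDED_STATE_DICT):
        >>>     model_state = model.state_dict()
        >>>     optim_state = FSDP.sharded_optim_state_dict(model, optim)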
static state_dict_type(module, state_dict_type, state_dict_config=None)
A context manager to set the "state_dict_type" of all the
descendant FSDP modules of the target module. This context
manager has the same functions as "set_state_dict_type()". Read
the document of "set_state_dict_type()" for the detail.
Example:
>>> model = DDP(FSDP(...))
>>> with FSDP.state_dict_type(
>>> model,
>>> StateDictType.SHARDED_STATE_DICT,
>>> ):
>>> checkpoint = model.state_dict()
Parameters:
* **module** (*torch.nn.Module*) -- Root module.
* **state_dict_type** (*StateDictType*) -- the desired
"state_dict_type" to set.
* **state_dict_config** (*Optional**[**StateDictConfig**]*)
-- the configuration for the target "state_dict_type".
Return type:
*Generator*
static summon_full_params(module, recurse=True, writeback=True, rank0_only=False, offload_to_cpu=False, with_grads=False)
A context manager to expose full params for FSDP instances. Can
be useful *after* forward/backward for a model to get the params
for additional processing or checking. It can take a non-FSDP
module and will summon full params for all contained FSDP
modules as well as their children, depending on the "recurse"
argument.
Note:
This can be used on inner FSDPs.
Note:
This can *not* be used within a forward or backward pass. Nor
can forward and backward be started from within this context.
Note:
Parameters will revert to their local shards after the context
manager exits; the storage behavior is the same as forward.
Note:
The full parameters can be modified, but only the portion
corresponding to the local param shard will persist after the
context manager exits (unless "writeback=False", in which case
changes will be discarded). In the case where FSDP does not
shard the parameters, currently only when "world_size == 1",
or "NO_SHARD" config, the modification is persisted regardless
of "writeback".
Note:
This method works on modules which are not FSDP themselves but
may contain multiple independent FSDP units. In that case, the
given arguments will apply to all contained FSDP units.
Warning:
Note that "rank0_only=True" in conjunction with
"writeback=True" is not currently supported and will raise an
error. This is because model parameter shapes would be
different across ranks within the context, and writing to them
can lead to inconsistency across ranks when the context is
exited.
Warning:
Note that "offload_to_cpu" and "rank0_only=False" will result
in full parameters being redundantly copied to CPU memory for
GPUs that reside on the same machine, which may incur the risk
of CPU OOM. It is recommended to use "offload_to_cpu" with
"rank0_only=True".
Parameters:
* **recurse** (*bool**, **Optional*) -- recursively summon
all params for nested FSDP instances (default: True).
* **writeback** (*bool**, **Optional*) -- if "False",
modifications to params are discarded after the context
manager exits; disabling this can be slightly more
efficient (default: True)
* **rank0_only** (*bool**, **Optional*) -- if "True", full
parameters are materialized on only global rank 0. This
means that within the context, only rank 0 will have full
parameters and the other ranks will have sharded
parameters. Note that setting "rank0_only=True" with
"writeback=True" is not supported, as model parameter
shapes will be different across ranks within the context,
and writing to them can lead to inconsistency across ranks
when the context is exited.
* **offload_to_cpu** (*bool**, **Optional*) -- If "True",
full parameters are offloaded to CPU. Note that this
offloading currently only occurs if the parameter is
sharded (which is only not the case for world_size = 1 or
"NO_SHARD" config). It is recommended to use
"offload_to_cpu" with "rank0_only=True" to avoid redundant
copies of model parameters being offloaded to the same CPU
memory.
* **with_grads** (*bool**, **Optional*) -- If "True",
gradients are also unsharded with the parameters.
Currently, this is only supported when passing
"use_orig_params=True" to the FSDP constructor and
"offload_to_cpu=False" to this method. (Default: "False")
Return type:
*Generator*
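    A short usage sketch (assuming "model" is an FSDP-wrapped module);
    full parameters are gathered on rank 0 only, and any in-context
    modifications are discarded because "writeback=False":
        >>> with FSDP.summon_full_params(
        >>>     model, writeback=False, rank0_only=True, offload_to_cpu=True
        >>> ):
        >>>     num_params = sum(p.numel() for p in model.parameters())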
class torch.distributed.fsdp.BackwardPrefetch(value)
This configures explicit backward prefetching, which can improve
throughput but may slightly increase peak memory usage.
For NCCL backend, any collectives, even if issued in different
streams, contend for the same per-device NCCL stream, which is why
the relative order in which the collectives are issued matters for
overlapping. The different backward prefetching settings correspond
to different orderings.
"BACKWARD_PRE": This prefetches the next set of parameters before
the current set of parameters' gradient computation. This
improves backward pass throughput by overlapping communication
(next all-gather) and computation (current gradient computation).
"BACKWARD_POST": This prefetches the next set of parameters after
the current set of parameters' gradient computation. This may
improve backward pass throughput by overlapping communication
(current reduce-scatter) and computation (next gradient
computation). Specifically, the next all-gather is reordered to
be before the current reduce-scatter.
Note:
If the increase in peak memory usage from prefetching is an
issue, you may consider passing "limit_all_gathers=True" to the
FSDP constructor, which may help reduce peak memory usage in some
cases.
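    A minimal sketch of selecting a prefetching mode at construction time;
    "my_module" is a placeholder for your model:
        >>> from torch.distributed.fsdp import BackwardPrefetch
        >>> model = FSDP(
        >>>     my_module,
        >>>     backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
        >>>     limit_all_gathers=True,  # optional mitigation mentioned in the note
        >>> )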
class torch.distributed.fsdp.ShardingStrategy(value)
This specifies the sharding strategy to be used for distributed
training by "FullyShardedDataParallel".
"FULL_SHARD": Parameters, gradients, and optimizer states are
sharded. For the parameters, this strategy unshards (via all-
gather) before the forward, reshards after the forward, unshards
before the backward computation, and reshards after the backward
computation. For gradients, it synchronizes and shards them (via
reduce-scatter) after the backward computation. The sharded
optimizer states are updated locally per rank.
"SHARD_GRAD_OP": Gradients and optimizer states are sharded
during computation, and additionally, parameters are sharded
outside computation. For the parameters, this strategy unshards
before the forward, does not reshard them after the forward, and
only reshards them after the backward computation. The sharded
optimizer states are updated locally per rank. Inside
"no_sync()", the parameters are not resharded after the backward
computation.
"NO_SHARD": Parameters, gradients, and optimizer states are not
sharded but instead replicated across ranks similar to PyTorch's
"DistributedDataParallel" API. For gradients, this strategy
synchronizes them (via all-reduce) after the backward
computation. The unsharded optimizer states are updated locally
per rank.
"HYBRID_SHARD": Apply "FULL_SHARD" within a node, and replicate
parameters across
nodes. This results in reduced communication volume as
expensive all-gathers and reduce-scatters are only done within
a node, which can be more performant for medium-sized models.
"_HYBRID_SHARD_ZERO2": Apply "SHARD_GRAD_OP" within a node, and
replicate parameters across
nodes. This is like "HYBRID_SHARD", except this may provide
even higher throughput since the unsharded parameters are not
freed after the forward pass, saving the all-gathers in the
pre-backward.
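    A minimal sketch of choosing a strategy at construction time;
    "my_module" is a placeholder for your model:
        >>> from torch.distributed.fsdp import ShardingStrategy
        >>> model = FSDP(my_module, sharding_strategy=ShardingStrategy.SHARD_GRAD_OP)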
class torch.distributed.fsdp.MixedPrecision(param_dtype=None, reduce_dtype=None, buffer_dtype=None, keep_low_precision_grads=False, cast_forward_inputs=False, cast_root_forward_inputs=True)
This configures FSDP-native mixed precision training.
Variables:
* **param_dtype** (*torch.dtype*) -- This specifies the dtype
  for model parameters, inputs (when "cast_forward_inputs" or
  "cast_root_forward_inputs" is set to "True"), and therefore
the dtype for computation. However, outside the forward and
backward passes, parameters are in full precision. Model
checkpointing always happens in full precision.
* **reduce_dtype** (*torch.dtype*) -- This specifies the dtype
for gradient reduction, which is permitted to differ from
"param_dtype".
* **buffer_dtype** (*torch.dtype*) -- This specifies the dtype
for buffers. FSDP does not shard buffers, casts them to
"buffer_dtype" in the first forward pass, and keeps them in
that dtype thereafter. Model checkpointing always happens in
full precision.
* **keep_low_precision_grads** (*bool*) -- This specifies
whether to upcast gradients back to the full parameter
precision after the backward pass. This may be set to "False"
to save memory if using custom optimizers that can perform the
optimizer step in "reduce_dtype". (Default: "False")
* **cast_forward_inputs** (*bool*) -- Cast floating point
tensors in the forward arguments and keyword arguments to
"param_dtype". (Default: "False")
* **cast_root_forward_inputs** (*bool*) -- Cast floating point
tensors in the forward arguments and keyword arguments to
"param_dtype" for the root FSDP instance. It takes precedence
over "cast_forward_inputs" for the root FSDP instance.
(Default: "True")
Note:
This API is experimental and subject to change.
Note:
Only floating point tensors are cast to their specified dtypes.
Note:
In "summon_full_params", parameters are forced to full precision,
but buffers are not.
Note:
"state_dict" checkpoints parameters and buffers in full
precision. For buffers, this is only supported for
"StateDictType.FULL_STATE_DICT".
Note:
Each low precision dtype must be specified explicitly. For
example, "MixedPrecision(reduce_dtype=torch.float16)" only
specifies the reduction dtype to be low precision, and FSDP will
not cast parameters or buffers.
Note:
If a "reduce_dtype" is not specified, then gradient reduction
happens in "param_dtype" if specified or the original parameter
dtype otherwise.
Note:
If the user passes a model with "BatchNorm" modules and an
"auto_wrap_policy" to the FSDP constructor, then FSDP will
disable mixed precision for "BatchNorm" modules by wrapping them
separately in their own FSDP instance with mixed precision
disabled. This is due to some missing low precision "BatchNorm"
kernels. If the user does not use an "auto_wrap_policy", then the
user must take care to not use mixed precision for FSDP instances
containing "BatchNorm" modules.
Note:
"MixedPrecision" has "cast_root_forward_inputs=True" and
"cast_forward_inputs=False" by default. For the root FSDP
instance, its "cast_root_forward_inputs" takes precedence over
its "cast_forward_inputs". For non-root FSDP instances, their
"cast_root_forward_inputs" values are ignored. The default
setting is sufficient for the typical case where each FSDP
instance has the same "MixedPrecision" configuration and only
needs to cast inputs to the "param_dtype" at the beginning of the
model's forward pass.
Note:
For nested FSDP instances with different "MixedPrecision"
configurations, we recommend setting individual
"cast_forward_inputs" values to configure casting inputs or not
before each instance's forward. In such a case, since the casts
happen before each FSDP instance's forward, a parent FSDP
instance should have its non-FSDP submodules run before its FSDP
submodules to avoid the activation dtype being changed due to a
different "MixedPrecision" configuration.
Example:
>>> model = nn.Sequential(nn.Linear(3, 3), nn.Linear(3, 3))
>>> model[1] = FSDP(
>>> model[1],
>>> mixed_precision=MixedPrecision(param_dtype=torch.float16, cast_forward_inputs=True),
>>> )
>>> model = FSDP(
>>> model,
>>> mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, cast_forward_inputs=True),
>>> )
The above shows a working example. On the other hand, if
"model[1]" were replaced with "model[0]", meaning that the
submodule using different "MixedPrecision" ran its forward first,
then "model[1]" would incorrectly see "float16" activations
instead of "bfloat16" ones.
class torch.distributed.fsdp.CPUOffload(offload_params=False)
This configures CPU offloading.
Variables:
offload_params (bool) -- This specifies whether to offload
parameters to CPU when not involved in computation. If enabled,
this implicitly offloads gradients to CPU as well. This is to
support the optimizer step, which requires parameters and
gradients to be on the same device.
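    A minimal sketch of enabling parameter offloading; "my_module" is a
    placeholder for your model:
        >>> from torch.distributed.fsdp import CPUOffload
        >>> model = FSDP(my_module, cpu_offload=CPUOffload(offload_params=True))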
torch.utils.cpp_extension
torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs)
Creates a "setuptools.Extension" for C++.
Convenience method that creates a "setuptools.Extension" with the
bare minimum (but often sufficient) arguments to build a C++
extension.
All arguments are forwarded to the "setuptools.Extension"
constructor.
-[ Example ]-
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension
setup(
... name='extension',
... ext_modules=[
... CppExtension(
... name='extension',
... sources=['extension.cpp'],
... extra_compile_args=['-g']),
... ],
... cmdclass={
... 'build_ext': BuildExtension
... })
torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs)
Creates a "setuptools.Extension" for CUDA/C++.
Convenience method that creates a "setuptools.Extension" with the
bare minimum (but often sufficient) arguments to build a CUDA/C++
extension. This includes the CUDA include path, library path and
runtime library.
All arguments are forwarded to the "setuptools.Extension"
constructor.
-[ Example ]-
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
setup(
... name='cuda_extension',
... ext_modules=[
... CUDAExtension(
... name='cuda_extension',
... sources=['extension.cpp', 'extension_kernel.cu'],
... extra_compile_args={'cxx': ['-g'],
... 'nvcc': ['-O2']})
... ],
... cmdclass={
... 'build_ext': BuildExtension
... })
Compute capabilities:
By default the extension will be compiled to run on all archs of
the cards visible during the building process of the extension,
plus PTX. If down the road a new card is installed the extension
may need to be recompiled. If a visible card has a compute
capability (CC) that's newer than the newest version for which your
nvcc can build fully-compiled binaries, PyTorch will make nvcc fall
back to building kernels with the newest version of PTX your nvcc
does support (see below for details on PTX).
You can override the default behavior using TORCH_CUDA_ARCH_LIST
to explicitly specify which CCs you want the extension to support:
TORCH_CUDA_ARCH_LIST="6.1 8.6" python build_my_extension.py
TORCH_CUDA_ARCH_LIST="5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX" python
build_my_extension.py
The +PTX option causes extension kernel binaries to include PTX
instructions for the specified CC. PTX is an intermediate
representation that allows kernels to runtime-compile for any CC >=
the specified CC (for example, 8.6+PTX generates PTX that can
runtime-compile for any GPU with CC >= 8.6). This improves your
binary's forward compatibility. However, relying on older PTX to
provide forward compat by runtime-compiling for newer CCs can
modestly reduce performance on those newer CCs. If you know exact
CC(s) of the GPUs you want to target, you're always better off
specifying them individually. For example, if you want your
extension to run on 8.0 and 8.6, "8.0+PTX" would work functionally
because it includes PTX that can runtime-compile for 8.6, but "8.0
8.6" would be better.
Note that while it's possible to include all supported archs, the
more archs get included the slower the building process will be, as
it will build a separate kernel image for each arch.
Note that CUDA-11.5 nvcc will hit internal compiler error while
parsing torch/extension.h on Windows. To work around the issue, move
the Python binding logic to a pure C++ file.
Example use:
    #include <ATen/ATen.h>
    at::Tensor SigmoidAlphaBlendForwardCuda(....)
Instead of:
    #include <torch/extension.h>
    torch::Tensor SigmoidAlphaBlendForwardCuda(...)
Currently open issue for nvcc bug:
https://github.com/pytorch/pytorch/issues/69460
Complete workaround code example:
https://github.com/facebookresearch/pytorch3d/commit/cb170ac024a949f1f9614ffe6af1c38d972f7d48
Relocatable device code linking:
If you want to reference device symbols across compilation units
(across object files), the object files need to be built with
relocatable device code (-rdc=true or -dc). An exception to this
rule is "dynamic parallelism" (nested kernel launches) which is
not used a lot anymore. Relocatable device code is less optimized
so it needs to be used only on object files that need it. Using
-dlto (Device Link Time Optimization) at the device code
compilation step and dlink step help reduce the potential perf
degradation of -rdc. Note that it needs to be used at both steps
to be useful.
If you have rdc objects you need to have an extra -dlink
(device linking) step before the CPU symbol linking step. There is
also a case where -dlink is used without -rdc: when an
extension is linked against a static lib containing rdc-compiled
objects like the NVSHMEM
library.
Note: Ninja is required to build a CUDA Extension with RDC linking.
-[ Example ]-
CUDAExtension(
... name='cuda_extension',
... sources=['extension.cpp', 'extension_kernel.cu'],
... dlink=True,
... dlink_libraries=["dlink_lib"],
... extra_compile_args={'cxx': ['-g'],
... 'nvcc': ['-O2', '-rdc=true']})
torch.utils.cpp_extension.BuildExtension(*args, **kwargs)
A custom "setuptools" build extension.
This "setuptools.build_ext" subclass takes care of passing the
minimum required compiler flags (e.g. "-std=c++17") as well as
mixed C++/CUDA compilation (and support for CUDA files in general).
When using "BuildExtension", it is allowed to supply a dictionary
for "extra_compile_args" (rather than the usual list) that maps
from languages ("cxx" or "nvcc") to a list of additional compiler
flags to supply to the compiler. This makes it possible to supply
different flags to the C++ and CUDA compiler during mixed
compilation.
"use_ninja" (bool): If "use_ninja" is "True" (default), then we
attempt to build using the Ninja backend. Ninja greatly speeds up
compilation compared to the standard "setuptools.build_ext".
It falls back to the standard distutils backend if Ninja is not
available.
Note:
By default, the Ninja backend uses #CPUS + 2 workers to build the
extension. This may use up too many resources on some systems.
One can control the number of workers by setting the *MAX_JOBS*
environment variable to a non-negative number.
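As a sketch of these knobs (the "with_options" helper and the exact
behavior of *MAX_JOBS* may differ between PyTorch versions, so treat
this as illustrative rather than definitive):
    import os
    from setuptools import setup
    from torch.utils.cpp_extension import BuildExtension, CppExtension

    os.environ.setdefault('MAX_JOBS', '4')  # cap the number of Ninja workers

    setup(
        name='extension',
        ext_modules=[
            CppExtension(
                name='extension',
                sources=['extension.cpp'],
                extra_compile_args={'cxx': ['-g']}),  # dict form described above
        ],
        # Disable Ninja and fall back to the plain setuptools backend.
        cmdclass={'build_ext': BuildExtension.with_options(use_ninja=False)})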
torch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, is_standalone=False, keep_intermediates=True)
Loads a PyTorch C++ extension just-in-time (JIT).
To load an extension, a Ninja build file is emitted, which is used
to compile the given sources into a dynamic library. This library
is subsequently loaded into the current Python process as a module
and returned from this function, ready for use.
By default, the directory to which the build file is emitted and
the resulting library compiled to is
"<tmp>/torch_extensions/<name>", where "<tmp>" is the temporary
folder on the current platform and "<name>" the name of the
extension. This location can be overridden in two ways. First, if
the "TORCH_EXTENSIONS_DIR" environment variable is set, it replaces
"<tmp>/torch_extensions" and all extensions will be compiled into
subfolders of this directory. Second, if the "build_directory"
argument to this function is supplied, it overrides the entire
path, i.e. the library will be compiled into that folder directly.
To compile the sources, the default system compiler ("c++") is
used, which can be overridden by setting the "CXX" environment
variable. To pass additional arguments to the compilation process,
"extra_cflags" or "extra_ldflags" can be provided. For example, to
compile your extension with optimizations, pass
"extra_cflags=['-O3']". You can also use "extra_cflags" to pass
further include directories.
CUDA support with mixed compilation is provided. Simply pass CUDA
source files (".cu" or ".cuh") along with other sources. Such files
will be detected and compiled with nvcc rather than the C++
compiler. This includes passing the CUDA lib64 directory as a
library directory, and linking "cudart". You can pass additional
flags to nvcc via "extra_cuda_cflags", just like with
"extra_cflags" for C++. Various heuristics for finding the CUDA
install directory are used, which usually work fine. If not,
setting the "CUDA_HOME" environment variable is the safest option.
Parameters:
* **name** -- The name of the extension to build. This MUST be
the same as the name of the pybind11 module!
* **sources** (*Union**[**str**, **List**[**str**]**]*) -- A
list of relative or absolute paths to C++ source files.
* **extra_cflags** -- optional list of compiler flags to forward
to the build.
* **extra_cuda_cflags** -- optional list of compiler flags to
forward to nvcc when building CUDA sources.
* **extra_ldflags** -- optional list of linker flags to forward
to the build.
* **extra_include_paths** -- optional list of include
directories to forward to the build.
* **build_directory** -- optional path to use as build
workspace.
* **verbose** -- If "True", turns on verbose logging of load
steps.
* **with_cuda** (*Optional**[**bool**]*) -- Determines whether
CUDA headers and libraries are added to the build. If set to
"None" (default), this value is automatically determined based
on the existence of ".cu" or ".cuh" in "sources". Set it to
"True" to force CUDA headers and libraries to be included.
* **is_python_module** -- If "True" (default), imports the
produced shared library as a Python module. If "False",
behavior depends on "is_standalone".
* **is_standalone** -- If "False" (default) loads the
constructed extension into the process as a plain dynamic
library. If "True", build a standalone executable.
Returns:
Returns the loaded PyTorch extension as a Python module.
If "is_python_module" is "False" and "is_standalone" is "False":
Returns nothing. (The shared library is loaded into the
process as a side effect.)
If "is_standalone" is "True":
Return the path to the executable. (On Windows,
TORCH_LIB_PATH is added to the PATH environment variable as a
side effect.)
Return type:
If "is_python_module" is "True"
-[ Example ]-
from torch.utils.cpp_extension import load
module = load(
... name='extension',
... sources=['extension.cpp', 'extension_kernel.cu'],
... extra_cflags=['-O2'],
... verbose=True)
torch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, with_pytorch_error_handling=True, keep_intermediates=True)
Loads a PyTorch C++ extension just-in-time (JIT) from string
sources.
This function behaves exactly like "load()", but takes its sources
as strings rather than filenames. These strings are stored to files
in the build directory, after which the behavior of "load_inline()"
is identical to "load()".
See the tests for good examples of using this function.
Sources may omit two required parts of a typical non-inline C++
extension: the necessary header includes, as well as the (pybind11)
binding code. More precisely, strings passed to "cpp_sources" are
first concatenated into a single ".cpp" file. This file is then
prepended with "#include <torch/extension.h>".
Furthermore, if the "functions" argument is supplied, bindings will
be automatically generated for each function specified. "functions"
can either be a list of function names, or a dictionary mapping
from function names to docstrings. If a list is given, the name of
each function is used as its docstring.
The sources in "cuda_sources" are concatenated into a separate
".cu" file and prepended with "torch/types.h", "cuda.h" and
"cuda_runtime.h" includes. The ".cpp" and ".cu" files are compiled
separately, but ultimately linked into a single library. Note that
no bindings are generated for functions in "cuda_sources" per se.
To bind to a CUDA kernel, you must create a C++ function that calls
it, and either declare or define this C++ function in one of the
"cpp_sources" (and include its name in "functions").
See "load()" for a description of arguments omitted below.
Parameters:
* **cpp_sources** -- A string, or list of strings, containing
C++ source code.
* **cuda_sources** -- A string, or list of strings, containing
CUDA source code.
* **functions** -- A list of function names for which to
generate function bindings. If a dictionary is given, it
should map function names to docstrings (which are otherwise
just the function names).
* **with_cuda** -- Determines whether CUDA headers and libraries
are added to the build. If set to "None" (default), this value
is automatically determined based on whether "cuda_sources" is
provided. Set it to "True" to force CUDA headers and libraries
to be included.
* **with_pytorch_error_handling** -- Determines whether pytorch
error and warning macros are handled by pytorch instead of
pybind. To do this, each function "foo" is called via an
intermediary "_safe_foo" function. This redirection might
cause issues in obscure cases of cpp. This flag should be set
to "False" when this redirect causes issues.
-[ Example ]-
from torch.utils.cpp_extension import load_inline
source = """
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
return x.sin() + y.sin();
}
"""
module = load_inline(name='inline_extension',
... cpp_sources=[source],
... functions=['sin_add'])
Note:
By default, the Ninja backend uses #CPUS + 2 workers to build the
extension. This may use up too many resources on some systems.
One can control the number of workers by setting the *MAX_JOBS*
environment variable to a non-negative number.
torch.utils.cpp_extension.include_paths(cuda=False)
Get the include paths required to build a C++ or CUDA extension.
Parameters:
cuda (bool) -- If True, includes CUDA-specific include
paths.
Returns:
A list of include path strings.
Return type:
List[str]
torch.utils.cpp_extension.get_compiler_abi_compatibility_and_version(compiler)
Determine if the given compiler is ABI-compatible with PyTorch
alongside its version.
Parameters:
compiler (str) -- The compiler executable name to check
(e.g. "g++"). Must be executable in a shell process.
Returns:
A tuple that contains a boolean that defines if the compiler is
(likely) ABI-incompatible with PyTorch, followed by a
TorchVersion string that contains the compiler version
separated by dots.
Return type:
Tuple[bool, TorchVersion]
torch.utils.cpp_extension.verify_ninja_availability()
Raises "RuntimeError" if the ninja build system is not available on
the system; does nothing otherwise.
torch.utils.cpp_extension.is_ninja_available()
Returns "True" if the ninja build system is available on the
system, "False" otherwise.
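A small sketch combining the two helpers above:
    from torch.utils.cpp_extension import is_ninja_available, verify_ninja_availability

    if is_ninja_available():
        print('Ninja found; JIT builds will use the Ninja backend.')
    else:
        # Raises RuntimeError with an explanatory message.
        verify_ninja_availability()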
Installing TorchDynamo
This section describes how to install TorchDynamo. TorchDynamo is
included in the nightly binaries of PyTorch. For more information, see
Getting Started.
Requirements
You must have the following prerequisites to use TorchDynamo:
* A Linux or macOS environment
* Python 3.8 (recommended). Python 3.7 through 3.10 are supported and
  tested. Make sure to have a development version of Python installed
  locally as well.
GPU/CUDA Requirements
To use GPU back ends, and in particular Triton, make sure that the
CUDA that you have installed locally matches the PyTorch version you
are running.
The following command installs GPU PyTorch + TorchDynamo along with
GPU TorchDynamo dependencies (for CUDA 11.7):
pip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117
CPU requirements
There are no additional requirements for CPU TorchDynamo. CPU
TorchDynamo is included in the nightly versions of PyTorch. To
install, run the following command:
pip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu
Install from Local Source
Alternatively, you can build PyTorch from source, which has
TorchDynamo included.
To install GPU TorchDynamo dependencies, run "make triton" in the
PyTorch repo root directory.
Verify Installation
If you built PyTorch from source, then you can run the following
commands (from the PyTorch repo root directory) to check that
TorchDynamo is installed correctly:
cd tools/dynamo
python verify_dynamo.py
If you do not have the PyTorch source locally, you can alternatively
copy the script ("tools/dynamo/verify_dynamo.py") from the PyTorch
repository and run it locally.
Docker Installation
We also provide all the required dependencies in the PyTorch nightly
binaries which you can download with the following command:
docker pull ghcr.io/pytorch/pytorch-nightly
And for ad hoc experiments just make sure that your container has
access to all your GPUs:
docker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash
TorchDynamo Overview
TorchDynamo is a Python-level JIT compiler designed to make
unmodified PyTorch programs faster. TorchDynamo hooks into the frame
evaluation API in CPython (PEP 523) to dynamically modify Python
bytecode right before it is executed. It rewrites Python bytecode in
order to extract sequences of PyTorch operations into an FX Graph
which is then just-in-time compiled with a customizable backend. It
creates this FX Graph through bytecode analysis and is designed to mix
Python execution with compiled backends to get the best of both worlds
- usability and performance.
TorchDynamo makes it easy to experiment with different compiler
backends to make PyTorch code faster with a single line decorator
"torch._dynamo.optimize()"
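A minimal sketch of that decorator usage (assuming a nightly build where
the "inductor" backend is available; the function and tensor sizes are
placeholders):
    import torch
    import torch._dynamo as dynamo

    @dynamo.optimize("inductor")
    def fn(x, y):
        return torch.sin(x) + torch.cos(y)

    out = fn(torch.randn(8), torch.randn(8))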
TorchInductor is one of the backends supported by TorchDynamo; it
compiles the FX Graph into Triton for GPUs or C++/OpenMP for CPUs. We
have a training performance dashboard that provides performance comparison for
different training backends. You can read more in the TorchInductor
post on PyTorch dev-discuss.
See also:
TorchDynamo deep-dive video
dev-discuss topics
Guards Overview
From a UX perspective, TorchDynamo is very easy to use. The user
invokes "torchdynamo.optimize" as an annotation:
@torchdynamo.optimize(my_compiler)
def fn_foo(bar):
Where a complete example looks like this:
from typing import List
import torch
import torchdynamo
def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
print("my_compiler() called with FX graph:")
gm.graph.print_tabular()
return gm.forward # return a python callable
@torchdynamo.optimize(my_compiler)
def toy_example(a, b):
x = a / (torch.abs(a) + 1)
if b.sum() < 0:
b = b * -1
return x * b
for _ in range(100):
toy_example(torch.randn(10), torch.randn(10))
This allows TorchDynamo to capture the interpreted Python frames, grab
any and all relevant information, and speed things up wherever it can.
The speedup comes from a few places, and can be rather dependent on
the backend (my_compiler in the example above) provided, but the one
speedup that is important in this section is caching. Caching
itself is not a direct speedup but a critical enablement that prevents
recompilation. We dig a hole with dynamo, and caching allows us to get
out. It enables us to hold perf neutrality while then enabling
backends - the true source of our speedups.
With even a pass-through no-op backend provided:
def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):
return gm.forward
We can see TorchDynamo speeding up Python execution even on regular
Python, not just PyTorch.
Caching and Guards Overview
TorchDynamo operates through caching transformed (by TorchDynamo) user
bytecode. When TorchDynamo receives a frame for evaluation, it checks
if the objects referenced in the frame have changed in certain
ways, and if not, TorchDynamo reads the previously transformed user
bytecode to evaluate it. In this section, we will focus on how we can
identify whether or not the objects referenced in the frame have
changed. This is a critical piece of functionality in TorchDynamo,
because it drives the entire invalidation lifecycle. This
functionality is called guards.
At a very high level, the flow can be summarized like this:
1. TorchDynamo receives a Python frame.
2. It converts the frame (1), passing it through instruction
   translation.
3. For the objects captured in (2), TorchDynamo creates tracking
   objects that are:
   * tracked on an output graph, which is an internal specialization
     of a torch.fx.Tracer
   * guards
4. TorchDynamo processes the guard objects created in (3), turning
   them into a generated Python function, check_fn, associated with
   a piece of code.
5. The check_fn is evaluated whenever we encounter this code a
   subsequent time - if a check_fn passes and evaluates to True,
   TorchDynamo identifies the code in the cache and the code
   encountered here as the same, and it can be safely used. If it
   fails and evaluates to False, TorchDynamo identifies the code in
   the cache as not valid, and it can be thrown out in favor of a new
   entry, through recompilation or a graph break.
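The cache-lookup side of this flow can be pictured with the following
simplified Python pseudocode; the real implementation lives in C in
"_eval_frame.c", so this is only an illustration of the control flow:
    def lookup(cache_entries, f_locals):
        for entry in cache_entries:
            # entry.check_fn was generated from the guards at compile time
            if entry.check_fn(f_locals):
                return entry.code   # cache hit: reuse the transformed bytecode
        return None                 # cache miss: recompile and add a new entry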
Python Frame Evaluation and PEP 523
The functionality of TorchDynamo is based on PEP 523.
TorchDynamo installs a frame evaluation function on Python by using
_PyInterpreterState_SetEvalFrameFunc. TorchDynamo has a hook where
Python can hand control back to us during evaluation.
The function we have installed is "convert_frame" or
"convert_frame_assert" in the "nopython=True" case, but glossing over
that nuance for now, let's take a look at "convert_frame_assert", as
"convert_frame" proxies to it.
We can find it on line 20 of convert_frame.py, with a signature as
follows:
def convert_frame_assert(compiler_fn: Callable, one_graph=True):
This function wraps the entry point of where Python invokes
TorchDynamo with a frame:
def _convert_frame_assert(frame: types.FrameType, cache_size: int):
Here is what this function does:
1. Checks if it has seen this "code" (see: f_code here) before and
   exits early if it did.
2. Checks if the code is an unsupported case.
3. Checks if the "cache_size" (second arg above) crosses the limit
   defined in the config, "cache_size_limit". If it has, the function
   drops the frame and logs warnings. This helps to avoid constant
   recompilation of a frame as it generally means that the frame is
   hot in an unexpected way and caching it produces needless overhead,
   as it is likely to get evicted the next time it is encountered.
4. Passes the frame, alongside a function that creates an
   "InstructionTranslator" through bytecode transformation, via
   "transform_code_object". A few crucial things happen under the hood
   here:
   1. New code is produced through "transform_code_object".
   2. An FX tracer named "output" is produced through
      "InstructionTranslator".
      This can be a bit confusing, as "InstructionTranslator" is not
      an *fx* tracer, but it is stored in a variable named "tracer",
      and its output *is* an *fx* tracer.
   3. The function produces guards and stores them on "output" above.
   4. The function produces "output_instructions" and stores them on
      "output" above.
   5. The function maps the newly produced transformed code to the
      initial code it read off the frame. This mapping is worth
      remembering, we will refer to it much later on below where we
      cover guard failures.
5. Using the transformed code from 4.1 and the guards from 4.3, the
   function produces a GuardedCode.
Now that we have learned about frame evaluation, let's review
"InstructionTranslator", and see how it turns the frame we handed it
over into TorchDynamo internal types.
InstructionTranslator
InstructionTranslator does a lot! We won't cover the details of
everything it does, but most importantly for this document, it
produces a mapping of "symbolic_locals" which maintains a mapping from
the frame's "f_locals" to TorchDynamo internal Variable objects (more
on these in a moment). "symbolic_locals" is filled via traversing the
frame's locals:
self.symbolic_locals = collections.OrderedDict(
(k, VariableBuilder(self, LocalSource(k))(f_locals[k]))
for k in vars
if k in f_locals
)
The important component here is the invocation of a call into
"VariableBuilder". "VariableBuilder"âs call implementation proxies
into a function called "_wrap", which in turn both constructs
instances of "VariableTracker" and calls "make_guards" on them. More
on that later.
This mapping, in turn, is critical as each Variable has associated
guards, which are then passed to "self.output", the instance of
"OutputGraph", an fx tracer, mentioned in 4.2 of the section above. If
you recall, this "OutputGraph", stored in a variable called "output"
is where our guards are stored before being passed on to become
"GuardedCode"
How does "InstructionTranslator" do this? At the heart of it, there is
a loop that is pumped, which drives a function "step".
"step" is just that - a single processing step, taking exactly one
instruction and doing something with it.
Note:
These are real instructions processed by TorchDynamo's
"transform_code_object", and it is pretty cool.
Note:
This section purposely skips the details of dis.get_instructions.
For the example above, here is a snippet of a what a few
"Instruction"'s may look like:
Instruction(opcode=124, opname='LOAD_FAST', arg=0, argval='b', offset=32, starts_line=8, is_jump_target=True, target=None)
Instruction(opcode=100, opname='LOAD_CONST', arg=3, argval=-1, offset=34, starts_line=None, is_jump_target=False, target=None)
Instruction(opcode=20, opname='BINARY_MULTIPLY', arg=None, argval=None, offset=36, starts_line=None, is_jump_target=False, target=None)
This is the core functionality of this function. Take a look at the
"opname", and then take a look at this little snippet from inside
"step";
if not hasattr(self, inst.opname):
unimplemented(f"missing: {inst.opname}")
getattr(self, inst.opname)(inst)
As we can see, the function checks if the current class, the
"InstructionTranslator" has an attribute set matching the operator
name (for example, "LOAD_CONST"). If it does, the function invokes it,
passing the whole instruction object in. If it does not, the function
drops the frame as unimplemented.
For the "LOAD_CONST" example, we can see that we do indeed support it,
with a relatively straightforward definition:
def LOAD_CONST(self, inst):
self.push(ConstantVariable(value=inst.argval))
We can see that this function creates a new instance of the class
"ConstantVariable" , with a value, in our example case, -1, and then
pushes it onto the stack.
There are dozens of such methods - see "symbolic_convert.py" for all
of them. Generally, we implement as many matching methods to Python
bytecode instructions as possible.
Across both the logic downstream of "step" and the logic from invoking
"VariableBuilder" - we now have a lot of "VariableTracker"s and of
course, we've spoken about creating guards quite a bit. Let's dig into
what Variables are, and get a little closer to understanding guards.
Variables
A "ConstantVariable" is an instance of "VariableTracker".
"VariableTracker" represents a tracked Python local or stack value.
When it comes to representing an object inside TorchDynamo, a
"VariableTracker" does exactly what it says - it tracks a given
variable. It is an extremely flexible class, but there are a few
points to keep in mind:
* It manages the "guard" relationship around the underlying object
  through:
  * "make_guard"
  * "replace_guards"
  * "add_guard(s)"
  * "propagate" - "propagate(*vars: List[List["VariableTracker"]])" -
    Perhaps the most important of all, in that it combines guards from
    all the provided "VariableTracker" instances passed in. It visits
    the guards and combines the guards from these onto itself.
* It acts as a proxy on behalf of the underlying object, implementing
  methods for the rest of TorchDynamo to get information about the
  tracked object:
  * "call_method"
  * "call_function"
  * "python_type"
  * "as_proxy"
  * "is/as_python_proxy"
* It stores the variable "source" of type "Source", from
  "torchdynamo/source.py". This source type is a relatively self-
  contained class that helps us organize and bookkeep where the
  original source came from, and helps provide convenience methods for
  things like getting the name, and importantly for us, producing
  guards.
And this class ("VariableTracker") is built around subclassing,
somewhere between a full Abstract Base Class and fully fleshed out
class - it leaves many methods raising "NotImplementedError" - with
reliance on subclasses. See "torchdynamo/variables/" for all
subclasses to fulfill contracts and custom behaviors.
Knowing what we know now, we can see an example of how an instruction
from "dis", "BUILD_TUPLE":
"BUILD_TUPLE(count)" Creates a tuple consuming count items from the
stack, and pushes the resulting tuple onto the stack.
In our case, our signature will be a little different due to the way
we create "Instruction" objects, but the gist of it will be the same.
Instead of passing in "count", we pass in an object with a little
extra bookkeeping, and of course, we deal with turning regular old
python objects into TorchDynamo notions:
def BUILD_TUPLE(self, inst):
items = self.popn(inst.argval)
options = VariableTracker.propagate(items)
self.push(TupleVariable(items, **options))
Here is what this code does:
1. The function reads "argval", which in this case is analogous to
   "counts" in the pydoc for the equivalent instruction.
2. The function "popn"s the items; in this case, the signature is "def
   popn(self, n: int) -> List[TensorVariable]:", which hints at an
   underlying contract - we are returning "TensorVariables". If we
   take a closer look at "symbolic_convert.py" and
   "InstructionTranslatorBase"/"InstructionTranslator", we see that the
   only thing pushed onto and popped from our stack are
   "VariableTracker"s.
3. The function calls "VariableTracker.propagate". This takes the
   guards from every single item popped off the stack in 2, and
   recursively traverses it and combines all the guards into
   "options":
       return {
           "guards": guards,
       }
4. The function then makes a new instance of a "VariableTracker",
   "TupleVariable", out of the "items" and "options". This then allows
   us to install all the appropriate guards from the "items" that make
   up the new "TupleVariable".
Note:
Where did the first guards come from? Propagation is a good
technique, but we need something created before it can be
propagated. "VariableBuilder" calls "make_guards" as it creates
"VariableTracker" instances, from "f_locals". This in turn calls
into the "source", to have it create guards.
After all this, bytecode translation is done and we are one step
closer to producing "GuardedCode". We now understand how locals become
"VariableTracker"s, how instructions are handled, and where guards are
called on for creation. Before we can go into seeing how code and
guards are combined into a GuardedCode object, we need to dig a little
bit into those "make_guard" and "source.make_guard" calls above. We
can then understand, what was going on when we made guards alongside,
and on, "VariableTracker" instances.
Making Guards
Guards are just Python objects, of the class "Guard". Let's look at
them in more detail.
Looking at the definition of the dataclass (and therefore, ctor
signature), we see that it has a name, a source, and a create
function.
@dataclasses.dataclass
class Guard:
name: str
source: GuardSource
create_fn: Callable
The name should be the name of the variable.
The source here is an enum indicating what kind of source the guard
belongs to.
Note:
Not to be confused with "Source" and the other types in "source.py",
as stored on "VariableTracker".
"create_fn" provides the main functionality to transition from a
simple dataclass to actually producing valid Python code to be invoked
for knowing whether or not things have changed in between invocations,
and whether we can safely read from the code cache or not.
The most common code paths for getting an instance of a guard are
through "make_guards" on "VariableTracker".
"make_guards"->source.make_guard->return Guard(self.name(),
self.guard_source(), fn)
Or, in a concrete example:
...
elif istype(value, range):
guards = self.make_guards(GuardBuilder.EQUALS_MATCH)
return RangeVariable(value=value, guards=guards)
Since "source" was set at the construction time of this
"VariableTracker", all that was needed here was to provide the "fn",
"GuardBuilder.EQUALS_MATCH" to the "create_fn" field.
This "create_fn" must be a method on "GuardBuilder". The reason for
this becomes apparent in our next step. Once we have all the guards
created for a frame, we move on to "CheckFunctionManager" and
"compile_check_fn".
Before the "convert_frame" function can produce a "GuardedCode", it
needs to run the "CheckFunctionManager", with all the guards, to
produce a "check_fn" which will then, in turn get passed in alongside
the code into "GuardedCode". This is the same "check_fn" that we store
in our cache entry, and the same one we run to know whether or not to
retrieve the code stored alongside. For reference, here is that code:
static CacheEntry *create_cache_entry(CacheEntry *next,
                                      PyObject *guarded_code) {
  CacheEntry *e = (CacheEntry *)malloc(sizeof(CacheEntry));
DEBUG_NULL_CHECK(e);
e->check_fn = PyObject_GetAttrString(guarded_code, "check_fn");
NULL_CHECK(e->check_fn);
e->code = (PyCodeObject *)PyObject_GetAttrString(guarded_code, "code");
NULL_CHECK(e->code);
e->next = next;
return e;
}
We now know how a "check_fn" function is used, and who makes it, and
what it is composed of, but what we do not yet know is how. How does a
list of "Guard" objects become a function we can run later on?
First, we iterate these guards:
for guard in sorted(guards or [], key=Guard.sort_key):
if not config.guard_nn_modules and guard.is_nn_module():
continue
guard.create(local_builder, global_builder)
Calling "guard.create" runs that "create_fn" we set on the "Guard"
class above (don't confuse it with the "check_fn" we are working on
producing, the names are similar, so it can get a little confusing).
In our example above, our "create_fn" is "GuardBuilder.EQUALS_MATCH".
So we are now invoking it, passing in the "self", the guard itself,
in.
The signature is: "def EQUALS_MATCH(self, guard: Guard):"
And internally to that function, we can use the "name" on the guard to
get back our original object, querying it for data and type
information, which in turn gets us to the most important bit:
appending code.
At its simplest, "EQUALS_MATCH" appends just one line of code:
"self.code.append(f"{ref} == {val!r}")". Where "ref" is the name of
the variable, and "val" is the value. It might produce code like this:
y == 2
This is a basic example. But if we append a few other kinds of
"GuardBuilder" functions and then combine them all with "and" in
between each statement (as we do), we might get something like this:
___guarded_code.valid and ___check_type_id(y, 94367738391392) and y == 2 and ___check_tensors(x)
Here is what this code performs:
* A check for ".valid"
* A type ID check
* A value check
* A tensor check
This becomes the heart of the code of our "check_fn", which in turn is
evaluated the next time we encounter this code. It will then
check:
1. Is this code still valid?
2. If (1), does "y" still have a type of "94367738391392"?
3. If (2), is "y" still 2?
4. If (3), let's check if tensor "x" changed in some specific ways.
If all of these are still true, then we can use the code cached
alongside this "check_fn".
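To make the shape of such a generated function concrete, here is a
hand-written, illustrative stand-in; the real helpers (such as
"___check_type_id" and "___check_tensors") are injected by TorchDynamo
and implemented in C, so the ones below are simplified placeholders:
    import torch

    def _check_type_id(obj, expected_id):   # placeholder for ___check_type_id
        return id(type(obj)) == expected_id

    def _check_tensors(t):                  # placeholder for ___check_tensors
        return isinstance(t, torch.Tensor)

    def check_fn(y, x, expected_type_id=id(int)):
        # mirrors: type-id check and value check and tensor check
        return _check_type_id(y, expected_type_id) and y == 2 and _check_tensors(x)

    assert check_fn(2, torch.randn(3))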
Note:
For a deeper dive for how and where this happens you can read
"static PyCodeObject lookup(CacheEntry e, PyObject *f_locals) {"
of "_eval_frame.c".
If not, then we can move on to recompiling the code anew, and storing
that in the cache alongside this code, and a whole new "check_fn",
again to be checked on yet another subsequent frame.
There are lots of other such functions on "GuardBuilder" which get
coalesced into, at times massive, strings which then get evaluated as
Python code and stored into "check_fn". The example above illustrates
a simple case. To understand this functionality better, read the
other functions on "GuardBuilder", or better yet, dump the "code"
variable in "compile_check_fn" to see what is getting produced,
especially on larger, real models.
Summary
In this section, we have reviewed:
* The role of ".valid" and invalidation around weak references (and
  potentially soon to be NN Module invalidations).
* How the C++ side of guard functions ("___check_type_id",
  "___check_tensors", etc.) operate.
* What happens when guards fail.
* What happens if we produce invalid guard code.
We covered how user-provided code wrapped in a TorchDynamo context
goes on to get traced and tracked internally, organized into
"VariableTracker"s, "Source"s and subsequently "Guard"s, and how those
"Guard"s in turn guide cache entry selection and invalidation when
handling Python code.
DDP Communication Hooks
DDP communication hook is a generic interface to control how to
communicate gradients across workers by overriding the vanilla
allreduce in DistributedDataParallel. A few built-in communication
hooks are provided, and users can easily apply any of these hooks to
optimize communication. Besides, the hook interface can also support
user-defined communication strategies for more advanced use cases.
How to Use a Communication Hook?
To use a communication hook, the user just needs to let the DDP model
register the hook before the training loop as below.
"torch.nn.parallel.DistributedDataParallel.register_comm_hook()"
What Does a Communication Hook Operate On?
A communication hook provides a flexible way to allreduce gradients.
Therefore, it mainly operates on the gradients on each replica before
allreduce, which are bucketized to increase the overlap between | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
communication and computation. Particularly,
"torch.distributed.GradBucket" represents a bucket of gradient tensors
to be allreduced.
class torch.distributed.GradBucket
This class mainly passes a flattened gradient tensor (returned by
"buffer()") to DDP communication hook. This tensor can be further
decomposed into a list of per-parameter tensors within this bucket
(returned by "get_per_parameter_tensors()") to apply layer-wise
operations.
torch.distributed.GradBucket.index(self: torch._C._distributed_c10d.GradBucket) -> int
Warning:
Since the buckets are rebuilt after the first iteration, you should
not rely on the indices observed at the beginning of training.
Returns:
The index of a bucket that stores gradients of a few contiguous
layers. All the gradients are bucketized.
torch.distributed.GradBucket.buffer(self: torch._C._distributed_c10d.GradBucket) -> torch.Tensor
Returns:
A flattened 1D "torch.Tensor" buffer, which can be further | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
decomposed into a list of per-parameter tensors within this
bucket.
torch.distributed.GradBucket.gradients(self: torch._C._distributed_c10d.GradBucket) -> List[torch.Tensor]
Returns:
A list of "torch.Tensor". Each tensor in the list corresponds to
a gradient.
torch.distributed.GradBucket.is_last(self: torch._C._distributed_c10d.GradBucket) -> bool
Returns:
Whether this bucket is the last bucket to allreduce in an
iteration. This also means that this bucket corresponds to the
first few layers in the forward pass.
torch.distributed.GradBucket.set_buffer(self: torch._C._distributed_c10d.GradBucket, buffer: torch.Tensor) -> None
Replaces the tensor in the bucket with the input tensor buffer.
torch.distributed.GradBucket.parameters(self: torch._C._distributed_c10d.GradBucket) -> List[torch.Tensor]
Returns:
A list of "torch.Tensor". Each tensor in the list corresponds to
a model parameter.
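To illustrate what a hook sees, here is a hedged sketch of a custom
hook that averages the flattened bucket with an asynchronous
allreduce, mirroring the built-in allreduce behavior. The
"my_allreduce_hook" name is just for illustration, and it assumes a
backend whose async work objects support "get_future()" (e.g. NCCL or
Gloo):

import torch.distributed as dist

def my_allreduce_hook(process_group, bucket):
    # A hook receives a GradBucket and must return a Future[torch.Tensor]
    # that resolves to the reduced, flattened gradient buffer.
    group = process_group if process_group is not None else dist.group.WORLD
    tensor = bucket.buffer().div_(group.size())  # pre-divide so the sum becomes a mean
    fut = dist.all_reduce(tensor, group=group, async_op=True).get_future()
    return fut.then(lambda f: f.value()[0])

# ddp_model.register_comm_hook(state=None, hook=my_allreduce_hook)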
Default Communication Hooks | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
Default communication hooks are simple stateless hooks, so the
input state in "register_comm_hook" is either a process group or
"None". The input "bucket" is a "torch.distributed.GradBucket" object.
torch.distributed.algorithms.ddp_comm_hooks.default_hooks.allreduce_hook(process_group, bucket)
This DDP communication hook just calls "allreduce" using
"GradBucket" tensors. Once gradient tensors are aggregated across
all workers, its "then" callback takes the mean and returns the
result. If a user registers this hook, DDP results are expected to be
the same as in the case where no hook was registered. Hence, this won't
change the behavior of DDP, and users can use this as a reference or
modify this hook to log useful information or for other purposes,
without affecting DDP behavior.
Example::
>>> ddp_model.register_comm_hook(process_group, allreduce_hook)
Return type:
Future[Tensor] | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook(process_group, bucket)
This DDP communication hook implements a simple gradient
compression approach that casts "GradBucket" tensor to half-
precision floating-point format ("torch.float16") and then divides
it by the process group size. It allreduces those "float16"
gradient tensors. Once compressed gradient tensors are allreduced,
the chained callback "decompress" casts it back to the input data
type (such as "float32").
Example::
>>> ddp_model.register_comm_hook(process_group, fp16_compress_hook)
Return type:
Future[Tensor]
torch.distributed.algorithms.ddp_comm_hooks.default_hooks.bf16_compress_hook(process_group, bucket)
Warning: This API is experimental, and it requires NCCL version
later than 2.9.6.
This DDP communication hook implements a simple gradient
compression approach that casts "GradBucket" tensor to half- | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
precision Brain floating point format ("torch.bfloat16") and then
divides it by the process group size. It allreduces those
"bfloat16" gradient tensors. Once compressed gradient tensors are
allreduced, the chained callback "decompress" casts it back to the
input data type (such as "float32").
Example::
>>> ddp_model.register_comm_hook(process_group, bf16_compress_hook)
Return type:
Future[Tensor]
Additionally, communication hook wrappers are provided so that the
compression used by "fp16_compress_hook()" or "bf16_compress_hook()"
can be combined with other communication hooks.
torch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_wrapper(hook)
This wrapper casts the input gradient tensor of a given DDP
communication hook to half-precision floating point format
("torch.float16"), and casts the resulting tensor of the given hook
back to the input data type, such as "float32".
Therefore, "fp16_compress_hook" is equivalent to | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
"fp16_compress_wrapper(allreduce_hook)".
Example::
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)
>>> ddp_model.register_comm_hook(state, fp16_compress_wrapper(powerSGD_hook))
Return type:
Callable[[Any, GradBucket], Future[Tensor]]
torch.distributed.algorithms.ddp_comm_hooks.default_hooks.bf16_compress_wrapper(hook)
Warning: This API is experimental, and it requires NCCL version
later than 2.9.6.
This wrapper casts the input gradient tensor of a given DDP
communication hook to half-precision Brain floating point format
(https://en.wikipedia.org/wiki/Bfloat16_floating-point_format)
("torch.bfloat16"), and casts the resulting tensor of the given
hook back to the input data type, such as "float32".
Therefore, "bf16_compress_hook" is equivalent to
"bf16_compress_wrapper(allreduce_hook)".
Example:: | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)
>>> ddp_model.register_comm_hook(state, bf16_compress_wrapper(powerSGD_hook))
Return type:
Callable[[Any, GradBucket], Future[Tensor]]
PowerSGD Communication Hook
PowerSGD (Vogels et al., NeurIPS 2019) is a gradient compression
algorithm, which can provide very high compression rates and
accelerate bandwidth-bound distributed training. This algorithm needs
to maintain both some hyperparameters and the internal state.
Therefore, PowerSGD communication hook is a stateful hook, and the
user needs to provide a state object defined as below.
PowerSGD State | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
class torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState(process_group, matrix_approximation_rank=1, start_powerSGD_iter=1000, min_compression_rate=2, use_error_feedback=True, warm_start=True, orthogonalization_epsilon=0, random_seed=0, compression_stats_logging_frequency=10000, batch_tensors_with_same_shape=False)
Stores both the algorithm's hyperparameters and the internal state
for all the gradients during the training. Particularly,
"matrix_approximation_rank" and "start_powerSGD_iter" are the main
hyperparameters that should be tuned by the user. For performance,
we suggest to keep binary hyperparameters "use_error_feedback" and
"warm_start" on.
"matrix_approximation_rank" controls the size of compressed low-
rank tensors, which determines the compression rate. The lower
the rank, the stronger the compression. 1.1. If "matrix_approximation_rank" is too low, the full
| https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
model quality will need more training steps to reach or will
never reach and yield loss in accuracy.
1.2. The increase of "matrix_approximation_rank" can
substantially increase the computation costs of the
compression, and the accuracy may not be further improved
beyond a certain "matrix_approximation_rank" threshold.
To tune "matrix_approximation_rank", we suggest to start from 1 and
increase by factors of 2 (like an exponential grid search, 1, 2, 4,
...), until a satisfactory accuracy is reached. Typically only a
small value 1-4 is used. For some NLP tasks (as shown in Appendix D
of the original paper), this value has been increased to 32.
"start_powerSGD_iter" defers PowerSGD compression until step
"start_powerSGD_iter", and vanilla allreduce runs prior to step
"start_powerSGD_iter". This hybrid scheme of **vanilla allreduce
PowerSGD** can effectively improve the accuracy, even a
| https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
relatively small "matrix_approximation_rank" is used. This is
because the beginning of the training phase is usually very
sensitive to inaccurate gradients, and compressing gradients too
early may make the training quickly take a suboptimal
trajectory, which can result in an irrecoverable impact on the
accuracy.
To tune "start_powerSGD_iter", we suggest to start with 10% of
total training steps, and increase it until a satisfactory accuracy
is reached. If there is a warm-up stage in the training,
"start_powerSGD_iter" typically should be no less than the number
of warm-up steps.
"min_compression_rate" is the minimum compression rate required
when a layer is compressed. Due to the computation overheads
incurred by the compression, a tensor is worth compressing only
if there can be sufficient saving in bandwidth, where "(num_rows +
num_cols) * matrix_approximation_rank * min_compression_rate <
| https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
num_rows * num_cols". If the specified compression rate
threshold cannot be satisfied, the tensor will be directly
allreduced without compression.
Compression statistics are logged every
"compression_stats_logging_frequency" iterations once PowerSGD
compression starts.
"orthogonalization_epsilon" can be a very small value (e.g.,
1e-8) added to every normalized matrix column in
orthogonalization step, to prevent div-by-zero error if any
column has all 0s. If this can already be prevented (e.g., by
batch normalization), an epsilon of 0 is recommended for
accuracy.
"batch_tensors_with_same_shape" controls whether to compress and
decompress tensors with same shape in a batched operation to
achieve higher parallelism. Note that you should also increase
the bucket size (i.e., "bucket_cap_mb" arg in DDP constructor)
to make more same-shaped tensors appear in the same bucket,
| https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
however this may reduce the overlap between computation and
communication, and increase the memory footprint due to stacking
the tensors of the same shape. Set to "True" if the compression
/ decompression computation is a bottleneck.
Warning:
If error feedback or warm-up is enabled, the minimum value of
"start_powerSGD_iter" allowed in DDP is 2. This is because there
is another internal optimization that rebuilds buckets at
iteration 1 in DDP, and this can conflict with any tensor
memorized before the rebuild process.
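Putting a few of these knobs together, here is a hedged sketch: first
the "min_compression_rate" inequality worked out for a hypothetical
1024 x 1024 gradient matrix, then a state construction that pairs
"batch_tensors_with_same_shape" with a larger "bucket_cap_mb" in the
DDP constructor. The "model" and "rank_id" variables and the chosen
numbers are assumptions for illustration, and a process group is
assumed to be initialized.

from torch.nn.parallel import DistributedDataParallel
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# Worked check: is a 1024 x 1024 layer worth compressing at rank 4, min rate 2?
num_rows, num_cols, rank, min_rate = 1024, 1024, 4, 2
compressed_numel = (num_rows + num_cols) * rank            # 8192 elements for P and Q
print(compressed_numel * min_rate < num_rows * num_cols)   # True: 16384 < 1048576

# Hypothetical setup: larger buckets pack more same-shaped tensors together,
# which pairs with batched compression/decompression.
ddp_model = DistributedDataParallel(model, device_ids=[rank_id], bucket_cap_mb=100)
state = powerSGD.PowerSGDState(
    process_group=None,               # use the default process group
    matrix_approximation_rank=2,
    start_powerSGD_iter=100,          # e.g. roughly 10% of total training steps
    min_compression_rate=2,
    batch_tensors_with_same_shape=True,
)
ddp_model.register_comm_hook(state, powerSGD.powerSGD_hook)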
PowerSGD Hooks
Warning:
PowerSGD typically requires extra memory of the same size as the
model's gradients to enable error feedback, which can compensate for
biased compressed communication and improve accuracy.
Warning:
PowerSGD hooks may conflict with Apex automatic mixed precision
package. Please use PyTorch native automatic mixed precision package
instead. | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook(state, bucket)
This DDP communication hook implements PowerSGD gradient
compression algorithm described in the paper. Once gradient tensors
are aggregated across all workers, this hook applies compression as
follows:
1. Views the input flattened 1D gradient tensor as a list of per-
parameter tensors, and divides all the tensors into two groups:
1.1. The tensors that should be compressed before allreduce,
because the compression can give enough saving in bandwidth.
1.2. The rest of the tensors will be directly allreduced without
compression, including all the vector tensors (for biases).
2. Handles uncompressed tensors:
2.1. Allocates contiguous memory for those uncompressed
tensors, and allreduces all the uncompressed tensors as a
batch, without compression;
2.2. Copies the individual uncompressed tensors from the
| https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
contiguous memory back to the input tensor.
3. Handles the tensors that should be compressed by PowerSGD
compression:
3.1. For each tensor M, creates two low-rank tensors P and Q
for decomposing M, such that M = PQ^T, where Q is initialized
from a standard normal distribution and orthogonalized;
3.2. Computes each P in Ps, which is equal to MQ;
3.3. Allreduces Ps as a batch;
3.4. Orthogonalizes each P in Ps;
3.5. Computes each Q in Qs, which is approximately equal to
M^TP;
3.6. Allreduces Qs as a batch;
3.7. Computes each M among all the compressed tensors, which
is approximately equal to PQ^T.
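The low-rank math in steps 3.1-3.7 can be sanity-checked on a single
matrix. The following is an illustrative sketch only (no
communication, no error feedback), not the hook's internal code:

import torch

torch.manual_seed(0)
M = torch.randn(64, 32)       # one per-parameter gradient viewed as a matrix
r = 4                         # matrix_approximation_rank
Q = torch.randn(32, r)
Q, _ = torch.linalg.qr(Q)     # initialize Q and orthogonalize it (step 3.1)
P = M @ Q                     # step 3.2: P = MQ (this is what gets allreduced)
P, _ = torch.linalg.qr(P)     # step 3.4: orthogonalize P
Q = M.t() @ P                 # step 3.5: Q ~= M^T P (also allreduced)
M_approx = P @ Q.t()          # step 3.7: M ~= P Q^T
print(torch.linalg.norm(M - M_approx) / torch.linalg.norm(M))  # relative error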
Note that this communication hook enforces vanilla allreduce for
the first "state.start_powerSGD_iter" iterations. This not only
gives the user more control over the tradeoff between speedup and
accuracy, but also helps abstract away some complexity of the | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
internal optimization of DDP for future communication hook
developers.
Parameters:
* state (PowerSGDState) -- State information to configure
the compression rate and support error feedback, warm start,
etc. To tune the compression configs, mainly need to tune
"matrix_approximation_rank", "start_powerSGD_iter" and
"min_compression_rate".
* **bucket** (*dist.GradBucket*) -- Bucket that stores a 1D
flattened gradient tensor that batches multiple per-variable
tensors. Note that since DDP comm hook only supports single
process single device mode, only exactly one tensor is stored
in this bucket.
Returns:
Future handler of the communication, which updates the gradients
in place.
Return type:
Future[Tensor]
Example::
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1,
start_powerSGD_iter=10, min_compression_rate=0.5) | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
>>> ddp_model.register_comm_hook(state, powerSGD_hook)
torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.batched_powerSGD_hook(state, bucket)
This DDP communication hook implements a simplified PowerSGD
gradient compression algorithm described in the paper. This variant
does not compress the gradients layer by layer, but instead
compresses the flattened input tensor that batches all the
gradients. Therefore, it is faster than "powerSGD_hook()", but
usually results in a much lower accuracy, unless
"matrix_approximation_rank" is 1.
Warning:
Increasing "matrix_approximation_rank" here may not necessarily
increase the accuracy, because batching per-parameter tensors
without column/row alignment can destroy low-rank structure.
Therefore, the user should always consider "powerSGD_hook()"
first, and only consider this variant when a satisfactory
accuracy can be achieved when "matrix_approximation_rank" is 1.
| https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
Once gradient tensors are aggregated across all workers, this hook
applies compression as follows:
1. Views the input flattened 1D gradient tensor as a square-shaped
tensor M with 0 paddings;
2. Creates two low-rank tensors P and Q for decomposing M, such
that M = PQ^T, where Q is initialized from a standard normal
distribution and orthogonalized;
3. Computes P, which is equal to MQ;
4. Allreduces P;
5. Orthogonalizes P;
6. Computes Q, which is approximately equal to M^TP;
7. Allreduces Q;
8. Computes M, which is approximately equal to PQ^T.
9. Truncates the input tensor to the original length.
Note that this communication hook enforces vanilla allreduce for
the first "state.start_powerSGD_iter" iterations. This not only
gives the user more control over the tradeoff between speedup and
accuracy, but also helps abstract away some complexity of the
internal optimization of DDP for future communication hook
developers. | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
Parameters:
* state (PowerSGDState) -- State information to configure
the compression rate and support error feedback, warm start,
etc. To tune the compression configs, mainly need to tune
"matrix_approximation_rank" and "start_powerSGD_iter".
* **bucket** (*dist.GradBucket*) -- Bucket that stores a 1D
flattened gradient tensor that batches multiple per-variable
tensors. Note that since DDP comm hook only supports single
process single device mode, only exactly one tensor is stored
in this bucket.
Returns:
Future handler of the communication, which updates the gradients
in place.
Return type:
Future[Tensor]
Example::
>>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1)
>>> ddp_model.register_comm_hook(state, batched_powerSGD_hook)
Debugging Communication Hooks | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
=============================
As the name implies, debugging communication hooks are only used
for debugging and performance optimization purpose.
Warning:
Debugging communication hooks do not necessarily output the correct
results.
torch.distributed.algorithms.ddp_comm_hooks.debugging_hooks.noop_hook(_, bucket)
This DDP communication hook returns a future that wraps the input,
so it is a noop that does not incur any communication overheads.
This hook should only be used for headroom analysis of
allreduce optimization, instead of the normal gradient
synchronization. For example, if only less than 10% speedup of
training time can be observed after this hook is registered, it
usually implies that allreduce is not a performance bottleneck for
this case. Such instrumentation can be particularly useful if GPU
traces cannot be easily retrieved or the trace analysis is
complicated by some factors such as the overlap between allreduce and
computation or the desynchronization across ranks.
Example::
>>> ddp_model.register_comm_hook(None, noop_hook)
Return type:
Future[Tensor]
Checkpointing of Communication Hooks
A stateful communication hook can be saved as a part of model
checkpointing to enable trainer restarts. To make a hook serializable,
"setstate" and "getstate" should be defined.
Warning:
"getstate" should exclude non-serializable attributes from a
returned dictionary.
Warning:
"setstate" should properly initialize non-serializable
attributes, excluded from a provided "state".
"PowerSGDState" has "setstate" and "getstate" implemented and
can be used as a reference.
class torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState
__getstate__()
Returns a "Dict[str, Any]" which will be pickled and saved.
"process_group" is not serializable and excluded from a returned
state.
__setstate__(state) | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
Takes a provided "state" and retrieves "PowerSGDState".
"process_group" is set to default.
Here is a simple, end-to-end example of saving and reloading PowerSGD
state and hook.
import os
import sys
import tempfile

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD
from torch.nn.parallel import DistributedDataParallel

class SimpleModel(nn.Module):
    def __init__(self):
        super(SimpleModel, self).__init__()
        self.fc1 = nn.Linear(24, 24)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(24, 12)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(x)))

def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'

    # initialize the process group
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

def cleanup():
    dist.destroy_process_group()

def run_demo(demo_fn, world_size):
    mp.spawn(
        demo_fn,
        args=(world_size,),
        nprocs=world_size,
        join=True)

def demo_serialization(rank, world_size):
    setup(rank, world_size)

    CHECKPOINT = tempfile.gettempdir() + "/checkpoint.pt"

    model = SimpleModel().to(rank)
    ddp_model = DistributedDataParallel(model, device_ids=[rank])

    powersgd_hook = powerSGD.powerSGD_hook
    powersgd_state = powerSGD.PowerSGDState(process_group=None)

    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)
    ddp_model.register_comm_hook(powersgd_state, powersgd_hook)

    # Save the model together with the hook and its state on rank 0.
    state = {
        'state_dict': ddp_model.state_dict(),
        'comm_hook': powersgd_hook,
        'comm_hook_state': powersgd_state}

    if rank == 0:
        torch.save(state, CHECKPOINT)

    dist.barrier()
    map_location = {'cuda:%d' % 0: 'cuda:%d' % rank}
    checkpoint = torch.load(CHECKPOINT, map_location=map_location)

    # Reload the model, hook, and hook state, then re-register the hook.
    ddp_model.load_state_dict(checkpoint['state_dict'])
    powersgd_hook = checkpoint['comm_hook']
    powersgd_state = checkpoint['comm_hook_state']

    ddp_model.register_comm_hook(powersgd_state, powersgd_hook)

    if rank == 0:
        os.remove(CHECKPOINT)

    cleanup()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    assert n_gpus >= 2, f"Requires at least 2 GPUs to run, but got {n_gpus}"
    world_size = n_gpus
    run_demo(demo_serialization, world_size)
Acknowledgements
Many thanks to PowerSGD paper author Thijs Vogels for the code
review on PowerSGD communication hook, as well as the comparison
experiments, which show that the performance of PowerSGD communication
hook is on par with the implementation in the original paper. | https://pytorch.org/docs/stable/ddp_comm_hooks.html | pytorch docs |
Pipeline Parallelism
Pipeline parallelism was originally introduced in the GPipe paper and
is an efficient technique to train large models on multiple GPUs.
Warning:
Pipeline Parallelism is experimental and subject to change.
Model Parallelism using multiple GPUs
Typically for large models which don't fit on a single GPU, model
parallelism is employed where certain parts of the model are placed on
different GPUs. However, if this is done naively for sequential
models, the training process suffers from GPU underutilization, since
only one GPU is active at a time, as shown in the figure below:
[image]The figure represents a model with 4 layers placed on 4
different GPUs (vertical axis). The horizontal axis represents
training this model through time demonstrating that only 1 GPU is
utilized at a time (image source).
Pipelined Execution
To alleviate this problem, pipeline parallelism splits the input | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
minibatch into multiple microbatches and pipelines the execution of
these microbatches across multiple GPUs. This is outlined in the
figure below:
[image]The figure represents a model with 4 layers placed on 4
different GPUs (vertical axis). The horizontal axis represents
training this model through time demonstrating that the GPUs are
utilized much more efficiently. However, there still exists a
bubble (as demonstrated in the figure) where certain GPUs are not
utilized. (image source).
Pipe APIs in PyTorch
class torch.distributed.pipeline.sync.Pipe(module, chunks=1, checkpoint='except_last', deferred_batch_norm=False)
Wraps an arbitrary "nn.Sequential" module to train on using
synchronous pipeline parallelism. If the module requires lots of
memory and doesn't fit on a single GPU, pipeline parallelism is a
useful technique to employ for training.
The implementation is based on the torchgpipe paper. | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
Pipe combines pipeline parallelism with checkpointing to reduce
peak memory required to train while minimizing device under-
utilization.
You should place all the modules on the appropriate devices and
wrap them into an "nn.Sequential" module defining the desired order
of execution. If a module does not contain any parameters/buffers,
it is assumed this module should be executed on CPU and appropriate
input tensors to the module are moved to CPU before execution. This
behavior can be overridden by the "WithDevice" wrapper which can be
used to explicitly specify which device a module should run on.
Parameters:
* module ("nn.Sequential") -- sequential module to be
parallelized using pipelining. Each module in the sequence has
to have all of its parameters on a single device. Each module
in the sequence has to either be an nn.Module or
"nn.Sequential" (to combine multiple sequential modules on a
single device) | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
* **chunks** (*int*) -- number of micro-batches (default: "1")
* **checkpoint** (*str*) -- when to enable checkpointing, one of
"'always'", "'except_last'", or "'never'" (default:
"'except_last'"). "'never'" disables checkpointing completely,
"'except_last'" enables checkpointing for all micro-batches
except the last one and "'always'" enables checkpointing for
all micro-batches.
* **deferred_batch_norm** (*bool*) -- whether to use deferred
"BatchNorm" moving statistics (default: "False"). If set to
"True", we track statistics across multiple micro-batches to
update the running statistics per mini-batch.
Raises:
* TypeError -- the module is not a "nn.Sequential".
* **ValueError** -- invalid arguments
Example::
Pipeline of two FC layers across GPUs 0 and 1.
>>> # Need to initialize RPC framework first.
>>> os.environ['MASTER_ADDR'] = 'localhost'
| https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
>>> os.environ['MASTER_PORT'] = '29500'
>>> torch.distributed.rpc.init_rpc('worker', rank=0, world_size=1)
>>>
>>> # Build pipe.
>>> fc1 = nn.Linear(16, 8).cuda(0)
>>> fc2 = nn.Linear(8, 4).cuda(1)
>>> model = nn.Sequential(fc1, fc2)
>>> model = Pipe(model, chunks=8)
>>> input = torch.rand(16, 16).cuda(0)
>>> output_rref = model(input)
Note:
You can wrap a "Pipe" model with
"torch.nn.parallel.DistributedDataParallel" only when the
checkpoint parameter of "Pipe" is "'never'".
Note:
"Pipe" only supports intra-node pipelining currently, but will be
expanded to support inter-node pipelining in the future. The
forward function returns an "RRef" to allow for inter-node
pipelining in the future, where the output might be on a remote
host. For intra-node pipelining you can use "local_value()" to
retrieve the output locally.
Warning: | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
"Pipe" is experimental and subject to change.
forward(*inputs)
Processes a single input mini-batch through the pipe and returns
an "RRef" pointing to the output. "Pipe" is a fairly transparent
module wrapper. It doesn't modify the input and output signature
of the underlying module. But there's type restriction. Input
and output have to contain at least one tensor. This restriction
is applied at partition boundaries too.
The sequence of inputs are fed into the first stage of the
pipeline as "*inputs". As a result the positional args for this
function should match the positional args for the first stage of
the pipeline. The same condition applies for output of one stage
of the pipeline which is the input for the next stage.
The input tensor is split into multiple micro-batches based on
the "chunks" parameter used to initialize "Pipe". The batch size
| https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
is assumed to be the first dimension of the tensor and if the
batch size is less than "chunks", the number of micro-batches is
equal to the batch size.
Only tensors are split into multiple micro-batches, non-Tensor
inputs are just replicated as-is in each micro-batch. For non-
Tensor outputs in the last stage of the pipeline, they are
aggregated as a "List" and returned the user. For example, if
you have 2 micro-batches returning the integer 5, the user would
receive the consolidated output of *[5, 5]*
All the input tensors need to be on the same device as the first
partition of the pipeline.
If a tensor is wrapped with the "NoChunk" wrapper, the tensor is
not split across micro-batches and is replicated as-is similar
to non-tensors.
Parameters:
**inputs** -- input mini-batch
Returns:
"RRef" to the output of the mini-batch
Raises:
| https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
TypeError -- input doesn't contain at least one tensor
Return type:
*RRef*
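For intra-node use, the returned "RRef" can be unwrapped locally; a
short sketch, continuing the two-GPU example above (so "model" and
"input" are assumed to exist):

    >>> output_rref = model(input)          # RRef to the output on the last partition
    >>> output = output_rref.local_value()  # unwrap locally
    >>> loss = output.sum()
    >>> loss.backward()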
Skip connections
Certain models like ResNeXt are not completely sequential and have
skip connections between layers. Naively implementing as part of
pipeline parallelism would imply that we need to copy outputs for
certain layers through multiple GPUs till we eventually reach the GPU
where the layer for the skip connection resides. To avoid this copy
overhead, we provide APIs below to stash and pop Tensors in different
layers of the model.
torch.distributed.pipeline.sync.skip.skippable.skippable(stash=(), pop=())
The decorator to define a "nn.Module" with skip connections.
Decorated modules are called "skippable". This functionality works
perfectly fine even when the module is not wrapped by "Pipe".
Each skip tensor is managed by its name. Before manipulating skip
tensors, a skippable module must statically declare the names for | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
skip tensors by stash and/or pop parameters. Skip tensors with
pre-declared name can be stashed by "yield stash(name, tensor)" or
popped by "tensor = yield pop(name)".
Here is an example with three layers. A skip tensor named "1to3" is
stashed and popped at the first and last layer, respectively:
@skippable(stash=['1to3'])
class Layer1(nn.Module):
def forward(self, input):
yield stash('1to3', input)
return f1(input)
class Layer2(nn.Module):
def forward(self, input):
return f2(input)
@skippable(pop=['1to3'])
class Layer3(nn.Module):
def forward(self, input):
skip_1to3 = yield pop('1to3')
return f3(input) + skip_1to3
model = nn.Sequential(Layer1(), Layer2(), Layer3())
One skippable module can stash or pop multiple skip tensors:
@skippable(stash=['alice', 'bob'], pop=['carol'])
class StashStashPop(nn.Module):
| https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
def forward(self, input):
yield stash('alice', f_alice(input))
yield stash('bob', f_bob(input))
carol = yield pop('carol')
return input + carol
Every skip tensor must be associated with exactly one pair of
stash and pop. "Pipe" checks this restriction automatically
when wrapping a module. You can also check the restriction by
"verify_skippables()" without "Pipe".
Return type:
Callable[[Type[Module]], Type[Skippable]]
class torch.distributed.pipeline.sync.skip.skippable.stash(name, tensor)
The command to stash a skip tensor.
def forward(self, input):
yield stash('name', input)
return f(input)
Parameters:
* name (str) -- name of skip tensor
* **input** (*torch.Tensor** or **None*) -- tensor to pass to
the skip connection
class torch.distributed.pipeline.sync.skip.skippable.pop(name) | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
The command to pop a skip tensor.
def forward(self, input):
skip = yield pop('name')
return f(input) + skip
Parameters:
name (str) -- name of skip tensor
Returns:
the skip tensor previously stashed by another layer under the
same name
Return type:
None
torch.distributed.pipeline.sync.skip.skippable.verify_skippables(module)
Verifies if the underlying skippable modules satisfy integrity.
Every skip tensor must have only one pair of stash and pop. If
there are one or more unmatched pairs, it will raise "TypeError"
with the detailed messages.
Here are a few failure cases. "verify_skippables()" will report
failure for these cases:
# Layer1 stashes "1to3".
# Layer3 pops "1to3".
nn.Sequential(Layer1(), Layer2())
#               └──── ?
nn.Sequential(Layer2(), Layer3())
#                   ? ────┘
nn.Sequential(Layer1(), Layer2(), Layer3(), Layer3())
| https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
#               └───────────────────┘       ^^^^^^
nn.Sequential(Layer1(), Layer1(), Layer2(), Layer3())
#               ^^^^^^    └───────────────────┘
To use the same name for multiple skip tensors, they must be
isolated by different namespaces. See "isolate()".
Raises:
TypeError -- one or more pairs of stash and pop are not
matched.
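A minimal call looks like this, reusing the "Layer1"/"Layer2"/"Layer3"
classes defined above (the second call is intentionally unmatched so
that it raises):

    >>> from torch.distributed.pipeline.sync.skip.skippable import verify_skippables
    >>> verify_skippables(nn.Sequential(Layer1(), Layer2(), Layer3()))  # passes: stash/pop matched
    >>> verify_skippables(nn.Sequential(Layer1(), Layer2()))  # raises TypeError: "1to3" is never popped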
Tutorials
The following tutorials give a good overview of how to use the "Pipe"
API to train your models with the rest of the components that PyTorch
provides:
Training Transformer models using Pipeline Parallelism
Training Transformer models using Distributed Data Parallel and
Pipeline Parallelism
Acknowledgements
The implementation for pipeline parallelism is based on fairscale's
pipe implementation and torchgpipe. We would like to thank both teams | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
for their contributions and guidance towards bringing pipeline
parallelism into PyTorch. | https://pytorch.org/docs/stable/pipeline.html | pytorch docs |
Distributed Checkpoint
| https://pytorch.org/docs/stable/distributed.checkpoint.html | pytorch docs |
torch.backends
torch.backends controls the behavior of various backends that
PyTorch supports.
These backends include:
"torch.backends.cuda"
"torch.backends.cudnn"
"torch.backends.mps"
"torch.backends.mkl"
"torch.backends.mkldnn"
"torch.backends.openmp"
"torch.backends.opt_einsum"
"torch.backends.xeon"
torch.backends.cuda
torch.backends.cuda.is_built()
Returns whether PyTorch is built with CUDA support. Note that this
doesn't necessarily mean CUDA is available; just that if this
PyTorch binary were run on a machine with working CUDA drivers and
devices, we would be able to use it.
torch.backends.cuda.matmul.allow_tf32
A "bool" that controls whether TensorFloat-32 tensor cores may be
used in matrix multiplications on Ampere or newer GPUs. See
TensorFloat-32(TF32) on Ampere devices.
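For instance, TF32 can be toggled explicitly for both matmuls and
cuDNN convolutions (a small sketch; the default values depend on your
PyTorch version):

import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True         # TF32 for cuDNN convolutions
print(torch.backends.cuda.matmul.allow_tf32, torch.backends.cudnn.allow_tf32)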
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction
A "bool" that controls whether reduced precision reductions (e.g., | https://pytorch.org/docs/stable/backends.html | pytorch docs |