Enables RAM-efficient loading of Hugging Face models for FSDP in the environment.
Disables RAM-efficient loading of Hugging Face models for FSDP in the environment.
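These two helpers toggle a process-wide switch. As a minimal sketch (the environment variable name FSDP_CPU_RAM_EFFICIENT_LOADING is an assumption based on Accelerate's internals and may change between versions), the effect is roughly:

```python
import os

def enable_fsdp_ram_efficient_loading():
    # Sketch: flip an environment variable that downstream Transformers
    # loading code reads. The variable name is an assumption, not a
    # guaranteed stable API.
    os.environ["FSDP_CPU_RAM_EFFICIENT_LOADING"] = "True"

def disable_fsdp_ram_efficient_loading():
    # Sketch: the inverse of the helper above.
    os.environ["FSDP_CPU_RAM_EFFICIENT_LOADING"] = "False"

enable_fsdp_ram_efficient_loading()
print(os.environ["FSDP_CPU_RAM_EFFICIENT_LOADING"])  # True
```

In practice you would call the real helpers from accelerate.utils rather than reimplementing them; the sketch only shows that the setting is environment-scoped, which is why it affects all subsequent model loads in the process.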
( checkpoint_dir: str, output_path: str, safe_serialization: bool = True, remove_checkpoint_dir: bool = False )
Parameters
checkpoint_dir (str) —
The directory containing the FSDP checkpoints (can be either the model or optimizer).
output_path (str) —
The path to save the merged checkpoint.
safe_serialization (bool, optional, defaults to True) —
Whether to save the merged weights with safetensors (recommended).
remove_checkpoint_dir (bool, optional, defaults to False) —
Whether to remove the checkpoint directory after merging.

Merge the weights from sharded FSDP model checkpoints into a single combined checkpoint. Should be used if
SHARDED_STATE_DICT was used for the model. Weights will be saved to {output_path}/model.safetensors if
safe_serialization else pytorch_model.bin.
Note: this is a CPU-bound process.
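The output-file naming rule above can be made concrete with a small sketch (the merged_checkpoint_path helper is hypothetical and only illustrates the documented behavior, not the library's internals):

```python
import os

def merged_checkpoint_path(output_path: str, safe_serialization: bool = True) -> str:
    # Hypothetical helper: mirrors the documented naming rule of
    # merge_fsdp_weights -- a safetensors file when safe_serialization is
    # True, otherwise a classic PyTorch pickle file.
    filename = "model.safetensors" if safe_serialization else "pytorch_model.bin"
    return os.path.join(output_path, filename)

print(merged_checkpoint_path("merged"))         # merged/model.safetensors
print(merged_checkpoint_path("merged", False))  # merged/pytorch_model.bin
```

The real entry point is merge_fsdp_weights itself (importable from accelerate.utils), pointed at the sharded checkpoint directory and the desired output path.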
( sharding_strategy: Union = None, backward_prefetch: Union = None, mixed_precision_policy: Union = None, auto_wrap_policy: Union = None, cpu_offload: Union = None, ignored_modules: Optional = None, state_dict_type: Union = None, state_dict_config: Union = None, optim_state_dict_config: Union = None, limit_all_gathers: bool = True, use_orig_params: bool = None, param_init_fn: Optional = None, sync_module_states: bool = None, forward_prefetch: bool = None, activation_checkpointing: bool = None, cpu_ram_efficient_loading: bool = None, transformer_cls_names_to_wrap: Optional = None, min_num_params: Optional = None )
Parameters
sharding_strategy (Union[str, torch.distributed.fsdp.ShardingStrategy], defaults to 'FULL_SHARD') —
Sharding strategy to use. Should be either a str or an instance of
torch.distributed.fsdp.fully_sharded_data_parallel.ShardingStrategy.
backward_prefetch (Union[str, torch.distributed.fsdp.BackwardPrefetch], defaults to 'NO_PREFETCH') —
Backward prefetch strategy to use. Should be either a str or an instance of
torch.distributed.fsdp.fully_sharded_data_parallel.BackwardPrefetch.
mixed_precision_policy (Optional[Union[dict, torch.distributed.fsdp.MixedPrecision]], defaults to None) —
A config to enable mixed precision training with FullyShardedDataParallel. If passing in a dict, it
should have the following keys: param_dtype, reduce_dtype, and buffer_dtype.
auto_wrap_policy (Optional[Union[Callable, Literal["transformer_based_wrap", "size_based_wrap", "no_wrap"]]], defaults to NO_WRAP) —
A callable or string specifying a policy to recursively wrap layers with FSDP. If a string, it must be one
of transformer_based_wrap, size_based_wrap, or no_wrap. See
torch.distributed.fsdp.wrap.size_based_auto_wrap_policy for a direction on what it should look like.
cpu_offload (Union[bool, torch.distributed.fsdp.CPUOffload], defaults to False) —
Whether to offload parameters to CPU. Should be either a bool or an instance of
torch.distributed.fsdp.fully_sharded_data_parallel.CPUOffload.
ignored_modules (Optional[Iterable[torch.nn.Module]], defaults to None) —
A list of modules to ignore when wrapping with FSDP.
state_dict_type (Union[str, torch.distributed.fsdp.StateDictType], defaults to 'FULL_STATE_DICT') —
State dict type to use. If a string, it must be one of full_state_dict, local_state_dict, or
sharded_state_dict.
state_dict_config (Optional[Union[torch.distributed.fsdp.FullStateDictConfig, torch.distributed.fsdp.ShardedStateDictConfig]], defaults to None) —
State dict config to use. Is determined based on the state_dict_type if not passed in.
optim_state_dict_config (Optional[Union[torch.distributed.fsdp.FullOptimStateDictConfig, torch.distributed.fsdp.ShardedOptimStateDictConfig]], defaults to None) —
Optim state dict config to use. Is determined based on the state_dict_type if not passed in.
limit_all_gathers (bool, defaults to True) —
Whether to have FSDP explicitly synchronize the CPU thread to prevent too many in-flight all-gathers. This
bool only affects the sharded strategies that schedule all-gathers. Enabling this can help lower the number
of CUDA malloc retries.
use_orig_params (bool, defaults to False) —
Whether to use the original parameters for the optimizer.
param_init_fn (Optional[Callable[[torch.nn.Module], None]], defaults to None) —
A Callable[torch.nn.Module] -> None that specifies how modules that are currently on the meta device
should be initialized onto an actual device. Only applicable when sync_module_states is True. By
default is a lambda which calls to_empty on the module.
sync_module_states (bool, defaults to False) —
Whether each individually wrapped FSDP unit should broadcast module parameters from rank 0 to ensure they
are the same across all ranks after initialization. Defaults to False unless cpu_ram_efficient_loading
is True, in which case it will be forcibly enabled.
forward_prefetch (bool, defaults to False) —
Whether to have FSDP explicitly prefetch the next upcoming all-gather while executing in the forward
pass. Only use with static graphs.
activation_checkpointing (bool, defaults to False) —
A technique to reduce memory usage by clearing activations of certain layers and recomputing them during a
backward pass. Effectively, this trades extra computation time for reduced memory usage.
cpu_ram_efficient_loading (bool, defaults to None) —
If True, only the first process loads the pretrained model checkpoint while all other processes have empty
weights. Only applicable for Transformers. When using this, sync_module_states needs to be True.
transformer_cls_names_to_wrap (Optional[List[str]], defaults to None) —
A list of transformer layer class names to wrap. Only applicable when auto_wrap_policy is
transformer_based_wrap.
min_num_params (Optional[int], defaults to None) —
The minimum number of parameters a module must have to be wrapped. Only applicable when auto_wrap_policy
is size_based_wrap.

This plugin is used to enable fully sharded data parallelism.
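Several of the options above constrain one another (transformer_cls_names_to_wrap only applies with transformer_based_wrap, min_num_params only with size_based_wrap, and cpu_ram_efficient_loading needs sync_module_states). A minimal sketch of checking those documented constraints before constructing the plugin (the validate_fsdp_options helper is hypothetical; the accepted string values are taken from the parameter descriptions above):

```python
def validate_fsdp_options(opts: dict) -> list:
    # Hypothetical validator: enforces the documented constraints between
    # FSDP plugin options before construction. Returns a list of problems
    # (empty means the combination is consistent).
    problems = []
    wrap = opts.get("auto_wrap_policy", "no_wrap")
    if wrap not in {"transformer_based_wrap", "size_based_wrap", "no_wrap"}:
        problems.append(f"unknown auto_wrap_policy: {wrap!r}")
    if opts.get("transformer_cls_names_to_wrap") and wrap != "transformer_based_wrap":
        problems.append("transformer_cls_names_to_wrap requires auto_wrap_policy='transformer_based_wrap'")
    if opts.get("min_num_params") is not None and wrap != "size_based_wrap":
        problems.append("min_num_params requires auto_wrap_policy='size_based_wrap'")
    if opts.get("state_dict_type", "full_state_dict") not in {
        "full_state_dict", "local_state_dict", "sharded_state_dict"
    }:
        problems.append(f"unknown state_dict_type: {opts['state_dict_type']!r}")
    if opts.get("cpu_ram_efficient_loading") and not opts.get("sync_module_states"):
        # Documented: cpu_ram_efficient_loading forcibly enables sync_module_states.
        problems.append("cpu_ram_efficient_loading requires sync_module_states=True")
    return problems

print(validate_fsdp_options({
    "auto_wrap_policy": "transformer_based_wrap",
    "transformer_cls_names_to_wrap": ["BertLayer"],
    "sync_module_states": True,
    "cpu_ram_efficient_loading": True,
}))  # []
```

The real plugin applies these rules itself (e.g. it force-enables sync_module_states); the sketch only makes the dependencies explicit.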
Given model, creates an auto_wrap_policy based on the passed-in policy and, when applicable, the
transformer_cls_names_to_wrap.
Sets the mixed precision policy for FSDP.
Set the state dict config based on the StateDictType.
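The config-defaulting behavior described above (a state dict config is derived from the state_dict_type when none is passed) can be sketched as follows. The dataclasses here are stand-ins for torch.distributed.fsdp's FullStateDictConfig and ShardedStateDictConfig, and the concrete default field values are assumptions for illustration, not the library's exact choices:

```python
from dataclasses import dataclass

@dataclass
class FullStateDictConfig:      # stand-in for torch.distributed.fsdp.FullStateDictConfig
    offload_to_cpu: bool = False
    rank0_only: bool = False

@dataclass
class ShardedStateDictConfig:   # stand-in for torch.distributed.fsdp.ShardedStateDictConfig
    offload_to_cpu: bool = False

def default_state_dict_config(state_dict_type: str):
    # Sketch: pick a default config from the state_dict_type, as the method
    # above does when no config is passed in. Field values are illustrative.
    if state_dict_type == "full_state_dict":
        return FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
    if state_dict_type == "sharded_state_dict":
        return ShardedStateDictConfig(offload_to_cpu=False)
    raise ValueError(f"no default config for {state_dict_type!r}")
```

Choosing the config from the state dict type keeps the two settings consistent by default, while still allowing an explicit state_dict_config to override it.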