get_active_deepspeed_plugin( state )

Returns the currently active DeepSpeedPlugin.

Raises
ValueError — If DeepSpeed was not enabled and this function is called.
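For example, a minimal sketch of looking up the active plugin from the accelerator state (assumes DeepSpeed is installed; the ZeRO stage is illustrative):

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin, get_active_deepspeed_plugin

# Enable DeepSpeed via a plugin, then retrieve the active one from state.
accelerator = Accelerator(deepspeed_plugin=DeepSpeedPlugin(zero_stage=2))
active_plugin = get_active_deepspeed_plugin(accelerator.state)

# If DeepSpeed were not enabled, the same call would raise ValueError.
```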
DeepSpeedPlugin( hf_ds_config: Any = None, gradient_accumulation_steps: int = None, gradient_clipping: float = None, zero_stage: int = None, is_train_batch_min: bool = True, offload_optimizer_device: str = None, offload_param_device: str = None, offload_optimizer_nvme_path: str = None, offload_param_nvme_path: str = None, zero3_init_flag: bool = None, zero3_save_16bit_model: bool = None, transformer_moe_cls_names: str = None, enable_msamp: bool = None, msamp_opt_level: Optional[Literal["O1", "O2"]] = None )

This plugin is used to integrate DeepSpeed.
Parameters

hf_ds_config (Any, defaults to None) — Path to a DeepSpeed config file, a dict, or an object of class accelerate.utils.deepspeed.HfDeepSpeedConfig.
gradient_accumulation_steps (int, defaults to None) — Number of steps to accumulate gradients before updating optimizer states. If not set, will use the value from the Accelerator directly.
gradient_clipping (float, defaults to None) — Enable gradient clipping with the given value.
zero_stage (int, defaults to None) — Possible options are 0, 1, 2, 3. The default will be taken from the environment variable.
is_train_batch_min (bool, defaults to True) — If both train & eval dataloaders are specified, this will decide the train_batch_size.
offload_optimizer_device (str, defaults to None) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stages 2 and 3.
offload_param_device (str, defaults to None) — Possible options are none|cpu|nvme. Only applicable with ZeRO Stage 3.
offload_optimizer_nvme_path (str, defaults to None) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3.
offload_param_nvme_path (str, defaults to None) — Possible options are /nvme|/local_nvme. Only applicable with ZeRO Stage 3.
zero3_init_flag (bool, defaults to None) — Flag to indicate whether to enable deepspeed.zero.Init for constructing massive models. Only applicable with ZeRO Stage 3.
zero3_save_16bit_model (bool, defaults to None) — Flag to indicate whether to save the 16-bit model. Only applicable with ZeRO Stage 3.
transformer_moe_cls_names (str, defaults to None) — Comma-separated list of Transformers MoE layer class names (case-sensitive). For example, MixtralSparseMoeBlock, Qwen2MoeSparseMoeBlock, JetMoEAttention, JetMoEBlock, etc.
enable_msamp (bool, defaults to None) — Flag to indicate whether to enable the MS-AMP backend for FP8 training.
msamp_opt_level (Optional[Literal["O1", "O2"]], defaults to None) — Optimization level for MS-AMP (defaults to "O1"). Only applicable if enable_msamp is True. Must be one of "O1" or "O2".
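A minimal construction sketch, assuming DeepSpeed is installed (the hyperparameter values are illustrative):

```python
from accelerate import Accelerator
from accelerate.utils import DeepSpeedPlugin

# ZeRO stage 2 with CPU optimizer offload; values are illustrative.
plugin = DeepSpeedPlugin(
    zero_stage=2,
    gradient_accumulation_steps=2,
    gradient_clipping=1.0,
    offload_optimizer_device="cpu",
)
accelerator = Accelerator(deepspeed_plugin=plugin)
```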
deepspeed_config_process( prefix = '', mismatches = None, config = None, must_match = True, **kwargs )
Process the DeepSpeed config with the values from the kwargs.
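A hedged sketch of one way this can be called: entries in the config whose value is "auto" are filled from kwargs keyed by their dotted config paths. The keys and values below are illustrative:

```python
from accelerate.utils import DeepSpeedPlugin

# Assumes ds_config.json exists and contains "auto" placeholders.
plugin = DeepSpeedPlugin(hf_ds_config="ds_config.json")
plugin.deepspeed_config_process(
    must_match=False,
    **{
        "train_micro_batch_size_per_gpu": 8,
        "gradient_accumulation_steps": 2,
        "zero_optimization.stage": 2,  # nested keys use dotted paths
    },
)
```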
Sets the HfDeepSpeedWeakref to use the current deepspeed plugin configuration.
DummyScheduler( optimizer, total_num_steps = None, warmup_num_steps = 0, lr_scheduler_callable = None, **kwargs )
Parameters
optimizer (torch.optim.optimizer.Optimizer) — The optimizer to wrap.

Dummy scheduler that presents model parameters or param groups; this is primarily used to follow a conventional training loop when the scheduler config is specified in the DeepSpeed config file.
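A minimal usage sketch, assuming the actual schedule is defined in the scheduler section of the DeepSpeed config file (the model and step counts are illustrative):

```python
import torch
from accelerate.utils import DummyOptim, DummyScheduler

model = torch.nn.Linear(8, 2)

# Stand-ins for the optimizer and scheduler defined in the DeepSpeed
# config file; DummyOptim is documented below.
optimizer = DummyOptim(model.parameters(), lr=1e-3)
lr_scheduler = DummyScheduler(optimizer, total_num_steps=1000, warmup_num_steps=100)
```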
DeepSpeedEngineWrapper( engine )
Internal wrapper for deepspeed.runtime.engine.DeepSpeedEngine. This is used to follow the conventional training loop.
DeepSpeedOptimizerWrapper( optimizer )
Internal wrapper around a deepspeed optimizer.
DeepSpeedSchedulerWrapper( scheduler, optimizers )
Internal wrapper around a deepspeed scheduler.
DummyOptim( params, lr = 0.001, weight_decay = 0, **kwargs )
Dummy optimizer that presents model parameters or param groups; this is primarily used to follow a conventional training loop when the optimizer config is specified in the DeepSpeed config file.
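A sketch of the corresponding pattern for the optimizer, assuming the DeepSpeed config file carries an optimizer section (the model and hyperparameters are illustrative):

```python
import torch
from accelerate import Accelerator
from accelerate.utils import DummyOptim

# DummyOptim only carries parameters and hyperparameters through
# accelerator.prepare(); the real optimizer comes from the DeepSpeed config.
model = torch.nn.Linear(8, 2)
optimizer = DummyOptim(model.parameters(), lr=1e-3, weight_decay=0.01)

accelerator = Accelerator()  # assumes DeepSpeed was configured via `accelerate config`
model, optimizer = accelerator.prepare(model, optimizer)
```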