( tp_degree: int = None, pp_degree: int = None, num_micro_batches: int = None, gradient_clipping: float = None, sequence_parallelism: bool = None, recompute_activations: bool = None, use_distributed_optimizer: bool = None, pipeline_model_parallel_split_rank: int = None, num_layers_per_virtual_pipeline_stage: int = None, is_train_batch_min: str = True, train_iters: int = None, train_samples: int = None, weight_decay_incr_style: str = 'constant', start_weight_decay: float = None, end_weight_decay: float = None, lr_decay_style: str = 'linear', lr_decay_iters: int = None, lr_decay_samples: int = None, lr_warmup_iters: int = None, lr_warmup_samples: int = None, lr_warmup_fraction: float = None, min_lr: float = 0, consumed_samples: typing.List[int] = None, no_wd_decay_cond: typing.Optional[typing.Callable] = None, scale_lr_cond: typing.Optional[typing.Callable] = None, lr_mult: float = 1.0, megatron_dataset_flag: bool = False, seq_length: int = None, encoder_seq_length: int = None, decoder_seq_length: int = None, tensorboard_dir: str = None, set_all_logging_options: bool = False, eval_iters: int = 100, eval_interval: int = 1000, return_logits: bool = False, custom_train_step_class: typing.Optional[typing.Any] = None, custom_train_step_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None, custom_model_provider_function: typing.Optional[typing.Callable] = None, custom_prepare_model_function: typing.Optional[typing.Callable] = None, custom_megatron_datasets_provider_function: typing.Optional[typing.Callable] = None, custom_get_batch_function: typing.Optional[typing.Callable] = None, custom_loss_function: typing.Optional[typing.Callable] = None, other_megatron_args: typing.Optional[typing.Dict[str, typing.Any]] = None )
Parameters
tp_degree (int, defaults to None) — Tensor parallelism degree.
pp_degree (int, defaults to None) — Pipeline parallelism degree.
num_micro_batches (int, defaults to None) — Number of micro-batches.
gradient_clipping (float, defaults to None) — Gradient clipping value based on global L2 Norm (0 to disable).
sequence_parallelism (bool, defaults to None) — Enable sequence parallelism.
recompute_activations (bool, defaults to None) — Enable selective activation recomputation.
use_distributed_optimizer (bool, defaults to None) — Enable distributed optimizer.
pipeline_model_parallel_split_rank (int, defaults to None) — Rank where encoder and decoder should be split.
num_layers_per_virtual_pipeline_stage (int, defaults to None) — Number of layers per virtual pipeline stage.
is_train_batch_min (str, defaults to True) — If both train & eval dataloaders are specified, this decides the micro_batch_size.
train_iters (int, defaults to None) — Total number of iterations to train over all training runs. Note that either train-iters or train-samples should be provided when using MegatronLMDummyScheduler.
train_samples (int, defaults to None) — Total number of samples to train over all training runs. Note that either train-iters or train-samples should be provided when using MegatronLMDummyScheduler.
weight_decay_incr_style (str, defaults to 'constant') — Weight decay increment function. choices=["constant", "linear", "cosine"].
start_weight_decay (float, defaults to None) — Initial weight decay coefficient for L2 regularization.
end_weight_decay (float, defaults to None) — End-of-run weight decay coefficient for L2 regularization.
lr_decay_style (str, defaults to 'linear') — Learning rate decay function. choices=['constant', 'linear', 'cosine'].
lr_decay_iters (int, defaults to None) — Number of iterations for learning rate decay. If None, defaults to train_iters.
lr_decay_samples (int, defaults to None) — Number of samples for learning rate decay. If None, defaults to train_samples.
lr_warmup_iters (int, defaults to None) — Number of iterations to linearly warm up the learning rate over.
lr_warmup_samples (int, defaults to None) — Number of samples to linearly warm up the learning rate over.
lr_warmup_fraction (float, defaults to None) — Fraction of lr-warmup-(iters/samples) to linearly warm up the learning rate over.
min_lr (float, defaults to 0) — Minimum value for learning rate. The scheduler clips values below this threshold.
consumed_samples (List, defaults to None) — Number of samples consumed, in the same order as the dataloaders passed to the accelerator.prepare call.
no_wd_decay_cond (Optional, defaults to None) — Condition to disable weight decay.
scale_lr_cond (Optional, defaults to None) — Condition to scale learning rate.
lr_mult (float, defaults to 1.0) — Learning rate multiplier.
megatron_dataset_flag (bool, defaults to False) — Whether the dataset follows the Megatron-LM Indexed/Cached/MemoryMapped format.
seq_length (int, defaults to None) — Maximum sequence length to process.
encoder_seq_length (int, defaults to None) — Maximum sequence length to process for the encoder.
decoder_seq_length (int, defaults to None) — Maximum sequence length to process for the decoder.
tensorboard_dir (str, defaults to None) — Path to save tensorboard logs.
set_all_logging_options (bool, defaults to False) — Whether to set all logging options.
eval_iters (int, defaults to 100) — Number of iterations to run evaluation on the validation/test sets for.
eval_interval (int, defaults to 1000) — Interval between runs of evaluation on the validation set.
return_logits (bool, defaults to False) — Whether to return logits from the model.
custom_train_step_class (Optional, defaults to None) — Custom train step class.
custom_train_step_kwargs (Optional, defaults to None) — Custom train step kwargs.
custom_model_provider_function (Optional, defaults to None) — Custom model provider function.
custom_prepare_model_function (Optional, defaults to None) — Custom prepare model function.
custom_megatron_datasets_provider_function (Optional, defaults to None) — Custom Megatron train_valid_test datasets provider function.
custom_get_batch_function (Optional, defaults to None) — Custom get batch function.
custom_loss_function (Optional, defaults to None) — Custom loss function.
other_megatron_args (Optional, defaults to None) — Other Megatron-LM arguments. Please refer to Megatron-LM.

Plugin for Megatron-LM to enable tensor, pipeline, sequence and data parallelism. Also to enable selective activation recomputation and optimized fused kernels.
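A minimal construction sketch: the parallelism degrees and flags below are illustrative values (not defaults), and it is assumed the plugin is handed to Accelerator through its megatron_lm_plugin argument; in practice most of these values are usually filled in via accelerate config.

```python
from accelerate import Accelerator
from accelerate.utils import MegatronLMPlugin

# Illustrative values; in practice they usually come from `accelerate config`.
megatron_lm_plugin = MegatronLMPlugin(
    tp_degree=2,                    # tensor parallelism degree
    pp_degree=2,                    # pipeline parallelism degree
    num_micro_batches=4,
    gradient_clipping=1.0,
    sequence_parallelism=True,
    recompute_activations=True,
    use_distributed_optimizer=True,
)

accelerator = Accelerator(megatron_lm_plugin=megatron_lm_plugin)
```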
( optimizer, total_num_steps = None, warmup_num_steps = 0, **kwargs )
Dummy scheduler that presents model parameters or param groups; it is primarily used to follow a conventional training loop when the scheduler config is specified in the DeepSpeed config file.
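A usage sketch in the spirit of the Accelerate Megatron-LM examples: under Megatron-LM the real scheduler is created inside accelerator.prepare, so the script only builds this placeholder. accelerator, optimizer, max_train_steps and num_warmup_steps are assumed to already exist in the training script.

```python
from accelerate import DistributedType
from accelerate.utils import MegatronLMDummyScheduler

if accelerator.distributed_type == DistributedType.MEGATRON_LM:
    # Placeholder scheduler; the actual Megatron-LM scheduler is built in `accelerator.prepare`.
    lr_scheduler = MegatronLMDummyScheduler(
        optimizer=optimizer,
        total_num_steps=max_train_steps,
        warmup_num_steps=num_warmup_steps,
    )
```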
( **dataset_kwargs )
Dummy dataloader that presents model parameters or param groups; it is primarily used to follow a conventional training loop.
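A wiring sketch, assuming a Megatron-LM indexed dataset and an existing accelerator, model, optimizer and lr_scheduler; the dataset prefix, split string and sizes are placeholders, and the keyword names follow the Accelerate Megatron-LM example (they map onto Megatron-LM data arguments). The same dummy loader is passed once per train/valid/test split to accelerator.prepare.

```python
from accelerate.utils import MegatronLMDummyDataLoader

megatron_dataloader = MegatronLMDummyDataLoader(
    data_path=["my-gpt2_text_document"],  # placeholder prefix of a Megatron-LM indexed dataset
    splits_string="949,50,1",
    seq_length=1024,
    micro_batch_size=2,
)
accelerator.state.megatron_lm_plugin.megatron_dataset_flag = True

# One dummy loader stands in for the train, validation and test dataloaders.
model, optimizer, lr_scheduler, train_dl, eval_dl, test_dl = accelerator.prepare(
    model, optimizer, lr_scheduler, megatron_dataloader, megatron_dataloader, megatron_dataloader
)
```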
Abstract class for batching, forward pass and loss handling.
( accelerator, args )
GPT train step class.
( accelerator, args )
Bert train step class.
( accelerator, args )
T5 train step class.
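A hedged sketch of extending one of these classes: the subclass name, its extra keyword argument and the pass-through loss wrapper are hypothetical, and the subclass is hooked up through the plugin's custom_train_step_class / custom_train_step_kwargs fields rather than instantiated directly.

```python
from accelerate.utils import GPTTrainStep, MegatronLMPlugin


class GPTTrainStepWithCustomLoss(GPTTrainStep):
    """Hypothetical subclass that wraps the stock GPT loss function."""

    def __init__(self, accelerator, args, **kwargs):
        super().__init__(accelerator, args)
        self.kwargs = kwargs

    def get_loss_func(self):
        base_loss_func = super().get_loss_func()

        def loss_func(*inputs):
            # A real implementation would re-weight or post-process the loss here.
            return base_loss_func(*inputs)

        return loss_func


plugin = MegatronLMPlugin(
    custom_train_step_class=GPTTrainStepWithCustomLoss,
    custom_train_step_kwargs={"label_smoothing": 0.1},  # hypothetical kwarg consumed by the subclass
)
```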
( losses )
Average losses across data parallel group.
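An evaluation-loop sketch, assuming accelerator, model and eval_dataloader already exist, that the Megatron-LM-wrapped model returns the loss for a batch, and that the function returns the input losses averaged across the data-parallel ranks.

```python
import torch

from accelerate.utils import avg_losses_across_data_parallel_group

model.eval()
step_losses = []
for batch in eval_dataloader:
    with torch.no_grad():
        loss = model(**batch)  # assumption: the wrapped model returns the loss tensor
    step_losses.append(loss)

# Average every collected loss across the data-parallel group in one call.
averaged_losses = avg_losses_across_data_parallel_group(step_losses)
eval_loss = averaged_losses.mean()
```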