This page contains the API reference documentation for learning rate optimizers included in timm.
( model_or_params: typing.Union[torch.nn.modules.module.Module, typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] opt: str = 'sgd' lr: typing.Optional[float] = None weight_decay: float = 0.0 momentum: float = 0.9 foreach: typing.Optional[bool] = None filter_bias_and_bn: bool = True layer_decay: typing.Optional[float] = None param_group_fn: typing.Optional[typing.Callable[[torch.nn.modules.module.Module], typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]]]] = None **kwargs: typing.Any )
Create an optimizer instance via timm registry.
Creates and configures an optimizer with appropriate parameter groups and settings. Supports automatic parameter group creation for weight decay and layer-wise learning rates, as well as custom parameter grouping.
Examples:
Basic usage with a model
>>> optimizer = create_optimizer_v2(model, 'adamw', lr=1e-3)
SGD with momentum and weight decay
>>> optimizer = create_optimizer_v2(
...     model, 'sgd', lr=0.1, momentum=0.9, weight_decay=1e-4
... )
Adam with layer-wise learning rate decay
>>> optimizer = create_optimizer_v2(
...     model, 'adam', lr=1e-3, layer_decay=0.7
... )
Custom parameter groups
>>> def group_fn(model):
...     return [
...         {'params': model.backbone.parameters(), 'lr': 1e-4},
...         {'params': model.head.parameters(), 'lr': 1e-3}
...     ]
>>> optimizer = create_optimizer_v2(
...     model, 'sgd', param_group_fn=group_fn
... )
Note: Parameter group handling follows a precedence order: a custom param_group_fn, when provided, is used exclusively; otherwise layer_decay (if set) drives layer-wise group creation; otherwise filter_bias_and_bn controls whether bias and normalization parameters are excluded from weight decay.
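To round out the examples above, a minimal end-to-end sketch (assuming, as the **kwargs in the signature suggests, that extra keyword arguments such as betas are forwarded to the underlying optimizer constructor; hyperparameter values are placeholders):

import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('resnet18')

# AdamW with decoupled weight decay; bias and norm parameters are excluded
# from decay because filter_bias_and_bn defaults to True.
optimizer = create_optimizer_v2(
    model, 'adamw', lr=5e-4, weight_decay=0.05,
    betas=(0.9, 0.95),  # assumed to be forwarded to torch.optim.AdamW
)
print(len(optimizer.param_groups))  # typically 2: decay and no-decay groups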
( filter: typing.Union[str, typing.List[str]] = '' exclude_filters: typing.Optional[typing.List[str]] = None with_description: bool = False ) → List[Union[str, Tuple[str, str]]]
Returns
If with_description is False: a list of optimizer names as strings (e.g., ['adam', 'adamw', ...]).
If with_description is True: a list of (name, description) tuples (e.g., [('adam', 'Adaptive Moment...'), ...]).
List available optimizer names, optionally filtered.
List all registered optimizers, with optional filtering using wildcard patterns. Optimizers can be filtered using include and exclude patterns, and can optionally return descriptions with each optimizer name.
Examples:
>>> list_optimizers()
['adam', 'adamw', 'sgd', ...]
>>> list_optimizers(['la', 'nla'])  # List lamb & lars
['lamb', 'lambc', 'larc', 'lars', 'nlarc', 'nlars']
>>> list_optimizers('adam', exclude_filters=['bnb', 'fused'])  # Exclude bnb & apex adam optimizers
['adam', 'adamax', 'adamp', 'adamw', 'nadam', 'nadamw', 'radam']
>>> list_optimizers(with_description=True)  # Get descriptions
[('adabelief', 'Adapts learning rate based on gradient prediction error'),
 ('adadelta', 'torch.optim Adadelta, Adapts learning rates based on running windows of gradients'),
 ('adafactor', 'Memory-efficient implementation of Adam with factored gradients'),
 ...]
( name: str bind_defaults: bool = True ) → Union[Type[torch.optim.Optimizer], Callable[..., torch.optim.Optimizer]]
Returns
If bind_defaults is False: the optimizer class (e.g., torch.optim.Adam).
If bind_defaults is True: a partial function with default arguments bound.
Raises
ValueError: if the optimizer name is not found in the registry.
Get optimizer class by name with option to bind default arguments.
Retrieves the optimizer class or a partial function with default arguments bound. This allows direct instantiation of optimizers with their default configurations without going through the full factory.
Examples:
Get SGD with nesterov momentum default
>>> SGD = get_optimizer_class('sgd')  # nesterov=True bound
>>> opt = SGD(model.parameters(), lr=0.1, momentum=0.9)
Get raw optimizer class
>>> SGD = get_optimizer_class('sgd', bind_defaults=False)
>>> opt = SGD(model.parameters(), lr=1e-3, momentum=0.9)
( params lr = 0.001 betas = (0.9, 0.999) eps = 1e-16 weight_decay = 0 amsgrad = False decoupled_decay = True fixed_decay = False rectify = True degenerated_to_sgd = True )
Implements AdaBelief algorithm. Modified from Adam in PyTorch. The amsgrad option (default: False) applies the AMSGrad variant from On the Convergence of Adam and Beyond.
reference: AdaBelief Optimizer, adapting stepsizes by the belief in observed gradients, NeurIPS 2020
For a complete table of recommended hyperparameters, see https://github.com/juntang-zhuang/Adabelief-Optimizer. For example train/args for EfficientNet, see these gists.
( closure = None )
Performs a single optimization step.
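As a usage sketch (hyperparameter values are placeholders), AdaBelief can be obtained through the registry name 'adabelief' listed by list_optimizers above; decoupled_decay and rectify default to True per the signature:

import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('resnet50')
# AdaBelief via the registry; decoupled_decay=True and rectify=True are the defaults shown above
optimizer = create_optimizer_v2(model, 'adabelief', lr=1e-3, weight_decay=1e-2)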
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: typing.Optional[float] = None eps: float = 1e-30 eps_scale: float = 0.001 clip_threshold: float = 1.0 decay_rate: float = -0.8 betas: typing.Optional[typing.Tuple[float, float]] = None weight_decay: float = 0.0 scale_parameter: bool = True warmup_init: bool = False min_dim_size_to_factor: int = 16 caution: bool = False )
Implements Adafactor algorithm.
This implementation is based on: Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
(see https://arxiv.org/abs/1804.04235)
Note that this optimizer internally adjusts the learning rate depending on the scale_parameter, relative_step and warmup_init options.
To use a manual (external) learning rate schedule you should set scale_parameter=False and relative_step=False.
Args:
params: iterable of parameters to optimize or dicts defining parameter groups
lr: external learning rate
eps: regularization constant for square gradient
eps_scale: regularization constant for parameter scale
clip_threshold: threshold of root-mean-square of final gradient update
decay_rate: coefficient used to compute running averages of square gradient
beta1: coefficient used for computing running averages of gradient
weight_decay: weight decay
scale_parameter: if True, learning rate is scaled by root-mean-square of parameter
warmup_init: time-dependent learning rate computation depends on whether warm-up initialization is being used
( closure = None )
Performs a single optimization step.
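An illustrative sketch of the manual-schedule configuration described above (values are placeholders; as elsewhere, extra keyword arguments are assumed to be forwarded to the optimizer constructor):

import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('vit_base_patch16_224')
# Drive Adafactor from an external schedule: disable internal parameter scaling
optimizer = create_optimizer_v2(
    model, 'adafactor', lr=1e-3,
    scale_parameter=False, warmup_init=False,
)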
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 1.0 min_dim_size_to_factor: int = 16 decay_rate: float = 0.8 decay_offset: int = 0 beta2_cap: float = 0.999 momentum: typing.Optional[float] = 0.9 momentum_dtype: typing.Union[str, torch.dtype] = torch.bfloat16 eps: typing.Optional[float] = None weight_decay: float = 0.0 clipping_threshold: typing.Optional[float] = None unscaled_wd: bool = False caution: bool = False foreach: typing.Optional[bool] = False )
PyTorch implementation of BigVision’s Adafactor variant with both single and multi tensor implementations.
Adapted from https://github.com/google-research/big_vision by Ross Wightman
( params lr = 0.1 betas = (0.9, 0.999) eps = 1e-08 weight_decay = 0.0 hessian_power = 1.0 update_each = 1 n_samples = 1 avg_conv_kernel = False )
Implements the AdaHessian algorithm from “ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning”. The n_samples option controls how many times to sample z for the approximation of the Hessian trace (default: 1).
Gets all parameters in all param_groups with gradients
Computes the Hutchinson approximation of the hessian trace and accumulates it for each trainable parameter.
( closure = None )
Performs a single optimization step.
Zeros out the accumulated Hessian traces.
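Because the Hutchinson trace estimate needs gradients of gradients, the backward pass must retain the graph. A minimal step sketch, assuming AdaHessian is registered under the name 'adahessian' (confirm with list_optimizers('ada*')):

import torch
import torch.nn as nn
from timm.optim import get_optimizer_class

AdaHessian = get_optimizer_class('adahessian')  # assumed registry name
model = nn.Linear(10, 2)
optimizer = AdaHessian(model.parameters(), lr=0.1)

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
# Second-order optimizer: retain the graph so the Hutchinson estimator
# can take gradients of gradients when accumulating the Hessian trace.
loss.backward(create_graph=True)
optimizer.step()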
( params lr = 0.001 betas = (0.9, 0.999) eps = 1e-08 weight_decay = 0 delta = 0.1 wd_ratio = 0.1 nesterov = False )
( params lr: float = 0.001 betas: typing.Tuple[float, float, float] = (0.98, 0.92, 0.99) eps: float = 1e-08 weight_decay: float = 0.0 no_prox: bool = False foreach: bool = True )
Implements a PyTorch variant of Adan.
Adan was proposed in Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models https://arxiv.org/abs/2208.06677
Performs a single optimization step.
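A brief usage sketch (values are placeholders), assuming the registry name 'adan'; note from the signature that Adan takes a 3-tuple of betas rather than the usual two:

import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('resnet18')
# Adan uses three beta coefficients (see signature above)
optimizer = create_optimizer_v2(
    model, 'adan', lr=1e-3, weight_decay=0.02,
    betas=(0.98, 0.92, 0.99),
)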
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: typing.Union[float, torch.Tensor] = 0.001 betas: typing.Tuple[float, float] = (0.9, 0.9999) eps: float = 1e-06 clip_exp: typing.Optional[float] = 0.333 weight_decay: float = 0.0 decoupled: bool = False caution: bool = False foreach: typing.Optional[bool] = False maximize: bool = False capturable: bool = False differentiable: bool = False )
ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate: https://arxiv.org/abs/2411.02853
( closure = None )
Perform a single optimization step.
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 0.001 bias_correction: bool = True betas: typing.Tuple[float, float] = (0.9, 0.999) eps: float = 1e-06 weight_decay: float = 0.01 grad_averaging: bool = True max_grad_norm: typing.Optional[float] = 1.0 trust_clip: bool = False always_adapt: bool = False caution: bool = False )
Implements a pure pytorch variant of FuseLAMB (NvLamb variant) optimizer from apex.optimizers.FusedLAMB reference: https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/Transformer-XL/pytorch/lamb.py
LAMB was proposed in: Large Batch Optimization for Deep Learning: Training BERT in 76 minutes (https://arxiv.org/abs/1904.00962).
( closure = None )
Performs a single optimization step.
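An illustrative sketch (hyperparameters are placeholders) using the registry name 'lamb' listed above; per the max_grad_norm argument in the signature, this Lamb implementation applies gradient norm clipping internally:

import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('resnet50')
# LAMB with its built-in gradient norm clipping (max_grad_norm defaults to 1.0)
optimizer = create_optimizer_v2(
    model, 'lamb', lr=5e-3, weight_decay=0.01,
    max_grad_norm=1.0,
)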
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 0.0004 betas: typing.Tuple[float, float] = (0.9, 0.999) eps: float = 1e-15 weight_decay: float = 0.0 caution: bool = False )
LaProp Optimizer
Paper: LaProp: Separating Momentum and Adaptivity in Adam, https://arxiv.org/abs/2002.04839
( closure = None )
Performs a single optimization step.
( params lr = 1.0 momentum = 0 dampening = 0 weight_decay = 0 nesterov = False trust_coeff = 0.001 eps = 1e-08 trust_clip = False always_adapt = False )
LARS for PyTorch
Paper: Large batch training of Convolutional Networks - https://arxiv.org/pdf/1708.03888.pdf
( closure = None )
Performs a single optimization step.
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 0.0001 betas: typing.Tuple[float, float] = (0.9, 0.99) weight_decay: float = 0.0 caution: bool = False maximize: bool = False foreach: typing.Optional[bool] = None )
Implements Lion algorithm.
( closure = None )
Performs a single optimization step.
( params: typing.Any lr: float = 0.01 momentum: float = 0.9 weight_decay: float = 0 eps: float = 1e-06 decoupled_decay: bool = False )
MADGRAD: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization (https://arxiv.org/abs/2101.11075).
MADGRAD is a general purpose optimizer that can be used in place of SGD or Adam, and may converge faster and generalize better. Currently GPU-only. Typically, the same learning rate schedule that is used for SGD or Adam may be used. The overall learning rate is not comparable to either method and should be determined by a hyper-parameter sweep.
MADGRAD requires less weight decay than other methods, often as little as zero. Momentum values used for SGD or Adam’s beta1 should work here also.
On sparse problems both weight_decay and momentum should be set to 0.
( closure: typing.Optional[typing.Callable[[], float]] = None )
Performs a single optimization step.
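A small configuration sketch reflecting the notes above (the learning rate shown is a placeholder; MADGRAD's lr is not directly comparable to SGD or Adam), assuming the registry name 'madgrad':

import torch.nn as nn
from timm.optim import get_optimizer_class

MADGRAD = get_optimizer_class('madgrad', bind_defaults=False)  # assumed registry name
model = nn.Linear(10, 2)

# Dense problem: keep momentum as you would SGD momentum / Adam beta1, little or no weight decay
opt_dense = MADGRAD(model.parameters(), lr=1e-2, momentum=0.9, weight_decay=0.0)
# Sparse problem: set both momentum and weight_decay to 0, per the note above
opt_sparse = MADGRAD(model.parameters(), lr=1e-2, momentum=0.0, weight_decay=0.0)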
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 0.003 betas: typing.Tuple[float, float] = (0.9, 0.99) eps: float = 1e-08 weight_decay: float = 0.0 gamma: float = 0.025 mars_type: str = 'adamw' optimize_1d: bool = False lr_1d_factor: float = 1.0 betas_1d: typing.Optional[typing.Tuple[float, float]] = None caution: bool = False )
MARS Optimizer
Paper: MARS: Unleashing the Power of Variance Reduction for Training Large Models https://arxiv.org/abs/2411.10438
( closure = None )
Performs a single optimization step.
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 0.001 betas: typing.Tuple[float, float] = (0.9, 0.999) eps: float = 1e-08 weight_decay: float = 0.01 caution: bool = False maximize: bool = False foreach: typing.Optional[bool] = None capturable: bool = False )
Implements NAdamW algorithm.
See Table 1 in https://arxiv.org/abs/1910.05446 for the implementation of the NAdam algorithm (there is also a comment in the code which highlights the only difference of NAdamW and AdamW).
For further details regarding the algorithm we refer to Decoupled Weight Decay Regularization (https://arxiv.org/abs/1711.05101).
( closure = None )
Performs a single optimization step.
( params lr = 0.001 betas = (0.95, 0.98) eps = 1e-08 weight_decay = 0 grad_averaging = False amsgrad = False )
Implements the Novograd algorithm. The amsgrad option (default: False) applies the AMSGrad variant from On the Convergence of Adam and Beyond.
( closure = None )
Performs a single optimization step.
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 0.01 alpha: float = 0.9 eps: float = 1e-10 weight_decay: float = 0 momentum: float = 0.0 centered: bool = False decoupled_decay: bool = False lr_in_momentum: bool = True caution: bool = False )
Implements RMSprop algorithm (TensorFlow-style epsilon). If centered is True, compute the centered RMSprop, where the gradient is normalized by an estimation of its variance.
NOTE: This is a direct cut-and-paste of PyTorch RMSprop with eps applied before sqrt and a few other modifications to closer match TensorFlow for matching hyper-params.
Noteworthy changes include:
1. Epsilon applied inside the square root
2. square_avg initialized to ones
3. LR scaling of the update accumulated in the momentum buffer
Proposed by G. Hinton in his course.
The centered version first appears in Generating Sequences With Recurrent Neural Networks.
( closure = None )
Performs a single optimization step.
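A usage sketch, assuming the TF-style variant is registered under the name 'rmsproptf' (confirm with list_optimizers('rmsprop*')); the hyperparameter values are placeholders, not recommendations:

import timm
from timm.optim import create_optimizer_v2, list_optimizers

print(list_optimizers('rmsprop*'))  # confirm the registered name first

model = timm.create_model('efficientnet_b0')
optimizer = create_optimizer_v2(
    model, 'rmsproptf', lr=0.016, eps=1e-3,
    momentum=0.9, weight_decay=1e-5,
)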
( params lr = <required parameter> momentum = 0 dampening = 0 weight_decay = 0 nesterov = False eps = 1e-08 delta = 0.1 wd_ratio = 0.1 )
( params: typing.Union[typing.Iterable[torch.Tensor], typing.Iterable[typing.Dict[str, typing.Any]]] lr: float = 0.001 momentum: float = 0.0 dampening: float = 0.0 weight_decay: float = 0.0 nesterov: bool = False caution: bool = False maximize: bool = False foreach: typing.Optional[bool] = None differentiable: bool = False )
( closure = None )
Performs a single optimization step.