model_or_params — A PyTorch model or an iterable of parameters/parameter groups.
If a model is provided, parameters will be automatically extracted and grouped
based on the other arguments.
opt — Name of the optimizer to create (e.g., ‘adam’, ‘adamw’, ‘sgd’).
Use list_optimizers() to see available options.
lr — Learning rate. If None, will use the optimizer’s default.
weight_decay — Weight decay factor. Will be used to create param groups if model_or_params is a model.
momentum — Momentum factor for optimizers that support it. Only used if the
chosen optimizer accepts a momentum parameter.
foreach — Enable/disable foreach (multi-tensor) implementation if available.
If None, will use optimizer-specific defaults.
filter_bias_and_bn — If True, bias and norm layer parameters (all 1d params) will not have
weight decay applied. Only used when model_or_params is a model and
weight_decay > 0.
layer_decay — Optional layer-wise learning rate decay factor. If provided,
learning rates will be scaled by layer_decay^(max_depth - layer_depth).
Only used when model_or_params is a model.
param_group_fn — Optional function to create custom parameter groups.
If provided, other parameter grouping options will be ignored.
**kwargs — Additional optimizer-specific arguments (e.g., betas for Adam).
Create an optimizer instance via timm registry.
Creates and configures an optimizer with appropriate parameter groups and settings.
Supports automatic parameter group creation for weight decay and layer-wise learning
rates, as well as custom parameter grouping.
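A minimal usage sketch, assuming this factory is exposed as timm.optim.create_optimizer_v2; the model name and hyper-parameter values are illustrative, not recommendations:
import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('resnet50', pretrained=False)

# Bias/norm (1d) params are excluded from weight decay by default, and
# layer_decay scales per-layer learning rates as described above.
optimizer = create_optimizer_v2(
    model,
    opt='adamw',
    lr=1e-3,
    weight_decay=0.05,
    layer_decay=0.75,
    betas=(0.9, 0.999),  # passed through as an optimizer-specific kwarg
)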
filter — Wildcard style filter string or list of filter strings
(e.g., ‘adam’ for all Adam variants, or [‘adam’, ‘*8bit’] for
Adam variants and 8-bit optimizers). Empty string means no filtering.
exclude_filters — Optional list of wildcard patterns to exclude. For example,
[‘8bit’, ‘fused’] would exclude 8-bit and fused implementations.
with_description — If True, returns tuples of (name, description) instead of
just names. Descriptions provide brief explanations of optimizer characteristics.
Returns
If with_description is False:
List of optimizer names as strings (e.g., [‘adam’, ‘adamw’, …])
If with_description is True:
List of tuples of (name, description) (e.g., [(‘adam’, ‘Adaptive Moment…’), …])
List available optimizer names, optionally filtered.
List all registered optimizers, with optional filtering using wildcard patterns.
Optimizers can be filtered using include and exclude patterns, and can optionally
return descriptions with each optimizer name.
Examples:
list_optimizers()
[‘adam’, ‘adamw’, ‘sgd’, …]
list_optimizers([‘la’, ‘nla’]) # List lamb & lars
[‘lamb’, ‘lambc’, ‘larc’, ‘lars’, ‘nlarc’, ‘nlars’]
list_optimizers(with_description=True) # Get descriptions
[(‘adabelief’, ‘Adapts learning rate based on gradient prediction error’),
(‘adadelta’, ‘torch.optim Adadelta, Adapts learning rates based on running windows of gradients’),
(‘adafactor’, ‘Memory-efficient implementation of Adam with factored gradients’),
…]
( name: str, bind_defaults: bool = True )
Parameters
name — Name of the optimizer to retrieve (e.g., ‘adam’, ‘sgd’)
bind_defaults — If True, returns a partial function with default arguments from OptimInfo bound.
If False, returns the raw optimizer class.
Returns
If bind_defaults is False:
The optimizer class (e.g., torch.optim.Adam)
If bind_defaults is True:
A partial function with default arguments bound
Raises
ValueError — If optimizer name is not found in registry
Get optimizer class by name with option to bind default arguments.
Retrieves the optimizer class or a partial function with default arguments bound.
This allows direct instantiation of optimizers with their default configurations
without going through the full factory.
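A brief usage sketch, assuming the function is exposed as timm.optim.get_optimizer_class; the toy module and learning rate are illustrative:
import torch.nn as nn
from timm.optim import get_optimizer_class

model = nn.Linear(16, 4)  # toy module for illustration

# bind_defaults=True returns a partial with registry defaults bound
opt_factory = get_optimizer_class('adamw', bind_defaults=True)
optimizer = opt_factory(model.parameters(), lr=1e-3)

# bind_defaults=False returns the raw optimizer class instead
raw_cls = get_optimizer_class('adamw', bind_defaults=False)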
amsgrad (boolean, optional) — whether to use the AMSGrad variant of this
algorithm from the paper On the Convergence of Adam and Beyond
(default: False)
decoupled_decay (boolean, optional) — (default: True) If set to True, then
the optimizer uses decoupled weight decay as in AdamW
fixed_decay (boolean, optional) — (default: False) This is used when decoupled_decay
is set to True.
When fixed_decay == True, the weight decay is performed as
$W_{new} = W_{old} - W_{old} \times decay$.
When fixed_decay == False, the weight decay is performed as
$W_{new} = W_{old} - W_{old} \times decay \times lr$. Note that in this case, the
weight decay ratio decreases with the learning rate (lr).
rectify (boolean, optional) — (default: True) If set to True, then perform the rectified
update similar to RAdam
degenerated_to_sgd (boolean, optional) — (default: True) If set to True, then perform the SGD update
when the variance of the gradient is high
Implements the AdaBelief algorithm. Modified from Adam in PyTorch.
reference: AdaBelief Optimizer, adapting stepsizes by the belief in observed gradients, NeurIPS 2020
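A standalone sketch of the two decoupled weight decay forms described above; this is an illustration of the formulas, not AdaBelief's actual code path:
import torch

def decoupled_decay_step(param, lr, decay, fixed_decay):
    # fixed_decay=True:  W_new = W_old - W_old * decay
    # fixed_decay=False: W_new = W_old - W_old * decay * lr
    factor = decay if fixed_decay else decay * lr
    param.data.mul_(1.0 - factor)

w = torch.nn.Parameter(torch.ones(3))
decoupled_decay_step(w, lr=1e-3, decay=1e-2, fixed_decay=False)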
amsgrad (boolean, optional) — whether to use the AMSGrad variant of this
algorithm from the paper On the Convergence of Adam and Beyond
(default: False)
Implements AdamW algorithm.
The original Adam algorithm was proposed in Adam: A Method for Stochastic Optimization.
The AdamW variant was proposed in Decoupled Weight Decay Regularization.
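A conceptual sketch of the difference between L2 regularization (classic Adam) and decoupled weight decay (AdamW); this is a simplification for illustration, not the library implementation:
import torch

p = torch.nn.Parameter(torch.ones(3))
grad, lr, wd = torch.zeros(3), 1e-3, 1e-2

# Adam + L2: decay enters through the gradient and is then rescaled by the
# adaptive step size.
l2_grad = grad + wd * p.data

# AdamW: decay is applied directly to the weights, separately from the
# gradient-based update.
p.data.mul_(1.0 - lr * wd)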
no_prox (bool) — whether to apply the decoupled weight decay directly to the weights before the update (AdamW-style) rather than as a proximal step after it (default: False)
Implements a PyTorch variant of Adan.
Adan was proposed in
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models, arXiv preprint arXiv:2208.06677, 2022.
https://arxiv.org/abs/2208.06677
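A minimal usage sketch via the factory, assuming 'adan' is the registered optimizer name; the toy module and values are illustrative:
import torch.nn as nn
from timm.optim import create_optimizer_v2

model = nn.Linear(16, 4)  # toy module for illustration
optimizer = create_optimizer_v2(model, opt='adan', lr=1e-3, weight_decay=0.02)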
MADGRAD is a general purpose optimizer that can be used in place of SGD or
Adam, and it may converge faster and generalize better. Currently GPU-only.
Typically, the same learning rate schedule that is used for SGD or Adam may
be used. The overall learning rate is not comparable to either method and
should be determined by a hyper-parameter sweep.
MADGRAD requires less weight decay than other methods, often as little as
zero. Momentum values used for SGD or Adam’s beta1 should work here also.
On sparse problems both weight_decay and momentum should be set to 0.
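A configuration sketch following the guidance above, assuming 'madgrad' is the registered optimizer name; the learning rate would come from a sweep rather than being reused from SGD or Adam:
import torch.nn as nn
from timm.optim import create_optimizer_v2

model = nn.Linear(16, 4)  # toy module; note MADGRAD is currently GPU-only
optimizer = create_optimizer_v2(model, opt='madgrad', lr=1e-2,
                                weight_decay=0.0,  # often little or no decay needed
                                momentum=0.9)      # SGD/Adam beta1-style value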
See Table 1 in https://arxiv.org/abs/1910.05446 for the implementation of
the NAdam algorithm (there is also a comment in the code which highlights
the only difference between NAdamW and AdamW).
For further details regarding the algorithm we refer to
Decoupled Weight Decay Regularization.
amsgrad (boolean, optional) — whether to use the AMSGrad variant of this
algorithm from the paper On the Convergence of Adam and Beyond
(default: False)
NOTE: This is a direct cut-and-paste of PyTorch RMSprop with eps applied before the sqrt
and a few other modifications to more closely match TensorFlow for matching hyper-params.
Noteworthy changes include:
Epsilon applied inside square-root
square_avg initialized to ones
LR scaling of update accumulated in momentum buffer
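A small sketch contrasting the eps placement noted above; this is illustrative only, not the optimizer's actual code:
import torch

square_avg = torch.ones(4)   # note: initialized to ones rather than zeros
grad = torch.randn(4)
alpha, eps = 0.9, 1e-10

square_avg.mul_(alpha).addcmul_(grad, grad, value=1 - alpha)

denom_pytorch = square_avg.sqrt().add_(eps)   # torch.optim.RMSprop: eps added after sqrt
denom_tf_style = square_avg.add(eps).sqrt_()  # TensorFlow-style: eps added before sqrt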