Learning Rate Schedulers

This page contains the API reference documentation for learning rate schedulers included in timm.

Schedulers

Factory functions

timm.scheduler.create_scheduler

( args, optimizer: Optimizer, updates_per_epoch: int = 0 )

timm.scheduler.create_scheduler_v2

( optimizer: Optimizer, sched: str = 'cosine', num_epochs: int = 300, decay_epochs: int = 90, decay_milestones: typing.List[int] = (90, 180, 270), cooldown_epochs: int = 0, patience_epochs: int = 10, decay_rate: float = 0.1, min_lr: float = 0, warmup_lr: float = 1e-05, warmup_epochs: int = 0, warmup_prefix: bool = False, noise: typing.Union[float, typing.List[float]] = None, noise_pct: float = 0.67, noise_std: float = 1.0, noise_seed: int = 42, cycle_mul: float = 1.0, cycle_decay: float = 0.1, cycle_limit: int = 1, k_decay: float = 1.0, plateau_mode: str = 'max', step_on_epochs: bool = True, updates_per_epoch: int = 0 )
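
As a brief illustration of the factory in a training loop (the model, optimizer, and epoch count below are placeholder assumptions, not part of the API):

```python
import torch
from timm.scheduler import create_scheduler_v2

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# The factory returns the scheduler plus the resolved epoch count
# (which can differ from num_epochs once cycles/cooldown are applied).
scheduler, num_epochs = create_scheduler_v2(
    optimizer,
    sched='cosine',
    num_epochs=100,
    warmup_epochs=5,
    warmup_lr=1e-4,
    min_lr=1e-5,
)

for epoch in range(num_epochs):
    # ... train for one epoch ...
    scheduler.step(epoch + 1)  # timm schedulers take an explicit epoch index
```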

Scheduler Classes

class timm.scheduler.CosineLRScheduler

( optimizer: Optimizer, t_initial: int, lr_min: float = 0.0, cycle_mul: float = 1.0, cycle_decay: float = 1.0, cycle_limit: int = 1, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = False, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, k_decay = 1.0, initialize = True )

Cosine decay with restarts, as described in SGDR: Stochastic Gradient Descent with Warm Restarts (https://arxiv.org/abs/1608.03983).

Inspired by https://github.com/allenai/allennlp/blob/master/allennlp/training/learning_rate_schedulers/cosine.py

The k-decay option is based on k-decay: A New Method For Learning Rate Schedule (https://arxiv.org/abs/2004.05909).
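
A minimal sketch of using the class directly, with restarts (the model and optimizer are placeholders):

```python
import torch
from timm.scheduler import CosineLRScheduler

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Three 50-epoch cosine cycles; each restart begins at 0.5x the
# previous peak LR (cycle_decay), after a 5-epoch linear warmup.
scheduler = CosineLRScheduler(
    optimizer,
    t_initial=50,
    lr_min=1e-5,
    cycle_decay=0.5,
    cycle_limit=3,
    warmup_t=5,
    warmup_lr_init=1e-4,
)

for epoch in range(scheduler.get_cycle_length()):  # 150 epochs total here
    # ... train for one epoch ...
    scheduler.step(epoch + 1)
```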

class timm.scheduler.MultiStepLRScheduler

( optimizer: Optimizer, decay_t: typing.List[int], decay_rate: float = 1.0, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = True, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, initialize = True )
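
This scheduler multiplies the LR by decay_rate at each epoch listed in decay_t. A minimal sketch (the model and loop length are placeholders):

```python
import torch
from timm.scheduler import MultiStepLRScheduler

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# LR is multiplied by 0.1 at epochs 30, 60, and 90.
scheduler = MultiStepLRScheduler(optimizer, decay_t=[30, 60, 90], decay_rate=0.1)

for epoch in range(100):
    # ... train for one epoch ...
    scheduler.step(epoch + 1)
```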

class timm.scheduler.PlateauLRScheduler

( optimizer, decay_rate = 0.1, patience_t = 10, verbose = True, threshold = 0.0001, cooldown_t = 0, warmup_t = 0, warmup_lr_init = 0, lr_min = 0, mode = 'max', noise_range_t = None, noise_type = 'normal', noise_pct = 0.67, noise_std = 1.0, noise_seed = None, initialize = True )

Decay the LR by a factor every time the tracked validation metric stops improving ('max' mode for metrics like accuracy, 'min' for losses).
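
Unlike the other schedulers, this one needs the tracked metric passed to step(). A minimal sketch (the model and metric are placeholders):

```python
import torch
from timm.scheduler import PlateauLRScheduler

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# mode='max' suits metrics that should increase (e.g. accuracy);
# use mode='min' when tracking a loss. After patience_t epochs
# without improvement, the LR is multiplied by decay_rate.
scheduler = PlateauLRScheduler(optimizer, decay_rate=0.1, patience_t=10, mode='max')

for epoch in range(100):
    val_acc = 0.0  # placeholder: compute your validation metric here
    scheduler.step(epoch + 1, metric=val_acc)
```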

class timm.scheduler.PolyLRScheduler

( optimizer: Optimizer, t_initial: int, power: float = 0.5, lr_min: float = 0.0, cycle_mul: float = 1.0, cycle_decay: float = 1.0, cycle_limit: int = 1, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = False, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, k_decay = 1.0, initialize = True )

Polynomial LR scheduler with warmup, noise, and k-decay.

The k-decay option is based on k-decay: A New Method For Learning Rate Schedule (https://arxiv.org/abs/2004.05909).
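
Like the other timm schedulers, this can also be stepped per optimizer update rather than per epoch; a minimal sketch with assumed loop sizes:

```python
import torch
from timm.scheduler import PolyLRScheduler

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

steps_per_epoch, epochs = 500, 100  # assumed loop sizes

# With t_in_epochs=False, t_initial counts optimizer updates and the
# schedule is driven by step_update(). power=1.0 gives linear decay
# from the initial LR down to lr_min.
scheduler = PolyLRScheduler(
    optimizer,
    t_initial=steps_per_epoch * epochs,
    power=1.0,
    lr_min=1e-5,
    t_in_epochs=False,
)

num_updates = 0
for epoch in range(epochs):
    for _ in range(steps_per_epoch):
        # ... forward/backward and optimizer.step() here ...
        num_updates += 1
        scheduler.step_update(num_updates)
```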

class timm.scheduler.StepLRScheduler

( optimizer: Optimizer, decay_t: float, decay_rate: float = 1.0, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = True, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, initialize = True )
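
This is the fixed-interval variant of MultiStepLRScheduler: the LR is multiplied by decay_rate every decay_t epochs. A minimal sketch (placeholder model and loop length):

```python
import torch
from timm.scheduler import StepLRScheduler

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# LR is multiplied by 0.1 every 30 epochs (at 30, 60, 90, ...).
scheduler = StepLRScheduler(optimizer, decay_t=30, decay_rate=0.1)

for epoch in range(100):
    # ... train for one epoch ...
    scheduler.step(epoch + 1)
```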

class timm.scheduler.TanhLRScheduler

( optimizer: Optimizer, t_initial: int, lb: float = -7.0, ub: float = 3.0, lr_min: float = 0.0, cycle_mul: float = 1.0, cycle_decay: float = 1.0, cycle_limit: int = 1, warmup_t = 0, warmup_lr_init = 0, warmup_prefix = False, t_in_epochs = True, noise_range_t = None, noise_pct = 0.67, noise_std = 1.0, noise_seed = 42, initialize = True )

Hyperbolic-tangent decay with restarts, as described in the paper https://arxiv.org/abs/1806.01593.
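
A minimal sketch (placeholder model and loop; lb and ub keep their defaults):

```python
import torch
from timm.scheduler import TanhLRScheduler

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# lb and ub bound the tanh input; the defaults (-7.0, 3.0) keep the LR
# near its peak early on, with the steepest decay late in the cycle.
scheduler = TanhLRScheduler(optimizer, t_initial=100, lr_min=1e-5, warmup_t=5, warmup_lr_init=1e-4)

for epoch in range(100):
    # ... train for one epoch ...
    scheduler.step(epoch + 1)
```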
