
Configure Learning Rate Scheduler(s)

Modified: August 30, 2025

A learning rate scheduler is a component that adjusts the learning rate during training, which can help improve convergence and final performance.

The learning rate scheduler is a sub-config under the index config of:

  • Continual learning main experiment
  • Continual learning full experiment and the reference experiments
  • Continual unlearning main experiment
  • Continual unlearning full experiment and the reference experiments
  • Multi-task learning experiment
  • Single-task learning experiment

To configure a custom scheduler, create a YAML file in the lr_scheduler/ folder. Because continual learning and unlearning involve multiple tasks, each task can have its own scheduler: we support a uniform scheduler for all tasks as well as distinct schedulers for each task. Multi-task and single-task learning experiments use a single uniform scheduler. The scheduler is optional; you can also use no scheduler, either uniformly or for specific tasks. Below are examples of both configurations.

Example

configs
β”œβ”€β”€ __init__.py
β”œβ”€β”€ entrance.yaml
β”œβ”€β”€ index
β”‚   β”œβ”€β”€ example_cl_main_expr.yaml
β”‚   └── ...
β”œβ”€β”€ lr_scheduler
β”‚   β”œβ”€β”€ reduce_lr_on_plateau_10_tasks.yaml
β”‚   └── reduce_lr_on_plateau.yaml
...

No Learning Rate Scheduler

If you do not want to use a learning rate scheduler, set the /lr_scheduler field in the index config to null, or simply remove the field.

defaults:
  ...
  - /lr_scheduler: null # or simply remove this field
  ...

Uniform Learning Rate Scheduler

configs/index/example_cl_main_expr.yaml
defaults:
  ...
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  ...
configs/lr_scheduler/reduce_lr_on_plateau.yaml
_target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
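To see what `_partial_: true` does at runtime, here is a minimal sketch using `functools.partial` as a stand-in for Hydra's partial instantiation; the model and optimizer are illustrative, not part of CLArena's API. With `_partial_: true`, instantiating the config yields roughly the equivalent of this partial: the scheduler class with its config arguments bound, awaiting only the optimizer.

```python
from functools import partial

import torch

# Illustrative model and optimizer; in CLArena these come from the
# backbone and optimizer configs.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Rough equivalent of instantiating the YAML config above with
# _partial_: true - the config fields are bound, 'optimizer' is not.
scheduler_fn = partial(
    torch.optim.lr_scheduler.ReduceLROnPlateau,
    mode="min",
    factor=0.1,
    patience=10,
)

# Completed later, once the optimizer actually exists.
scheduler = scheduler_fn(optimizer)
```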

Distinct Learning Rate Scheduler for Each Task (Continual Learning)

Distinct schedulers are specified as a dictionary. The keys and length of the dictionary must match the train_tasks field in the index config of continual learning. To disable the scheduler for a specific task, set the corresponding entry in the dictionary to null. Below is an example for 10 tasks, where tasks 2 and 3 have no scheduler.

configs/index/example_cl_main_expr.yaml
defaults:
  ...
  - /lr_scheduler: reduce_lr_on_plateau_10_tasks.yaml
  ...
configs/lr_scheduler/reduce_lr_on_plateau_10_tasks.yaml
1:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
2: null # no scheduler for task 2
3: null # no scheduler for task 3
4:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
5:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
6:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
7:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
8:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
9:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
10:
  _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true
  mode: min
  factor: 0.1
  patience: 10
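A hypothetical sketch of how such a per-task dictionary might be consumed at training time (CLArena's actual training loop will differ): each entry is either a partially instantiated scheduler awaiting its optimizer, or None for tasks trained at a fixed learning rate.

```python
from functools import partial

import torch

# Hypothetical result of instantiating the dictionary config above:
# task IDs map to partial scheduler constructors, or None.
scheduler_fns = {
    1: partial(
        torch.optim.lr_scheduler.ReduceLROnPlateau,
        mode="min", factor=0.1, patience=10,
    ),
    2: None,  # task 2: no scheduler
    3: None,  # task 3: no scheduler
}

# Illustrative model and optimizer, not CLArena's API.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)


def make_scheduler(task_id):
    """Complete the partial scheduler for a task, or return None."""
    fn = scheduler_fns.get(task_id)
    return fn(optimizer) if fn is not None else None


scheduler = make_scheduler(2)  # None: task 2 keeps a fixed learning rate
```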

Supported Learning Rate Schedulers & Required Config Fields

In CLArena, we do not implement custom learning rate schedulers; we use the built-in learning rate schedulers from PyTorch's torch.optim.lr_scheduler module.

Note

Optimization-based approaches in the continual learning literature design mechanisms that manipulate the optimization step, which may involve using different learning rate schedulers for different tasks. However, such optimization dynamics can be integrated directly into the CL algorithm, so we do not design CL-specific learning rate schedulers of our own.

To choose a scheduler, set the _target_ field to the fully qualified class name of the scheduler. For example, to use ReduceLROnPlateau, set _target_ to torch.optim.lr_scheduler.ReduceLROnPlateau. Always include _partial_: true (see the warning below). Each scheduler has its own hyperparameters, so its required config fields are the arguments of the class specified by _target_. The arguments for each scheduler class can be found in the PyTorch documentation.

PyTorch Documentation (Built-In Learning Rate Schedulers)
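For instance, to switch to StepLR (which decays the learning rate by a factor of gamma every step_size epochs, per the PyTorch documentation), a config might look like the following sketch; the values shown are illustrative, not recommendations:

```yaml
_target_: torch.optim.lr_scheduler.StepLR
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
step_size: 30
gamma: 0.1
```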

Warning

Make sure to include the field _partial_: true to enable partial instantiation. PyTorch learning rate schedulers require the optimizer as a constructor argument to be fully instantiated, but that argument is not yet available at configuration time, so the scheduler can only be partially instantiated.
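To see why partial instantiation is necessary, note what happens if the scheduler class is called at configuration time without the optimizer; this is a minimal demonstration of the failure mode, not CLArena code:

```python
import torch

# 'optimizer' is a required constructor argument of every PyTorch
# scheduler, so constructing one without it raises a TypeError.
try:
    torch.optim.lr_scheduler.ReduceLROnPlateau(
        mode="min", factor=0.1, patience=10,
    )
except TypeError as e:
    print(f"cannot fully instantiate at config time: {e}")
```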

©️ 2025 Pengxiang Wang. All rights reserved.