Configure Learning Rate Scheduler(s) (CL Main)

Modified: August 16, 2025

A learning rate scheduler is a component that adjusts the learning rate during training, which can help improve convergence and final performance.
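To make concrete what a scheduler does, here is a minimal plain-PyTorch sketch (not CLArena code) using ReduceLROnPlateau, which lowers the learning rate when a monitored metric stops improving:

import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=10
)

for epoch in range(30):
    # ... training steps for this epoch would go here ...
    val_loss = 1.0  # stand-in value; since it never improves, the scheduler
                    # multiplies the learning rate by `factor` after `patience` epochs
    scheduler.step(val_loss)
    print(epoch, optimizer.param_groups[0]["lr"])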

The learning rate scheduler is a sub-config under the experiment index config (CL Main). To configure a custom learning rate scheduler, create a YAML file in the lr_scheduler/ folder. Because continual learning involves multiple tasks, each task can be given its own learning rate scheduler. We support both a uniform learning rate scheduler shared across all tasks and distinct learning rate schedulers for individual tasks. The learning rate scheduler is optional: you can omit it entirely, or disable it for specific tasks. The examples below show configs for both the uniform and the distinct case.

Example

configs
β”œβ”€β”€ __init__.py
β”œβ”€β”€ entrance.yaml
β”œβ”€β”€ experiment
β”‚   β”œβ”€β”€ example_clmain_train.yaml
β”‚   └── ...
β”œβ”€β”€ lr_scheduler
β”‚   β”œβ”€β”€ reduce_lr_on_plateau_10_tasks.yaml
β”‚   └── reduce_lr_on_plateau.yaml
...

No Learning Rate Scheduler

If you don’t want to use a learning rate scheduler, you can set the field /lr_scheduler in the experiment index config to null, or simply remove this field.

defaults:
  ...
  - /lr_scheduler: null # or simply remove this field
  ...

Uniform Learning Rate Scheduler for All Tasks

configs/experiment/example_clmain_train.yaml
defaults:
  ...
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  ...
configs/lr_scheduler/reduce_lr_on_plateau.yaml
_target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10

Distinct Learning Rate Scheduler For Each Task

Distinct learning rate schedulers are specified as a list of scheduler configs. The length of the list must match the train_tasks field in the experiment index config (CL Main). To disable the learning rate scheduler for a specific task, set the corresponding element of the list to null. Below is an example of distinct learning rate schedulers for 10 tasks, where tasks 2 and 3 have no learning rate scheduler; a sketch of how such a per-task list might be consumed follows the example.

defaults:
  ...
  - /lr_scheduler: reduce_lr_on_plateau_10_tasks.yaml
  ...
configs/lr_scheduler/reduce_lr_on_plateau_10_tasks.yaml
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
- null # no learning rate scheduler for task 2
- null # no learning rate scheduler for task 3
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
  mode: min
  factor: 0.1
  patience: 10
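At training time, the list is indexed by task order, and each non-null entry is completed with that task's optimizer. The sketch below is a hypothetical illustration (not CLArena's actual code) of how such a per-task list might be consumed, assuming each entry has been instantiated into a functools.partial (via _partial_: true) or None:

from functools import partial
import torch

# Hypothetical per-task list mirroring the 10-task config above:
# a partial scheduler for every task except tasks 2 and 3 (null -> None).
make_scheduler = partial(
    torch.optim.lr_scheduler.ReduceLROnPlateau, mode="min", factor=0.1, patience=10
)
lr_scheduler_partials = [make_scheduler, None, None] + [make_scheduler] * 7

model = torch.nn.Linear(10, 2)
for task_id, scheduler_partial in enumerate(lr_scheduler_partials, start=1):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    # Complete the partial with this task's optimizer; tasks set to null get no scheduler.
    scheduler = scheduler_partial(optimizer) if scheduler_partial is not None else None
    # ... train task `task_id`, calling scheduler.step(...) only if scheduler is not None ...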

Supported Learning Rate Schedulers & Required Config Fields

CLArena does not implement its own learning rate schedulers; instead, it uses the built-in learning rate schedulers from PyTorch's torch.optim.lr_scheduler module.

Note

Optimization-based approaches in continual learning design mechanisms that manipulate the optimization step, and they may use different learning rate schedulers for different tasks. However, such changes to the optimization can be integrated directly into the CL algorithm, so we do not design dedicated CL learning rate schedulers.

To choose a learning rate scheduler, set the _target_ field to the full class name of the scheduler. For example, to use ReduceLROnPlateau, set _target_ to torch.optim.lr_scheduler.ReduceLROnPlateau, and include _partial_: true as well (see the warning below). Each learning rate scheduler has its own hyperparameters, so it has its own required fields: they are the same as the arguments of the class specified by _target_. The arguments of each learning rate scheduler class can be found in the PyTorch documentation.

PyTorch Documentation (Built-In Learning Rate Schedulers)

Warning

Make sure to include the field _partial_: true to enable partial instantiation. A PyTorch learning rate scheduler needs the optimizer as an argument to be fully instantiated, but at configuration time that argument does not exist yet, so the scheduler can only be partially instantiated.
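For reference, the sketch below shows how Hydra's partial instantiation behaves in general (a generic Hydra/PyTorch illustration, not CLArena internals, assuming a recent Hydra version): with _partial_: true, hydra.utils.instantiate returns a functools.partial that is completed later with the optimizer.

import torch
from hydra.utils import instantiate
from omegaconf import OmegaConf

# Same fields as configs/lr_scheduler/reduce_lr_on_plateau.yaml.
cfg = OmegaConf.create(
    {
        "_target_": "torch.optim.lr_scheduler.ReduceLROnPlateau",
        "_partial_": True,
        "mode": "min",
        "factor": 0.1,
        "patience": 10,
    }
)

scheduler_partial = instantiate(cfg)  # a functools.partial; no optimizer yet

# Later, once the optimizer exists, the partial is completed into a real scheduler:
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = scheduler_partial(optimizer)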
