Configure Learning Rate Scheduler (MTL)

Modified: August 16, 2025

A learning rate scheduler is a component that adjusts the learning rate during training, which can help improve convergence and performance.

The learning rate scheduler is a sub-config under the experiment index config. To configure a custom learning rate scheduler, create a YAML file in the lr_scheduler/ folder. The learning rate scheduler is optional, which means you can also train without one. Below is an example of a learning rate scheduler config.

Example

configs
β”œβ”€β”€ __init__.py
β”œβ”€β”€ entrance.yaml
β”œβ”€β”€ experiment
β”‚   β”œβ”€β”€ example_mtl_train.yaml
β”‚   └── ...
β”œβ”€β”€ lr_scheduler
β”‚   └── reduce_lr_on_plateau.yaml
...
configs/experiment/example_mtl_train.yaml
defaults:
  ...
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  ...
configs/lr_scheduler/reduce_lr_on_plateau.yaml
_target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate the learning rate scheduler without the 'optimizer' argument. Make sure this field is always included!
mode: min
factor: 0.1
patience: 10

No Learning Rate Scheduler

If you don’t want to use a learning rate scheduler, you can set the field /lr_scheduler in the experiment index config to null, or simply remove this field.

defaults:
  ...
  - /lr_scheduler: null # or simply remove this field
  ...

Supported Learning Rate Schedulers & Required Config Fields

In CLArena, we don’t implement our own learning rate schedulers; instead, we use the built-in learning rate schedulers from PyTorch in the torch.optim.lr_scheduler module.

To choose a learning rate scheduler, assign the _target_ field to the full class name of the scheduler. For example, to use ReduceLROnPlateau, set the _target_ field to torch.optim.lr_scheduler.ReduceLROnPlateau. Meanwhile, include _partial_: true as well (see the warning below). Each learning rate scheduler has its own hyperparameters and configurations, which means it has its own required fields. The required fields are the same as the constructor arguments of the class specified by _target_. The arguments of each learning rate scheduler class can be found in the PyTorch documentation.

PyTorch Documentation (Built-In Learning Rate Schedulers)
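
For instance, a minimal sketch of a config for StepLR could look like the following (the filename step_lr.yaml is just an illustrative choice; step_size and gamma are StepLR’s constructor arguments):

configs/lr_scheduler/step_lr.yaml
_target_: torch.optim.lr_scheduler.StepLR
_partial_: true # always required; see the warning below
step_size: 30 # decay the learning rate every 30 epochs
gamma: 0.1 # multiply the learning rate by 0.1 at each decay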

Warning

Make sure to include the field _partial_: true to enable partial instantiation. A PyTorch learning rate scheduler needs the optimizer as an argument to be fully instantiated, but at configuration time that argument doesn’t exist yet, so the learning rate scheduler can only be partially instantiated.
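
To illustrate what partial instantiation means here, below is a minimal generic Hydra sketch (not CLArena’s actual training code) showing how a partially instantiated scheduler is completed once the optimizer exists:

from hydra.utils import instantiate
from omegaconf import OmegaConf
import torch

# A config equivalent to configs/lr_scheduler/reduce_lr_on_plateau.yaml
cfg = OmegaConf.create({
    "_target_": "torch.optim.lr_scheduler.ReduceLROnPlateau",
    "_partial_": True,
    "mode": "min",
    "factor": 0.1,
    "patience": 10,
})

# With _partial_: true, instantiate() returns a functools.partial
# rather than calling the class, since 'optimizer' is still missing.
lr_scheduler_partial = instantiate(cfg)

# Later in training, once the optimizer exists, the scheduler is
# fully instantiated by supplying the missing argument.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
lr_scheduler = lr_scheduler_partial(optimizer=optimizer)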
