Configure Learning Rate Scheduler
We use PyTorch Learning Rate Scheduler objects to train models within the PyTorch and Lightning framework.
As continual learning involves multiple tasks, each task needs a learning rate scheduler for training. You can either use a uniform learning rate scheduler across all tasks or assign a distinct learning rate scheduler to each task.
If you don't want to use a learning rate scheduler, set the /lr_scheduler field in the experiment index config to null, or simply remove the field.
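For example, the defaults list of the experiment index config would then map the group to null (a minimal sketch; the other entries are elided):
defaults:
  ...
  - /lr_scheduler: null
  ...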
Configure a Uniform Learning Rate Scheduler For All Tasks
To configure a uniform learning rate scheduler for all tasks in your experiment, link the /lr_scheduler
field in the experiment index config to a YAML file in the lr_scheduler/ subfolder of your configs. That YAML file should use the _target_
field to point to a PyTorch learning rate scheduler class and specify its arguments in the fields that follow. Here is an example:
./clarena/example_configs
├── __init__.py
├── entrance.yaml
├── experiment
│   ├── example.yaml
│   └── ...
├── lr_scheduler
│   ├── reduce_lr_on_plateau_10_tasks.yaml
│   └── reduce_lr_on_plateau.yaml
...
example_configs/experiment/example.yaml
defaults:
...
- /lr_scheduler: reduce_lr_on_plateau.yaml
...
example_configs/lr_scheduler/reduce_lr_on_plateau.yaml
_target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
Make sure to include the field _partial_: true
to enable partial instantiation. A PyTorch learning rate scheduler needs the optimizer as an argument to be fully instantiated, but at configuration time that argument does not exist yet, so the learning rate scheduler can only be partially instantiated here and is completed with the optimizer later during training.
Configure a Distinct Learning Rate Scheduler For Each Task
To configure a distinct learning rate scheduler for each task in your experiment, the YAML file linked in the lr_scheduler/ subfolder should contain a list of PyTorch learning rate scheduler configurations, one assigned to each task in order. The length of the list must equal the num_tasks
field in the experiment index config.
example_configs/lr_scheduler/reduce_lr_on_plateau_10_tasks.yaml
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
mode: min
factor: 0.1
patience: 10
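The entries in the list do not have to be identical; each task can be given a different scheduler class or different arguments. For instance, a hypothetical config for a 3-task experiment (num_tasks: 3) might look like the following sketch (the file name and argument values are illustrative only, not part of the shipped example configs):
example_configs/lr_scheduler/mixed_3_tasks.yaml
- _target_: torch.optim.lr_scheduler.ReduceLROnPlateau
  _partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument
  mode: min
  factor: 0.1
  patience: 10
- _target_: torch.optim.lr_scheduler.StepLR
  _partial_: true
  step_size: 30
  gamma: 0.1
- _target_: torch.optim.lr_scheduler.CosineAnnealingLR
  _partial_: true
  T_max: 50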
Supported Learning Rate Schedulers
All built-in learning rate schedulers defined in PyTorch are fully supported. Please refer to the PyTorch documentation for the full list, and to the documentation of each learning rate scheduler class for its required arguments.
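Any of them can be configured in the same way as above. For instance, a config for torch.optim.lr_scheduler.StepLR might look like the following sketch (the file name and argument values are illustrative, not part of the shipped example configs):
example_configs/lr_scheduler/step_lr.yaml
_target_: torch.optim.lr_scheduler.StepLR
_partial_: true # partially instantiate learning rate scheduler without 'optimizer' argument. Make sure this is included in any case!
step_size: 30
gamma: 0.1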
Optimisation-based approaches in continual learning design their mechanisms by manipulating the optimisation step, which may involve using different learning rate schedulers for different tasks. However, such changes to optimisation can be integrated directly into the CL algorithm, so we do not design our own CL-specific learning rate schedulers.