# Continual Learning Main Experiment

The continual learning main experiment is the core experiment in CLArena for training and evaluating continual learning algorithms. This section defines its pipeline and guides you through configuring custom experiments.
## Experiment Pipeline

For each task:

- **Training Phase**
  - The model is trained on the current task's data for the specified number of epochs
  - The continual learning algorithm's mechanisms are applied during this phase
  - Model parameters are updated while attempting to preserve knowledge from previous tasks
- **Validation Phase**
  - At the end of each training epoch, the model is validated on the current task's validation data
  - Validation results can optionally guide model selection and hyperparameter tuning
- **Evaluation Phase**
  - After completing training and validation, the model is evaluated on test datasets
  - Testing occurs across all previously seen tasks
  - The evaluation assesses both task-specific and overall performance
  - Note: this phase may be skipped based on configuration settings (see `eval_after_tasks` in the required config fields)
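The per-task pipeline above can be sketched as a simple loop. This is an illustrative sketch only, not CLArena's actual API: the function and variable names below are hypothetical stand-ins, and `train_tasks: 10` from the example config is interpreted here as tasks 1 through 10.

```python
# Illustrative sketch of the continual learning main experiment loop.
# All names here are hypothetical; see the CLMainExperiment source for the real logic.

def run_cl_main_experiment(train_tasks, eval_after_tasks, max_epochs=2):
    """Train on each task in order; after the task IDs listed in
    eval_after_tasks, evaluate on all tasks seen so far."""
    evaluations = []  # records (after_task, tasks_tested) pairs
    for task_id in train_tasks:
        # Training phase: train on the current task for max_epochs epochs,
        # validating at the end of each epoch (stubbed out here).
        for epoch in range(max_epochs):
            pass  # train_one_epoch(task_id); validate(task_id)
        # Evaluation phase: skipped unless this task ID is listed.
        if task_id in eval_after_tasks:
            seen = [t for t in train_tasks if t <= task_id]
            evaluations.append((task_id, seen))  # test on all seen tasks
    return evaluations

# Evaluate only after the final task, mirroring the example config
# (train_tasks: 10, eval_after_tasks: 10).
print(run_cl_main_experiment(train_tasks=list(range(1, 11)), eval_after_tasks=[10]))
```

With `eval_after_tasks=[10]`, a single evaluation runs at the very end, covering all ten tasks; listing more task IDs would trigger intermediate evaluations as well.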
## Running

To run a continual learning main experiment, specify the `CL_MAIN_EXPR` indicator in the command:

```bash
clarena pipeline=CL_MAIN_EXPR index=<index-config-name>
```
## Configuration

To run a custom continual learning main experiment, create a YAML file in the `index/` folder as the index config. Below is an example.

### Example: `configs/index/example_clmain_expr.yaml`
```yaml
# @package _global_
# make sure to include the above commented global setting!

# pipeline info
pipeline: CL_MAIN_EXPR
expr_name: example_clmain_expr
train_tasks: 10
eval_after_tasks: 10
global_seed: 1

# paradigm settings
cl_paradigm: TIL

# components
defaults:
  - /cl_dataset: permuted_mnist.yaml
  - /backbone: clmlp.yaml
  - /cl_algorithm: finetuning.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /metrics: cl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: cl_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

# outputs
output_dir: outputs/${expr_name}/${misc.timestamp}

# overrides
trainer:
  max_epochs: 2
```
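Components are swapped by editing the `defaults` list. For example, switching the optimizer only requires changing one line; the `sgd.yaml` file referenced below is hypothetical and would need to exist under the optimizer config folder:

```yaml
# components
defaults:
  # swap the optimizer by pointing at a different config file
  # (sgd.yaml is an assumed example, not shipped configuration)
  - /optimizer: sgd.yaml
```

The `${expr_name}` and `${misc.timestamp}` references in `output_dir` are standard Hydra/OmegaConf interpolations, resolved from the other config values at runtime.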
## Required Config Fields

Below is the list of required config fields for the index config of a continual learning main experiment.
| Field | Description | Allowed Values |
|---|---|---|
| `pipeline` | The pipeline that clarena uses this config to run | |
| `expr_name` | The name of the experiment | |
| `train_tasks` | The list of task IDs[^1] to train | |
| `eval_after_tasks` | The task IDs after which the evaluation phase is performed | |
| `global_seed` | The global seed for the entire experiment | |
| `cl_paradigm` | The continual learning paradigm | |
| `/cl_dataset` | The continual learning dataset on which the experiment is conducted | |
| `/cl_algorithm` | The continual learning algorithm | |
| `/backbone` | The backbone network on which the continual learning algorithm is based | |
| `/optimizer` | The optimizer for each task | |
| `/lr_scheduler` | The learning rate scheduler for each task | |
| `/trainer` | The PyTorch Lightning Trainer object that contains all configurations for the training, validation, and test process | |
| `/metrics` | The metrics to be monitored, logged, or visualized | |
| `/lightning_loggers` | The Lightning Loggers used to log metrics and results | |
| `/callbacks` | The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions integrated at different points during the experiment | |
| `/hydra` | Configuration for Hydra | |
| `/misc` | Miscellaneous configurations that are less related to the experiment | |
| `output_dir` | The folder storing the experiment results | |
The continual learning main experiment is managed by the `CLMainExperiment` class. To learn how these fields work, please refer to its source code.
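As a quick sanity check before launching a run, the presence of the required top-level fields can be verified with a few lines of Python. This is a hypothetical helper, not part of CLArena; it only checks that the scalar top-level keys exist in a config loaded as a dict.

```python
# Hypothetical sanity check for an index config; not part of CLArena.
# It reports which required top-level fields are missing from a config dict.

REQUIRED_FIELDS = [
    "pipeline", "expr_name", "train_tasks", "eval_after_tasks",
    "global_seed", "cl_paradigm", "output_dir",
]

def missing_fields(config: dict) -> list:
    """Return the required fields absent from the config, in order."""
    return [f for f in REQUIRED_FIELDS if f not in config]

# Mirror the example config from this page.
config = {
    "pipeline": "CL_MAIN_EXPR",
    "expr_name": "example_clmain_expr",
    "train_tasks": 10,
    "eval_after_tasks": 10,
    "global_seed": 1,
    "cl_paradigm": "TIL",
    "output_dir": "outputs/example_clmain_expr",
}
print(missing_fields(config))  # an empty list means all required fields are set
```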
## Footnotes

[^1]: Task IDs are integers ranging from 1 to the number of tasks in the CL dataset. Each corresponds to a task-specific dataset within the CL dataset.