Configure CL Algorithm
The continual learning (CL) algorithm is the core component of a continual learning experiment: it determines how sequential tasks are learned and manages the interaction between previous and new tasks. If you are not familiar with continual learning algorithms, feel free to learn more from my beginners’ guide: baseline algorithms and CL methodology.
The CL algorithm is a sub-config under the index config of:
- Continual learning main experiment and evaluation
- Continual learning full experiment and the reference experiments
- Continual unlearning main experiment and evaluation
- Continual unlearning full experiment and the reference experiments
To configure a custom CL algorithm, create a YAML file in the cl_algorithm/ folder. Below is an example of the CL algorithm config.
Example
```
configs
├── __init__.py
├── entrance.yaml
├── index
│   ├── example_cl_main_expr.yaml
│   └── ...
├── cl_algorithm
│   └── finetuning.yaml
...
```

example_configs/index/example_cl_main_expr.yaml:

```yaml
defaults:
  ...
  - /cl_algorithm: finetuning.yaml
  ...
```

example_configs/cl_algorithm/finetuning.yaml:

```yaml
_target_: clarena.cl_algorithms.Finetuning
```

Supported CL Algorithms & Required Config Fields
In CLArena, we have implemented many CL algorithms as Python classes in the clarena.cl_algorithms module that you can use for your experiments.
To choose a CL algorithm, assign the _target_ field to the class name of the algorithm. For example, to use Finetuning, set the _target_ field to clarena.cl_algorithms.Finetuning. Each CL algorithm has its own hyperparameters and configurations, which means it has its own required fields. The required fields are the same as the arguments of the class specified by _target_ (excluding backbone, heads and non_algorithmic_params). The arguments for each CL algorithm class can be found in the API documentation.
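For instance, an algorithm with hyperparameters pairs _target_ with the class's own arguments. Below is a minimal sketch for EWC; the hyperparameter name and value here are hypothetical, so check the EWC class arguments in the API documentation for the actual required fields:

```yaml
# example_configs/cl_algorithm/ewc.yaml
_target_: clarena.cl_algorithms.EWC
# Hypothetical hyperparameter: a weighting factor for the EWC
# regularization term. Replace with the actual EWC class arguments.
parameter_importance_weighting: 0.5
```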
Below is the full list of supported CL algorithms. Note that the names in the “CL Algorithm” column are the exact class names that you should assign to _target_.
| CL Algorithm | Description | Required Config Fields |
|---|---|---|
| Finetuning | The most naive approach to task-incremental learning. It simply initializes the backbone from the last task when training a new task. See my continual learning beginners’ guide | Same as Finetuning class arguments (excluding backbone, heads and non_algorithmic_params) |
| Fix | Another naive approach to task-incremental learning, aside from Finetuning. It simply fixes the backbone forever after training the first task. It serves as a kind of toy algorithm when discussing the stability-plasticity dilemma in continual learning. See my continual learning beginners’ guide | Same as Fix class arguments (excluding backbone, heads and non_algorithmic_params) |
| Independent | Another naive approach to task-incremental learning, aside from Finetuning. It assigns a new independent model to each task. This is a simple way to avoid catastrophic forgetting, at the extreme cost of memory. It achieves the theoretical upper bound of performance in continual learning. See my continual learning beginners’ guide | Same as Independent class arguments (excluding backbone, heads and non_algorithmic_params) |
| Random | Skips the training step and simply uses the randomly initialized model to predict the test data. This serves as a reference model for computing forgetting rate. See chapter 4 in the HAT (Hard Attention to the Task) paper | Same as Random class arguments (excluding backbone, heads and non_algorithmic_params) |
| LwF | A regularization-based continual learning approach that constrains the feature output of the model to be similar to that of previous tasks. From the perspective of knowledge distillation, its regularization term distills the previous tasks' models into the training process of the new task. It is a simple yet effective method for continual learning | Same as LwF class arguments (excluding backbone, heads and non_algorithmic_params) |
| EWC | A regularization-based approach that calculates the Fisher information as parameter importance for the previous tasks and penalizes the current task loss with the importance of the parameters | Same as EWC class arguments (excluding backbone, heads and non_algorithmic_params) |
| HAT | An architecture-based approach that uses learnable hard attention masks to select task-specific parameters | Same as HAT class arguments (excluding backbone, heads and non_algorithmic_params) |
| AdaHAT | An architecture-based approach that improves HAT by introducing adaptive soft gradient clipping based on parameter importance and network sparsity. (This is my work; see Paper: AdaHAT for details) | Same as AdaHAT class arguments (excluding backbone, heads and non_algorithmic_params) |
| FGAdaHAT | An architecture-based approach that improves AdaHAT with fine-grained neuron-wise importance measures guiding AdaHAT's adaptive adjustment mechanism. (This is my work; see Paper: FG-AdaHAT for details) | Same as FGAdaHAT class arguments (excluding backbone, heads and non_algorithmic_params) |
| CBP (bug exists) | A continual learning approach that reinitializes a small number of units during training, using a utility measure to determine which units to reinitialize. It aims to address the loss-of-plasticity problem when learning new tasks, but does not solve the catastrophic forgetting problem in continual learning very well | Same as CBP class arguments (excluding backbone, heads and non_algorithmic_params) |
| WSN | An architecture-based approach that trains a learnable parameter-wise score and selects the top-scoring $c\%$ of the network parameters to be used for each task | Same as WSN class arguments (excluding backbone, heads and non_algorithmic_params) |
| NISPA (bug exists) | An architecture-based approach that selects neurons and weights through manual rules | Same as NISPA class arguments (excluding backbone, heads and non_algorithmic_params) |
| AmnesiacHAT (bug exists) | A variant of HAT that equips HAT with unlearning ability, based on the AdaHAT algorithm | Same as AmnesiacHAT class arguments (excluding backbone, heads and non_algorithmic_params) |
Make sure that the algorithm is compatible with the CL dataset, backbone and paradigm. For example, HAT, AdaHAT and FG-AdaHAT work on HAT mask backbones only.
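As an illustrative sketch of such a pairing in the index config, the defaults list would select both a compatible algorithm and backbone together. The file names hat.yaml and hat_masked_backbone.yaml below are hypothetical; use the actual config files you have created:

```yaml
# example_configs/index/example_hat_expr.yaml (file names hypothetical)
defaults:
  - /cl_algorithm: hat.yaml
  - /backbone: hat_masked_backbone.yaml  # HAT requires a HAT mask backbone
```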