Shawn’s Blog

Configure CL Algorithm (CL Main)

Modified: August 16, 2025

The continual learning algorithm is the core component of a continual learning system: it determines how sequential tasks are learned and manages the interaction between previous and new tasks. If you are not familiar with continual learning algorithms, my continual learning beginners’ guide covers the baseline algorithms and CL methodology.

The CL algorithm is a sub-config under the experiment index config (CL Main). To configure a CL algorithm, create a YAML file in the cl_algorithm/ folder. Below is an example of a CL algorithm config.

Example

configs
├── __init__.py
├── entrance.yaml
├── experiment
│   ├── example_clmain_train.yaml
│   └── ...
├── cl_algorithm
│   └── finetuning.yaml
...
configs/experiment/example_clmain_train.yaml
defaults:
  ...
  - /cl_algorithm: finetuning.yaml
  ...
configs/cl_algorithm/finetuning.yaml
_target_: clarena.cl_algorithms.Finetuning
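The _target_ field is resolved by Hydra at runtime: the dotted path is imported, and the remaining config keys are passed to the class constructor as keyword arguments. Below is a minimal standard-library sketch of that mechanism, not Hydra’s actual implementation; it uses fractions.Fraction as a stand-in target so the snippet runs without clarena installed.

```python
import importlib


def instantiate(config: dict):
    """Hydra-style instantiation sketch: `_target_` names a class by its
    dotted import path; all other keys become constructor keyword arguments."""
    module_path, class_name = config["_target_"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return cls(**kwargs)


# Stand-in stdlib target, in place of e.g. clarena.cl_algorithms.Finetuning:
frac = instantiate({"_target_": "fractions.Fraction", "numerator": 3, "denominator": 4})
print(frac)  # 3/4
```

This is why the required config fields mirror the constructor arguments of the class named in _target_: the config is, in effect, a serialized constructor call.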

Supported CL Algorithms & Required Config Fields

In CLArena, many CL algorithms are implemented as Python classes in the clarena.cl_algorithms module, ready to use in your experiments.

To choose a CL algorithm, set the _target_ field to the full class name of the algorithm. For example, to use the Finetuning algorithm, set _target_ to clarena.cl_algorithms.Finetuning. Each CL algorithm has its own hyperparameters, so each has its own required config fields. These fields are the same as the arguments of the class specified by _target_ (excluding backbone and heads); the arguments of each CL algorithm class are listed in the API documentation.
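For an algorithm with hyperparameters, the config simply adds one field per constructor argument alongside _target_. The field name below is hypothetical, purely to illustrate the shape; check the API reference for the actual arguments of each class.

```yaml
# illustrative only; the hyperparameter name is hypothetical
_target_: clarena.cl_algorithms.EWC
reg_strength: 1.0
```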

API Reference (CL Algorithms) Source Code (CL Algorithms)

Below is the full list of supported CL algorithms. Note that each “CL Algorithm” name is exactly the class name assigned to the _target_ field.

Finetuning

The most naive approach to task-incremental learning: it simply initializes the backbone from the last task when training a new task. (Please refer to my continual learning beginners’ guide.) Required config fields: same as the Finetuning class arguments (excluding backbone and heads).

Fix

Another naive baseline for task-incremental learning besides Finetuning: it simply fixes the backbone forever after training the first task. It serves as a toy algorithm when discussing the stability-plasticity dilemma in continual learning. (Please refer to my continual learning beginners’ guide.) Required config fields: same as the Fix class arguments (excluding backbone and heads).

Independent

Another naive baseline for task-incremental learning besides Finetuning: it assigns a new independent model to each task. This is a simple way to avoid catastrophic forgetting, at an extreme memory cost, and it achieves the theoretical upper bound of continual learning performance. (Please refer to my continual learning beginners’ guide.) Required config fields: same as the Independent class arguments (excluding backbone and heads).

Random

Skips the training step and simply uses the randomly initialized model to predict the test data. This serves as a reference model for computing the forgetting rate; see Chapter 4 of the HAT (Hard Attention to the Task) paper. Required config fields: same as the Random class arguments (excluding backbone and heads).

LwF

[paper]

(Li and Hoiem 2017)

A regularization-based continual learning approach that constrains the model’s feature outputs to stay similar to those of the previous tasks. From the knowledge distillation perspective, it distills the previous tasks’ models into the new task’s training process through a regularization term. It is a simple yet effective method for continual learning. Required config fields: same as the LwF class arguments (excluding backbone and heads).

EWC

[paper]

(Kirkpatrick et al. 2017)

A regularization-based approach that computes the Fisher information as the parameter importance of previous tasks, and penalizes changes to the parameters in the current task’s loss in proportion to that importance. Required config fields: same as the EWC class arguments (excluding backbone and heads).

HAT

[paper]

(Serra et al. 2018)

An architecture-based approach that uses learnable hard attention masks to select task-specific parameters. Required config fields: same as the HAT class arguments (excluding backbone and heads).

AdaHAT

[paper]

(Wang et al. 2024)

An architecture-based approach that improves HAT by introducing adaptive soft gradient clipping based on parameter importance and network sparsity. (This is my work; please see Paper: AdaHAT for details.) Required config fields: same as the AdaHAT class arguments (excluding backbone and heads).

FGAdaHAT

[code]

An architecture-based approach that improves AdaHAT with fine-grained, neuron-wise importance measures that guide AdaHAT’s adaptive adjustment mechanism. (This is my work; please see Paper: FG-AdaHAT for details.) Required config fields: same as the FGAdaHAT class arguments (excluding backbone and heads).

CBP

[paper] [code]

(Dohare et al. 2024)

(bug exists)

A continual learning approach that reinitializes a small number of units during training, using a utility measure to decide which units to reinitialize. It aims to address the loss-of-plasticity problem when learning new tasks, but does not directly address catastrophic forgetting. Required config fields: same as the CBP class arguments (excluding backbone and heads).

WSN

[paper] [code]

(Kang et al. 2022)

An architecture-based approach that trains learnable parameter-wise scores and selects the top-scoring $c\%$ of network parameters for each task. Required config fields: same as the WSN class arguments (excluding backbone and heads).

NISPA

[paper] [code]

(Gurbuz and Dovrolis 2022)

(bug exists)

An architecture-based approach that selects neurons and weights through hand-designed rules. Required config fields: same as the NISPA class arguments (excluding backbone and heads).

AmnesiacHAT

(bug exists)

A variant of HAT that equips it with unlearning ability, built on the AdaHAT algorithm. Required config fields: same as the AmnesiacHAT class arguments (excluding backbone and heads).
Warning

Make sure the algorithm is compatible with the CL dataset, backbone, and paradigm. For example, HAT, AdaHAT, and FG-AdaHAT work only on HAT-masked backbones.


References

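Dohare, Shibhansh, J. Fernando Hernandez-Garcia, Qingfeng Lan, Parash Rahman, A. Rupam Mahmood, and Richard S. Sutton. 2024. “Loss of Plasticity in Deep Continual Learning.” Nature 632: 768–74.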
Gurbuz, Mustafa Burak, and Constantine Dovrolis. 2022. “NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks.” arXiv Preprint arXiv:2206.09117.
Kang, Haeyong, Rusty John Lloyd Mina, Sultan Rizky Hikmawan Madjid, Jaehong Yoon, Mark Hasegawa-Johnson, Sung Ju Hwang, and Chang D Yoo. 2022. “Forget-Free Continual Learning with Winning Subnetworks.” In International Conference on Machine Learning, 10734–50. PMLR.
Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, et al. 2017. “Overcoming Catastrophic Forgetting in Neural Networks.” Proceedings of the National Academy of Sciences 114 (13): 3521–26.
Li, Zhizhong, and Derek Hoiem. 2017. “Learning Without Forgetting.” IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (12): 2935–47.
Serra, Joan, Didac Suris, Marius Miron, and Alexandros Karatzoglou. 2018. “Overcoming Catastrophic Forgetting with Hard Attention to the Task.” In International Conference on Machine Learning, 4548–57. PMLR.
Wang, Pengxiang, Hongbo Bo, Jun Hong, Weiru Liu, and Kedian Mu. 2024. “AdaHAT: Adaptive Hard Attention to the Task in Task-Incremental Learning.” In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 143–60. Springer.
©️ 2025 Pengxiang Wang. All rights reserved.