Continual Learning (CL)

Modified: October 6, 2025

Continual Learning (CL) is a machine learning paradigm in which a model is trained on multiple tasks sequentially while maintaining performance on all previously learned tasks. For background, please refer to my article Continual Learning Beginners’ Guide.

Definition

In CLArena, continual learning is specifically designed for classification problems and follows the formal definition below.

Definition 1 (Continual Learning Classification) Given:

  • An initialized neural network model $f$ consisting of:
    • A shared backbone network $B$
    • Task-specific output heads
  • Sequential tasks $t = t_1, t_2, \dots$ with task IDs $t_k \in \mathbb{N}^+$
  • For each task $t$, we have:
    • Training data: $D_{\text{train}}^{(t)} = \{(x_i, y_i)\}_{i=1}^{N_t} \in (\mathcal{X}^{(t)}, \mathcal{Y}^{(t)})$
    • Validation data: $D_{\text{val}}^{(t)} \in (\mathcal{X}^{(t)}, \mathcal{Y}^{(t)})$
    • Test data: $D_{\text{test}}^{(t)} \in (\mathcal{X}^{(t)}, \mathcal{Y}^{(t)})$

Objective: Develop an algorithm that updates the model from $f^{(t-1)}$ to $f^{(t)}$ when learning task $t$, such that:

  • Only the current task’s data $D_{\text{train}}^{(t)}$ and $D_{\text{val}}^{(t)}$ are accessible
  • Good performance is maintained on the test datasets of all seen tasks: $D_{\text{test}}^{(t_1)}, \dots, D_{\text{test}}^{(t)}$
Note

By definition, continual learning has no knowledge of future tasks; however, the task IDs to train on are set beforehand in an experiment (see the config field train_tasks in the Experiment Index Config of CL_MAIN_EXPR). The continual learning algorithm must not access any information about future tasks during training, for example by preemptively allocating model capacity for them.
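The objective above can be sketched as a training loop. This is a minimal, framework-agnostic sketch in plain Python; `ToyModel`, `train_on`, and `evaluate` are hypothetical stand-ins for illustration only, not CLArena’s API. The key structural point is that training on task $t$ sees only that task’s data, while evaluation covers all tasks seen so far.

```python
class ToyModel:
    """Toy stand-in for a continual learner: memorizes a label per task."""

    def __init__(self):
        self.heads = {}  # task_id -> predicted label for that task

    def train_on(self, task_id, train_data, val_data):
        # Only the current task's train/val data is visible here.
        self.heads[task_id] = max(set(train_data), key=train_data.count)

    def evaluate(self, task_id, test_data):
        pred = self.heads[task_id]
        return sum(y == pred for y in test_data) / len(test_data)


def run_continual_learning(model, tasks, train, val, test):
    """Train on tasks sequentially; after each task, evaluate on all seen tasks."""
    history = {}
    seen = []
    for t in tasks:
        model.train_on(t, train[t], val[t])  # access limited to task t
        seen.append(t)
        # Performance must hold on the test sets of every task seen so far.
        history[t] = {s: model.evaluate(s, test[s]) for s in seen}
    return history
```

Note that the loop never touches `train[s]` for any future task `s`, matching the constraint in the note above.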

Supported Paradigms

CLArena supports two primary continual learning paradigms for classification:

Task-Incremental Learning (TIL)

In TIL, each task gets its own independent classification head: when a new task arrives, the model adds a separate output head that operates independently of the heads for previous tasks.

Class-Incremental Learning (CIL)

In CIL, new classes are incrementally added to a single, growing classification head. The output head concatenates new classes to the existing classification space.

The key architectural difference between these paradigms lies in their output head structure, as illustrated below. For detailed explanations, see my continual learning beginners’ guide and the source code.

Figure 1: Architectural comparison of TIL and CIL output heads.
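To make the head-structure difference concrete, here is a minimal sketch in plain Python (the class and method names are hypothetical illustrations, not CLArena’s implementation): TIL maintains one output space per task, while CIL concatenates new classes into a single growing output space.

```python
class TILHeads:
    """Task-Incremental Learning: one independent head per task."""

    def __init__(self):
        self.heads = {}  # task_id -> class labels for that task

    def add_task(self, task_id, classes):
        self.heads[task_id] = list(classes)

    def output_space(self, task_id):
        # Prediction is restricted to the queried task's own classes,
        # so the task ID must be known at test time.
        return self.heads[task_id]


class CILHead:
    """Class-Incremental Learning: a single head that keeps growing."""

    def __init__(self):
        self.classes = []  # concatenated classes from all tasks so far

    def add_task(self, classes):
        # New classes are appended to the existing classification space.
        self.classes.extend(c for c in classes if c not in self.classes)

    def output_space(self):
        # Prediction is over all classes seen so far; no task ID needed.
        return self.classes
```

Under TIL the task ID must be provided at inference time to select the right head; under CIL it is not, which is part of what makes CIL the harder setting.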

Supported Pipelines

CLArena supports the following experiment and evaluation pipelines for continual learning:

  • Continual Learning Main Experiment: The primary experiment for training and evaluating continual learning models. See Continual Learning Main Experiment.
  • Continual Learning Main Evaluation: The evaluation phase for assessing the performance of the trained continual learning models. See Save and Evaluate Model.
  • Continual Learning Full Experiment: A comprehensive experiment that evaluates more metrics for continual learning. It comprises the main experiment, additional reference experiments, and a full evaluation based on their results. See Continual Learning Full Experiment.
  • Reference Joint Learning Experiment (Continual Learning): The reference joint learning experiment for full evaluation. See Reference Joint Learning Experiment.
  • Reference Independent Learning Experiment (Continual Learning): The reference independent learning experiment for full evaluation. See Reference Independent Learning Experiment.
  • Reference Random Learning Experiment (Continual Learning): The reference random learning experiment for full evaluation. See Reference Random Learning Experiment.
  • Continual Learning Full Evaluation: The evaluation phase of the full experiment. See Full Evaluation.

©️ 2025 Pengxiang Wang. All rights reserved.