Experiment Index Config (CL Main)

Modified: August 16, 2025

To run a custom continual learning main experiment (CL Main), you need to create a YAML file in the experiment/ folder – the experiment index config. Below is an example, followed by the required fields of the experiment index config.

Example

configs/experiment/example_clmain_train.yaml
# @package _global_
# make sure to include the above commented global setting!

cl_paradigm: TIL
train_tasks: 10
eval_after_tasks: 10
global_seed: 1

defaults:
  - /cl_dataset: permuted_mnist.yaml
  - /backbone: clmlp.yaml
  - /cl_algorithm: finetuning.yaml
  - /optimizer: sgd.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /metrics: cl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: cl_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

output_dir: outputs/example_clmain_train/${misc.timestamp}

# overrides
trainer:
  max_epochs: 2
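As a sketch of the integer/list equivalence described in the field reference below, the scalar task fields in the example above could equally be written in list form (an illustrative variant, not one of the shipped example configs):

```yaml
cl_paradigm: TIL
train_tasks: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # equivalent to train_tasks: 10
eval_after_tasks: [10]  # evaluate all seen tasks only after training task 10
```

The list form is useful when you want to evaluate after selected tasks only, rather than after every task.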

Required Config Fields

Each field is listed below with its description and allowed values.

cl_paradigm – The continual learning paradigm
  • 'TIL': Task-Incremental Learning (TIL)
  • 'CIL': Class-Incremental Learning (CIL)

train_tasks – The list of task IDs¹ to train
  • List of integers t_k: at least 1 task, no more than the number of tasks available in the CL dataset (cl_dataset.num_tasks)
  • Integer T: equivalent to the list [1, ⋯, T]. At least 0 (no task to train), no more than cl_dataset.num_tasks

eval_after_tasks – If task ID t is in this list, run the evaluation process for all seen tasks after training task t
  • List of integers t_k or integer T, interpreted the same way as train_tasks
  • Must be a subset of train_tasks

global_seed – The global seed for the entire experiment
  • Same as the seed argument of lightning.seed_everything

/cl_dataset – The continual learning dataset on which the experiment is conducted
  • Choose from sub-config YAML files in the cl_dataset/ folder
  • Please refer to the Configure CL Dataset (CL Main) section

/cl_algorithm – The continual learning algorithm to be experimented with
  • Choose from sub-config YAML files in the cl_algorithm/ folder
  • Please refer to the Configure CL Algorithm (CL Main) section

/backbone – The backbone network on which the continual learning algorithm is based
  • Choose from sub-config YAML files in the backbone/ folder
  • Please refer to the Configure Backbone Network (CL Main) section

/optimizer – The optimizer for each task
  • Choose from sub-config YAML files in the optimizer/ folder
  • Please refer to the Configure Optimizer(s) (CL Main) section

/lr_scheduler – The learning rate scheduler for each task
  • Choose from sub-config YAML files in the lr_scheduler/ folder
  • Please refer to the Configure Learning Rate Scheduler(s) (CL Main) section

/trainer – The PyTorch Lightning Trainer object that contains all configurations for the training and testing process
  • Choose from sub-config YAML files in the trainer/ folder
  • Please refer to the Configure Trainer(s) (CL Main) section

/metrics – The metrics to be monitored, logged, or visualized
  • Choose from sub-config YAML files in the metrics/ folder
  • Please refer to the Configure Metrics (CL Main) section

/lightning_loggers – The Lightning Loggers used to log metrics and results
  • Choose from sub-config YAML files in the lightning_loggers/ folder
  • Please refer to the Configure Lightning Loggers (CL Main) and Output Results (CL Main) sections

/callbacks – The callbacks applied to this experiment (other than metric callbacks); callbacks are additional actions integrated at various points during the experiment
  • Choose from sub-config YAML files in the callbacks/ folder
  • Please refer to the Configure Callbacks (CL Main) section

output_dir – The folder for storing the experiment results
  • Relative path from where you run the clarena train clmain command
  • We recommend including a timestamp in the path to distinguish multiple runs, e.g. outputs/til_pmnist_finetuning/${misc.timestamp}. Please refer to the Other Configs section
  • Please refer to the Output Results (CL Main) section

/hydra – Configuration for Hydra itself
  • Choose from sub-config YAML files in the hydra/ folder
  • Please refer to the Other Configs section

/misc – Miscellaneous configurations that are less related to the experiment itself
  • Choose from sub-config YAML files in the misc/ folder
  • Please refer to the Other Configs section
Note

The continual learning main experiment run by clarena train clmain is managed by a CLMainTrain class. To learn how these fields work, please refer to its source code.


Footnotes

  1. The task IDs t_k are integers ranging from 1 to the number of tasks in the CL dataset. Each corresponds to a task-specific dataset within the CL dataset.


©️ 2025 Pengxiang Wang. All rights reserved.