
Experiment Index Config

Modified: August 16, 2025

To run a custom experiment, you need to create a YAML file in the experiment/ folder – the experiment index config. Below are an example and the required fields of the experiment index config.

Example

configs/experiment/example_culmain_train.yaml
# @package _global_
# make sure to include the above commented global setting!

cl_paradigm: TIL
train_tasks: 5
eval_after_tasks: 5
unlearning_requests: 
  2: [1]
  4: [2]
  5: [5]
global_seed: 1


defaults:
  - /cl_dataset: permuted_mnist.yaml
  - /backbone: mlp.yaml
  - /cl_algorithm: independent.yaml
  - /unlearning_algorithm: independent_unlearn.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

output_dir: outputs/cul_til_pmnist_independent/${misc.timestamp}

# overrides
trainer: 
  max_epochs: 2
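With this file saved in the configs/experiment/ folder, you can select it by name when launching the experiment. A minimal sketch of the launch command, assuming the standard Hydra experiment override applies here (an assumption; see the Get Started section for the exact invocation):

clarena train culmain experiment=example_culmain_train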

Required Config Fields

Each field is described below, together with its allowed values.
cl_paradigm The continual learning paradigm
  • ‘TIL’: for Task-Incremental Learning (TIL)
  • ‘CIL’: for Class-Incremental Learning (CIL)
train_tasks The list of task IDs¹ to train
  • List of integers tk: each at least 1, no more than the number of available tasks in the CL dataset: cl_dataset.num_tasks
  • Integer T: equivalent to the list of integers [1,⋯,T]. At least 0, no more than the number of available tasks in the CL dataset: cl_dataset.num_tasks
eval_after_tasks If task ID t is in this list, the evaluation process over all seen tasks runs after training task t
  • List of integers tk or integer T, working the same as in train_tasks (see the sketch after this table)
  • Must be a subset of train_tasks
unlearning_requests The task IDs requested to be unlearned after each training task, i.e. u(t)
  • Dictionary: keys are the training tasks that have unlearning requests, and values are lists of the task IDs to unlearn
permanent_mark Whether each task in the experiment is permanent. A permanent task will not be unlearned, i.e. it will not appear in future unlearning requests. This applies to unlearning algorithms that need to know whether a task is permanent (see the sketch after this table)
  • Dictionary: keys are task IDs, and values are booleans
global_seed The global seed for the experiment. It helps reproduce the results
  • Same as seed argument in lightning.seed_everything
/cl_dataset The continual learning dataset that the experiment runs on
  • Choose from sub-config YAML files in cl_dataset/ folder
  • This works the same as CL dataset in continual learning. Please refer to Configure CL Dataset (CL Main) section
/cl_algorithm The continual learning algorithm to experiment with
  • Choose from sub-config YAML files in cl_algorithm/ folder
  • This works the same as CL algorithm in continual learning. Please refer to Configure CL Algorithm (CL Main) section
/unlearning_algorithm The unlearning algorithm to experiment with
  • Choose from sub-config YAML files in unlearning_algorithm/ folder
  • Please refer to Configure Unlearning Algorithm (CUL Main) section
/backbone The backbone network on which the continual learning algorithm is based
  • Choose from sub-config YAML files in backbone/ folder
  • This works the same as backbone network in continual learning. Please refer to Configure Backbone Network (CL Main) section
/optimizer The optimizer for each task
  • Choose from sub-config YAML files in optimizer/ folder
  • This works the same as optimizer in continual learning. Please refer to Configure Optimizer(s) (CL Main) section
/lr_scheduler The learning rate scheduler for each task
  • Choose from sub-config YAML files in lr_scheduler/ folder
  • This works the same as learning rate scheduler in continual learning. Please refer to Configure Learning Rate Scheduler(s) (CL Main) section
/trainer The PyTorch Lightning Trainer object, which contains all configs for the training and test processes
  • Choose from sub-config YAML files in trainer/ folder
  • This works the same as Trainer in continual learning. Please refer to Configure Trainer(s) (CL Main) section
/metrics The metrics to be monitored, logged or visualized
  • Choose from sub-config YAML files in metrics/ folder
  • This works the same as metrics in continual learning, as unlearning metrics all require reference experiments. Please refer to Configure Metrics (CL Main) section
/lightning_loggers The Lightning Loggers used to log metrics and results. Please refer to Output Results (CL Main) section
  • Choose from sub-config YAML files in lightning_loggers/ folder
  • This works the same as Lightning Loggers in continual learning. Please refer to Configure Lightning Loggers (CL Main) section
/callbacks The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions integrated at different points of the experiment
  • Choose from sub-config YAML files in callbacks/ folder
  • Please refer to Configure Callbacks (CUL Main) section
output_dir The folder where the experiment results are stored. Please refer to Output Results (CL Main) section
  • Relative path from the directory where you run the clarena train culmain command
  • We recommend including timestamps in the path to distinguish multiple runs, e.g. outputs/cul_til_pmnist_independent/${misc.timestamp}. Please refer to Other Configs section
/hydra Configuration for Hydra itself
  • Choose from sub-config YAML files in hydra/ folder
  • Please refer to Other Configs section
/misc Miscellaneous configs that are less related to the experiment
  • Choose from sub-config YAML files in misc/ folder
  • Please refer to Other Configs section
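To make the task-related fields concrete, below is a minimal sketch that annotates the example config from the top of this page. The eval_after_tasks list and the permanent_mark values are hypothetical additions for illustration, not part of the original example:

cl_paradigm: TIL            # Task-Incremental Learning
train_tasks: 5              # integer form, equivalent to [1, 2, 3, 4, 5]
eval_after_tasks: [3, 5]    # evaluate all seen tasks after training tasks 3 and 5
unlearning_requests:
  2: [1]                    # after training task 2, unlearn task 1
  4: [2]                    # after training task 4, unlearn task 2
  5: [5]                    # after training task 5, unlearn task 5
permanent_mark:             # hypothetical values: task 3 may never be unlearned
  1: false
  2: false
  3: true
  4: false
  5: false
global_seed: 1              # passed to lightning.seed_everything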
Note

The continual unlearning experiment run by clarena train culmain is managed by the CULMainTrain class. To learn how these fields work, please refer to its source code.


Footnotes

  1. The task IDs tk are integers ranging from 1 to the number of tasks of the CL dataset. Each corresponds to a task-specific dataset in the CL dataset.

