
Continual Unlearning Main Experiment

Modified

August 26, 2025

The continual unlearning main experiment is the continual learning main experiment in CLArena extended with an unlearning phase. This section defines its pipeline and guides you through configuring custom experiments.

Experiment Pipeline

For each task t specified in the configuration:

  1. Training Phase
    • The model is trained on current task data Dtrain(t) for the specified number of epochs
    • The continual learning algorithm’s mechanisms are applied during this phase
    • Model parameters are updated while attempting to preserve knowledge from previous tasks
  2. Validation Phase
    • At the end of each training epoch, the model is validated on Dval(t)
    • Validation results can optionally guide model selection and hyperparameter tuning
  3. Unlearning Phase
    • After training and validation, the requested tasks u(t) are unlearned from the model
    • The continual unlearning algorithm’s mechanisms are applied during this phase
  4. Evaluation Phase
    • After completing training, validation and unlearning, the model is evaluated on test datasets
    • Testing occurs across all remaining tasks R(t)={1,…,t}−U(t), where U(t) is the set of all tasks unlearned up to and including task t
    • The evaluation assesses both task-specific and overall performance. Unlearning performance cannot be evaluated with the main experiment alone; it requires a full experiment.
    • Note: This phase may be skipped based on configuration settings (see eval_after_tasks in required config fields)
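
The four phases above can be sketched as a simple loop. This is an illustration only — the real pipeline is managed by CLArena's `CULMainExperiment` class — so each phase is recorded in a log instead of actually training a model:

```python
# Minimal sketch of the CUL main experiment loop. Illustration only: each
# phase is recorded in a log rather than executed on a real model.

def remaining_tasks(t, unlearning_requests):
    """R(t) = {1, ..., t} - U(t): tasks seen so far minus all tasks unlearned so far."""
    unlearned = {u for step, reqs in unlearning_requests.items()
                 if step <= t for u in reqs}
    return set(range(1, t + 1)) - unlearned

def run(num_train_tasks, eval_after_tasks, unlearning_requests):
    log = []
    for t in range(1, num_train_tasks + 1):
        log.append(("train", t))        # 1. training on Dtrain(t)
        log.append(("validate", t))     # 2. validation on Dval(t)
        for u in unlearning_requests.get(t, []):
            log.append(("unlearn", u))  # 3. unlearning the requested tasks u(t)
        if t in eval_after_tasks:       # 4. evaluation (skipped if t not listed)
            log.append(("evaluate", sorted(remaining_tasks(t, unlearning_requests))))
    return log
```

Note that evaluation always covers the remaining tasks R(t), so a task unlearned in phase 3 never appears in the phase-4 test set.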

Running

To run a continual unlearning main experiment, specify the CUL_MAIN_EXPR indicator in the command:

clarena pipeline=CUL_MAIN_EXPR index=<index-config-name>

Configuration

To run a custom continual unlearning main experiment, create a YAML file in the index/ folder as the index config. Below is an example.

Example

configs/experiment/example_culmain_train.yaml
# @package _global_
# make sure to include the above commented global setting!

# pipeline info
pipeline: CUL_MAIN_EXPR
expr_name: example_cul_main_expr
train_tasks: 5
eval_after_tasks: 5
global_seed: 1

# paradigm settings
cl_paradigm: TIL
unlearning_requests: 
  2: [1]
  4: [2]
  5: [5]

# components
defaults:
  - /cl_dataset: permuted_mnist.yaml
  - /backbone: clmlp.yaml
  - /cl_algorithm: unlearnable_independent.yaml
  - /cul_algorithm: independent_unlearn.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /metrics: cl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: cul_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

# outputs
output_dir: outputs/${expr_name}/${misc.timestamp}

# overrides
trainer: 
  max_epochs: 2
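
With the `unlearning_requests` above, the model unlearns task 1 after training task 2, task 2 after training task 4, and task 5 itself after training task 5. Replaying this in plain Python (a sanity check, not CLArena code) shows which tasks remain at each point:

```python
# Replay the example's unlearning_requests to see the remaining tasks R(t).
unlearning_requests = {2: [1], 4: [2], 5: [5]}
train_tasks = 5

unlearned = set()        # U(t): all tasks unlearned so far
remaining_after = {}     # R(t) after each training task t
for t in range(1, train_tasks + 1):
    unlearned.update(unlearning_requests.get(t, []))
    remaining_after[t] = sorted(set(range(1, t + 1)) - unlearned)

print(remaining_after)
# {1: [1], 2: [2], 3: [2, 3], 4: [3, 4], 5: [3, 4]}
```

So the evaluation after task 5 covers tasks 3 and 4 only.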

Required Config Fields

Below is the list of required config fields for the index config of a continual unlearning main experiment.

Field Description Allowed Values
pipeline The pipeline that clarena uses this config to run
  • Choose from supported pipeline indicators
  • CUL_MAIN_EXPR, CUL_REF_RETRAIN_EXPR, CUL_REF_ORIGINAL_EXPR, CUL_FULL_EXPR
expr_name The name of the experiment
  • A string value
train_tasks The list of task IDs1 to train
  • List of integers tk: each at least 1 and no more than the number of available tasks in the CL dataset
  • Integer T: equivalent to the list of integers [1,⋯,T]; at least 0 and no more than the number of available tasks in the CL dataset
eval_after_tasks If task ID t is in this list, run the evaluation process of all seen tasks after training task t
  • List of integers tk or an integer T, interpreted the same way as in train_tasks
  • Must be a subset of train_tasks
global_seed The global seed for the entire experiment
  • Same as seed argument in lightning.seed_everything()
cl_paradigm The continual learning paradigm
  • ‘TIL’: Task-Incremental Learning (TIL)
  • ‘CIL’: Class-Incremental Learning (CIL)
unlearning_requests The requested task IDs u(t) to unlearn after each training task t
  • Dictionary: keys are the tasks having unlearning requests, and values are their requested task IDs
permanent_mark Whether each task in the experiment is permanent. A permanent task will not be unlearned, i.e. it cannot appear in future unlearning requests. This applies to unlearning algorithms that need to know whether a task is permanent
  • Dictionary: keys are task IDs, and values are bool values
/cl_dataset The continual learning dataset on which the experiment is conducted
  • Choose from sub-config YAML files in the cl_dataset/ folder
  • See Configure CL Dataset
/cl_algorithm The continual learning algorithm
  • Choose from sub-config YAML files in the cl_algorithm/ folder
  • See Configure CL Algorithm
/cul_algorithm The continual unlearning algorithm
  • Choose from sub-config YAML files in cul_algorithm/ folder
  • See Configure CUL Algorithm
/backbone The backbone network on which the continual learning algorithm is based
  • Choose from sub-config YAML files in the backbone/ folder
  • See Configure Backbone Network
/optimizer The optimizer for each task
  • Choose from sub-config YAML files in the optimizer/ folder
  • See Configure Optimizer(s)
/lr_scheduler The learning rate scheduler for each task
  • Choose from sub-config YAML files in the lr_scheduler/ folder
  • See Configure Learning Rate Scheduler(s)
/trainer The PyTorch Lightning Trainer object that contains all configurations for the training, validation and test process
  • Choose from sub-config YAML files in the trainer/ folder
  • See Configure Trainer(s)
/metrics The metrics to be monitored, logged, or visualized
  • Choose from sub-config YAML files in the metrics/ folder
  • See Configure Metrics
/lightning_loggers The Lightning Loggers used to log metrics and results
  • Choose from sub-config YAML files in the lightning_loggers/ folder
  • See Configure Lightning Loggers
/callbacks The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions integrated at different points during the experiment
  • Choose from sub-config YAML files in the callbacks/ folder
  • See Configure Callbacks
/hydra Configuration for Hydra
  • Choose from sub-config YAML files in the hydra/ folder
  • See Other Configs
/misc Miscellaneous configurations that are less related to the experiment
  • Choose from sub-config YAML files in the misc/ folder
  • See Other Configs
output_dir The folder storing the experiment results
  • Relative path to where you run the clarena command
  • We recommend including the experiment name and a timestamp in the path to distinguish multiple runs, e.g. outputs/${expr_name}/${misc.timestamp}
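
The constraints in the table above can be checked mechanically. Below is a minimal validation sketch assuming the field names from the table; `normalize_tasks` and `check_index_config` are hypothetical helpers, not CLArena's actual validation code:

```python
# Sketch of validating the required fields of a CUL main experiment index
# config. Hypothetical helpers, not CLArena's actual validation code.

def normalize_tasks(value, num_dataset_tasks):
    """Expand the integer shorthand T into the task list [1, ..., T]."""
    if isinstance(value, int):
        value = list(range(1, value + 1))
    if any(t < 1 or t > num_dataset_tasks for t in value):
        raise ValueError("task IDs must lie between 1 and the dataset's task count")
    return value

def check_index_config(cfg, num_dataset_tasks):
    train = normalize_tasks(cfg["train_tasks"], num_dataset_tasks)
    evals = normalize_tasks(cfg["eval_after_tasks"], num_dataset_tasks)
    if not set(evals) <= set(train):
        raise ValueError("eval_after_tasks must be a subset of train_tasks")
    for t in cfg["unlearning_requests"]:
        if t not in train:
            raise ValueError(f"unlearning requested after task {t}, which is never trained")
    return train, evals
```

For the example config above, `train_tasks: 5` and `eval_after_tasks: 5` both expand to the list [1, 2, 3, 4, 5], and all `unlearning_requests` keys fall within the trained tasks, so validation passes.
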
Note

The continual unlearning main experiment is managed by the CULMainExperiment class. To learn how these fields work, please refer to its source code.


Footnotes

  1. The task IDs tk are integers ranging from 1 to the number of tasks in the CL dataset. Each corresponds to a task-specific dataset within the CL dataset.


©️ 2025 Pengxiang Wang. All rights reserved.