Full Continual Unlearning Experiment

Modified: August 16, 2025

The experiment configured in the Configure CUL Main Experiment section is the continual unlearning main experiment (CUL Main). Evaluating the CUL Main experiment alone produces only basic results: it evaluates the objective of continual learning, not the objective of unlearning. To get full evaluation results including the latter, we need to run additional reference experiments and use their results in the evaluation:

  • Retraining: Retrain continual learning from scratch without the tasks requested to unlearn. It is used to calculate the unlearning metric Distribution Distance (DD).
  • Original: Retrain continual learning from scratch including all tasks. It is used to calculate the unlearning metric Accuracy Difference (AD).
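
For example, if a 5-task experiment requests unlearning of task 3, the retraining reference is trained on tasks 1, 2, 4, 5 only, while the original reference is trained on all 5 tasks.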

These reference models are trained by the following two commands:

  • clarena train culrefretrain
  • clarena train culreforiginal

Their results are then passed to the evaluation command clarena eval cul, which calculates the related metrics from the results of both the CUL Main and the reference experiments. These processes can also be combined into one single command that runs the full continual unlearning experiment. We introduce them below.

Usage of clarena train culrefretrain

This command constructs a corresponding retraining experiment from the specified continual unlearning main experiment config:

clarena train culrefretrain experiment=<CULMain-experiment-name>

This will preprocess the CUL Main experiment config into a joint learning experiment config, which involves:

  • Setting output_dir to the subfolder culrefretrain under the CUL Main experiment output directory.
  • Excluding the unlearned tasks (as listed in unlearning_requests) from the fields train_tasks and eval_after_tasks.
  • Removing fields related to unlearning, such as unlearning_requests and /unlearning_algorithm.
  • Switching /callbacks from cul_default.yaml to cl_default.yaml.

For details, please check the source code.
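
As a rough illustration of these steps, here is a minimal Python sketch that treats the config as a plain dict. Only the field names listed above come from this page; the function name and exact types are assumptions, and the real implementation lives in the CLArena source code.

import copy


def culmain_to_refretrain(cfg: dict) -> dict:
    """Derive a reference retraining config from a CUL Main config (sketch)."""
    cfg = copy.deepcopy(cfg)

    # Redirect outputs to the culrefretrain subfolder of the CUL Main output dir.
    cfg["output_dir"] = f"{cfg['output_dir']}/culrefretrain"

    # Exclude the tasks requested to unlearn from training and evaluation,
    # assuming unlearning_requests holds the unlearned task IDs, e.g. [3].
    unlearned = set(cfg["unlearning_requests"])
    cfg["train_tasks"] = [t for t in cfg["train_tasks"] if t not in unlearned]
    cfg["eval_after_tasks"] = [t for t in cfg["eval_after_tasks"] if t not in unlearned]

    # Remove the unlearning-specific fields.
    cfg.pop("unlearning_requests", None)
    cfg.pop("unlearning_algorithm", None)

    # Switch the callbacks from the CUL defaults to the CL defaults.
    cfg["callbacks"] = "cl_default.yaml"  # was cul_default.yaml
    return cfg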

Usage of clarena train culreforiginal

This command constructs a corresponding original experiment from the specified continual unlearning main experiment config:

clarena train culreforiginal experiment=<CULMain-experiment-name>

This will preprocess the CUL Main experiment config into a joint learning experiment config, which involves:

  • Setting output_dir to the subfolder culreforiginal under the CUL Main experiment output directory.
  • Removing fields related to unlearning, such as unlearning_requests and /unlearning_algorithm.
  • Switching /callbacks from cul_default.yaml to cl_default.yaml.

For details, please check the source code.
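
The analogous sketch for the original reference differs only in that no tasks are excluded (same assumptions as above):

import copy


def culmain_to_reforiginal(cfg: dict) -> dict:
    """Derive a reference original config from a CUL Main config (sketch)."""
    cfg = copy.deepcopy(cfg)

    # Redirect outputs to the culreforiginal subfolder; all tasks are kept.
    cfg["output_dir"] = f"{cfg['output_dir']}/culreforiginal"

    # Remove the unlearning-specific fields and switch to the CL callbacks.
    cfg.pop("unlearning_requests", None)
    cfg.pop("unlearning_algorithm", None)
    cfg["callbacks"] = "cl_default.yaml"  # was cul_default.yaml
    return cfg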

Usage of clarena eval cul

This command runs a continual unlearning full evaluation experiment, which evaluates the models trained in the main and reference experiments on the CL test dataset and calculates the unlearning metrics:

  • Evaluate Distribution Distance (DD) on the CL test dataset, using the model trained in the CUL Main experiment and the model trained in the reference retraining experiment. Save the data and figures.
  • Evaluate Accuracy Difference (AD) on the CL test dataset, using the model trained in the CUL Main experiment and the model trained in the reference original experiment. Save the data and figures.
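
For intuition, below is a minimal sketch of how the two metrics might be computed for a single task. It assumes DD is a Jensen-Shannon divergence between the two models' predictive distributions and AD a plain accuracy gap; the actual definitions used by CLArena are in its source code.

import torch
import torch.nn.functional as F


def distribution_distance(main_logits: torch.Tensor, retrain_logits: torch.Tensor) -> float:
    """Jensen-Shannon divergence between two models' predictions on one task (assumed DD)."""
    p = F.softmax(main_logits, dim=-1)
    q = F.softmax(retrain_logits, dim=-1)
    m = 0.5 * (p + q)
    # F.kl_div(m.log(), p) computes KL(p || m); the first argument is log-probabilities.
    kl_pm = F.kl_div(m.log(), p, reduction="batchmean")
    kl_qm = F.kl_div(m.log(), q, reduction="batchmean")
    return (0.5 * (kl_pm + kl_qm)).item()


def accuracy_difference(main_acc: float, original_acc: float) -> float:
    """Gap between CUL Main accuracy and reference original accuracy (assumed AD)."""
    return main_acc - original_acc

In a full run, DD and AD would then be averaged over the tasks listed in dd_eval_tasks and ad_eval_tasks respectively (introduced below).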

Below is an example, followed by the required fields of its experiment index config.

Example

configs/experiment/example_cul_eval.yaml
# @package _global_
# make sure to include the above commented global setting!

main_model_path: outputs/example_culmain_train/2023-10-01_12-00-00/saved_models/model.pth
refretrain_model_path: outputs/example_culmain_train/2023-10-01_12-00-00/culrefretrain/results/model.pth
reforiginal_model_path: outputs/example_culmain_train/2023-10-01_12-00-00/culreforiginal/results/model.pth

dd_eval_tasks: 5
ad_eval_tasks: 5

cl_paradigm: TIL
global_seed: 1

defaults: 
  - /cl_dataset: permuted_mnist.yaml
  - /trainer: cpu.yaml
  - /metrics: cul_full_eval_default.yaml
  - /callbacks: cl_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

output_dir: outputs/example_culmain_train/2023-10-01_12-00-00
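
Assuming this config is saved as configs/experiment/example_cul_eval.yaml as shown above, the evaluation would then be launched with:

clarena eval cul experiment=example_cul_eval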

Required Config Fields

main_model_path: The file path of the CUL Main model to evaluate.
  • Relative path to where you run the clarena eval cul command

refretrain_model_path: The file path of the reference retraining model.
  • Relative path to where you run the clarena eval cul command
  • Optional. If not specified, evaluating DD will be skipped.

reforiginal_model_path: The file path of the reference original model.
  • Relative path to where you run the clarena eval cul command
  • Optional. If not specified, evaluating AD will be skipped.

dd_eval_tasks: The list of task IDs¹ that the metric DD is evaluated and averaged on.
  • A list of integers, each at least 1
  • An integer T (at least 0), equivalent to the list [1, ⋯, T]
  • These tasks must exist in the accuracy CSV files that the metric is calculated from
  • Optional. When refretrain_model_path is not provided, this field can be excluded.

ad_eval_tasks: The list of task IDs² that the metric AD is evaluated and averaged on.
  • A list of integers, each at least 1
  • An integer T (at least 0), equivalent to the list [1, ⋯, T]
  • These tasks must exist in the accuracy CSV files that the metric is calculated from
  • Optional. When reforiginal_model_path is not provided, this field can be excluded.

cl_paradigm: The continual learning paradigm.
  • 'TIL': Task-Incremental Learning (TIL)
  • 'CIL': Class-Incremental Learning (CIL)

global_seed: The global seed for the experiment, which helps reproduce the results.
  • Same as the seed argument in lightning.seed_everything

/cl_dataset: The original continual learning dataset that the model is evaluated on.
  • Choose from sub-config YAML files in the cl_dataset/ folder
  • Please refer to the Configure CL Dataset (CL Main) section

/trainer: The PyTorch Lightning Trainer object, which contains all configs for the testing process.
  • Choose from sub-config YAML files in the trainer/ folder
  • Please refer to the Configure Trainer(s) (CL Main) section

/metrics: The metrics to be monitored, logged, or visualized.
  • Choose from sub-config YAML files in the metrics/ folder
  • Please refer to the Configure Metrics (CL Main) section

/callbacks: The callbacks applied to this evaluation experiment. Callbacks are additional actions integrated at different points of the experiment.
  • Choose from sub-config YAML files in the callbacks/ folder
  • Please refer to the Configure Callbacks (CL Main) section

output_dir: The folder path storing the experiment results.
  • Relative path to where you run the clarena eval cul command

/hydra: Configuration for Hydra itself.
  • Choose from sub-config YAML files in the hydra/ folder
  • Please refer to the Other Configs section

/misc: Miscellaneous configs that are less related to the experiment.
  • Choose from sub-config YAML files in the misc/ folder
  • Please refer to the Other Configs section

The output results are summarized in the Output Results (CUL) section.

Note

The experiment run by clarena eval cul is managed by the CULFullMetricsCalculation class. To learn how these fields work, please refer to its source code.

Full CUL Experiment in One Command

We integrate all the processes above into one command that runs the full CUL experiment:

clarena run cul experiment=<experiment-name>

Note that this experiment config remains the same as the CUL Main experiment config.

This effectively runs the following:

  • clarena train culmain experiment=<CULMain-experiment-name>
  • clarena train culrefretrain experiment=<CULMain-experiment-name>
  • clarena train culreforiginal experiment=<CULMain-experiment-name>
  • clarena eval cul experiment=<experiment-name>, where this experiment is a CUL full metrics calculation experiment whose config is constructed from the above CUL Main and reference experiments (for details, please check the source code; a rough sketch of the chaining follows this list). The construction involves:
    • Aligning main_acc_csv_path with the path where the continual learning main experiment outputs its accuracy metrics data.
    • Aligning refretrain_acc_csv_path with the path where the reference retraining experiment outputs its accuracy metrics data.
    • Aligning reforiginal_acc_csv_path with the path where the reference original experiment outputs its accuracy metrics data.
    • Setting output_dir and all save_dir fields to the CUL Main experiment output directory, so that all results are saved in the same folder.
    • Setting dd_csv_name, ad_csv_name, dd_plot_name, and ad_plot_name to their default names.
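
Conceptually, the full command chains the four invocations above, roughly as in this Python sketch. The experiment name here is hypothetical, and in reality clarena run cul constructs the eval config automatically rather than reading it from a file:

import subprocess

EXPERIMENT = "example_culmain_train"  # a CUL Main experiment name (hypothetical)

for subcommand in (
    ["train", "culmain"],         # 1. the CUL Main experiment
    ["train", "culrefretrain"],   # 2. the reference retraining experiment
    ["train", "culreforiginal"],  # 3. the reference original experiment
    ["eval", "cul"],              # 4. calculate DD / AD from all three results
):
    subprocess.run(["clarena", *subcommand, f"experiment={EXPERIMENT}"], check=True)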

The full output results are summarized in the Output Results (CUL) section.


Footnotes

  1. Task IDs are integers starting from 1 and ending with the number of tasks in the CL dataset. Each corresponds to a task-specific dataset in the CL dataset.

  2. Task IDs are integers starting from 1 and ending with the number of tasks in the CL dataset. Each corresponds to a task-specific dataset in the CL dataset.
