Continual Unlearning Main Experiment
The continual unlearning main experiment is the continual learning main experiment in CLArena extended with an unlearning phase. This section defines its pipeline and guides you through configuring custom experiments.
Experiment Pipeline
For each task:
- Training Phase
  - The model is trained on the current task's data for the specified number of epochs
  - The continual learning algorithm's mechanisms are applied during this phase
  - Model parameters are updated while attempting to preserve knowledge from previous tasks
- Validation Phase
  - At the end of each training epoch, the model is validated on the current task's validation data
  - Validation results can optionally guide model selection and hyperparameter tuning
- Unlearning Phase
  - After training and validation, the model unlearns the tasks requested at the current task (see unlearning_requests in required config fields)
  - The continual unlearning algorithm's mechanisms are applied during this phase
- Evaluation Phase
  - After training, validation and unlearning are complete, the model is evaluated on test datasets
  - Testing occurs across all remaining tasks, i.e. tasks trained so far that have not been unlearned
  - The evaluation assesses both task-specific and overall performance. Unlearning performance cannot be evaluated with the main experiment alone; it requires a full experiment
  - Note: this phase may be skipped based on configuration settings (see eval_after_tasks in required config fields)
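The per-task flow above can be sketched as a simple loop. This is an illustrative simplification, not CLArena's actual implementation: the function name, the phase bookkeeping, and the treatment of train_tasks and eval_after_tasks as lists are all hypothetical placeholders.

```python
# Hypothetical sketch of the continual unlearning main experiment pipeline.
# Phase actions are recorded in a trace instead of doing real training.

def run_cul_main_experiment(train_tasks, unlearning_requests, eval_after_tasks):
    """Simulate the per-task pipeline; return the phases executed in order."""
    remaining = set()  # tasks trained so far and not yet unlearned
    trace = []         # record of (phase, detail) tuples, for illustration

    for task_id in train_tasks:
        # Training phase: train on the current task's data.
        trace.append(("train", task_id))
        remaining.add(task_id)

        # Validation phase: validate at the end of each training epoch.
        trace.append(("validate", task_id))

        # Unlearning phase: unlearn any tasks requested at this task.
        for requested in unlearning_requests.get(task_id, []):
            trace.append(("unlearn", requested))
            remaining.discard(requested)

        # Evaluation phase: test on all remaining (non-unlearned) tasks,
        # but only after the tasks listed in eval_after_tasks.
        if task_id in eval_after_tasks:
            trace.append(("evaluate", sorted(remaining)))

    return trace


# Values taken from the example config later in this section.
trace = run_cul_main_experiment(
    train_tasks=[1, 2, 3, 4, 5],
    unlearning_requests={2: [1], 4: [2], 5: [5]},
    eval_after_tasks=[5],
)
print(trace[-1])  # → ('evaluate', [3, 4])
```

Note how the final evaluation only covers tasks 3 and 4: tasks 1, 2 and 5 were unlearned along the way, and evaluation is skipped after every task except task 5.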
Running
To run a continual unlearning main experiment, specify the CUL_MAIN_EXPR
indicator in the command:
clarena pipeline=CUL_MAIN_EXPR index=<index-config-name>
Configuration
To run a custom continual unlearning main experiment, create a YAML file in the index/
folder as the index config. Below is an example.
Example
configs/experiment/example_culmain_train.yaml
# @package _global_
# make sure to include the above commented global setting!

# pipeline info
pipeline: CUL_MAIN_EXPR
expr_name: example_cul_main_expr
train_tasks: 5
eval_after_tasks: 5
global_seed: 1

# paradigm settings
cl_paradigm: TIL
unlearning_requests:
  2: [1]
  4: [2]
  5: [5]

# components
defaults:
  - /cl_dataset: permuted_mnist.yaml
  - /backbone: clmlp.yaml
  - /cl_algorithm: unlearnable_independent.yaml
  - /cul_algorithm: independent_unlearn.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /metrics: cl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: cul_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

# outputs
output_dir: outputs/${expr_name}/${misc.timestamp}

# overrides
trainer:
  max_epochs: 2
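To see what the unlearning_requests above imply, here is a quick walkthrough in plain Python (mirroring the YAML as a dict literal; this is not CLArena code, and it assumes each request is applied right after its task is trained, as the pipeline describes):

```python
# Hypothetical walkthrough of the example config's unlearning_requests.
train_tasks = 5
unlearning_requests = {2: [1], 4: [2], 5: [5]}

remaining = set()
for task_id in range(1, train_tasks + 1):
    remaining.add(task_id)                    # task is trained
    for unlearned in unlearning_requests.get(task_id, []):
        remaining.discard(unlearned)          # requested tasks are unlearned

print(sorted(remaining))  # → [3, 4]
```

So with this config, only tasks 3 and 4 survive to the final evaluation: task 1 is unlearned after task 2, task 2 after task 4, and task 5 immediately after it is trained.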
Required Config Fields
Below is the list of required config fields for the index config of continual unlearning main experiment.
Field | Description | Allowed Values
---|---|---
pipeline | The pipeline that clarena uses this config to run | CUL_MAIN_EXPR
expr_name | The name of the experiment |
train_tasks | The list of task IDs¹ to train |
eval_after_tasks | The task IDs after which the evaluation phase is performed; after other tasks, evaluation is skipped |
global_seed | The global seed for the entire experiment |
cl_paradigm | The continual learning paradigm |
unlearning_requests | The task IDs requested to be unlearned, given as a mapping from the task at which each request is made to the list of task IDs to unlearn |
permanent_mark | Whether each task in the experiment is permanent. A permanent task will not be unlearned, i.e. it will not appear in future unlearning requests. This applies to unlearning algorithms that need to know whether a task is permanent |
/cl_dataset | The continual learning dataset on which the experiment is conducted |
/cl_algorithm | The continual learning algorithm |
/cul_algorithm | The continual unlearning algorithm |
/backbone | The backbone network on which the continual learning algorithm is based |
/optimizer | The optimizer for each task |
/lr_scheduler | The learning rate scheduler for each task |
/trainer | The PyTorch Lightning Trainer object that contains all configurations for the training, validation and test process |
/metrics | The metrics to be monitored, logged, or visualized |
/lightning_loggers | The Lightning Loggers used to log metrics and results |
/callbacks | The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions integrated at different points during the experiment |
/hydra | Configuration for Hydra |
/misc | Miscellaneous configurations that are less related to the experiment |
output_dir | The folder storing the experiment results |
The continual unlearning main experiment is managed by the CULMainExperiment
class. To learn how these fields work, please refer to its source code.
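As a sanity check on unlearning_requests, a minimal validation helper might look like the sketch below. This is a hypothetical helper, not part of CLArena; the range check follows the footnote on task IDs, while the rule that a task must already be trained before it can be unlearned is an assumption of ours, consistent with the example config above.

```python
# Hypothetical sanity check for an unlearning_requests mapping.
# Assumes task IDs run from 1 to num_tasks (see the footnote on task IDs).

def check_unlearning_requests(requests: dict, num_tasks: int) -> None:
    valid = range(1, num_tasks + 1)
    for at_task, to_unlearn in requests.items():
        if at_task not in valid:
            raise ValueError(f"request made at unknown task {at_task}")
        for task_id in to_unlearn:
            if task_id not in valid:
                raise ValueError(f"cannot unlearn unknown task {task_id}")
            if task_id > at_task:
                # Assumption: a task can only be unlearned after it is trained.
                raise ValueError(
                    f"task {task_id} is not yet trained at task {at_task}"
                )


# The example config's requests pass the check without raising.
check_unlearning_requests({2: [1], 4: [2], 5: [5]}, num_tasks=5)
```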
Footnotes
¹ The task IDs are integers ranging from 1 to the number of tasks in the CL dataset. Each corresponds to a task-specific dataset in the CL dataset.