# Experiment Index Config
To run a custom experiment, you need to create a YAML file in the `configs/experiment/` folder: the experiment index config. Below is an example, followed by the required fields of the experiment index config.
## Example

`configs/experiment/example_culmain_train.yaml`:

```yaml
# @package _global_
# make sure to include the above commented global setting!

cl_paradigm: TIL
train_tasks: 5
eval_after_tasks: 5
unlearning_requests:
  2: [1]
  4: [2]
  5: [5]
global_seed: 1

defaults:
  - /cl_dataset: permuted_mnist.yaml
  - /backbone: mlp.yaml
  - /cl_algorithm: independent.yaml
  - /unlearning_algorithm: independent_unlearn.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

output_dir: outputs/cul_til_pmnist_independent/${misc.timestamp}

# overrides
trainer:
  max_epochs: 2
```
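The schedule that the example config describes can be sketched in plain Python. This is only an illustration of how `train_tasks`, `eval_after_tasks`, and `unlearning_requests` interact, not the actual `CULMainTrain` logic; the `run_schedule` function and the event tuples are hypothetical.

```python
# Illustrative only: a hypothetical replay of the schedule implied by the
# example config above, NOT the actual CULMainTrain implementation.

train_tasks = 5            # train tasks 1..5 in order
eval_after_tasks = 5       # evaluate after task 5 finishes training
unlearning_requests = {    # after training task 2, unlearn task 1; etc.
    2: [1],
    4: [2],
    5: [5],
}

def run_schedule():
    """Return the ordered list of (action, task_id) events."""
    events = []
    for task_id in range(1, train_tasks + 1):
        events.append(("train", task_id))
        # Serve any unlearning requests attached to this training task.
        for unlearn_id in unlearning_requests.get(task_id, []):
            events.append(("unlearn", unlearn_id))
        if task_id == eval_after_tasks:
            events.append(("eval", task_id))
    return events

print(run_schedule())
```

Running the sketch makes the ordering explicit: task 1 is unlearned only after task 2 has been trained, and evaluation happens last.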
## Required Config Fields

| Field | Description |
|---|---|
| `cl_paradigm` | The continual learning paradigm, e.g. `TIL` in the example above. |
| `train_tasks` | The list of task IDs[^1] to train. |
| `eval_after_tasks` | The task IDs after which evaluation is performed. |
| `unlearning_requests` | The task IDs requested to be unlearned after each training task. |
| `permanent_mark` | Whether each task in the experiment is permanent. A permanent task will not be unlearned, i.e. it will not appear in future unlearning requests. This applies to unlearning algorithms that need to know whether a task is permanent. |
| `global_seed` | The global random seed for the experiment, used to make results reproducible. |
| `/cl_dataset` | The continual learning dataset that the experiment runs on. |
| `/cl_algorithm` | The continual learning algorithm to experiment with. |
| `/unlearning_algorithm` | The unlearning algorithm to experiment with. |
| `/backbone` | The backbone network on which the continual learning algorithm is based. |
| `/optimizer` | The optimizer for each task. |
| `/lr_scheduler` | The learning rate scheduler for each task. |
| `/trainer` | The PyTorch Lightning `Trainer` object, which contains all configs for the training and test processes. |
| `/metrics` | The metrics to be monitored, logged, or visualized. |
| `/lightning_loggers` | The Lightning Loggers used to log metrics and results. Please refer to the Output Results (CL Main) section. |
| `/callbacks` | The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions triggered at different points of the experiment. |
| `output_dir` | The folder storing the experiment results. Please refer to the Output Results (CL Main) section. |
| `/hydra` | Configuration for Hydra itself. |
| `/misc` | Miscellaneous configs that are less related to the experiment. |
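The fields above constrain each other: an unlearning request only makes sense for a task that has already been trained. The sanity check below is a hypothetical sketch of such a consistency check, not part of the clarena codebase; `validate_unlearning_requests` is an invented helper name.

```python
# Hypothetical validation sketch for the fields above; clarena may perform
# different or additional checks in its own config handling.

def validate_unlearning_requests(train_tasks, unlearning_requests):
    """Raise ValueError if a request refers to a future or unknown task.

    train_tasks: number of tasks to train (task IDs run 1..train_tasks).
    unlearning_requests: mapping {training task ID: [task IDs to unlearn]}.
    """
    for after_task, task_ids in unlearning_requests.items():
        if not 1 <= after_task <= train_tasks:
            raise ValueError(f"request keyed on unknown task {after_task}")
        for tid in task_ids:
            if tid > after_task:
                raise ValueError(
                    f"cannot unlearn task {tid} right after training task "
                    f"{after_task}: it has not been trained yet"
                )

# The mapping from the example config passes the check.
validate_unlearning_requests(5, {2: [1], 4: [2], 5: [5]})
```

A check like this catches a common mistake early, e.g. `{2: [3]}` (unlearning task 3 before it has been trained) fails immediately instead of partway through a long run.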
**Note:** The continual learning experiment run by `clarena train culmain` is managed by a `CULMainTrain` class. To learn how these fields work, please refer to its source code.
## Footnotes

[^1]: The task IDs are integers ranging from 1 to the number of tasks in the CL dataset. Each corresponds to a task-specific dataset in the CL dataset.