# Experiment Index Config (MTL)
To run a custom multi-task learning experiment, you need to create a YAML file in the `configs/experiment/` folder: the experiment index config. Below is an example, followed by the required fields of the experiment index config.
## Example

`configs/experiment/example_mtl_train.yaml`:
```yaml
# @package _global_
# make sure to include the above commented global setting!

train_tasks: 20
eval_tasks: 20
global_seed: 1

defaults:
  - /cl_dataset: split_mnist.yaml # converted from a CL dataset
  - /mtl_algorithm: joint_learning.yaml
  - /backbone: mlp.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /metrics: mtl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: mtl_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

trainer:
  max_epochs: 2

output_dir: outputs/joint_til_pmnist_finetuning/${misc.timestamp}
```
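The field descriptions below state that `train_tasks` and `eval_tasks` are lists of task IDs. Here is a minimal sketch of that list form, assuming the fields also accept an explicit YAML list of task IDs in addition to the integer form used in the example above:

```yaml
# Hypothetical variant, assuming train_tasks and eval_tasks accept an
# explicit list of task IDs (integers starting from 1, per the footnote
# below), rather than the single integer used in the example.
train_tasks: [1, 2, 3, 4, 5]
eval_tasks: [1, 2, 3, 4, 5] # tasks to evaluate after training
```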
## Required Config Fields
| Field | Description | Allowed Values |
|---|---|---|
| `train_tasks` | The list of task IDs[^1] to train. | |
| `eval_tasks` | The list of task IDs to evaluate after training. | |
| `global_seed` | The global seed for the experiment. It helps reproduce the results. | |
| `/mtl_dataset` | The multi-task dataset that the experiment runs on. | |
| `/cl_dataset` | Converts a continual learning dataset into a multi-task dataset. Please refer to the Configure MTL Dataset section for how it works. | |
| `/mtl_algorithm` | The multi-task learning algorithm to experiment with. | |
| `/backbone` | The backbone network on which the multi-task learning algorithm is based. | |
| `/optimizer` | The optimizer for learning multiple tasks. | |
| `/lr_scheduler` | The learning rate scheduler for learning multiple tasks. | |
| `/trainer` | The PyTorch Lightning Trainer object, which contains all configs for the training and test process. | |
| `/metrics` | The metrics to be monitored, logged, or visualized. | |
| `/lightning_loggers` | The Lightning Loggers used to log metrics and results. | |
| `/callbacks` | The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions hooked into different stages of the experiment. | |
| `output_dir` | The folder storing the experiment results. Please refer to the Output Results (MTL) section. | |
| `/hydra` | Configuration for Hydra itself. | |
| `/misc` | Miscellaneous configs that are less related to the experiment. | |
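Any field set in the experiment index config alongside the `defaults` list overrides the value composed from the referenced sub-config, as the `trainer.max_epochs` override in the example above does. Here is a minimal sketch of the same pattern for the optimizer; the `lr` field name is an assumption about what `adam.yaml` exposes, not a documented field:

```yaml
# A minimal sketch of overriding a composed default, following the same
# pattern as the trainer.max_epochs override in the example above.
# The lr field name is a hypothetical assumption about adam.yaml.
optimizer:
  lr: 0.001 # override the learning rate from /optimizer: adam.yaml
```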
> **Note:** The multi-task learning experiment run by `clarena train mtl` is managed by an `MTLTrain` class. To learn how these fields work, please refer to its source code.
[^1]: The task IDs are integers starting from 1 and ending with the number of tasks in the CL dataset. Each corresponds to a task-specific dataset in the CL dataset.