Experiment Index Config (MTL)

Modified: August 16, 2025

To run a custom multi-task learning experiment, you need to create a YAML file in the experiment/ folder: the experiment index config. An example and the required fields of the experiment index config are shown below.

Example

configs/experiment/example_mtl_train.yaml
# @package _global_
# make sure to include the above commented global setting!

train_tasks: 5 # Split MNIST provides 5 tasks
eval_tasks: 5
global_seed: 1

defaults:
  - /cl_dataset: split_mnist.yaml # convert from CL dataset
  - /mtl_algorithm: joint_learning.yaml
  - /backbone: mlp.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /metrics: mtl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: mtl_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

trainer:
  max_epochs: 2

output_dir: outputs/mtl_smnist_jointlearning/${misc.timestamp}
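As the trainer: max_epochs: 2 block above suggests, fields set directly in the index config override the corresponding values in the sub-configs selected in the defaults list. The sketch below shows the same kind of override for the optimizer; the lr field name is an assumption about optimizer/adam.yaml, not something this page specifies:

optimizer:
  lr: 0.001 # assumed field name in optimizer/adam.yaml; setting it here overrides the sub-config

Note also that this example selects /cl_dataset to convert a continual learning dataset; a native multi-task dataset would be selected through /mtl_dataset instead, and the two fields must never be included together (see the field descriptions below).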

Required Config Fields

Each field is listed below with its description and allowed values.

train_tasks: The list of task IDs¹ to train.
  • A list of integers t_k: each at least 1 and at most the number of available tasks of the MTL dataset (mtl_dataset.num_tasks)
  • An integer T: equivalent to the list [1, ⋯, T]; at least 0 and at most the number of available tasks of the MTL dataset (mtl_dataset.num_tasks)

eval_tasks: The list of task IDs to evaluate after training.
  • A list of integers t_k or an integer T, which works the same as train_tasks
  • Must be a subset of train_tasks

global_seed: The global seed for the experiment, which helps reproduce the results.
  • Same as the seed argument of lightning.seed_everything

/mtl_dataset: The multi-task dataset that the experiment runs on.
  • Choose from the sub-config YAML files in the mtl_dataset/ folder
  • Please refer to the Configure MTL Dataset (MTL) section
  • If this field is included, do not include /cl_dataset

/cl_dataset: Converts a continual learning dataset into a multi-task dataset. Please refer to the Configure MTL Dataset section for how it works.
  • Choose from the sub-config YAML files in the cl_dataset/ folder
  • Please refer to the Configure CL Dataset (CL Main) section in the continual learning counterpart
  • If this field is included, do not include /mtl_dataset

/mtl_algorithm: The multi-task learning algorithm to experiment with.
  • Choose from the sub-config YAML files in the mtl_algorithm/ folder
  • Please refer to the Configure MTL Algorithm (MTL) section

/backbone: The backbone network on which the multi-task learning algorithm is based.
  • Choose from the sub-config YAML files in the backbone/ folder
  • Please refer to the Configure Backbone Network (MTL) section

/optimizer: The optimizer for learning multiple tasks.
  • Choose from the sub-config YAML files in the optimizer/ folder
  • Please refer to the Configure Optimizer (MTL) section

/lr_scheduler: The learning rate scheduler for learning multiple tasks.
  • Choose from the sub-config YAML files in the lr_scheduler/ folder
  • Please refer to the Configure Learning Rate Scheduler (MTL) section

/trainer: The PyTorch Lightning Trainer object, which contains all configs for the training and test processes.
  • Choose from the sub-config YAML files in the trainer/ folder
  • Please refer to the Configure Trainer (MTL) section

/metrics: The metrics to be monitored, logged, or visualized.
  • Choose from the sub-config YAML files in the metrics/ folder
  • Please refer to the Configure Metrics (MTL) section

/lightning_loggers: The Lightning Loggers used to log metrics and results.
  • Choose from the sub-config YAML files in the lightning_loggers/ folder
  • This works the same as Lightning Loggers in continual learning; please refer to the Configure Lightning Loggers (CL Main) section in the continual learning counterpart

/callbacks: The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions triggered at different points of the experiment.
  • Choose from the sub-config YAML files in the callbacks/ folder
  • Please refer to the Configure Callbacks (MTL) section

output_dir: The folder storing the experiment results. Please refer to the Output Results (MTL) section.
  • A relative path from where you run the clarena train mtl command
  • We recommend including a timestamp in the path to distinguish multiple runs, e.g. outputs/mtl_smnist_jointlearning/${misc.timestamp}. Please refer to the Other Configs section in the continual learning counterpart

/hydra: Configuration for Hydra itself.
  • Choose from the sub-config YAML files in the hydra/ folder
  • This works the same as in continual learning; please refer to the Other Configs section in the continual learning counterpart

/misc: Miscellaneous configs that are less related to the experiment.
  • Choose from the sub-config YAML files in the misc/ folder
  • This works the same as in continual learning; please refer to the Other Configs section in the continual learning counterpart
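To illustrate the two allowed forms of train_tasks and eval_tasks, the following settings are equivalent ways to train all tasks of a 5-task dataset such as Split MNIST (a sketch; adapt the numbers to your dataset):

# integer form: shorthand for the first T tasks
train_tasks: 5
eval_tasks: 5

# list form: equivalent to the above, and also allows arbitrary subsets
train_tasks: [1, 2, 3, 4, 5]
eval_tasks: [1, 3, 5] # must be a subset of train_tasks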
Note

The multi-task learning experiment run by clarena train mtl is managed by the MTLTrain class. To learn how these fields work, please refer to its source code.


Footnotes

  1. The task IDs are integers starting from 1 and ending with the number of tasks of the dataset. Each corresponds to a task-specific dataset.↩︎

©️ 2025 Pengxiang Wang. All rights reserved.