Multi-Task Learning Experiment

Modified: October 9, 2025

The multi-task learning experiment is the CLArena pipeline for training and evaluating multi-task learning algorithms. This section describes its pipeline and guides you through configuring custom experiments.

Experiment Pipeline

For all tasks $t_1, \ldots, t_K$ specified in the configuration, the experiment proceeds as follows (a sketch mapping these phases to config fields follows this list):

  1. Training Phase
  • The model is jointly trained on the union of all tasks’ training data $\bigcup_k D_{\text{train}}(t_k)$ for the specified number of epochs
  • The multi-task learning algorithm’s mechanisms are applied during this phase
  • Model parameters are updated to maximize inter-task knowledge transfer
  2. Validation Phase
  • At the end of each training epoch, the model is validated on $\bigcup_k D_{\text{val}}(t_k)$
  • Validation results can optionally guide model selection and hyperparameter tuning
  3. Evaluation Phase
  • After training and validation are complete, the model is evaluated on the test datasets of all tasks: $\bigcup_k D_{\text{test}}(t_k)$ (this is customizable; see eval_tasks in Required Config Fields)
  • The evaluation assesses both task-specific and overall performance
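
In terms of configuration, how long the training phase runs and which tasks the evaluation phase covers are controlled by fields of the index config introduced later on this page. Below is a minimal sketch, not a complete config, using the field names from the Configuration example and the Required Config Fields table:

# training phase: number of joint training epochs (a trainer override)
trainer:
  max_epochs: 2

# training and evaluation phases: which tasks are trained and which test sets are evaluated
train_tasks: 5
eval_tasks: 5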

Running

To run a multi-task learning experiment, specify the MTL_EXPR indicator in the command:

clarena pipeline=MTL_EXPR index=<index-config-name>
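
For example, to run the example index config example_mtl_expr.yaml shown in the Configuration section below (assuming, per the usual Hydra convention, that the config name is the file name without the .yaml extension):

clarena pipeline=MTL_EXPR index=example_mtl_expr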

Configuration

To run a custom multi-task learning experiment, create a YAML file in the index/ folder to serve as the index config. Below is an example.

Example

example_configs/index/example_mtl_expr.yaml
# @package _global_
# make sure to include the above commented global setting!

# pipeline info
pipeline: MTL_EXPR
expr_name: example_mtl_expr
train_tasks: 5
eval_tasks: 5
global_seed: 1

# components
defaults:
  - /mtl_dataset: from_cl_split_mnist.yaml
  - /mtl_algorithm: joint_learning.yaml
  - /backbone: mlp.yaml
  - /optimizer: adam.yaml
  - /lr_scheduler: reduce_lr_on_plateau.yaml
  - /trainer: cpu.yaml
  - /metrics: mtl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: mtl_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

# outputs
output_dir: outputs/${expr_name}/${misc.timestamp}

# overrides
trainer: 
  max_epochs: 2
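
The example above uses the integer shorthand: train_tasks: 5 trains tasks 1 through 5. Both train_tasks and eval_tasks also accept an explicit list of task IDs, with eval_tasks required to be a subset of train_tasks (see Required Config Fields below). A hypothetical variation of the fields above:

# train on all 5 tasks, but evaluate only tasks 1, 3 and 5
train_tasks: [1, 2, 3, 4, 5]
eval_tasks: [1, 3, 5]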

Required Config Fields

Below is the list of required config fields for the index config of a multi-task learning experiment. The sketch after the table shows how the component fields map to entries in the defaults list.

Field Description Allowed Values
pipeline The pipeline that clarena uses this config to run
  • Choose from supported pipeline indicators
  • Only MTL_EXPR is allowed
expr_name The name of the experiment
  • A string value
train_tasks The list of task IDs¹ to train
  • A list of integers $t_k$: each at least 1 and at most the number of available tasks in the MTL dataset
  • An integer $T$: equivalent to the list $[1, \cdots, T]$. At least 0 and at most the number of available tasks in the MTL dataset
eval_tasks The list of task IDs to evaluate
  • A list of integers $t_k$ or an integer, interpreted the same way as train_tasks
  • Must be a subset of train_tasks
global_seed The global seed for the entire experiment
  • Same as seed argument in lightning.seed_everything()
/mtl_dataset The multi-task learning dataset on which the experiment is conducted
  • Choose from sub-config YAML files in mtl_dataset/ folder
  • See Configure MTL Dataset
/mtl_algorithm The multi-task learning algorithm
  • Choose from sub-config YAML files in mtl_algorithm/ folder
  • See Configure MTL Algorithm
/backbone The backbone network on which the multi-task learning algorithm is based
  • Choose from sub-config YAML files in backbone/ folder
  • See Configure Backbone Network
/optimizer The optimizer for learning multiple tasks
  • Choose from sub-config YAML files in optimizer/ folder
  • See Configure Optimizer
/lr_scheduler The learning rate scheduler for learning multiple tasks
  • Choose from sub-config YAML files in lr_scheduler/ folder
  • See Configure Learning Rate Scheduler
/trainer The PyTorch Lightning Trainer object that contains all configs for the training, validation, and test processes
  • Choose from sub-config YAML files in trainer/ folder
  • See Configure Trainer
/metrics The metrics to be monitored, logged or visualized
  • Choose from sub-config YAML files in metrics/ folder
  • See Configure Metrics
/lightning_loggers The Lightning Loggers used to log metrics and results
  • Choose from sub-config YAML files in lightning_loggers/ folder
  • See Configure Lightning Loggers
/callbacks The callbacks applied to this experiment (other than metric callbacks). Callbacks are additional actions integrated at various points during the experiment
  • Choose from sub-config YAML files in callbacks/ folder
  • See Configure Callbacks
/hydra Configuration for Hydra
  • Choose from sub-config YAML files in hydra/ folder
  • See Other Configs
/misc Miscellaneous configurations not directly related to the experiment
  • Choose from sub-config YAML files in misc/ folder
  • See Other Configs
output_dir The folder storing the experiment results
  • Relative path to where you run the clarena command
  • We recommend including the experiment name and a timestamp in the path to distinguish multiple runs, e.g. outputs/${expr_name}/${misc.timestamp}
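
Each component field above (those prefixed with /) corresponds to one entry in the defaults list of the index config: you switch components by pointing the entry at a different sub-config YAML file in the matching folder. A minimal sketch, assuming a hypothetical sub-config resnet.yaml exists in your backbone/ folder alongside the mlp.yaml used in the example:

defaults:
  - /backbone: resnet.yaml  # hypothetical sub-config name; the example above uses mlp.yaml
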
Note

The multi-task learning experiment is managed by the MTLExperiment class. To learn how these fields work, please refer to its source code.


Footnotes

  1. Task IDs are integers starting from 1 and ending with the number of tasks in the MTL dataset. Each corresponds to a task-specific dataset within the MTL dataset.

