Multi-Task Learning (MTL)

Multi-Task Learning (MTL) is a machine learning paradigm in which a single model is trained on multiple tasks, each with its own dataset, at the same time. Note that we refer to the MIMO (Multiple Input, Multiple Output) setting rather than SIMO (Single Input, Multiple Output); the latter is actually multi-label learning. Please refer to my article about multi-task learning.
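To make the MIMO vs. SIMO distinction concrete, here is a minimal sketch in plain Python. The variable names and toy values are purely illustrative, not part of CLArena:

```python
# MIMO (multi-task learning): every task k has its OWN inputs and its own
# single-label target drawn from that task's label space Y^(k).
mimo_batches = {
    1: ([[0.2, 0.7]], [2]),  # task 1: one input, one class index from Y^(1)
    2: ([[0.9, 0.1]], [0]),  # task 2: different inputs, different label space
}

# SIMO (multi-label learning): ONE shared input is annotated with several
# binary labels at once -- a multi-hot target, not separate tasks.
simo_example = ([0.2, 0.7], [1, 0, 1])  # one input, three binary labels

for task_id, (inputs, targets) in mimo_batches.items():
    print(f"task {task_id}: {len(inputs)} input(s), target {targets}")
```

In the MIMO setting each task contributes its own (input, label) pairs, which is why the definition below gives every task its own training, validation, and test sets.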

Definition

In CLArena, multi-task learning is specifically designed for classification problems and follows the formal definition below.

Definition 1 (Multi-Task Learning Classification) Given:

  • An initialized neural network model \(f\) consisting of:
    • A shared backbone network \(B\)
    • Task-specific output heads (independent of each other)
  • A set of tasks \(\mathcal{T} = \{t_1, t_2, \cdots, t_K\}\) available simultaneously with task IDs \(t_k \in \mathbb{N}^+\).
  • For each task \(t\), we have:
    • Training data: \(\mathcal{D}_{\text{train}}^{(t)} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N_t} \subseteq \mathcal{X}^{(t)} \times \mathcal{Y}^{(t)}\)
    • Validation data: \(\mathcal{D}_{\text{val}}^{(t)} \subseteq \mathcal{X}^{(t)} \times \mathcal{Y}^{(t)}\)
    • Test data: \(\mathcal{D}_{\text{test}}^{(t)} \subseteq \mathcal{X}^{(t)} \times \mathcal{Y}^{(t)}\)

Objective: Develop an algorithm that jointly trains the model \(f\) across all tasks in \(\mathcal{T}\), such that:

  • The model performs well on the test datasets of all tasks: \(\mathcal{D}_{\text{test}}^{(t_1)}, \cdots, \mathcal{D}_{\text{test}}^{(t_K)}\)

Supported Pipelines

CLArena supports the following experiment and evaluation pipelines for multi-task learning:

  • Multi-Task Learning Experiment: Trains and evaluates multi-task learning models. See Multi-Task Learning Experiment.
    • Multi-Task Learning Evaluation: Assesses the performance of a trained multi-task learning model. See Save and Evaluate Model.

©️ 2026 Pengxiang Wang. All rights reserved.