
Configure Metrics (STL)

Modified: August 16, 2025

Metrics are used to monitor the training and validation processes, and to evaluate the model and algorithm during testing.

Under the PyTorch Lightning framework, callbacks add extra actions and functionality at different points of an experiment: before, during, or after the training, validation, or testing process. The metrics in our package are implemented as metric callbacks, which can:

  • Calculate metrics and save their data to files.
  • Visualize metrics as plots from the saved data.
  • Log additional metrics during the training process. (Note that the majority of training metrics are handled by Lightning Loggers; see the Configure Lightning Loggers section.)

The details of these actions can be configured on each metric callback. Each group of metrics is organized as one metric callback; for example, STLAccuracy and STLLoss correspond to the accuracy and loss metrics of single-task learning. Multiple metric callbacks can be applied at the same time.
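To make the callback idea concrete, here is a minimal, stdlib-only sketch of a metric callback that accumulates test accuracy and saves it to a CSV file. The hook names mirror Lightning's Callback interface, but this class and its simplified signature are illustrative assumptions, not CLArena's actual STLAccuracy implementation:

```python
# Illustrative sketch (NOT CLArena's actual implementation) of a metric
# callback that accumulates test accuracy and saves it to a CSV file.
import csv
import os


class SketchSTLAccuracy:
    """Toy metric callback; hook names mimic Lightning's Callback interface."""

    def __init__(self, save_dir: str, test_acc_csv_name: str = "acc.csv"):
        self.save_dir = save_dir
        self.csv_name = test_acc_csv_name
        self.correct = 0
        self.total = 0

    def on_test_batch_end(self, preds, targets):
        # Accumulate batch-level statistics for the accuracy metric.
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def on_test_epoch_end(self):
        # Aggregate the metric and save it to a CSV file in save_dir.
        acc = self.correct / self.total if self.total else 0.0
        with open(os.path.join(self.save_dir, self.csv_name), "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["test_accuracy"])
            writer.writerow([acc])
        return acc
```

The real callbacks additionally log to Lightning loggers and produce plots; this sketch only shows the accumulate-then-save pattern.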

Metrics is a sub-config under the experiment index config (STL). To configure custom metrics, create a YAML file in the metrics/ folder. At the moment, we only support uniform metrics across all tasks. The examples below show a metrics config.

Example

configs
β”œβ”€β”€ __init__.py
β”œβ”€β”€ entrance.yaml
β”œβ”€β”€ experiment
β”‚   β”œβ”€β”€ example_stl_train.yaml
β”‚   └── ...
β”œβ”€β”€ metrics
β”‚   β”œβ”€β”€ stl_default.yaml
...
configs/experiment/example_stl_train.yaml
defaults:
  ...
  - /metrics: stl_default.yaml
  ...

The metrics config is a list of metric callback objects:

configs/metrics/stl_default.yaml
- _target_: clarena.metrics.STLAccuracy
  save_dir: ${output_dir}/results/
  test_acc_csv_name: acc.csv
- _target_: clarena.metrics.STLLoss
  save_dir: ${output_dir}/results/
  test_loss_cls_csv_name: loss_cls.csv

Supported Metrics & Required Config Fields

In CLArena, we have implemented many metric callbacks in the clarena.metrics module that you can use for STL experiments.

The _target_ field of each callback must be set to the corresponding class name, such as clarena.metrics.STLAccuracy for STLAccuracy. Each metric callback has its own required fields, which are the same as the arguments of the class specified by _target_. The arguments of each metric callback class can be found in the API documentation.
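The _target_ mechanism follows Hydra-style instantiation: the string is split into a module path and a class name, the class is imported, and the remaining config fields are passed as constructor arguments. A rough stdlib-only sketch of this resolution (the helper name here is hypothetical; real configs are resolved by Hydra itself):

```python
# Hypothetical sketch of Hydra-style `_target_` resolution, for illustration
# only. In practice, Hydra's `hydra.utils.instantiate` does this work.
import importlib


def instantiate_from_config(cfg: dict):
    """Import the class named by `_target_` and construct it with the rest."""
    module_path, class_name = cfg["_target_"].rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    kwargs = {k: v for k, v in cfg.items() if k != "_target_"}
    return cls(**kwargs)
```

This is why a callback's required config fields are exactly the arguments of the class named by _target_: every field other than _target_ becomes a constructor keyword argument.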

API Reference (Metrics) Source Code (Metrics)

Below is the full list of supported metric callbacks. These callbacks can only be applied to STL experiments. Note that each callback name listed is exactly the class name assigned to the _target_ field.

General

These metrics can be generally used unless noted otherwise.

STLAccuracy

Provides all actions related to the STL accuracy metric, which include:

  • Defining, initializing and recording the accuracy metric.
  • Logging the training and validation accuracy metric to Lightning loggers in real time.
  • Saving the test accuracy metric to files.

The callback can produce the following outputs:

  • CSV files for test accuracy.

Required config fields: same as the STLAccuracy class arguments.

STLLoss

Provides all actions related to the STL loss metrics, which include:

  • Defining, initializing and recording loss metrics.
  • Logging the training and validation loss metrics to Lightning loggers in real time.
  • Saving the test loss metrics to files.

The callback can produce the following outputs:

  • CSV files for test classification loss.

Required config fields: same as the STLLoss class arguments.
©️ 2025 Pengxiang Wang. All rights reserved.