
Save and Evaluate Model (STL)

Modified: October 6, 2025

CLArena supports saving the model after training and evaluating it separately in single-task learning experiments.

1 Save Model

To save the model after training, enable the callback clarena.callbacks.SaveModels. Please refer to the Configure Callbacks section.
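
For illustration, here is a sketch of what enabling the callback could look like inside a callbacks sub-config, assuming callbacks are instantiated from Hydra _target_ entries. The file name and the save_models key are hypothetical; the Configure Callbacks section documents the actual schema.

example_configs/callbacks/example_save_models.yaml (hypothetical)
save_models:
  _target_: clarena.callbacks.SaveModels  # saves the trained model with torch.save()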

Warning

CLArena does not use PyTorch Lightning checkpointing to save models for later evaluation, because loading a checkpoint requires the model class, whereas we want to evaluate a saved model regardless of its type and settings. Instead, clarena.callbacks.SaveModels uses torch.save() on the whole model object, so evaluation can later restore it with torch.load() without specifying the model class.
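
The underlying save/load pattern is standard PyTorch. Below is a minimal sketch of it; the model, file name, and paths are illustrative, not CLArena internals.

import torch
import torch.nn as nn

# Any nn.Module works; its class will not be needed again at load time.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Saving the whole model object (as clarena.callbacks.SaveModels does)
# pickles the architecture together with the weights.
torch.save(model, "stl_model.pth")

# Evaluation can restore the model without importing its class.
# On PyTorch >= 2.6, pass weights_only=False to unpickle full objects.
restored = torch.load("stl_model.pth", weights_only=False)
restored.eval()  # switch to evaluation mode before testing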

2 Evaluate Model

The single-task learning evaluation pipeline evaluates a saved model trained by a single-task learning experiment. Its output results are summarized in Output Results (STL).

Running

To run a single-task learning evaluation, specify the STL_EVAL pipeline indicator in the command:

clarena pipeline=STL_EVAL index=<index-config-name>
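
For example, to run the example index config shown in the Configuration section below (assuming it is available to clarena under the config name example_stl_eval):

clarena pipeline=STL_EVAL index=example_stl_eval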

Configuration

To run a custom single-task learning evaluation, create a YAML file in the index/ folder as the index config. Below is an example.

Example

example_configs/experiment/example_stl_eval.yaml
# @package _global_
# make sure to include the above commented global setting!

# pipeline info
pipeline: STL_EVAL
global_seed: 1

# evaluation target
model_path: outputs/example_stl_expr/2023-10-01_12-00-00/saved_models/stl_model.pth

# components
defaults:
  - /stl_dataset: mnist.yaml
  - /trainer: cpu_eval.yaml
  - /metrics: stl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: eval_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

output_dir: outputs/example_stl_expr/2023-10-01_12-00-00/eval # output to the same folder as the experiment
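
Because the index config is composed by Hydra (note the /hydra entry in the defaults list above), individual fields can typically also be overridden from the command line without editing the YAML. A hypothetical invocation using standard Hydra override syntax (the override values are illustrative):

clarena pipeline=STL_EVAL index=example_stl_eval global_seed=42 output_dir=outputs/another_eval_dir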

Required Config Fields

Below is the list of required config fields for the index config of a single-task learning evaluation.

pipeline
  The pipeline that clarena runs using this config.
  • Choose from the supported pipeline indicators
  • Only STL_EVAL is allowed here
global_seed
  The global seed for the entire evaluation.
  • Same as the seed argument of lightning.seed_everything()
model_path
  The file path of the model to evaluate.
  • Relative to where you ran the clarena command for the single-task learning experiment
/stl_dataset
  The single-task learning dataset that the model is evaluated on.
  • Choose from the sub-config YAML files in the stl_dataset/ folder
  • See Configure STL Dataset
/trainer
  The PyTorch Lightning Trainer object, which contains all configs for the testing process.
  • Choose from the sub-config YAML files in the trainer/ folder
  • See Configure Trainer
/metrics
  The metrics to be monitored, logged, or visualized.
  • Choose from the sub-config YAML files in the metrics/ folder
  • See Configure Metrics
/callbacks
  The callbacks applied to this evaluation (other than metric callbacks). Callbacks are additional actions integrated at various points during the evaluation.
  • Choose from the sub-config YAML files in the callbacks/ folder
  • See Configure Callbacks
/hydra
  Configuration for Hydra.
  • Choose from the sub-config YAML files in the hydra/ folder
  • See Other Configs
/misc
  Miscellaneous configs that are less related to the experiment.
  • Choose from the sub-config YAML files in the misc/ folder
  • See Other Configs
output_dir
  The folder storing the evaluation results.
  • Relative to where you run the clarena command
  • We recommend setting it to the output_dir of the single-task learning experiment whose model is evaluated
Note

The single-task learning evaluation is managed by an STLEvaluation class. To learn how these fields work, please refer to its source code.

