Configure Trainer (STL)

Modified: August 16, 2025

Under the PyTorch Lightning framework, we use the Lightning Trainer object to hold all configuration related to the training process, such as the number of epochs, the training strategy, and the devices.

Trainer is a sub-config under the experiment index config (STL). To configure a custom trainer, create a YAML file in the trainer/ folder. Below is an example of the Trainer config, followed by a Python sketch of what it corresponds to.

Example

configs
├── __init__.py
├── entrance.yaml
├── experiment
│   ├── example_stl_train.yaml
│   └── ...
├── trainer
│   └── cpu.yaml
...
configs/experiment/example_stl_train.yaml
defaults:
  ...
  - /trainer: cpu.yaml
  ...
configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
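
For reference, the config above corresponds to constructing the Trainer directly in Python. Below is a minimal sketch (not part of the package); the concrete default_root_dir value is an illustrative stand-in, since ${output_dir} is interpolated at runtime.

from lightning import Trainer

# Equivalent of configs/trainer/cpu.yaml, with ${output_dir} replaced by a
# concrete path purely for illustration. Other accelerators (e.g. "gpu")
# and flags can be set the same way in the YAML config.
trainer = Trainer(
    default_root_dir="outputs/example_stl_train",
    log_every_n_steps=50,
    accelerator="cpu",
    devices=1,
    max_epochs=2,
)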

Required Config Fields

The _target_: lightning.Trainer field is required and must always be specified. Please refer to the PyTorch Lightning documentation for full information on the available arguments (called trainer flags) of lightning.Trainer.

PyTorch Lightning Documentation (Trainer)
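
To see what the _target_ field does, here is a minimal sketch of the instantiation step that Hydra-style configs like these imply, assuming hydra-core and lightning are installed (the framework resolves this for you; the values mirror cpu.yaml, with ${output_dir} again replaced for illustration):

from hydra.utils import instantiate
from omegaconf import OmegaConf

# Hydra imports the class named in _target_ (here lightning.Trainer) and
# calls it with the remaining fields as keyword arguments, i.e. the
# trainer flags.
cfg = OmegaConf.create({
    "_target_": "lightning.Trainer",
    "default_root_dir": "outputs/example_stl_train",
    "log_every_n_steps": 50,
    "accelerator": "cpu",
    "devices": 1,
    "max_epochs": 2,
})
trainer = instantiate(cfg)  # returns a configured lightning.Trainer instance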
