Configure Trainer

Modified: August 26, 2025

Under the PyTorch Lightning framework, we use the Lightning Trainer object for all training-related configs, such as the number of epochs, the training strategy, and the device.

The trainer is a sub-config under the experiment index config (CL Main). To configure a custom trainer, create a YAML file in the trainer/ folder. Since continual learning involves multiple tasks, each task can be assigned its own trainer: we support either a uniform trainer shared across all tasks or a distinct trainer for each task. Below are examples of both configurations.

Example

configs
├── __init__.py
├── entrance.yaml
├── experiment
│   ├── example_clmain_train.yaml
│   └── ...
├── trainer
│   └── cpu.yaml
...

Uniform Trainer for All Tasks

configs/experiment/example_clmain_train.yaml
defaults:
  ...
  - /trainer: cpu.yaml
  ...
configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
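Hydra uses the _target_ field to decide which class to construct, passing the remaining keys as keyword arguments. As a rough, simplified sketch of that mechanism (not Hydra's actual implementation, and demonstrated here on a standard-library class instead of lightning.Trainer):

```python
import importlib

def instantiate(cfg: dict):
    """Simplified sketch of what Hydra does with a _target_ config:
    import the dotted path, then call it with the remaining keys as kwargs."""
    module_path, _, cls_name = cfg["_target_"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), cls_name)
    kwargs = {k: v for k, v in cfg.items() if k != "_target_"}
    return cls(**kwargs)

# Toy demonstration with a standard-library target instead of lightning.Trainer:
cfg = {"_target_": "fractions.Fraction", "numerator": 1, "denominator": 3}
print(instantiate(cfg))  # 1/3
```

With `_target_: lightning.Trainer`, the same mechanism constructs the Trainer with the listed flags as keyword arguments.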

Distinct Trainer for Each Task

Distinct trainers are specified as a list. The length of the list must equal the number of tasks in the train_tasks field of the experiment index config (CL Main). Below is an example for 10 tasks, where tasks 2 and 3 use the GPU while the rest use the CPU.

configs/experiment/example_clmain_train.yaml
defaults:
  ...
  - /trainer: 10_tasks.yaml
  ...
configs/trainer/10_tasks.yaml
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: gpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: gpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
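Since the ten entries above differ only in the accelerator field, such a list can also be generated programmatically. A hypothetical helper (the make_trainer_cfg function below is ours, not part of CLArena) that reproduces the example, with tasks 2 and 3 on GPU:

```python
def make_trainer_cfg(accelerator: str) -> dict:
    """Build one per-task trainer entry mirroring the YAML above."""
    return {
        "_target_": "lightning.Trainer",
        "default_root_dir": "${output_dir}",
        "log_every_n_steps": 50,
        "accelerator": accelerator,
        "devices": 1,
        "max_epochs": 2,
    }

# Tasks 2 and 3 (1-indexed) on GPU, the rest on CPU, as in the example.
accelerators = ["gpu" if task in (2, 3) else "cpu" for task in range(1, 11)]
trainer_list = [make_trainer_cfg(a) for a in accelerators]
assert len(trainer_list) == 10  # must match the length of train_tasks
```

Dumping trainer_list with a YAML library yields a file equivalent to configs/trainer/10_tasks.yaml.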

Required Config Fields

The _target_: lightning.Trainer field is required. Please refer to the PyTorch Lightning documentation for full information on the available arguments (trainer flags) of lightning.Trainer.

PyTorch Lightning Documentation (Trainer)
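Any other lightning.Trainer flag can be added to the same YAML file in the same way. A sketch with a few common flags (the flags come from Lightning's documented API; the values are illustrative, not recommendations of this documentation):

```yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
accelerator: gpu
devices: 2
strategy: ddp           # multi-device training strategy
precision: 16-mixed     # mixed-precision training
max_epochs: 10
gradient_clip_val: 1.0  # clip gradients to stabilize training
```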

Under the PyTorch Lightning framework, we use the Lightning Trainer object for all training-related configs, such as the number of epochs, the training strategy, and the device.

The trainer is a sub-config under the experiment index config (MTL). To configure a custom trainer, create a YAML file in the trainer/ folder. Below is an example of the trainer config.

Example

configs
├── __init__.py
├── entrance.yaml
├── experiment
│   ├── example_mtl_train.yaml
│   └── ...
├── trainer
│   └── cpu.yaml
...
configs/experiment/example_mtl_train.yaml
defaults:
  ...
  - /trainer: cpu.yaml
  ...
configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2

Required Config Fields

The _target_: lightning.Trainer field is required. Please refer to the PyTorch Lightning documentation for full information on the available arguments (trainer flags) of lightning.Trainer.

PyTorch Lightning Documentation (Trainer)

Under the PyTorch Lightning framework, we use the Lightning Trainer object for all training-related configs, such as the number of epochs, the training strategy, and the device.

The trainer is a sub-config under the experiment index config (STL). To configure a custom trainer, create a YAML file in the trainer/ folder. Below is an example of the trainer config.

Example

configs
├── __init__.py
├── entrance.yaml
├── experiment
│   ├── example_stl_train.yaml
│   └── ...
├── trainer
│   └── cpu.yaml
...
configs/experiment/example_stl_train.yaml
defaults:
  ...
  - /trainer: cpu.yaml
  ...
configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2

Required Config Fields

The _target_: lightning.Trainer field is required. Please refer to the PyTorch Lightning documentation for full information on the available arguments (trainer flags) of lightning.Trainer.

PyTorch Lightning Documentation (Trainer)

©️ 2025 Pengxiang Wang. All rights reserved.