Configure Trainer

Under the framework of PyTorch Lightning, we use the Lightning Trainer object for all options related to the training process, such as the number of epochs, training strategy, and device.

Just like the optimizer, because continual learning involves multiple tasks, each task needs a trainer for training. We can either use a uniform trainer across all tasks or assign a distinct trainer to each task.

Configure Uniform Trainer For All Tasks

To configure a uniform trainer for your experiment, link the /trainer field in the experiment index config to a YAML file in the trainer/ subfolder of your configs. That YAML file should always use the _target_ field to link to the lightning.Trainer class, and specify its arguments in the fields that follow. Here is an example:

./clarena/example_configs
├── __init__.py
├── entrance.yaml
├── experiment
│   ├── example.yaml
│   └── ...
├── trainer
│   └── cpu.yaml
...
example_configs/experiment/example.yaml
defaults:
  ...
  - /trainer: cpu.yaml
  ...
example_configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class

default_root_dir: ${output_dir}

log_every_n_steps: 50

accelerator: cpu
devices: 1

max_epochs: 2

Configure Distinct Trainer For Each Task

To configure a distinct trainer for each task in your experiment, the YAML file in the trainer/ subfolder should be a list of lightning.Trainer configurations, one assigned to each task. The length of the list must equal the num_tasks field in the experiment index config.
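For illustration, here is a minimal sketch of such a list config, assuming num_tasks is 2 in the experiment index config. The file name distinct.yaml and the per-task values are hypothetical; each list entry follows the same _target_ convention as the uniform config above:

example_configs/trainer/distinct.yaml
# a list of trainer configs, one per task; each entry links to the lightning.Trainer class
- _target_: lightning.Trainer # trainer for task 1
  default_root_dir: ${output_dir}
  accelerator: cpu
  devices: 1
  max_epochs: 2
- _target_: lightning.Trainer # trainer for task 2
  default_root_dir: ${output_dir}
  accelerator: cpu
  devices: 1
  max_epochs: 5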

Supported Trainer Options

We fully support all the training options provided by PyTorch Lightning. They are the arguments of the lightning.Trainer class, known as Trainer flags in Lightning. Please refer to the Lightning documentation for the full list of trainer options.

Lightning Trainer Documentation
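For example, any of these flags can be set directly as fields in the trainer YAML. The following sketch uses standard lightning.Trainer arguments; the file name gpu.yaml and the specific values are only an example, not an exhaustive list:

example_configs/trainer/gpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class

default_root_dir: ${output_dir}

accelerator: gpu
devices: 1

max_epochs: 10
precision: 16-mixed # mixed-precision training
gradient_clip_val: 1.0 # clip gradients by norm
deterministic: true # for reproducibility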
