Configure Trainer (CL Main)
Under the PyTorch Lightning framework, we use the Lightning Trainer object for all training-related configs, such as number of epochs, training strategy, device, etc.
Trainers are a sub-config under the experiment index config (CL Main). To configure a custom trainer, create a YAML file in the trainer/
folder. Since continual learning involves multiple tasks, each task can be assigned its own trainer: we support either a uniform trainer shared across all tasks or a distinct trainer per task. Below are examples of both configurations.
Example
configs
├── __init__.py
├── entrance.yaml
├── experiment
│ ├── example_clmain_train.yaml
│ └── ...
├── trainer
│ └── cpu.yaml
...
Uniform Trainer for All Tasks
configs/experiment/example_clmain_train.yaml
defaults:
...
- /trainer: cpu.yaml
...
configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
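To train on GPU instead, only the accelerator field needs to change. For instance, a hypothetical sibling config configs/trainer/gpu.yaml (not part of the example tree above) could look like:

```yaml
_target_: lightning.Trainer  # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: gpu  # run on GPU; all other fields are standard lightning.Trainer flags
devices: 1
max_epochs: 2
```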
Distinct Trainer for Each Task
Distinct trainers are specified as a list. The length of the list must match the number of tasks in the train_tasks
field of the experiment index config (CL Main). Below is an example for 10 tasks, where tasks 2 and 3 use GPU while the rest use CPU.
defaults:
...
- /trainer: 10_tasks.yaml
...
configs/trainer/10_tasks.yaml
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: gpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: gpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
- _target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
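Since the per-task entries above differ only in accelerator, standard YAML anchors and merge keys can cut the repetition. This is an optional sketch, assuming the YAML loader resolves merge keys (PyYAML's safe loader does):

```yaml
# configs/trainer/10_tasks.yaml — sketch using a YAML anchor and merge keys
- &cpu_trainer
  _target_: lightning.Trainer  # always link to the lightning.Trainer class
  default_root_dir: ${output_dir}
  log_every_n_steps: 50
  accelerator: cpu
  devices: 1
  max_epochs: 2
- <<: *cpu_trainer  # task 2: same settings, but on GPU
  accelerator: gpu
- <<: *cpu_trainer  # task 3: same settings, but on GPU
  accelerator: gpu
- <<: *cpu_trainer  # tasks 4-10 reuse the CPU trainer unchanged
- <<: *cpu_trainer
- <<: *cpu_trainer
- <<: *cpu_trainer
- <<: *cpu_trainer
- <<: *cpu_trainer
- <<: *cpu_trainer
```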
Required Config Fields
The _target_: lightning.Trainer
field is required and must always be specified. Please refer to the PyTorch Lightning documentation for full information on the available arguments (trainer flags) of lightning.Trainer
.
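For intuition on why the _target_ field matters, here is a minimal sketch of Hydra-style instantiation: the dotted _target_ path is resolved to a class, which is then constructed with the remaining keys as keyword arguments. This is an illustration only, not Hydra's actual implementation (which also handles nesting, interpolation, partial instantiation, etc.), and it uses a stdlib class in place of lightning.Trainer so the sketch runs without PyTorch Lightning installed:

```python
import importlib

def instantiate(cfg: dict):
    """Resolve cfg["_target_"] to a class and call it with the rest as kwargs."""
    cfg = dict(cfg)  # copy so the caller's config is not mutated
    module_path, _, class_name = cfg.pop("_target_").rpartition(".")
    cls = getattr(importlib.import_module(module_path), class_name)
    return cls(**cfg)

# A config with _target_: lightning.Trainer would build a Trainer the same way;
# here we use collections.Counter purely for the sake of a runnable example.
counter = instantiate({"_target_": "collections.Counter", "a": 2, "b": 1})
print(counter)  # Counter({'a': 2, 'b': 1})
```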
Configure Trainer (MTL)
Under the PyTorch Lightning framework, we use the Lightning Trainer object for all training-related configs, such as number of epochs, training strategy, device, etc.
The trainer is a sub-config under the experiment index config (MTL). To configure a custom trainer, create a YAML file in the trainer/
folder. Below is an example of the Trainer config.
Example
configs
├── __init__.py
├── entrance.yaml
├── experiment
│ ├── example_mtl_train.yaml
│ └── ...
├── trainer
│ └── cpu.yaml
...
configs/experiment/example_mtl_train.yaml
defaults:
...
- /trainer: cpu.yaml
...
configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
Required Config Fields
The _target_: lightning.Trainer
field is required and must always be specified. Please refer to the PyTorch Lightning documentation for full information on the available arguments (trainer flags) of lightning.Trainer
.
Configure Trainer (STL)
Under the PyTorch Lightning framework, we use the Lightning Trainer object for all training-related configs, such as number of epochs, training strategy, device, etc.
The trainer is a sub-config under the experiment index config (STL). To configure a custom trainer, create a YAML file in the trainer/
folder. Below is an example of the Trainer config.
Example
configs
├── __init__.py
├── entrance.yaml
├── experiment
│ ├── example_stl_train.yaml
│ └── ...
├── trainer
│ └── cpu.yaml
...
configs/experiment/example_stl_train.yaml
defaults:
...
- /trainer: cpu.yaml
...
configs/trainer/cpu.yaml
_target_: lightning.Trainer # always link to the lightning.Trainer class
default_root_dir: ${output_dir}
log_every_n_steps: 50
accelerator: cpu
devices: 1
max_epochs: 2
Required Config Fields
The _target_: lightning.Trainer
field is required and must always be specified. Please refer to the PyTorch Lightning documentation for full information on the available arguments (trainer flags) of lightning.Trainer
.