Save and Evaluate Model (CL Main)
CLArena supports saving the model to local paths after training each task and evaluating it separately for the continual learning main experiment (CL Main).
Save Model
To save the model after training each task, enable the callback clarena.callbacks.SaveModelCallback
in the continual learning main experiment. Please refer to the Configure Callbacks (CL Main) section.
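Below is a minimal sketch of what enabling the callback in the callbacks config might look like. The file name, the save_model key, and the use of Hydra's _target_ instantiation are assumptions for illustration only; check the clarena.callbacks.SaveModelCallback source for its actual arguments.
configs/callbacks/example_with_save_model.yaml
# hypothetical callbacks config entry; key and argument names are assumptions
save_model:
  _target_: clarena.callbacks.SaveModelCallback
  # constructor arguments (if any) would go here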
Evaluate Model
To evaluate a saved model, use the clarena eval clmain
command. It runs the continual learning main evaluation on the saved model. This is a type of experiment equivalent to the continual learning main experiment (CL Main) but skips the training and validation process. The output results are summarized in the Output Results (CL) section.
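Judging from the model_path used in the example config below, the saved models land under the experiment output folder, roughly as sketched here. The exact layout and file names are assumptions and depend on how SaveModelCallback is configured.
outputs/
└── til_pmnist_finetuning/
    └── 2023-10-01_12-00-00/
        └── saved_models/
            └── model.pth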
Usage of clarena eval clmain
The command clarena eval clmain
locates the config folder configs/
, parses the configuration of the specified continual learning main evaluation experiment, and runs the experiment:
clarena eval clmain experiment=<experiment-name>
Please make sure a configs/
folder meeting the requirements above exists in the directory where you run the command. The <experiment-name>
is the path of the YAML file relative to the experiment/
subfolder, without the .yaml extension. For example, if the YAML file til_pmnist_finetuning.yaml
is in the experiment/clmain_eval/
subfolder, the <experiment-name>
is clmain_eval/til_pmnist_finetuning
.
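For example, to run that evaluation experiment:
clarena eval clmain experiment=clmain_eval/til_pmnist_finetuning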
The experiment config works the same way as the CL experiment config: it has the experiment index config in the experiment/
subfolder and is organized hierarchically. It also supports overriding; see the sketch after the example config below. Please refer to Configure CL Main Experiment to learn about these features. Below are an example and the required fields of the experiment index config.
Example
configs/experiment/example_clmain_eval.yaml
# @package _global_
# make sure to include the above commented global setting!
model_path: outputs/til_pmnist_finetuning/2023-10-01_12-00-00/saved_models/model.pth
cl_paradigm: TIL
eval_tasks: 10
global_seed: 1
defaults:
- /cl_dataset: permuted_mnist.yaml
- /trainer: cpu.yaml
- /metrics: cl_default.yaml
- /callbacks: cl_default.yaml
- /hydra: default.yaml
- /misc: default.yaml
output_dir: outputs/til_pmnist_finetuning/2023-10-01_12-00-00
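The fields of the experiment index config can also be overridden from the command line. A minimal sketch, assuming standard Hydra override syntax (which the experiment=<experiment-name> argument already follows) and using global_seed purely as an illustrative field:
clarena eval clmain experiment=clmain_eval/til_pmnist_finetuning global_seed=42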
Required Config Fields
Field | Description | Allowed Values |
---|---|---|
main_model_path | The file path of the model to evaluate | |
eval_tasks | The list of task IDs¹ to evaluate | |
cl_paradigm | The continual learning paradigm | |
global_seed | The global seed for the experiment. It helps reproduce the results | |
/cl_dataset | The continual learning dataset that the model is evaluated on | |
/trainer | The PyTorch Lightning Trainer object which contains all configs for the testing process | |
/metrics | The metrics to be monitored, logged, or visualized | |
/callbacks | The callbacks applied to this evaluation experiment. Callbacks are additional actions integrated at different stages of the experiment | |
output_dir | The folder name storing the evaluation results. Please refer to the Output Results section | |
/hydra | Configuration for Hydra itself | |
/misc | Miscellaneous configs that are less related to the experiment | |
The continual learning main evaluation experiment run by clarena eval clmain
is managed by the CLMainEval
class. To learn how these fields work, please refer to its source code.
Footnotes
The task IDs are integers starting from 1 and ending with the number of tasks in the CL dataset. Each corresponds to a task-specific dataset in the CL dataset.↩︎