Save and Evaluate Model (STL)
For single-task learning experiments, CLArena supports saving the model after training and evaluating it separately.
1 Save Model
To save the model after training, enable the `clarena.callbacks.SaveModels` callback. Please refer to the Configure Callbacks section.
Checkpointing is not used in CLArena to save models for later evaluation, because loading a checkpoint requires the model class, while we want to evaluate a model regardless of its type and settings. Instead, `clarena.callbacks.SaveModels` uses `torch.save()` on the whole model object, so later evaluation can use `torch.load()` to load the model without specifying the model class.
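In a minimal sketch (the model and file name below are illustrative stand-ins, not CLArena's actual callback code), this mechanism looks like the following:

```python
import torch
from torch import nn

# Hypothetical stand-in for a trained STL model; any nn.Module works.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Saving the whole model object pickles both weights and architecture.
torch.save(model, "stl_model.pth")

# Loading it back requires no model class in the code. Note: on PyTorch >= 2.6,
# weights_only=False is needed to unpickle a full model object.
restored = torch.load("stl_model.pth", weights_only=False)
restored.eval()
```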
2 Evaluate Model
The single-task learning evaluation pipeline evaluates a saved model trained in a single-task learning experiment. Its output results are summarized in the Output Results (STL) section.
Running
To run a single-task learning evaluation, specify the `STL_EVAL` pipeline indicator in the command:
```bash
clarena pipeline=STL_EVAL index=<index-config-name>
```
Configuration
To run a custom single-task learning evaluation, create a YAML file in the `index/` folder as the index config. Below is an example.
Example
example_configs/experiment/example_stl_eval.yaml
```yaml
# @package _global_
# make sure to include the above commented global setting!

# pipeline info
pipeline: STL_EVAL
global_seed: 1

# evaluation target
model_path: outputs/example_stl_expr/2023-10-01_12-00-00/saved_models/stl_model.pth

# components
defaults:
  - /stl_dataset: mnist.yaml
  - /trainer: cpu_eval.yaml
  - /metrics: stl_default.yaml
  - /lightning_loggers: default.yaml
  - /callbacks: eval_default.yaml
  - /hydra: default.yaml
  - /misc: default.yaml

output_dir: outputs/example_stl_expr/2023-10-01_12-00-00/eval # output to the same folder as the experiment
```
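Assuming this file is saved as `example_stl_eval.yaml` in the `index/` folder, the evaluation can then be launched by referencing it by name:

```bash
clarena pipeline=STL_EVAL index=example_stl_eval
```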
Required Config Fields
Below is the list of required config fields for the index config of a single-task learning evaluation.

Field | Description | Allowed Values |
---|---|---|
`pipeline` | The pipeline that `clarena` runs with this config | `STL_EVAL` |
`global_seed` | The global random seed for the entire evaluation | |
`model_path` | The file path of the saved model to evaluate | |
`/stl_dataset` | The single-task learning dataset that the model is evaluated on | |
`/trainer` | The PyTorch Lightning Trainer object, which contains all configs for the testing process | |
`/metrics` | The metrics to be monitored, logged, or visualized | |
`/callbacks` | The callbacks applied to this evaluation experiment (other than metric callbacks). Callbacks are additional actions integrated at different points during the evaluation | |
`/hydra` | Configuration for Hydra | |
`/misc` | Miscellaneous configs that are less related to the experiment | |
`output_dir` | The folder storing the evaluation results | |
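Since the index config is composed by Hydra, individual fields can typically also be overridden on the command line without editing the YAML file. A hypothetical example, assuming standard Hydra override syntax:

```bash
# Hypothetical overrides using standard Hydra command-line syntax
clarena pipeline=STL_EVAL index=example_stl_eval \
    global_seed=42 \
    model_path=outputs/example_stl_expr/2023-10-01_12-00-00/saved_models/stl_model.pth
```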
The single-task learning evaluation is managed by the `STLEvaluation` class. To learn how these fields work, please refer to its source code.
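For intuition, here is a minimal, hypothetical sketch of what such an evaluation pipeline boils down to; the class, data, and trainer settings are illustrative stand-ins, not CLArena's actual implementation:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import lightning as L


class TinySTLModel(L.LightningModule):
    """Hypothetical stand-in for a saved STL model."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
        self.loss_fn = nn.CrossEntropyLoss()

    def test_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self.net(x), y)
        self.log("test/loss", loss)


L.seed_everything(1)  # corresponds to the global_seed field

# After training, the SaveModels callback would have produced a file like this.
torch.save(TinySTLModel(), "stl_model.pth")

# Evaluation side: restore the model from model_path without naming its class...
restored = torch.load("stl_model.pth", weights_only=False)

# ...build the test data (/stl_dataset; random tensors here as a stand-in)...
test_set = TensorDataset(torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,)))

# ...and run the testing process with a Trainer (/trainer).
trainer = L.Trainer(accelerator="cpu", logger=False)
trainer.test(restored, dataloaders=DataLoader(test_set, batch_size=32))
```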