

Configure Backbone Network

Modified: October 9, 2025

The backbone network refers to the feature extractor that precedes the output heads, i.e., a neural network whose head is truncated so that it outputs a feature vector. In continual learning, the backbone is typically shared among tasks. However, in some CL approaches (particularly architecture-based ones), the backbone is dynamic: it can expand, incorporate additional mechanisms like masks, or even assign a different network to each task. If you are not familiar with continual learning backbones, feel free to learn more in my beginners’ guide: backbone network and architecture-based approaches.

In multi-task learning, the backbone is always shared among tasks. In single-task learning, the backbone is simply a feature extractor.
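To make the "truncated head" idea concrete, here is a minimal sketch in plain NumPy (an illustration only, not the clarena implementation): the backbone is the network up to, but not including, the output head, so it returns a feature vector rather than class logits.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class MLPBackbone:
    """An MLP with its classification head removed: it maps inputs to
    feature vectors; separate output heads map features to logits."""

    def __init__(self, dims, seed=0):
        # dims, e.g. [784, 256, 100, 64]: input -> hidden layers -> feature dim
        rng = np.random.default_rng(seed)
        self.weights = [rng.standard_normal((i, o)) * 0.01
                        for i, o in zip(dims[:-1], dims[1:])]

    def __call__(self, x):
        for w in self.weights:
            x = relu(x @ w)
        return x  # a feature vector, not class logits

backbone = MLPBackbone([784, 256, 100, 64])
features = backbone(np.zeros((1, 784)))
print(features.shape)  # (1, 64) -- a per-task head would map 64 -> num_classes
```

In continual learning, the per-task output heads sit on top of this shared feature space, which is why the backbone's output dimension is a config field.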

The backbone is a sub-config under the index config of:

  • Continual learning main experiment
  • Continual learning full experiment and the reference experiments
  • Continual unlearning main experiment
  • Continual unlearning full experiment and the reference experiments
  • Multi-task learning experiment
  • Single-task learning experiment

To configure the backbone, create a YAML file in the backbone/ folder and reference it from the index config. Below is an example of a backbone config.

Example

configs
β”œβ”€β”€ __init__.py
β”œβ”€β”€ entrance.yaml
β”œβ”€β”€ index
β”‚   β”œβ”€β”€ example_cl_main_expr.yaml
β”‚   └── ...
β”œβ”€β”€ backbone
β”‚   └── clmlp.yaml
...
example_configs/index/example_cl_main_expr.yaml
defaults:
  ...
  - /backbone: clmlp.yaml
  ...
example_configs/backbone/clmlp.yaml
_target_: clarena.backbones.CLMLP
input_dim: 784
hidden_dims: [256, 100]
output_dim: 64
batch_normalization: true

Supported Backbones & Required Config Fields

In CLArena, we have implemented many backbone networks as Python classes in the clarena.backbones module that you can use for your experiments.

To choose a backbone, set the _target_ field to the full class path of the backbone. For example, to use CLMLP, set _target_ to clarena.backbones.CLMLP. Each backbone has its own hyperparameters and configuration, so it has its own required fields: they are exactly the arguments of the class specified by _target_. The arguments for each backbone class can be found in the API documentation.
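The _target_ field follows Hydra's instantiation convention: the dotted path is imported and every other field is passed as a constructor keyword argument. A minimal stdlib sketch of that resolution (not Hydra's actual code), demonstrated on a standard-library class since it behaves the same way for any dotted path:

```python
import importlib

def instantiate(cfg):
    """Minimal stand-in for Hydra-style instantiation: import the class
    named by _target_ and pass the remaining fields as keyword arguments."""
    cfg = dict(cfg)  # avoid mutating the caller's config
    module_path, class_name = cfg.pop("_target_").rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    return cls(**cfg)

# A backbone config such as clmlp.yaml resolves the same way, i.e. to
# clarena.backbones.CLMLP(input_dim=784, hidden_dims=[256, 100], ...).
obj = instantiate({"_target_": "fractions.Fraction",
                   "numerator": 3, "denominator": 4})
print(obj)  # 3/4
```

This is why the required config fields match the class arguments exactly: a misspelled field becomes an unexpected keyword argument at instantiation time.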

API Reference (Backbone Networks) Source Code (Backbone Network)

Below is the full list of supported backbone networks. Note that the names in the “Backbone” column are the exact class names; prefix them with clarena.backbones. when assigning them to _target_.

Multi-Task and Single-Task Learning Backbones

These backbones are direct implementations of standard neural networks and can generally be used in multi-task and single-task learning unless noted otherwise.

| Backbone | Description | Required Config Fields |
| --- | --- | --- |
| MLP | Multi-layer perceptron (MLP): a fully-connected network of linear layers. The hidden dimensions and number of layers are customizable. | Same as the MLP class arguments |
| ResNet18, ResNet34, ResNet50, ResNet101, ResNet152 | Different versions of ResNet, a convolutional network with residual connections. | Same as the ResNet18, ResNet34, ResNet50, ResNet101, ResNet152 class arguments |

Continual Learning Backbones

These backbones are implemented with additional features for continual learning and can generally be used in continual learning and unlearning unless noted otherwise.

| Backbone | Description | Required Config Fields |
| --- | --- | --- |
| CLMLP | Multi-layer perceptron (MLP) for continual learning. | Same as the CLMLP class arguments |
| CLResNet18, CLResNet34, CLResNet50, CLResNet101, CLResNet152 | ResNets for continual learning. | Same as the CLResNet18, CLResNet34, CLResNet50, CLResNet101, CLResNet152 class arguments |

For architecture-based continual learning, the backbone is specialized for the algorithm by introducing new mechanisms like masks. We provide specialized backbones for several architecture-based CL algorithms.

HAT Backbones

These backbones should be used with the CL algorithm HAT and its extensions AdaHAT and FG-AdaHAT. Please refer to Configure CL Algorithm.

| Specialized Backbone | Description | Required Config Fields |
| --- | --- | --- |
| HATMaskMLP | Multi-layer perceptron (MLP) for HAT. | Same as the HATMaskMLP class arguments |
| HATMaskResNet18, HATMaskResNet34, HATMaskResNet50, HATMaskResNet101, HATMaskResNet152 | ResNets for HAT. | Same as the HATMaskResNet18, HATMaskResNet34, HATMaskResNet50, HATMaskResNet101, HATMaskResNet152 class arguments |
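For intuition about what these masked backbones do, here is a hedged NumPy sketch of the HAT gating idea (an illustration, not the clarena implementation): HAT learns a per-task embedding and turns it into a near-binary sigmoid gate over the backbone's units, so each task activates its own subset of the network.

```python
import numpy as np

def hat_unit_mask(task_embedding, scale=5.0):
    """HAT-style gate: a sigmoid of a learnable task embedding times a
    scaling factor; a large scale pushes the mask toward binary 0/1."""
    return 1.0 / (1.0 + np.exp(-scale * task_embedding))

# Units with positive embedding open (mask ~1), negative ones close (~0).
embedding = np.array([2.0, -2.0, 0.0])
mask = hat_unit_mask(embedding)
features = np.array([1.0, 1.0, 1.0])
gated = features * mask  # gated features are passed to the next layer
```

During training the scale is annealed so the masks harden; units claimed by previous tasks can then be protected from gradient updates, which is the core of the HAT family.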

WSN Backbones

These backbones should be used with the CL algorithm WSN. Please refer to Configure CL Algorithm.

| Specialized Backbone | Description | Required Config Fields |
| --- | --- | --- |
| WSNMaskMLP | Multi-layer perceptron (MLP) for WSN. | Same as the WSNMaskMLP class arguments |
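For intuition, WSN selects a per-task "winning subnetwork" by keeping only the highest-scoring fraction of weights. A hedged NumPy sketch of that selection step (an illustration, not the clarena implementation):

```python
import numpy as np

def wsn_weight_mask(scores, keep_ratio=0.5):
    """WSN-style selection: keep the top keep_ratio fraction of weights
    by learned importance score; the rest are masked to zero."""
    k = max(1, int(round(keep_ratio * scores.size)))
    threshold = np.sort(scores, axis=None)[-k]
    return (scores >= threshold).astype(scores.dtype)

scores = np.array([0.1, 0.9, 0.5, 0.2])
mask = wsn_weight_mask(scores, keep_ratio=0.5)
print(mask)  # [0. 1. 1. 0.] -- only the two highest-scoring weights survive
```

The binary weight mask is stored per task, so at inference each task runs through its own sparse subnetwork of the shared backbone.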
©️ 2025 Pengxiang Wang. All rights reserved.