
Welcome to CLArena (Continual Learning Arena)

An open-source machine learning package for continual learning research
Modified: October 21, 2025


CLArena (Continual Learning Arena) is an open-source Python package designed for Continual Learning (CL) research. This package provides an integrated environment with extensive APIs for conducting CL experiments, along with pre-implemented algorithms and datasets that you can start using immediately. Explore the datasets and algorithms available in this package:

  • Supported Algorithms
  • Supported Datasets

Continual learning is a machine learning paradigm that focuses on learning new tasks sequentially while retaining knowledge from previous tasks. If you’re new to continual learning, check out my article continual learning beginners’ guide for an introduction to the field.
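To make the sequential setting concrete, here is a minimal, dependency-free Python sketch (not CLArena code) of naive fine-tuning across two toy regression tasks. The model learns the first task, then trains on a second, conflicting task with no access to the first task's data, and its performance on the first task collapses:

```python
# Toy illustration of catastrophic forgetting under naive fine-tuning.
# A 1-parameter linear model y_hat = w * x is trained on task A, then
# task B, without revisiting task A's data.

def make_task(slope):
    """Toy 1-D regression task: y = slope * x on a fixed grid."""
    xs = [i / 10 for i in range(1, 11)]
    return [(x, slope * x) for x in xs]

def train(w, data, lr=0.1, epochs=200):
    """Plain per-sample SGD on the squared error (w*x - y)^2."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = make_task(slope=2.0)   # first task
task_b = make_task(slope=-1.0)  # second, conflicting task

w = 0.0
w = train(w, task_a)
err_a_before = mse(w, task_a)   # near zero: the model fits task A
w = train(w, task_b)            # naive fine-tuning on task B
err_a_after = mse(w, task_a)    # large: task A has been forgotten

print(err_a_before < err_a_after)  # → True (catastrophic forgetting)
```

Continual learning algorithms such as those implemented in this package aim to keep `err_a_after` low while still learning task B.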

Beyond continual learning, this package also provides an environment for Continual Unlearning (CUL). Continual unlearning is a novel paradigm that brings Machine Unlearning (MU) into continual learning scenarios, an emerging research area that remains largely unexplored in the deep learning literature. We are pioneering this field by providing the first comprehensive APIs for continual unlearning. Learn more through my slides about continual unlearning.

This package also provides robust support for Multi-Task Learning (MTL) and standard supervised learning, which we refer to as Single-Task Learning (STL).
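For contrast, here is a toy sketch (again, not CLArena's API) of how the two non-continual pipelines differ: STL fits one model to a single task, while MTL fits one shared model to the pooled data of all tasks jointly:

```python
# Dependency-free sketch contrasting STL and MTL on toy 1-D regression
# tasks, using full-batch gradient descent on y_hat = w * x.

def grad(w, pairs):
    """Full-batch gradient of the mean squared error."""
    return sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)

tasks = {
    "a": [(x / 10, 2.0 * x / 10) for x in range(1, 11)],  # y = 2x
    "b": [(x / 10, 4.0 * x / 10) for x in range(1, 11)],  # y = 4x
}

# STL: a separate model trained on a single task's data.
w_stl = 0.0
for _ in range(2000):
    w_stl -= 0.1 * grad(w_stl, tasks["a"])

# MTL: one shared model trained on the pooled data of all tasks.
pooled = [p for data in tasks.values() for p in data]
w_mtl = 0.0
for _ in range(2000):
    w_mtl -= 0.1 * grad(w_mtl, pooled)

print(round(w_stl, 3))  # → 2.0 (task a's own optimum)
print(round(w_mtl, 3))  # → 3.0 (joint optimum across tasks a and b)
```

Unlike continual learning, MTL sees all tasks' data at once, so there is nothing to forget; the challenge instead is balancing tasks within one shared model.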

This package is currently developed and maintained solely by me, built upon several years of continual learning research during my PhD studies. The codebase has been validated through published research papers in continual learning. If you’re interested in the academic work behind this package, you can explore my publications: AdaHAT and FG-AdaHAT.

The package is powered by:


  • PyTorch Lightning: A lightweight PyTorch wrapper framework for high-performance AI research. It eliminates PyTorch boilerplate code such as batch looping, optimizer and loss definitions, and training strategies, allowing you to focus on core algorithm development while maintaining scalability for customization.
  • Hydra: A Python package for elegant configuration management. It organizes hyperparameters into hierarchical configuration files and lets you override any of them from the command line, which is particularly valuable for deep learning projects that typically involve numerous hyperparameters.
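To illustrate the configuration idea Hydra is built around, here is a dependency-free sketch of dotted command-line overrides such as `optimizer.lr=0.01` applied to a nested config. It mimics the concept only; it is not Hydra's actual API:

```python
# Sketch of Hydra-style configuration: a nested dict of defaults plus
# "dotted" overrides that reach into it. (Concept demo, not Hydra.)

defaults = {
    "optimizer": {"name": "sgd", "lr": 0.1},
    "trainer": {"max_epochs": 10},
}

def apply_override(cfg, dotted):
    """Apply one 'a.b.c=value' override to a nested dict, in place."""
    path, _, raw = dotted.partition("=")
    keys = path.split(".")
    node = cfg
    for key in keys[:-1]:
        node = node[key]
    old = node[keys[-1]]
    node[keys[-1]] = type(old)(raw)  # coerce to the default's type
    return cfg

cfg = apply_override(defaults, "optimizer.lr=0.01")
cfg = apply_override(cfg, "trainer.max_epochs=50")

print(cfg["optimizer"]["lr"])        # → 0.01
print(cfg["trainer"]["max_epochs"])  # → 50
```

In Hydra itself, the defaults live in YAML files composed into one hierarchical config, and overrides like these are passed directly on the command line when launching an experiment.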
©️ 2025 Pengxiang Wang. All rights reserved.