
Welcome to CLArena (Continual Learning Arena)

An open-source machine learning package for continual learning research

Get Started · PyPI · GitHub · API Reference

CLArena (Continual Learning Arena) is an open-source Python package for Continual Learning (CL) research. It provides an integrated environment and various APIs for conducting CL experiments, along with implemented CL algorithms and datasets that you can try immediately. Check out which datasets and algorithms are implemented in this package:

Supported Algorithms Supported Datasets

Continual learning is a machine learning paradigm concerned with learning new tasks sequentially without forgetting previous ones. Feel free to check out my post, a beginner's guide to continual learning, if you don't know much about CL yet.
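The setting described above can be sketched as a simple loop. This is an illustrative, hypothetical sketch of the continual learning protocol, not CLArena's actual API; the function names and signatures here are assumptions for the example:

```python
# Illustrative sketch of the continual learning setting: a single model
# learns tasks one at a time and, after each task, is evaluated on every
# task seen so far. A drop in scores on earlier tasks indicates
# (catastrophic) forgetting.

def continual_learning_loop(model, tasks, train, evaluate):
    """Train on tasks sequentially; return per-task scores after each task."""
    history = []
    for t, task in enumerate(tasks):
        train(model, task)  # only the current task's data is available here
        # evaluate on all tasks seen so far (tasks[0] .. tasks[t])
        history.append([evaluate(model, seen) for seen in tasks[: t + 1]])
    return history
```

The key constraint the loop encodes is that `train` only ever sees the current task's data; everything a CL algorithm does (regularisation, replay, parameter isolation, etc.) happens inside that call to preserve performance on earlier tasks.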

So far this package has been developed solely by myself. It is adapted from the code of my several years of continual learning research as a PhD student. I have published a continual learning paper based on this code; feel free to check it out if you are interested: Paper: AdaHAT.

The package is powered by:

python pytorch lightning hydra

  • PyTorch Lightning - a lightweight PyTorch wrapper for high-performance AI research. It removes boilerplate PyTorch code such as batch looping, defining optimisers and losses, and training strategies, so you can focus on the core algorithm, while still remaining scalable for customisation when needed.
  • Hydra - a framework for organising configurations elegantly. It composes hierarchical config files and lets you override parameters from the command line, which is convenient for deep learning projects, which usually have tons of hyperparameters.
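To give a feel for the Hydra style, here is a minimal, hypothetical sketch of what a hierarchical experiment config might look like. The group names and values below are illustrative assumptions for the example, not CLArena's actual config schema:

```yaml
# experiment.yaml — hypothetical top-level experiment config (illustrative only)
defaults:
  - cl_algorithm: finetuning    # assumed config group, not the real schema
  - cl_dataset: permuted_mnist  # assumed config group, not the real schema
  - optimizer: sgd

trainer:
  max_epochs: 10
```

With a config composed like this, Hydra lets you override any field from the command line, e.g. something of the form `python train.py optimizer=adam trainer.max_epochs=20` (again, the entry-point name here is an assumption), so hyperparameter sweeps don't require editing files.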
©️ 2025 Pengxiang Wang. All rights reserved.