Welcome to CLArena (Continual Learning Arena)
CLArena (Continual Learning Arena) is an open-source Python package designed for Continual Learning (CL) research. This package provides an integrated environment with extensive APIs for conducting CL experiments, along with pre-implemented algorithms and datasets that you can start using immediately. Explore the datasets and algorithms available in this package:
Continual learning is a machine learning paradigm that focuses on learning new tasks sequentially while retaining knowledge from previous tasks. If you’re new to continual learning, check out my article continual learning beginners’ guide for an introduction to the field.
Beyond continual learning, this package also provides an environment for Continual Unlearning (CUL). Continual unlearning is a novel paradigm that combines Machine Unlearning (MU) with continual learning scenarios, an emerging research area that remains largely unexplored in the deep learning literature. We are pioneering this field by providing the first comprehensive APIs for continual unlearning. Learn more through my slides about continual unlearning.
This package also provides robust support for Multi-Task Learning (MTL) and standard supervised learning, which we refer to as Single-Task Learning (STL).
This package is currently developed and maintained solely by me, and is built upon several years of continual learning research during my PhD studies. The codebase has been validated through published research papers in continual learning. If you're interested in the academic work behind this package, you can explore my publications: AdaHAT and FG-AdaHAT.
The package is powered by:
- PyTorch Lightning: A lightweight PyTorch wrapper framework for high-performance AI research. It eliminates PyTorch boilerplate code such as batch looping, optimizer and loss definitions, and training strategies, allowing you to focus on core algorithm development while maintaining scalability for customization.
- Hydra: A Python package for elegant configuration management. It transforms command-line parameters into hierarchical configuration files, which is particularly valuable for deep learning projects that typically involve numerous hyperparameters.
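To illustrate the hierarchical-configuration idea that Hydra provides, here is a minimal stdlib-only sketch (this is not CLArena's or Hydra's actual API, and the config keys are hypothetical): default config groups are merged recursively, then a command-line-style `key.subkey=value` override is applied on top.

```python
# Stdlib sketch of hierarchical config merging in the spirit of Hydra.
# Not CLArena's actual API; dataset/optimizer names below are illustrative.

def merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`, returning a new dict."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # descend into nested groups
        else:
            out[key] = value  # leaf values in the override win
    return out

def apply_override(cfg: dict, dotted: str) -> dict:
    """Apply a single 'a.b=value'-style override, as Hydra does from the CLI."""
    path, _, raw = dotted.partition("=")
    keys = path.split(".")
    node: dict = {keys[-1]: raw}  # note: value stays a string here
    for key in reversed(keys[:-1]):
        node = {key: node}  # wrap into nested dicts matching the dotted path
    return merge(cfg, node)

# Hypothetical defaults for a continual learning experiment.
defaults = {
    "dataset": {"name": "split_mnist", "num_tasks": 5},
    "optimizer": {"name": "sgd", "lr": 0.01},
}

# Override only the learning rate; everything else is inherited.
cfg = apply_override(defaults, "optimizer.lr=0.1")
```

Real Hydra adds config groups, composition from YAML files, and typed value parsing on top of this merging behavior, which is why it scales well to experiments with many hyperparameters.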