NeuroSyd/Awesome-Realistic-Continual-Learning

Awesome Realistic Continual Learning

A curated list of papers, codebases, and datasets for realistic continual learning, emphasizing neuroscience-inspired approaches and practical deployment scenarios. Last updated: 2025-10-06

[Figure: Traditional vs Realistic CL]


📣 Citing Our Work

This repository is the official resource hub for our survey paper "A Neuroscience-Inspired Framework for Realistic Continual Learning: A Review". If you find this repository or our survey useful for your research, please consider citing our work:

@article{aguilar2025neuroscience,
  title={A Neuroscience-Inspired Framework for Realistic Continual Learning: A Review},
  author={Aguilar, Isabelle and Kembay, Assel and Eshraghian, Jason K. and Kavehei, Omid},
  journal={TBD},
  year={2025}
}

📜 Overview

Traditional continual learning research often relies on simplified assumptions that don't reflect real-world deployment scenarios. This repository focuses on realistic continual learning - approaches that bridge the gap between laboratory settings and practical applications.

Our framework for realistic continual learning emphasizes three key components:

  • Task-agnostic settings with non-i.i.d. data streams
  • Resource-constrained environments suitable for edge deployment
  • Robust evaluation protocols beyond simple accuracy metrics

Key Challenges in Realistic Continual Learning

🧠 Catastrophic Forgetting: The abrupt loss of previously learned knowledge when a model adapts to new tasks

⚖️ Stability-Plasticity Dilemma: Balancing retention of past memories with adaptation to new information

🌊 Representation Drift: Gradual changes in feature representations over time

💻 Resource Constraints: Memory, computational, and energy limitations in deployment environments

🎯 Task-Agnostic Learning: Learning without explicit task boundaries or identities

🧠 Neuroscience-Inspired Solutions

Drawing from neuroscience principles, we identify key mechanisms that can inform more robust continual learning systems:

🔄 Synaptic Plasticity Mechanisms

  • Hebbian Learning: "Neurons that fire together wire together"
  • Spike-Timing Dependent Plasticity (STDP): Temporal-based synaptic modifications
  • Metaplasticity: The plasticity of synaptic plasticity itself, regulating how readily synapses change
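As an illustration of the first principle, a minimal Hebbian update can be written in a few lines. The learning rate, decay term, and array shapes below are illustrative choices, not taken from any specific paper; the decay term is one common way to keep purely Hebbian growth bounded.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01, decay=1e-3):
    """One Hebbian step: weights grow where pre- and post-synaptic
    activity coincide; a small decay term keeps them bounded."""
    return w + lr * np.outer(post, pre) - decay * w

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 8))   # post-synaptic x pre-synaptic
pre = rng.random(8)                      # pre-synaptic activity
post = w @ pre                           # post-synaptic response
w = hebbian_update(w, pre, post)
```

STDP refines this rule by making the sign of the update depend on the relative timing of pre- and post-synaptic spikes rather than on their mere coincidence.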

🏗️ Complementary Learning Systems (CLS)

  • Fast Learning: Hippocampal-inspired rapid episodic encoding
  • Slow Learning: Neocortical-inspired gradual consolidation
  • Memory Replay: Sleep-inspired replay mechanisms for knowledge retention
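A common computational stand-in for hippocampal fast storage is a small episodic buffer maintained by reservoir sampling, which keeps an approximately uniform sample of the stream without knowing its length. This is a generic sketch of that idea, not the mechanism of any one paper; the capacity is an arbitrary choice.

```python
import random

class ReservoirBuffer:
    """Fixed-size episodic memory kept uniform over the stream
    via reservoir sampling."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(item)
        else:
            j = random.randrange(self.seen)   # keep item with prob capacity/seen
            if j < self.capacity:
                self.data[j] = item

    def sample(self, k):
        """Draw a replay mini-batch (the 'slow' learner trains on these)."""
        return random.sample(self.data, min(k, len(self.data)))

buf = ReservoirBuffer(capacity=50)
for x in range(1000):
    buf.add(x)
```

The slow, neocortex-like learner then interleaves `buf.sample(k)` with new data, which is the essence of replay-based consolidation.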

🧩 Memory Systems

  • Working Memory: Temporary maintenance and manipulation buffers
  • Episodic Memory: Event-specific representations and context
  • Semantic Memory: Abstracted knowledge and generalization

🌙 Sleep and Consolidation

  • Active Forgetting: Selective removal of irrelevant information
  • Memory Consolidation: Stabilization and integration processes
  • Replay Mechanisms: Reactivation of past experiences during rest

🏗️ Framework Components

📊 Realistic Benchmarks

Moving beyond simplified datasets to more challenging, real-world scenarios:

| Dataset | Type | Characteristics | Realistic Aspects |
|---|---|---|---|
| CORe50 | Object Recognition | 50 objects, 11 sessions | Temporal evolution, lighting changes |
| CLEAR | Image Classification | 10+ years of data | Natural temporal shifts, web-scale |
| DomainNet | Multi-domain | 6 domains, 345 classes | Domain adaptation, style shifts |
| CLAD | Autonomous Driving | Detection + classification | Real driving scenarios, weather |
| TRACE | Language Models | Dynamic text streams | Concept drift, temporal evolution |

Why avoid Permuted MNIST?

  • Tasks are artificially separable with predictable interference
  • Minimal catastrophic forgetting in basic MLPs
  • Lacks semantic relationships between tasks
  • Over 500 papers since 2020 used this problematic benchmark

🔧 Training Protocols

Traditional (Unrealistic):

  • Offline batch processing
  • Multiple epochs per task
  • Fixed task boundaries
  • i.i.d. data assumptions

Realistic:

  • Online streaming data
  • Single-pass learning
  • Task-free environments
  • Non-stationary distributions
  • Resource constraints
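The realistic protocol above can be sketched as a single-pass loop: each incoming example triggers exactly one update, optionally mixed with a few replayed examples, and no task identity is ever consulted. `update_step`, the buffer size, and the replay count are illustrative placeholders, and the FIFO buffer is the simplest possible choice.

```python
from collections import deque
import random

def train_online(update_step, stream, buffer_size=100, replay_k=4):
    """Task-free, single-pass protocol: every example is seen exactly once
    and triggers one parameter update, mixed with a small replay sample."""
    buffer = deque(maxlen=buffer_size)   # FIFO eviction; real methods often pick smarter
    for example in stream:
        replay = random.sample(list(buffer), min(replay_k, len(buffer)))
        update_step([example] + replay)  # one update per incoming example
        buffer.append(example)

# toy usage: record the batches a 20-example stream produces
calls = []
train_online(calls.append, [(i, i % 2) for i in range(20)], buffer_size=5, replay_k=2)
```

Contrast this with the traditional protocol, which would loop over a fixed task's data for multiple epochs before the next task is revealed.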

📏 Evaluation Metrics

Beyond accuracy, realistic evaluation requires:

Core Performance

  • Average Accuracy (AA): Overall performance across tasks
  • Backward Transfer (BWT): Forgetting measurement
  • Forward Transfer (FWT): Knowledge transfer to future tasks
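Given an accuracy matrix R, where R[i, j] is test accuracy on task j after training through task i, the three core metrics follow the standard GEM-style definitions; the numbers in the example below are made up for illustration, and `baseline` is the accuracy of an untrained model on each task.

```python
import numpy as np

def cl_metrics(R, baseline=None):
    """AA, BWT, FWT from accuracy matrix R (R[i, j] = accuracy on
    task j after training on tasks 0..i)."""
    T = R.shape[0]
    avg_acc = R[-1].mean()                                     # AA: final row
    bwt = np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])  # forgetting if < 0
    fwt = None
    if baseline is not None:
        fwt = np.mean([R[j - 1, j] - baseline[j] for j in range(1, T)])
    return avg_acc, bwt, fwt

R = np.array([[0.9, 0.2, 0.1],
              [0.7, 0.9, 0.3],
              [0.6, 0.8, 0.9]])
aa, bwt, fwt = cl_metrics(R, baseline=np.array([0.1, 0.1, 0.1]))
# aa ≈ 0.767, bwt = -0.2 (net forgetting), fwt = 0.15
```

Negative BWT quantifies catastrophic forgetting; positive FWT indicates that earlier tasks helped on tasks not yet trained on.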

Efficiency Metrics

  • Memory Footprint: Storage requirements
  • Computational Cost: FLOPs and energy consumption
  • Adaptation Speed: Time to learn new tasks
  • Scalability: Performance with increasing complexity

📄 Papers by Category

🔄 Replay-Based Methods

Methods that store and revisit past experiences:

2023

  • [TMLR] SIESTA: Efficient Online Continual Learning with Sleep. [PDF] [CODE]

2022

  • [Nature Comms] Sleep-like unsupervised replay reduces catastrophic forgetting in artificial neural networks. [PDF] [CODE]
  • [CVPR] GCR: Gradient coreset based replay buffer selection for continual learning. [PDF]
  • [Progress in Neurobiology] The hippocampal formation as a hierarchical generative model supporting generative replay and continual learning. [PDF]

2021

  • [Neural Computation] Replay in deep learning: Current approaches and missing biological elements. [PDF]

2020

  • [Nature Communications] Brain-inspired replay for continual learning with artificial neural networks. [PDF] [CODE]
  • [ECCV] Dark experience for general continual learning: A strong, simple baseline. [PDF] [CODE]

2019

  • [NeurIPS] Experience replay for continual learning. [PDF]
  • [ICLR] Learning to learn without forgetting by maximizing transfer and minimizing interference. [PDF] [CODE]
  • [ICLR] Efficient lifelong learning with A-GEM. [PDF] [CODE]

2017

  • [NeurIPS] Gradient episodic memory for continual learning. [PDF] [CODE]
  • [NeurIPS] Continual Learning with Deep Generative Replay. [PDF]
  • [CVPR] iCaRL: Incremental classifier and representation learning. [PDF] [CODE]

⚙️ Regularization-Based Methods

Approaches using constraints to prevent forgetting:

2025

  • [Nature Communications] Hybrid neural networks for continual learning inspired by corticohippocampal circuits. [PDF] [CODE]
  • [npj Unconventional Computing] Continuous metaplastic training on brain signals. [PDF] [CODE]

2024

  • [ICLR] Meta continual learning revisited: Implicitly enhancing online hessian approximation via variance reduction. [PDF] [CODE]

2023

  • [Biological Cybernetics] Bio-inspired, task-free continual learning through activity regularization. [PDF] [CODE]
  • [Neural Networks] A domain-agnostic approach for characterization of lifelong learning systems. [PDF]
  • [ICLR] Continual evaluation for lifelong learning: Identifying the stability gap. [PDF] [CODE]

2022

  • [Trends in Neurosciences] Contributions by metaplasticity to solving the catastrophic forgetting problem. [PDF]
  • [WACV] Online continual learning via candidates voting. [PDF]
  • [PMLR] Online continual learning through mutual information maximization. [PDF] [CODE]
  • [Neurocomputing] Online continual learning in image classification: An empirical survey. [PDF] [CODE]

2021

  • [Nature Communications] Synaptic metaplasticity in binarized neural networks. [PDF] [CODE]
  • [IEEE TNNLS] Triple-memory networks: A brain-inspired method for continual learning. [PDF]
  • [IEEE TPAMI] A continual learning survey: Defying forgetting in classification tasks. [PDF] [CODE]

2019

  • [NeurIPS] Meta-learning representations for continual learning. [PDF] [CODE]
  • [ICCV] Overcoming catastrophic forgetting with unlabeled data in the wild. [PDF] [CODE]

2018

  • [PMLR] Progress & compress: A scalable framework for continual learning. [PDF]
  • [ECCV] Memory Aware Synapses: Learning what (not) to forget. [PDF]

Classic Papers

  • [PNAS 2017] Overcoming catastrophic forgetting in neural networks. [PDF]
  • [ICML 2017] Continual learning through synaptic intelligence. [PDF]
  • [TPAMI 2017] Learning without forgetting. [PDF]

🏗️ Architectural Methods

Approaches that modify network structure:

2024

  • [CVPR] Continual-Zoo: Leveraging zoo models for continual classification of medical images. [PDF]
  • [TPAMI] Continual Learning: Forget-free Winning Subnetworks for Video Representations. [PDF]
  • [IJCAI] Continual learning with pre-trained models: A survey. [PDF] [CODE]

2023

  • [NeurIPS] RanPAC: Random projections and pre-trained models for continual learning. [PDF] [CODE]
  • [ICCV] SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model. [PDF] [CODE]

2022

  • [ICML] Forget-free continual learning with winning subnetworks. [PDF] [CODE]
  • [ICML] VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty. [PDF]
  • [arXiv] A simple baseline that questions the use of pretrained-models in continual learning. [PDF] [CODE]

2018

  • [ICLR] Lifelong Learning with Dynamically Expandable Networks. [PDF]
  • [CVPR] PackNet: Adding multiple tasks to a single network by iterative pruning. [PDF]
  • [ECCV] Piggyback: Adapting a single network to multiple tasks by learning to mask weights. [PDF]

2016

  • [arXiv] Progressive neural networks. [PDF]

🧠 Neuroscience-Inspired Methods

Approaches directly inspired by biological mechanisms:

2025

  • [ISCAS] A quantitative analysis of catastrophic forgetting in quantized spiking neural networks. [PDF] [CODE]

2024

  • [AAAI] Efficient spiking neural networks with sparse selective activation for continual learning. [PDF]
  • [Nature Machine Intelligence] A collective AI via lifelong learning and sharing at the edge. [PDF]
  • [PNAS Nexus] Neuromorphic neuromodulation: Towards the next generation of closed-loop neurostimulation. [PDF]
  • [arXiv] Future-Guided Learning: A Predictive Approach To Enhance Time-Series Forecasting. [PDF] [CODE]

2023

  • [Neural Computation] Reducing catastrophic forgetting with associative learning: A lesson from fruit flies. [PDF]
  • [Nature Machine Intelligence] Incorporating neuro-inspired adaptability for continual learning in artificial intelligence. [PDF]
  • [Neurocomputing] Spiking neural predictive coding for continually learning from data streams. [PDF]
  • [IJCAI] Enhancing efficient continual learning with dynamic structure development of spiking neural networks. [PDF] [CODE]
  • [ICLR] Sparse distributed memory is a continual learner. [PDF] [CODE]
  • [bioRxiv] Informing generative replay for continual learning with long-term memory formation in the fruit fly. [PDF]

2022

  • [Frontiers in Computational Neuroscience] Bayesian continual learning via spiking neural networks. [PDF] [CODE]

2021

  • [Nature Communications] Synaptic metaplasticity in binarized neural networks. [PDF] [CODE]
  • [IEEE TNNLS] Triple-memory networks: A brain-inspired method for continual learning. [PDF]
  • [ISCAS] MetaplasticNet: Architecture with probabilistic metaplastic synapses for continual learning. [PDF]
  • [arXiv] Algorithmic insights on continual learning from fruit flies. [PDF]

2020

  • [Nature Communications] Brain-inspired replay for continual learning with artificial neural networks. [PDF] [CODE]
  • [Frontiers in Neuroscience] Controlled forgetting: Targeted stimulation and dopaminergic plasticity modulation for unsupervised lifelong learning in spiking neural networks. [PDF]

🔄 Prompt-Based & Pre-trained Methods

Modern approaches using prompts and foundation models:

2024

  • [CVPR] Interactive continual learning: Fast and slow thinking. [PDF]
  • [WACV] Plasticity-optimized complementary networks for unsupervised continual learning. [PDF] [CODE]

2023

  • [CVPR] CODA-Prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning. [PDF] [CODE]
  • [ICLR] Progressive prompts: Continual learning for language models. [PDF]
  • [TPAMI] Continual learning, fast and slow. [PDF] [CODE]

2022

  • [CVPR] Learning to prompt for continual learning. [PDF] [CODE]
  • [ECCV] DualPrompt: Complementary prompting for rehearsal-free continual learning. [PDF]
  • [ICLR] Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System. [PDF] [CODE]

2021

  • [NeurIPS] DualNet: Continual learning, fast and slow. [PDF] [CODE]

📚 Survey & Review Papers

2025

  • [Trends in Cognitive Sciences] Memory updating and the structure of event representations. [PDF]

2024

  • [Psychonomic Bulletin & Review] Surprise!—Clarifying the link between insight and prediction error. [PDF]

2023

  • [arXiv] Continual learning: Applications and the road forward. [PDF]
  • [Scientific Data] MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification. [PDF]

2022

  • [Nature Machine Intelligence] Three types of incremental learning. [PDF]
  • [JAIR] Towards continual reinforcement learning: A review and perspectives. [PDF]
  • [arXiv] Lifelong learning metrics. [PDF]

2020

  • [Trends in Cognitive Sciences] Embracing change: Continual learning in deep neural networks. [PDF]

2019

  • [Neural Networks] Continual lifelong learning with neural networks: A review. [PDF]

🗂️ Datasets & Benchmarks

Realistic Benchmarks

| Dataset | Task Type | CL Scenario | Scale | Realistic Aspects | Link |
|---|---|---|---|---|---|
| CORe50 | Object Recognition | CIL/DIL | 164K images | Temporal sessions, lighting | [GitHub] |
| CLEAR | Classification | DIL | 12M+ images | 10+ years evolution | [Homepage] |
| DomainNet | Multi-domain | DIL | 600K images | 6 domains, style shifts | [Link] |
| CLAD | Autonomous Driving (OD) | CIL/DIL | Variable | Weather, lighting changes | [Paper] |
| TRACE | Language | TIL | Large | Temporal concept drift | [GitHub] |

Medical & Specialized

| Dataset | Domain | CL Scenario | Scale | Realistic Aspects | Link |
|---|---|---|---|---|---|
| MedMNIST v2 | Medical Imaging | CIL/DIL/TIL | 700K+ images | Multi-modal medical data | [Homepage] |
| Cityscapes | Urban Scenes | CIL/DIL | 25K images | Semantic segmentation | [Link] |

⚠️ Problematic Benchmarks (Avoid)

  • Permuted MNIST: Artificially easy, misleading results
  • Split MNIST: Over-simplified, unrealistic task boundaries
  • Rotated MNIST: Lacks semantic complexity

🤝 How to Contribute

We welcome contributions from the research community! Here's how you can help:

Adding Papers

  1. Fork this repository
  2. Add your paper to the appropriate category in README.md
  3. Follow the format: `**[Venue Year]** Title. [[PDF]](link) [[CODE]](link)`
  4. Highlight realistic aspects in your description
  5. Submit a pull request

Paper Categories

  • Replay-Based Methods: Memory storage and rehearsal approaches
  • Regularization-Based Methods: Constraint-based forgetting prevention
  • Architectural Methods: Structure modification approaches
  • Neuroscience-Inspired Methods: Biologically-motivated techniques
  • Prompt-Based & Pre-trained Methods: Modern foundation model approaches

Adding Datasets/Benchmarks

  1. Add to the appropriate datasets table
  2. Include realistic aspects that make it suitable for deployment
  3. Provide official links and statistics

Quality Guidelines

  • Relevance: Focus on realistic, deployable continual learning
  • Novelty: Avoid duplicate entries
  • Quality: Peer-reviewed or high-quality preprints preferred
  • Completeness: Include PDF and code links when available

Suggesting New Categories

  1. Open an issue to discuss new taxonomy categories
  2. Provide justification based on the realistic CL framework
  3. Suggest papers that would fit the new category

🌟 Related Resources

🛠️ Maintenance

Maintainers: Assel Kembay, Isabelle Aguilar
Contact: iagu0459@uni.sydney.edu.au
Update Schedule: Monthly updates with new papers and resources
Last Updated: 2025-10-06


Star this repo to show your support for realistic continual learning research!

📜 License

This repository is licensed under MIT License.

Acknowledgments

This repository builds upon our survey "A Neuroscience-Inspired Framework for Realistic Continual Learning: A Review" and the incredible efforts of the continual learning research community. We thank all researchers working toward making continual learning practical and deployable in real-world scenarios.

Special thanks to the neuroscience community for providing foundational insights that inspire more robust artificial learning systems.
