Abstract:
Continual learning aims to enable models to acquire new knowledge over time while preserving previously learned information, thereby balancing plasticity and stability. Traditional neural networks trained with backpropagation often suffer from catastrophic forgetting under non-stationary data streams. Recently, large models have exhibited a form of "soft continual learning" through retrieval augmentation, parameter-efficient tuning, mixture-of-experts architectures, and extended-context mechanisms. Despite their strengths in in-context learning and external memory access, these approaches still face challenges such as high computational cost and parameter interference during continual updates. Meanwhile, brain-inspired continual learning methods have gained increasing attention by drawing on local synaptic plasticity, functionally specialized brain regions, and hierarchical memory systems to support long-term adaptive learning. This paper reviews three key aspects: the causes and evaluation of catastrophic forgetting, along with classical and modern continual learning methods; the characteristics and neural mechanisms of human continual learning, including complementary learning systems, memory consolidation, and replay; and open challenges, with recommendations for future evaluation and research directions.