
Progress and Trends of Brain-Inspired Continual Learning

  • Abstract: Continual learning seeks to enable artificial intelligence systems to keep acquiring new tasks and knowledge while effectively avoiding catastrophic forgetting of previously learned content, thereby balancing plasticity and stability. At present, neural networks based on backpropagation typically rely on global parameter updates and are prone to catastrophic forgetting on non-stationary data streams. In recent years, large models have exhibited a form of "soft continual learning" through mechanisms such as retrieval augmentation, parameter-efficient fine-tuning, mixture-of-experts systems, and ultra-long contexts, showing advantages in in-context learning and external memory retrieval that differ from those of traditional continual learning methods. However, these approaches still face problems such as the heavy resource consumption of ultra-long contexts and the conflict and forgetting of internal pretrained parameters during continual training. Meanwhile, brain-inspired continual learning methods have attracted wide attention; they draw on local synaptic learning mechanisms, the functionally specialized regions and nuclei of the human brain, and the synergy among hierarchical memory systems (e.g., short-term/long-term memory, episodic/semantic memory), aiming to achieve human-like long-term, continual interaction and learning with the real world. This paper is organized around three aspects: 1) the root causes of catastrophic forgetting, quantitative evaluation metrics, classical continual learning methods, and current progress of large models in continual learning; 2) the behavioral characteristics of human continual learning, the neural mechanisms of continual learning in the human brain, and applications of brain-inspired algorithms to continual learning, including hippocampal-neocortical interaction between short- and long-term memory, memory consolidation, and replay mechanisms grounded in complementary learning systems theory; 3) open challenges facing the field and recommendations for future evaluation frameworks.
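As a brief, illustrative note on the quantitative evaluation metrics mentioned in aspect 1): one widely used convention in the continual learning literature (e.g., the formulation popularized by Lopez-Paz and Ranzato, 2017) writes a_{i,j} for the test accuracy on task j after the model has been trained sequentially through task i, for tasks 1, ..., T. Average accuracy and backward transfer are then

    \mathrm{ACC} = \frac{1}{T} \sum_{j=1}^{T} a_{T,j},
    \qquad
    \mathrm{BWT} = \frac{1}{T-1} \sum_{j=1}^{T-1} \left( a_{T,j} - a_{j,j} \right),

where a strongly negative BWT quantifies catastrophic forgetting: accuracy on early tasks has dropped relative to when those tasks were first learned.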

     

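To make the replay (rehearsal) mechanism discussed in aspect 2) concrete, the sketch below shows a minimal episodic memory filled by reservoir sampling, with replayed old examples interleaved into each new batch so that a single gradient step sees both old and new data. This is a generic illustration, not the paper's method; the names ReplayBuffer and rehearsal_batch are hypothetical.

    import random

    class ReplayBuffer:
        """Fixed-size episodic memory filled by reservoir sampling,
        so every example seen so far has an equal chance of being
        retained regardless of how long the data stream is."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.data = []       # stored (x, y) examples
            self.num_seen = 0    # total stream examples observed

        def add(self, example):
            self.num_seen += 1
            if len(self.data) < self.capacity:
                self.data.append(example)
            else:
                # Keep the new example with probability capacity / num_seen.
                slot = random.randrange(self.num_seen)
                if slot < self.capacity:
                    self.data[slot] = example

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))

    def rehearsal_batch(current_batch, buffer, replay_size):
        # Mix the current task's batch with replayed old examples,
        # loosely analogous to hippocampal replay consolidating
        # memories into the neocortex, then store the new examples.
        mixed = list(current_batch) + buffer.sample(replay_size)
        for example in current_batch:
            buffer.add(example)
        return mixed

In practice the mixed batch would feed one optimizer step of whatever model is being trained continually; rehearsal over even a small memory is a common baseline for mitigating forgetting compared with training on the new task alone.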

     
