Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models

arXiv:2312.08888 [cs.LG], doi: 10.48550/arXiv.2312.08888, December 2023
We address the Continual Learning (CL) problem, in which a model must learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted focus from the initial learning-from-scratch paradigm to the use of generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models focus only on separating class-specific features from the final representation layer and neglect the power of intermediate representations, which capture low- and mid-level features that are naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require any replay buffer, and works out of the box with any foundation model. LayUP improves over the state of the art on four of the seven class-incremental learning settings at a considerably reduced memory and computational footprint compared with the next best baseline. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes far beyond their final embeddings.
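The abstract does not spell out the mechanics, but a common way to build a class-prototype classifier from second-order feature statistics (as in ridge-regression-style prototype readouts) is to accumulate a Gram matrix and per-class feature sums online, so no replay buffer is needed. The sketch below is an illustrative assumption, not the paper's implementation: it uses synthetic vectors standing in for concatenated intermediate-layer features of a frozen backbone, and all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: features from several intermediate layers of a frozen
# pre-trained backbone, concatenated per sample (here replaced by synthetic
# class-conditional Gaussians; dimensions are illustrative only).
n_classes, n_per_class, d_layer, n_layers = 3, 40, 16, 2
d = d_layer * n_layers

means = rng.normal(size=(n_classes, d))
X = np.concatenate(
    [means[c] + 0.3 * rng.normal(size=(n_per_class, d)) for c in range(n_classes)]
)
y = np.repeat(np.arange(n_classes), n_per_class)

# Second-order statistics that can be accumulated task by task without replay:
# a Gram matrix G and per-class feature sums C (both are additive over tasks).
G = X.T @ X                                                  # (d, d)
C = np.stack([X[y == c].sum(axis=0) for c in range(n_classes)])  # (n_classes, d)

# Ridge-regularized prototype readout: W = (G + lam * I)^{-1} C^T.
# Scores X @ W are equivalent to ridge regression onto one-hot class targets.
lam = 1.0
W = np.linalg.solve(G + lam * np.eye(d), C.T)                # (d, n_classes)

pred = (X @ W).argmax(axis=1)
acc = float((pred == y).mean())
print(f"training accuracy: {acc:.2f}")
```

Because `G` and `C` are simple sums over samples, new tasks update them incrementally, which matches the rehearsal-free setting described in the abstract; the specific regularization and layer-selection choices here are placeholders.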


@Article{ALLW23,
  author  = {Ahrens, Kyra and Lehmann, Hans Hergen and Lee, Jae Hee and Wermter, Stefan},
  title   = {Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models},
  journal = {arXiv:2312.08888 [cs.LG]},
  year    = {2023},
  month   = {Dec},
  doi     = {10.48550/arXiv.2312.08888},
}