omnia-postech/REP

REP: Resource-Efficient Prompting for Rehearsal-Free Continual Learning

Abstract

Recent rehearsal-free continual learning (CL) methods guided by prompts achieve strong performance on vision tasks with non-stationary data but remain resource-intensive, hindering real-world edge deployment. We introduce resource-efficient prompting (REP), which improves the computational and memory efficiency of prompt-based rehearsal-free CL methods while minimizing accuracy trade-offs. Our approach employs swift prompt selection to refine input data using a carefully provisioned model and introduces adaptive token merging (AToM) and adaptive layer dropping (ALD) for efficient prompt updates. AToM and ALD selectively skip data and model layers while preserving task-specific features during the learning of new tasks. Extensive experiments on multiple image classification datasets demonstrate REP's superior resource efficiency over state-of-the-art rehearsal-free CL methods.
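To give a flavor of the token-merging idea that AToM builds on (see ToMe under References), here is a minimal illustrative sketch of adjacent-pair token merging in NumPy. This is not the paper's AToM implementation; the function name, the greedy pairing strategy, and the averaging rule are our own simplifying assumptions.

```python
import numpy as np

def merge_adjacent_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Reduce an (N, D) token sequence by averaging the r most similar
    non-overlapping adjacent pairs. Illustrative sketch only."""
    if r <= 0:
        return tokens
    norms = np.linalg.norm(tokens, axis=1, keepdims=True)
    normed = tokens / np.maximum(norms, 1e-8)
    # Cosine similarity of each token with its right-hand neighbor: (N-1,)
    sims = np.sum(normed[:-1] * normed[1:], axis=1)
    used = np.zeros(len(tokens), dtype=bool)
    pair_starts = set()
    # Greedily pick the r most similar pairs that do not share a token
    for i in np.argsort(-sims):
        if len(pair_starts) == r:
            break
        if used[i] or used[i + 1]:
            continue
        used[i] = used[i + 1] = True
        pair_starts.add(int(i))
    out, i = [], 0
    while i < len(tokens):
        if i in pair_starts:
            out.append((tokens[i] + tokens[i + 1]) / 2)  # merge the pair
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return np.stack(out)
```

Each merge removes one token, so merging r pairs shrinks an N-token sequence to N - r tokens; in a prompt-based ViT pipeline this directly reduces attention cost in later layers.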

Environment

  • Ubuntu 18.04 LTS
  • NVIDIA RTX 3090
  • Python 3.8.8
  • pytorch==1.12.1 (CUDA 11.3)
  • torchvision==0.13.1
  • timm==0.6.7
  • pillow==9.2.0
  • matplotlib==3.5.3
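One possible way to set up an environment matching the versions above, assuming conda is installed and a CUDA 11.3-capable driver is present (the environment name `rep` is our own choice):

```shell
# Create and activate a Python 3.8 environment (name is arbitrary)
conda create -n rep python=3.8.8 -y
conda activate rep

# CUDA 11.3 builds of PyTorch/torchvision come from the cu113 wheel index
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 \
    --extra-index-url https://download.pytorch.org/whl/cu113

# Remaining dependencies from the list above
pip install timm==0.6.7 pillow==9.2.0 matplotlib==3.5.3
```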

References

  • l2p-pytorch
  • dualprompt-pytorch
  • HiDe-Prompt
  • ConvPrompt
  • ToMe

Citation

@inproceedings{jeon2025rep,
  title={REP: Resource-Efficient Prompting for Rehearsal-Free Continual Learning},
  author={Sungho Jeon and Xinyue Ma and Kwang In Kim and Myeongjae Jeon},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}
