Survey of Continual Learning in LLMs
This repository provides a comprehensive and continuously updated survey of Continual Learning for Large Language Models (CL-LLMs). It serves as a valuable resource for researchers and practitioners in NLP and machine learning, offering a structured overview of methods, applications, and challenges in adapting LLMs to evolving data streams without forgetting previously acquired knowledge.
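To make the "without forgetting" goal concrete, below is a minimal, hypothetical sketch of experience replay, one family of techniques the survey covers. The ReplayBuffer class, continual_update loop, and train_step callback are illustrative assumptions, not code from this repository or any linked paper.

```python
import random

class ReplayBuffer:
    """Fixed-size buffer that keeps a uniform sample of the
    data stream via reservoir sampling (hypothetical sketch)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.examples = []
        self.seen = 0  # total examples observed so far

    def add(self, example) -> None:
        self.seen += 1
        if len(self.examples) < self.capacity:
            self.examples.append(example)
        else:
            # Replace a stored example with probability capacity / seen,
            # so every example in the stream is equally likely to be kept.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.examples[idx] = example

    def sample(self, k: int):
        return random.sample(self.examples, min(k, len(self.examples)))


def continual_update(stream, buffer: ReplayBuffer, train_step, replay_k: int = 4):
    """Interleave each new example with a few replayed old ones,
    so updates on new data also rehearse earlier knowledge."""
    for example in stream:
        batch = [example] + buffer.sample(replay_k)
        train_step(batch)   # caller-supplied optimizer step (hypothetical)
        buffer.add(example)
```

Real CL-LLM methods go well beyond replay (regularization, architecture expansion, prompt-based approaches), but the sketch captures the core tension: each update must balance new data against rehearsal of the old.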
How It Works
The survey categorizes CL-LLM research into key areas: Continual Pre-Training (CPT), Domain-Adaptive Pre-Training (DAP), Continual Fine-Tuning (CFT), Continual Instruction Tuning (CIT), Continual Model Refinement (CMR), Continual Model Alignment (CMA), and Continual Multimodal LLMs (CMLLMs). It meticulously lists and links to relevant papers, code repositories, and benchmarks, providing a structured landscape of the field.
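For quick reference, the snippet below maps each abbreviation to its full category name. The CL_LLM_CATEGORIES dict is illustrative only and is not shipped with the repository; it can be handy when tagging or filtering the paper lists programmatically.

```python
# Hypothetical mapping of the survey's category abbreviations to full names.
CL_LLM_CATEGORIES = {
    "CPT": "Continual Pre-Training",
    "DAP": "Domain-Adaptive Pre-Training",
    "CFT": "Continual Fine-Tuning",
    "CIT": "Continual Instruction Tuning",
    "CMR": "Continual Model Refinement",
    "CMA": "Continual Model Alignment",
    "CMLLMs": "Continual Multimodal LLMs",
}

if __name__ == "__main__":
    for abbrev, name in CL_LLM_CATEGORIES.items():
        print(f"{abbrev}: {name}")
```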
Quick Start & Requirements
This repository is a survey and does not require installation or execution. It serves as a curated collection of research papers and resources.
Maintenance & Community
The project is actively maintained, with frequent updates logged in its "Update History" section. Contributions are welcome via pull requests or issues.
Licensing & Compatibility
The repository is a curated document collection rather than software, so no software license applies to it directly. The linked papers and code repositories each carry their own licenses, which should be checked before reuse.
Limitations & Caveats
As a survey, the content's accuracy and completeness depend on ongoing curation by the maintainers and the community. Because the CL-LLM field evolves rapidly, the survey is a snapshot in time and requires continuous updates to stay comprehensive.