Survey of forgetting in deep learning beyond continual learning
This repository is a curated collection of research papers on "Forgetting in Deep Learning," extending beyond the traditional focus on continual learning. It aims to provide a comprehensive overview for researchers and practitioners interested in understanding and addressing knowledge loss in various deep learning contexts, including foundation models, generative models, and federated learning. The project highlights that forgetting can be both detrimental and beneficial, offering a nuanced perspective on its role in AI.
How It Works
The repository organizes papers into categories based on the specific domain or problem setting where forgetting is observed. It covers areas like Continual Learning (task-aware, task-free, online, semi-supervised, few-shot, unsupervised), Foundation Models (fine-tuning, one-epoch pre-training), Domain Adaptation, Test-Time Adaptation, Meta-Learning, Generative Models, Reinforcement Learning, and Federated Learning. It also includes sections on beneficial forgetting, such as combating overfitting and machine unlearning.
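To make the central phenomenon concrete, here is a minimal, self-contained sketch of catastrophic forgetting, the detrimental form of forgetting that most of the surveyed continual-learning work targets. It is not taken from the repository or any of the listed papers; the synthetic tasks and function names are hypothetical illustrations. A logistic-regression model is trained on one task and then on a second, conflicting task, and its accuracy on the first task collapses.

```python
# Hypothetical illustration of catastrophic forgetting (not from the
# repository): sequential training on two synthetic binary tasks whose
# optimal decision boundaries conflict.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center_pos, center_neg, n=200):
    """Synthetic binary task: two Gaussian blobs in 2D."""
    X = np.vstack([rng.normal(center_pos, 0.5, (n, 2)),
                   rng.normal(center_neg, 0.5, (n, 2))])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def train(w, b, X, y, lr=0.1, epochs=100):
    """Full-batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

# Task A and Task B require nearly orthogonal separators, so fitting B
# overwrites the weights that solved A.
XA, yA = make_task(center_pos=(2, 2), center_neg=(-2, -2))
XB, yB = make_task(center_pos=(-2, 2), center_neg=(2, -2))

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
print("Task A accuracy after training on A:", accuracy(w, b, XA, yA))

w, b = train(w, b, XB, yB)
print("Task A accuracy after training on B:", accuracy(w, b, XA, yA))
print("Task B accuracy:", accuracy(w, b, XB, yB))
```

After the second training phase, accuracy on Task A drops to roughly chance level. The continual-learning categories above (task-aware, task-free, online, and so on) collect methods for preventing exactly this kind of loss, while the beneficial-forgetting sections collect work that induces it deliberately, as in machine unlearning.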
Maintenance & Community
The repository is maintained by EnnengYang and Zhenyi Wang, who provide contact emails for contributions.
Licensing & Compatibility
The repository itself is a collection of links to research papers and does not appear to declare a software license. Reuse of the referenced material is governed by the licenses of the individual linked papers and their publishers.
Limitations & Caveats
This repository is a curated list of papers and does not provide code, implementations, or benchmarks. Its primary function is as a literature review and resource guide.