Curated list of papers on ML membership inference attacks/defenses
This repository is a curated, chronologically sorted list of academic papers on membership inference attacks (MIAs) and defenses against them in machine learning. It serves as a comprehensive resource for researchers and practitioners investigating privacy vulnerabilities in ML models, particularly focusing on Large Language Models (LLMs) and generative models.
How It Works
The repository organizes papers by year and categorizes them into "Attack Papers" and "Defense Papers." Each entry records the paper's title, adversarial knowledge type (white-box or black-box), target model, venue, and links to the paper and, where available, its code. The list is updated regularly and aims to cover the latest research on MIAs.
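To make the entry schema concrete, here is a minimal sketch of how one such entry might be represented programmatically. The field names, class name, and example values are assumptions for illustration only; the repository itself stores entries as plain list items, not structured data.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structure mirroring the fields each list entry records:
# title, adversarial knowledge type, target model, venue, and links.
@dataclass
class PaperEntry:
    title: str
    adversarial_knowledge: str      # "white-box" or "black-box"
    target_model: str               # e.g. "LLM", "generative model"
    venue: str
    year: int
    paper_url: str
    code_url: Optional[str] = None  # not every paper has released code

# Illustrative placeholder entry (not a real paper from the list).
entries = [
    PaperEntry(
        title="Example Membership Inference Attack Paper",
        adversarial_knowledge="black-box",
        target_model="LLM",
        venue="Example Venue",
        year=2024,
        paper_url="https://example.org/paper",
    ),
]

# Filter the way a reader might browse the list: black-box attacks only.
black_box = [e for e in entries if e.adversarial_knowledge == "black-box"]
print(len(black_box))
```

Structuring entries this way would also make it easy to sort chronologically or group by venue, matching how the list itself is organized.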
Quick Start & Requirements
No installation or execution is required. This is a static list of research papers.
Maintenance & Community
The repository is maintained by HongshengHu. There are no explicit community links (e.g., Discord, Slack) or roadmap details provided.
Licensing & Compatibility
The repository itself does not contain code that would typically require licensing. It is a collection of links to external academic papers.
Limitations & Caveats
The repository is a curated list and does not provide tools or implementations for performing or defending against membership inference attacks. Users must follow the provided links to access the actual research papers and any associated code.