Curated list of attacks on large vision-language models (LVLMs)
This repository is a curated list of research papers focusing on attacks against Large Vision-Language Models (LVLMs). It serves as a comprehensive resource for researchers and practitioners interested in the security vulnerabilities and adversarial robustness of multimodal AI systems. The collection aims to track the latest advancements in LVLM attack methodologies, including adversarial attacks, prompt injection, data poisoning, and specialized attacks for LVLM applications.
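As background (not material from the repository itself), the "adversarial attack" category typically refers to small, optimized perturbations of the image input that steer a model's output. The sketch below is a minimal, illustrative FGSM-style example; the model, data, and epsilon value are placeholders, and a real LVLM attack would target the vision encoder or full multimodal pipeline of an actual model.

```python
# Minimal FGSM-style sketch of an image-space adversarial perturbation.
# All components here are stand-ins for demonstration purposes only.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, target, loss_fn, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image` via one FGSM step."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), target)
    loss.backward()
    # Move each pixel in the direction that increases the loss, then keep
    # the result inside the valid [0, 1] pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Placeholder classifier and random data; a real attack would use an
    # actual vision-language model and a task-specific loss.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    image = torch.rand(1, 3, 32, 32)
    target = torch.tensor([3])
    adv = fgsm_perturb(model, image, target, nn.CrossEntropyLoss())
    print("max perturbation:", (adv - image).abs().max().item())  # <= epsilon
```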
How It Works
The repository functions as a continuously updated bibliography. It organizes LVLM attacks into distinct categories and links each entry to the relevant research paper, often alongside an associated GitHub repository providing code or further details. The goal is to consolidate the rapidly evolving field of LVLM security research and make it easy for the community to navigate.
Quick Start & Requirements
This repository is a curated list of papers; there is nothing to install or run, and no specific software or hardware is required to browse it.
Maintenance & Community
The repository is maintained by liudaizong, who welcomes pointers to missing papers via email (dzliu@stu.pku.edu.cn). A citation entry for the accompanying survey paper is provided.
Licensing & Compatibility
The repository itself is a collection of links and does not declare a license of its own; the linked papers and code repositories each carry their own licenses.
Limitations & Caveats
This is a curated list and does not provide tools or implementations for performing the attacks. The focus is purely on the research literature. The rapid pace of research means the list may not be exhaustive at any given moment.