Awesome-MLLM-Hallucination by showlab

Curated list of resources for multimodal large language model hallucination

Created 1 year ago · 775 stars · Top 46.0% on sourcepulse

Project Summary

This repository serves as a comprehensive, curated collection of resources on hallucination in Multimodal Large Language Models (MLLMs), also known as Large Vision-Language Models (LVLMs). It targets researchers and practitioners by organizing papers, code, and datasets focused on analyzing, detecting, and mitigating visual hallucinations in MLLMs, aiming to improve the faithfulness and reliability of these models.

How It Works

The project categorizes resources into "Hallucination Survey," "Hallucination Evaluation & Analysis," and "Hallucination Mitigation." Papers are primarily listed by their contribution to new benchmarks/metrics or mitigation methods, ordered chronologically from newest to oldest within each category. This structure allows users to quickly identify the latest advancements and relevant techniques for addressing MLLM hallucinations.

Quick Start & Requirements

This is a curated list of research papers and code, not a runnable software package. No installation or execution commands are provided.

Highlighted Details

  • Features a recent, extensive survey paper (40 pages, 228 references) covering MLLM hallucination insights from 2024-2025.
  • Highlights emerging trends such as training-free mitigation, contrastive decoding, and RL-based methods.
  • Includes a wide array of techniques for mitigation, including visual prompting, RAG, rationale reasoning, and generative feedback.
  • Organizes over 200 papers related to MLLM hallucination evaluation and mitigation.

Maintenance & Community

This project is actively maintained and welcomes community contributions via pull requests for missing papers, new research, or corrections. Users can open issues or contact the maintainers directly via email.

Licensing & Compatibility

The repository is a curated list rather than software and does not declare a license of its own. The linked papers and code carry their own respective licenses.

Limitations & Caveats

As a curated list, this repository does not provide executable code or direct mitigation tools. Users must refer to individual linked papers for implementation details and potential usage. The rapidly evolving nature of MLLM research means the list requires continuous updates.

Health Check

  • Last commit: 3 days ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 3
  • Issues (30d): 1
  • Star History: 116 stars in the last 90 days

