Awesome-LVLM-Hallucination by NishilBalar

LVLM Hallucination Research Hub

Created 2 years ago
283 stars

Top 92.5% on SourcePulse

Project Summary

This repository addresses the critical issue of hallucinations in Large Vision-Language Models (LVLMs), which manifest as generated text containing information not present in the visual input. It serves as an up-to-date, curated collection of state-of-the-art research papers, code, and resources focused on detecting, evaluating, and mitigating these hallucinations. The primary benefit for researchers and engineers is a centralized, organized platform for accessing and understanding the rapidly evolving landscape of LVLM hallucination research.

How It Works

The project functions as a curated knowledge base, systematically listing and briefly describing research papers, benchmarks, and mitigation techniques related to LVLM hallucinations. This approach provides a structured overview of the field, enabling users to quickly identify relevant work, understand different approaches to hallucination evaluation (e.g., CHAIR, POPE, MME, HallusionBench), and explore various mitigation strategies. The advantage lies in its comprehensive aggregation of disparate research efforts into a single, accessible resource.
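To make the evaluation side concrete, the CHAIR metric mentioned above scores object hallucination by comparing the objects a caption mentions against the objects actually present in the image. A minimal sketch (the real CHAIR implementation matches mentions against MS-COCO categories via synonym lists; the simple set lookup here is an illustrative simplification):

```python
def chair(mentioned, ground_truth):
    """Compute CHAIR_i and CHAIR_s over a batch of captions.

    mentioned: per-caption lists of objects the caption mentions.
    ground_truth: per-caption lists of objects actually in the image.
    CHAIR_i: fraction of mentioned object instances that are hallucinated.
    CHAIR_s: fraction of captions with at least one hallucinated object.
    """
    total_mentions = 0
    hallucinated_mentions = 0
    captions_with_hallucination = 0
    for objs, truth in zip(mentioned, ground_truth):
        truth = set(truth)
        bad = [o for o in objs if o not in truth]  # objects not in the image
        total_mentions += len(objs)
        hallucinated_mentions += len(bad)
        if bad:
            captions_with_hallucination += 1
    chair_i = hallucinated_mentions / max(total_mentions, 1)
    chair_s = captions_with_hallucination / max(len(mentioned), 1)
    return chair_i, chair_s

# Toy example: the second caption mentions a "dog" absent from the image.
ci, cs = chair(
    mentioned=[["cat", "sofa"], ["person", "dog"]],
    ground_truth=[["cat", "sofa", "lamp"], ["person", "bicycle"]],
)
# ci = 1/4 = 0.25 (one of four mentions hallucinated), cs = 1/2 = 0.5
```

POPE, by contrast, reduces evaluation to yes/no questions about object existence and reports standard classification accuracy, which avoids the caption-parsing step entirely.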

Quick Start & Requirements

This repository is a curated list of research resources and does not contain executable code or a direct installation process. Therefore, a "Quick Start & Requirements" section is not applicable.

Highlighted Details

  • Comprehensive catalog of over 100 research papers, benchmarks, and mitigation techniques specifically targeting LVLM hallucinations.
  • Detailed listings of numerous evaluation benchmarks, including CHAIR, POPE, MME, HallusionBench, AMBER, and others, covering various hallucination types like object existence, attributes, and relationships.
  • Extensive coverage of mitigation strategies, ranging from novel training objectives (ObjMLM) and fine-tuning datasets (LRV-Instruction) to advanced decoding techniques (VCD, ICD, CGD) and RLHF-based approaches (LLaVA-RLHF).
  • Regularly updated with recent research, indicated by numerous entries marked "soon" for upcoming papers and ongoing work.
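To illustrate one of the decoding-side strategies listed above: Visual Contrastive Decoding (VCD) contrasts the model's next-token logits on the original image with its logits on a distorted (e.g. noised) copy, down-weighting tokens the model would emit regardless of the pixels. A minimal sketch over raw logit arrays (the alpha value and toy numbers are illustrative, not taken from the paper):

```python
import numpy as np

def vcd_logits(logits_clean, logits_distorted, alpha=1.0):
    """Contrastive adjustment: (1 + alpha) * clean - alpha * distorted.

    Tokens that remain likely even when the image is distorted are
    presumed to come from language priors and are suppressed; tokens
    grounded in the actual image are amplified.
    """
    return (1.0 + alpha) * np.asarray(logits_clean) - alpha * np.asarray(logits_distorted)

# Toy example with a 3-token vocabulary. Token 1 ("dog") stays likely
# under the distorted image, suggesting it is a prior, not a percept.
clean = np.array([2.0, 3.0, 0.5])
distorted = np.array([0.5, 3.0, 0.5])
adjusted = vcd_logits(clean, distorted, alpha=1.0)
# adjusted = [3.5, 3.0, 0.5]: token 0, which depends on the image, now wins.
```

Training-free methods like this slot into generation with no fine-tuning, at the cost of a second forward pass per decoding step.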

Maintenance & Community

The repository encourages community contributions through open issues for suggestions and aims to foster fruitful discussion. Specific links to community platforms (e.g., Discord, Slack) or details on core maintainers are not provided.

Licensing & Compatibility

No licensing information is specified in the provided README content. Therefore, compatibility for commercial use or closed-source linking cannot be determined.

Limitations & Caveats

As a curated list, this repository does not offer executable software or direct tools for hallucination mitigation. Its value is purely informational, requiring users to seek out and implement the referenced research independently. The presence of numerous "soon" entries suggests that the catalog is a work in progress and may not yet be exhaustive.

Health Check

  • Last Commit: 1 month ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 18 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Travis Fischer (founder of Agentic), and 1 more.

HaluEval by RUCAIBox

  • Top 1.1% on SourcePulse
  • 562 stars
  • Benchmark dataset for LLM hallucination evaluation
  • Created 2 years ago; updated 2 years ago