Hallucination detection resources for large language models
This repository is a curated list of papers focused on detecting and mitigating hallucinations in Large Language Models (LLMs). It serves researchers and practitioners aiming to improve the factual accuracy and trustworthiness of LLM outputs across various domains, including question answering, summarization, and vision-language tasks.
How It Works
The collection highlights diverse approaches to hallucination detection and mitigation. Methods range from analyzing semantic similarity in embedding space to leveraging internal model states, external knowledge bases, and fine-grained AI feedback. Some papers focus on detecting hallucinations before generation, while others propose post-generation correction or uncertainty quantification techniques.
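To make one of these families concrete, below is a minimal sketch of an embedding-based consistency check: an answer is compared against independently resampled answers to the same prompt, and low semantic agreement is treated as a hallucination signal. This is an illustrative example only, not the method of any specific paper in the list; the sentence-transformers library, the model name, and the example strings are assumptions.

```python
# Minimal sketch: embedding-based consistency check for hallucination detection.
# Idea: sample several answers to the same prompt; if a given answer disagrees
# semantically with the resampled answers, flag it for fact-checking.
# The embedding model and example texts below are illustrative assumptions.

from sentence_transformers import SentenceTransformer, util


def consistency_score(answer: str, sampled_answers: list[str],
                      model: SentenceTransformer) -> float:
    """Mean cosine similarity between `answer` and independently sampled answers."""
    emb_answer = model.encode(answer, convert_to_tensor=True)
    emb_samples = model.encode(sampled_answers, convert_to_tensor=True)
    sims = util.cos_sim(emb_answer, emb_samples)  # shape: (1, num_samples)
    return sims.mean().item()


if __name__ == "__main__":
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    answer = "The Eiffel Tower was completed in 1889."
    samples = [
        "Construction of the Eiffel Tower finished in 1889.",
        "The Eiffel Tower opened to the public in 1889.",
        "It was completed in 1889 for the World's Fair.",
    ]
    score = consistency_score(answer, samples, model)
    # Low agreement with resampled answers suggests the claim may be hallucinated.
    print(f"consistency = {score:.2f} (low scores warrant fact-checking)")
```

Other families surveyed in the list, such as probing internal model states or grounding against external knowledge bases, follow the same pattern of producing a per-claim score but draw their evidence from different sources.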
Quick Start & Requirements
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats