Survey paper list on LLM hallucination
This repository accompanies a comprehensive survey of hallucination in Large Language Models (LLMs), categorizing its causes, detection methods, and mitigation strategies. It is intended as a reference for researchers and practitioners who want to understand and address LLM hallucination.
How It Works
The survey systematically categorizes LLM hallucinations into factuality and faithfulness types. It then breaks down the causes into data, model training, and inference stages, providing a structured overview of the problem space. The repository also curates extensive lists of papers related to each category, including surveys, detection benchmarks, and mitigation techniques.
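The taxonomy above can be read as a small hierarchy. The following Python sketch is purely illustrative and not part of the repository: the hallucination types and cause stages follow the survey, but the dictionary layout, the example sub-causes, and the papers_for helper are hypothetical.

```python
# Illustrative only: a rough encoding of the survey's top-level taxonomy.
# Type and stage names follow the survey; the layout, example entries,
# and helper below are hypothetical and not part of the repository.
HALLUCINATION_TAXONOMY = {
    "types": {
        "factuality": "output conflicts with verifiable real-world facts",
        "faithfulness": "output conflicts with the provided input, context, or instructions",
    },
    "causes": {
        "data": ["flawed or biased training corpora", "knowledge gaps"],
        "training": ["pre-training objectives", "alignment / fine-tuning mismatches"],
        "inference": ["decoding randomness", "imperfect use of context"],
    },
    "other_axes": ["detection benchmarks", "mitigation strategies"],
}

def papers_for(stage: str) -> list[str]:
    """Hypothetical helper: list the example sub-causes curated for one stage."""
    return HALLUCINATION_TAXONOMY["causes"].get(stage, [])

if __name__ == "__main__":
    print(papers_for("inference"))
```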
Quick Start & Requirements
This repository is a curated list of research papers and does not require installation or execution. The primary resource is the survey paper itself, available on arXiv.
Maintenance & Community
The repository is maintained by LuckyyySTA together with co-authors from Harbin Institute of Technology and Huawei Inc. The first version of the survey paper was posted to arXiv in November 2023.
Licensing & Compatibility
The repository itself does not specify a license. The survey paper is distributed under arXiv's licensing terms.
Limitations & Caveats
As a survey, this repository reflects the state of the field as of its publication date; because LLM research evolves rapidly, newer papers and mitigation techniques may not yet be covered.