Survey of hallucination in LLMs
This repository serves as a comprehensive reading list and survey of research papers focused on hallucinations in Large Language Models (LLMs). It aims to provide researchers and practitioners with a structured overview of the problem, its various types, evaluation methods, sources, and mitigation strategies.
How It Works
The project categorizes LLM hallucinations into three main types: input-conflicting, context-conflicting, and fact-conflicting (illustrated in the sketch below). For each category it lists and links relevant research papers, covering evaluation benchmarks, likely sources of hallucination, and mitigation techniques applied during pretraining, supervised fine-tuning, RLHF, and inference.
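To make the taxonomy concrete, here is a minimal, purely illustrative sketch in Python. The prompts and outputs are invented for demonstration and are not drawn from the repository or any benchmark it lists; they simply show one hypothetical instance of each hallucination type.

```python
# Hypothetical examples of the three hallucination types described in the
# survey. All prompts/outputs below are invented for illustration only.

from dataclasses import dataclass


@dataclass
class HallucinationExample:
    category: str      # one of the survey's three types
    user_input: str    # prompt given to the model
    model_output: str  # hallucinated response
    note: str          # what the output conflicts with


EXAMPLES = [
    HallucinationExample(
        category="input-conflicting",
        user_input="Summarize: 'The meeting was moved from Monday to Friday.'",
        model_output="The meeting was moved from Monday to Wednesday.",
        note="Conflicts with the user's input (the source text says Friday).",
    ),
    HallucinationExample(
        category="context-conflicting",
        user_input="Tell me about the author's early career.",
        model_output="She studied physics in Berlin. ... He never studied physics.",
        note="Conflicts with the model's own earlier statements in the same output.",
    ),
    HallucinationExample(
        category="fact-conflicting",
        user_input="Who wrote 'Pride and Prejudice'?",
        model_output="'Pride and Prejudice' was written by Charlotte Bronte.",
        note="Conflicts with world knowledge (the author is Jane Austen).",
    ),
]

if __name__ == "__main__":
    for ex in EXAMPLES:
        print(f"[{ex.category}] {ex.note}")
```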
Quick Start & Requirements
This repository is a curated list of research papers; there is no code to install or run. The only requirement is internet access to follow the linked papers.
Maintenance & Community
The project is maintained by HillZhang1999. Contact is available via email for suggestions or contributions.
Licensing & Compatibility
The repository itself does not specify a license, but it links to numerous research papers, each with its own licensing and usage terms.
Limitations & Caveats
This is a curated list of research papers and does not provide code or tools for direct experimentation. Because LLM research evolves rapidly, new papers and findings may not be reflected immediately.