Survey paper on factuality in large language models
This repository serves as a comprehensive survey of factuality in Large Language Models (LLMs), detailing knowledge representation, retrieval augmentation, and domain-specific challenges. It is intended for researchers and practitioners in NLP and AI who need a structured overview of LLM factuality issues, existing solutions, and evaluation benchmarks.
How It Works
The survey categorizes factuality issues into model-level (e.g., knowledge deficit, reasoning errors) and retrieval-level causes (e.g., distraction, misinterpretation). It then explores various enhancement methods, including continual pre-training, supervised fine-tuning, and model editing, often supported by external knowledge sources. The paper also provides an extensive review of relevant datasets and evaluation metrics used to assess LLM factuality across different domains.
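The sketch below is a minimal, illustrative example of the retrieval-augmentation idea surveyed in the paper; it is not code from this repository, and the knowledge store, retriever, and prompt format are hypothetical placeholders. It shows how retrieved evidence can be prepended to a prompt so the model answers from provided context rather than from parametric memory alone.

```python
# Illustrative sketch of retrieval-augmented prompting for factuality.
# Not part of this repository; all names here are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str


# A toy in-memory knowledge store standing in for an external knowledge source.
KNOWLEDGE_STORE = [
    Document("Model-level causes", "Factual errors can stem from knowledge deficits and reasoning errors."),
    Document("Retrieval-level causes", "Retrieved evidence can distract the model or be misinterpreted."),
]


def retrieve(query: str, k: int = 2) -> list[Document]:
    """Naive keyword-overlap retriever; a real system would use sparse or dense search."""
    query_terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_STORE,
        key=lambda d: len(query_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved evidence so the model is asked to answer from the given context."""
    evidence = "\n".join(f"- {d.title}: {d.text}" for d in retrieve(question))
    return (
        "Answer using only the evidence below; reply 'unknown' if it is insufficient.\n"
        f"Evidence:\n{evidence}\n\n"
        f"Question: {question}\nAnswer:"
    )


if __name__ == "__main__":
    print(build_grounded_prompt("What are model-level causes of factual errors?"))
```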
Quick Start & Requirements
This repository is a curated collection of survey material; there is nothing to install or run. The primary resource is the linked arXiv paper, which contains the full content.
Maintenance & Community
The repository is associated with the survey paper "Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity." Contributions via pull requests or issues are welcome to improve the survey content.
Licensing & Compatibility
The repository itself does not specify a license. The survey paper is available on arXiv.
Limitations & Caveats
As a survey repository, it primarily aggregates and organizes information from other research papers. Later revisions of the arXiv paper may not be reflected here immediately.