LLM fact-checker using self-ask prompt chains
This repository demonstrates a method for fact-checking Large Language Model (LLM) outputs using a self-ask and prompt-chaining approach. It is intended for developers and researchers exploring LLM reliability and accuracy, offering a way to verify LLM-generated answers by breaking them down into verifiable assumptions.
How It Works
The process begins with the LLM generating an answer to the user's question. It then self-interrogates to identify the assumptions underlying that initial response and verifies each assumption in sequence. Finally, it produces a revised answer that incorporates the verified information, correcting any factual inaccuracies surfaced during verification.
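The flow can be sketched as a short prompt chain. The sketch below assumes a generic call_llm(prompt) helper standing in for whichever model API the project actually uses; the prompt wording and the fact_check function name are illustrative, not taken from the repository.

```python
# Minimal sketch of the self-ask / prompt-chaining flow described above.
# `call_llm` is a placeholder for an LLM completion API; the prompts are
# illustrative and do not reproduce the repository's actual prompt text.

def call_llm(prompt: str) -> str:
    """Send a prompt to an LLM and return its text response (stub)."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def fact_check(question: str) -> str:
    # Step 1: draft an initial answer.
    draft = call_llm(f"Answer the following question:\n{question}")

    # Step 2: self-ask -- have the model list the assumptions behind its answer.
    assumptions_text = call_llm(
        "List, one per line, the factual assumptions made in this answer:\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    assumptions = [a.strip() for a in assumptions_text.splitlines() if a.strip()]

    # Step 3: verify each assumption in its own prompt (the chain).
    verifications = [
        call_llm(f"Is the following statement true? Explain briefly.\n{a}")
        for a in assumptions
    ]

    # Step 4: regenerate the answer, conditioning on the verification results.
    findings = "\n".join(
        f"- {a}\n  Verification: {v}" for a, v in zip(assumptions, verifications)
    )
    return call_llm(
        f"Question: {question}\nOriginal answer: {draft}\n"
        f"Assumption checks:\n{findings}\n"
        "Rewrite the answer, correcting any assumptions that failed verification."
    )
```

Each step is a separate LLM call, so the quality of the final answer depends on the model reliably listing and checking its own assumptions at steps 2 and 3.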
Quick Start & Requirements
python3 fact_checker.py 'insert question here'
or use the provided fact_checker.ipynb notebook.
Highlighted Details
Maintenance & Community
This appears to be a proof-of-concept project by a single contributor, Jasper. No community channels or ongoing maintenance information are provided.
Licensing & Compatibility
The repository does not explicitly state a license.
Limitations & Caveats
The project is described as a "simple demonstration" and a "proof of concept," suggesting it may not be production-ready. The effectiveness is dependent on the LLM's ability to accurately identify and verify its own assumptions.