reasoning-on-graphs by RManLuo

Framework for faithful, interpretable LLM reasoning via knowledge graphs

created 1 year ago
433 stars

Top 69.8% on sourcepulse

Project Summary

This repository provides the official implementation of "Reasoning on Graphs" (RoG), a framework that couples Large Language Models (LLMs) with Knowledge Graphs (KGs) for faithful and interpretable reasoning. It targets researchers and practitioners in knowledge graph question answering (KGQA) and LLM reasoning, offering a planning-retrieval-reasoning pipeline that improves accuracy and explainability.

How It Works

RoG employs a three-stage process: planning, retrieval, and reasoning. First, it generates relation paths grounded by KGs as faithful plans. These plans are then used to retrieve valid reasoning paths from the KGs. Finally, LLMs utilize these KG-grounded paths to perform reasoning, producing interpretable results. This approach ensures that the LLM's reasoning process is directly tied to the structured knowledge within the KG, promoting faithfulness and interpretability.
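
A minimal sketch of that pipeline in Python, assuming hypothetical planner, KG, and reasoner interfaces (the names and methods below are illustrative, not the repository's actual API):

    # Illustrative pseudocode for RoG's planning-retrieval-reasoning loop.
    # The planner_llm/kg/reasoner_llm objects and their methods are hypothetical,
    # not the repository's actual API.
    def answer_question(question: str, kg, planner_llm, reasoner_llm) -> str:
        # 1. Planning: generate KG-groundable relation paths as faithful plans.
        relation_paths = planner_llm.generate_relation_paths(question)
        # 2. Retrieval: ground each plan in the KG to collect valid reasoning paths.
        reasoning_paths = [p for plan in relation_paths
                           for p in kg.retrieve_paths(plan)]
        # 3. Reasoning: answer the question conditioned on the KG-grounded paths.
        return reasoner_llm.answer(question, evidence=reasoning_paths)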

Quick Start & Requirements

  • Install: pip install -r requirements.txt
  • Prerequisites: GPU with at least 12GB memory for inference. Training requires 2x A100-80GB GPUs.
  • Data/Weights: Automatically downloaded from Hugging Face (see the sketch after this list).
  • Inference:
    • Planning: ./scripts/planning.sh
    • Reasoning: ./scripts/rog-reasoning.sh
  • Docs: Official Implementation
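
As a rough illustration of the automatic download step, the datasets and weights can be pulled with the standard Hugging Face libraries; the hub IDs below are assumptions and should be checked against the README:

    # Illustrative only: pulling RoG data and weights from Hugging Face.
    # The hub IDs below are assumptions; confirm them in the project README.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer

    webqsp = load_dataset("rmanluo/RoG-webqsp", split="test")      # assumed dataset ID
    tokenizer = AutoTokenizer.from_pretrained("rmanluo/RoG")       # assumed model ID
    model = AutoModelForCausalLM.from_pretrained("rmanluo/RoG")    # assumed model ID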

Highlighted Details

  • Integrates LLMs with KGs using a planning-retrieval-reasoning framework.
  • Supports plug-and-play reasoning with various LLMs (ChatGPT, Alpaca, Llama2, Flan-T5); see the prompt sketch after this list.
  • Provides interpretable reasoning examples and code.
  • Offers pre-trained weights and automatically downloads datasets (RoG-WebQSP, RoG-CWQ).
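
In practice, plug-and-play reasoning amounts to handing the retrieved, KG-grounded paths to any instruction-following LLM as context. A hypothetical prompt template (not the repository's exact wording) might look like:

    # Hypothetical prompt construction for plug-and-play reasoning; the template
    # wording is illustrative, not the repository's exact prompt.
    def build_reasoning_prompt(question: str, reasoning_paths: list[str]) -> str:
        paths = "\n".join(reasoning_paths)
        return (
            "Based on the reasoning paths, please answer the question.\n\n"
            f"Reasoning paths:\n{paths}\n\n"
            f"Question: {question}\nAnswer:"
        )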

Maintenance & Community

The project accompanies the ICLR 2024 paper "Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning," and the README links to the authors' further work on KG+LLM reasoning.

Licensing & Compatibility

The repository does not explicitly state a license in the README. Users should verify licensing for commercial or closed-source use.

Limitations & Caveats

Training RoG requires significant hardware resources (2x A100-80GB GPUs). The README does not detail specific LLM compatibility beyond those listed for plug-and-play inference.

Health Check

  • Last commit: 5 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0

Star History

24 stars in the last 90 days
