rllm by rllm-team

PyTorch library for relational table learning with LLMs

created 1 year ago
432 stars

Top 69.9% on sourcepulse

View on GitHub
Project Summary

This library provides a PyTorch framework for Relational Table Learning (RTL) with Large Language Models (LLMs). It targets researchers and practitioners working on graph neural networks (GNNs) and tabular data analysis, and enables modular construction and co-training of advanced models by breaking state-of-the-art GNNs, LLMs, and tabular neural networks (TNNs) down into standardized components.

How It Works

rLLM standardizes a range of GNN, LLM, and TNN architectures into modular building blocks that can be combined, aligned, and co-trained for relational table learning tasks, treating diverse graph structures as interconnected tables. This modular design makes it easier to experiment with and to develop novel RTL methods.
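To make the modular idea concrete, the sketch below shows the general pattern in plain PyTorch: a TNN-style table encoder turns table rows into embeddings, a GNN-style graph convolution propagates them along the links between rows, and both parts are co-trained end to end. All names and shapes here (TableEncoder, GraphConv, BridgeModel, the toy data) are assumptions made for this summary, not rLLM's actual API.

    # Illustrative sketch only: names below are hypothetical and do not
    # reflect rLLM's real API; they show the general RTL pattern of
    # co-training a tabular encoder with a graph convolution.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TableEncoder(nn.Module):
        """TNN-style encoder: maps raw table rows to node embeddings."""
        def __init__(self, num_cols: int, hidden: int):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(num_cols, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
            )

        def forward(self, rows: torch.Tensor) -> torch.Tensor:
            return self.mlp(rows)

    class GraphConv(nn.Module):
        """GNN-style layer: mean-aggregates neighbors via a dense adjacency matrix."""
        def __init__(self, hidden: int):
            super().__init__()
            self.lin = nn.Linear(hidden, hidden)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            return F.relu(self.lin(adj @ x / deg))

    class BridgeModel(nn.Module):
        """Table encoder + graph convolution + classifier, co-trained end to end."""
        def __init__(self, num_cols: int, hidden: int, num_classes: int):
            super().__init__()
            self.table = TableEncoder(num_cols, hidden)
            self.conv = GraphConv(hidden)
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, rows: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            return self.head(self.conv(self.table(rows), adj))

    # Toy data standing in for one table plus the links relating its rows.
    rows = torch.randn(100, 16)                   # 100 rows, 16 columns
    adj = (torch.rand(100, 100) < 0.05).float()   # sparse random links
    labels = torch.randint(0, 3, (100,))          # 3-class row labels

    model = BridgeModel(num_cols=16, hidden=32, num_classes=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):                            # a few co-training steps
        opt.zero_grad()
        loss = F.cross_entropy(model(rows, adj), labels)
        loss.backward()
        opt.step()

In rLLM itself, each of these stages would correspond to one of the standardized GNN, TNN, or LLM components that the library lets you swap and recombine.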

Quick Start & Requirements

  • Install via pip (a minimal post-install check is sketched after this list).
  • Requires PyTorch.
  • Example execution: cd ./examples && python bridge/bridge_tml1m.py
  • Official documentation available: rLLM Documentation
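A minimal post-install sanity check, assuming the package installs under the import name rllm (an assumption based on the project name, not confirmed by the README):

    # Sanity check: confirm PyTorch and the library import cleanly.
    # The import name "rllm" is assumed from the project name.
    import torch
    import rllm

    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    print("rllm loaded from:", rllm.__file__)  # assumes a regular (non-namespace) package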

Highlighted Details

  • Integrates with LangChain and Hugging Face Transformers for LLM-friendly applications.
  • Processes various graph types (social, citation, e-commerce) by representing them as linked tables.
  • Introduces three new relational table datasets for RTL model development.
  • Implements over 15 state-of-the-art GNN and TNN models, including OGC, ExcelFormer, TAPE, Label-Free-GNN, and Trompt.

Maintenance & Community

  • Maintained by students and faculty from Shanghai Jiao Tong University and Tsinghua University.
  • Supported by the CCF-Huawei Populus Grove Fund.
  • Featured in MIT Technology Review.
  • Course videos available on YouTube.

Licensing & Compatibility

  • License details are not explicitly stated in the README. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The project is currently at v0.1, indicating it is in an early stage of development. Features like large-scale RTL training and LLM prompt optimization are still on the roadmap.

Health Check

  • Last commit: 3 weeks ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 3
  • Issues (30d): 0

Star History

  • 5 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Omar Sanseviero (DevRel at Google DeepMind), and 1 more.

RL4LMs by allenai

2k stars
RL library to fine-tune language models to human preferences
created 3 years ago, updated 1 year ago