rlm by alexzhang13

Inference library for Recursive Language Models (RLMs)

Created 3 weeks ago

869 stars

Top 41.3% on SourcePulse

View on GitHub
Project Summary

Recursive Language Models (RLMs) address the challenge of near-infinite context lengths in Large Language Models (LLMs). This library provides an inference engine enabling LLMs to programmatically examine, decompose, and recursively call themselves over extended inputs. It targets researchers and developers seeking to overcome traditional context window limitations.

How It Works

RLMs replace standard LLM completion calls with a rlm.completion interface. The core innovation involves offloading context into a REPL (Read-Eval-Print Loop) environment, allowing the LLM to interact with and manipulate the context programmatically. This paradigm facilitates recursive self-calls, enabling the model to decompose complex tasks and operate on vastly larger inputs than conventional methods.
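The REPL-offloading idea can be illustrated with a toy sketch: the long context is held as a variable inside a Python environment, and the "model" inspects it by emitting small snippets of code instead of reading it whole. Here a scripted command list stands in for model output; none of these names come from the rlm API itself.

```python
# Toy illustration of REPL offloading: the context lives in an environment
# dict, and "model-written" code snippets are evaluated against it.

def run_repl_episode(context: str, commands: list[str]) -> list[str]:
    env = {"context": context}  # context offloaded into REPL state
    outputs = []
    for cmd in commands:  # each command plays the role of model-written code
        outputs.append(str(eval(cmd, env)))
    return outputs

# A scripted "trajectory": check the context size, then read one line of it,
# without ever placing the full context in a prompt.
outs = run_repl_episode(
    "error: disk full\n" + "ok\n" * 10_000,
    ["len(context)", "context.splitlines()[0]"],
)
print(outs)  # ['30017', 'error: disk full']
```

Evaluating model-emitted code this way is also exactly why the non-isolated local environment carries the security caveat noted under Limitations below.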

Quick Start & Requirements

Installation uses the uv package manager: curl -LsSf https://astral.sh/uv/install.sh | sh, followed by uv init && uv venv --python 3.12, then uv pip install -e . from the repository root. A quick test requires OPENAI_API_KEY and can be run via uv run examples/quickstart.py. Dependencies include Python 3.12 and, optionally, Docker. Links to the full paper, blog post, and documentation are provided.
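Collected into one sequence, the setup steps above look like this (the export line is an assumption about how the API key is supplied; the key value is a placeholder):

```shell
# Install the uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh

# Set up the project environment (Python 3.12) and install rlm in editable mode
uv init && uv venv --python 3.12
uv pip install -e .

# Provide an OpenAI key, then run the quickstart example
export OPENAI_API_KEY="sk-..."
uv run examples/quickstart.py
```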

Highlighted Details

  • Supports multiple REPL environments: local (default, non-isolated), docker (requires Docker), modal (requires Modal setup), and prime (beta, not yet implemented).
  • Integrates with major model providers including OpenAI, Anthropic, OpenRouter, Portkey, LiteLLM, and local models via vLLM.
  • Features a Node.js/shadcn/ui-based visualizer for inspecting RLM execution trajectories through .jsonl log files.
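As a sketch of what consuming those .jsonl trajectory logs involves, the snippet below parses a log line by line. The field names here are invented for illustration and may not match the actual rlm log schema.

```python
import io
import json

# Hypothetical two-event trajectory; real rlm logs may use different fields.
log = io.StringIO(
    '{"role": "user", "content": "summarize the report"}\n'
    '{"role": "assistant", "content": "offloading context to REPL"}\n'
)

# .jsonl means one JSON object per line, so parsing is a simple loop.
events = [json.loads(line) for line in log if line.strip()]
roles = [event["role"] for event in events]
print(roles)  # ['user', 'assistant']
```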

Maintenance & Community

Developed and maintained by authors from the MIT OASYS lab. Open-source contributions are actively welcomed.

Licensing & Compatibility

The provided README does not specify a software license. Users should verify licensing terms before adoption, especially concerning commercial use or integration with closed-source systems.

Limitations & Caveats

Prime Intellect Sandboxes are listed as a beta feature but are currently unimplemented. The default local REPL environment executes code within the host process, posing security risks for untrusted inputs and making it unsuitable for production environments.

Health Check
Last Commit

19 hours ago

Responsiveness

Inactive

Pull Requests (30d)
30
Issues (30d)
14
Star History
877 stars in the last 21 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Wing Lian (founder of Axolotl AI), and 3 more.

ROLL by alibaba

2.3%
3k
RL library for large language models
Created 7 months ago
Updated 21 hours ago