lambda-RLM by lambda-calculus-LLM

Typed functional runtime for long-context LLM reasoning

Created 1 month ago
272 stars

Top 94.7% on SourcePulse

Project Summary

λ-RLM: Typed Recursive Long-Context Reasoning for LLMs

This project introduces λ-RLM, a framework designed to enhance the ability of Large Language Models (LLMs) to reason over long contexts. It addresses the limitations of standard LLM inference and of existing Recursive Language Models (RLMs) by replacing free-form, potentially unreliable recursive code generation with a typed functional runtime grounded in the λ-calculus. This approach offers more predictable compute, stronger formal structure, and improved accuracy and latency on long-context reasoning tasks.

How It Works

λ-RLM tackles long reasoning problems by decomposing them into smaller, bounded leaf subproblems that are solved using the LLM. Intermediate results are then combined using a fixed library of symbolic functional operators, such as SPLIT, MAP, FILTER, REDUCE, CONCAT, and CROSS. This transforms recursive reasoning from an unconstrained agentic loop into a structured functional program with explicit control flow. By restricting neural inference to bounded leaf subproblems and employing deterministic recursive decomposition, λ-RLM provides formal guarantees, including termination and closed-form cost bounds, which are typically absent in standard RLMs that rely on REPL-style execution and on-the-fly code generation.
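As an illustration of this operator style (not the project's actual API — the operator names are borrowed from the list above, and `answer_leaf` is a hypothetical stand-in for a bounded LLM call on one chunk), a decomposition might look like:

```python
def SPLIT(text, n):
    """Deterministically split a long context into chunks of n words."""
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(0, len(words), n)]

def MAP(fn, items):
    """Apply a leaf solver to each bounded subproblem."""
    return [fn(item) for item in items]

def REDUCE(fn, items, initial):
    """Fold intermediate results together with a symbolic combiner."""
    result = initial
    for item in items:
        result = fn(result, item)
    return result

def CONCAT(a, b):
    """Symbolic combiner: concatenate two partial result lists."""
    return a + b

def answer_leaf(chunk):
    # Hypothetical stand-in for a bounded LLM call: here it just
    # "extracts" capitalized words as facts from one small chunk.
    return [w for w in chunk.split() if w.istitle()]

document = "Alice met Bob in Paris while Carol stayed home"
chunks = SPLIT(document, 4)           # deterministic decomposition
partials = MAP(answer_leaf, chunks)   # bounded neural leaf calls
facts = REDUCE(CONCAT, partials, [])  # symbolic recombination
print(facts)  # ['Alice', 'Bob', 'Paris', 'Carol']
```

Because control flow lives entirely in the fixed operators, the shape and cost of the computation are known before any leaf call runs, in contrast to an agentic loop that decides its next step on the fly.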

Quick Start & Requirements

  • Installation:
    conda create -n lambda-rlm python=3.11 -y
    conda activate lambda-rlm
    pip install -e .
    
  • Prerequisites: Python 3.11 and Conda. API keys for supported model providers (e.g., NVIDIA NIM, Together AI) are required and should be set as environment variables (e.g., export NVIDIA_API_KEY="nvapi-...").
  • Supported Datasets: sniah, oolong, browsecomp, codeqa.
  • Usage Example:
    import os
    from rlm import LambdaRLM
    document = "..." # Long document content
    prompt = f"Context:\n{document}\nQuestion: Summarize the main ideas.\nAnswer:"
    rlm = LambdaRLM(
        backend_kwargs={
            "model_name": "meta/llama-3.3-70b-instruct",
            "api_key": os.environ["NVIDIA_API_KEY"],
            "base_url": "https://integrate.api.nvidia.com/v1",
        }
    )
    result = rlm.completion(prompt)
    print(result.response)
    

Highlighted Details

  • Achieved 29 out of 36 wins over standard RLM in model-task comparisons.
  • Demonstrates up to +21.9 average accuracy points improvement across model tiers.
  • Offers up to 4.1× lower latency compared to standard RLM.
  • Provides formal guarantees including termination, closed-form cost bounds, and controlled accuracy scaling with recursion depth.
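For intuition on what a closed-form cost bound means here (the branching factor `b` and depth `d` are illustrative parameters, not figures reported by the project): a deterministic decomposition that splits each node into `b` subproblems down to depth `d` makes exactly b^d bounded leaf calls, so total compute is known before inference starts. A minimal sketch:

```python
def leaf_calls(b, d):
    """Leaf LLM calls for branching factor b and recursion depth d."""
    return b ** d

def total_nodes(b, d):
    """All nodes in the recursion tree: 1 + b + b**2 + ... + b**d."""
    return sum(b ** i for i in range(d + 1))

# e.g. splitting into 4 chunks per level, two levels deep:
print(leaf_calls(4, 2))   # 16 leaf calls
print(total_nodes(4, 2))  # 21 nodes (1 + 4 + 16)
```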

Maintenance & Community

The project utilizes components from an upstream Normal RLM repository (https://github.com/alexzhang13/rlm). Specific community channels or roadmap details for λ-RLM are not detailed in the provided information.

Licensing & Compatibility

The upstream Normal RLM components are licensed under the MIT License. The specific license for the λ-RLM implementation itself is not explicitly stated, which may require further clarification for commercial use or closed-source integration.

Limitations & Caveats

The README does not detail specific limitations such as alpha status or known bugs. However, the lack of an explicitly stated license for the core λ-RLM implementation could pose a caveat for adoption. While λ-RLM aims to improve predictability and formal structure over standard RLMs, the expressiveness and potential limitations of the λ-calculus approach for highly complex, emergent control flows are not elaborated upon.

Health Check

  • Last Commit: 4 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 200 stars in the last 30 days

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Sebastian Raschka (author of "Build a Large Language Model (From Scratch)"), and 11 more.
