lean-ctx by yvgude

LLM context optimizer for drastic token reduction

Created 2 weeks ago

535 stars

Top 59.1% on SourcePulse

View on GitHub
Project Summary

lean-ctx: Hybrid Context Optimizer for LLMs

This project addresses the significant cost and inefficiency of Large Language Model (LLM) token consumption by providing a hybrid optimization engine. It targets developers, researchers, and power users who interact with LLMs via command-line interfaces and integrated development environments. The primary benefit is a drastic reduction in LLM token usage, achieving up to 99% savings, thereby lowering operational costs and speeding up AI-assisted workflows.

How It Works

lean-ctx employs a multi-pronged strategy within a single, zero-dependency Rust binary. The Shell Hook transparently compresses CLI output using over 90 predefined patterns before it reaches an LLM. Complementing this is the MCP Server, which offers 21 specialized tools for intelligent context management, including cached file reads, adaptive mode selection, incremental deltas, and cross-session memory. These components are orchestrated by three core intelligence protocols: CEP (Cognitive Efficiency Protocol) for adaptive LLM communication optimization, CCP (Context Continuity Protocol) for persistent cross-session memory with LITM-aware ("lost in the middle") positioning, and TDD (Token Dense Dialect) for further compression via symbol shorthand and identifier mapping.
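To make the shell-hook idea concrete, here is a minimal sketch of pattern-based output compression. The patterns and replacement text are illustrative assumptions, not lean-ctx's actual 90+ patterns, and this is Python rather than the project's Rust:

```python
import re

# Illustrative patterns (assumed, not lean-ctx's actual rules): collapse
# repetitive build output and strip formatting noise before it reaches an LLM.
PATTERNS = [
    # Collapse consecutive cargo-style "Compiling ..." lines into one summary.
    (re.compile(r"(?:^\s*Compiling .+\n)+", re.M), "[compiled crates elided]\n"),
    # Strip ANSI color escape codes, which carry no information for an LLM.
    (re.compile(r"\x1b\[[0-9;]*m"), ""),
]

def compress(output: str) -> str:
    """Apply each pattern in order, as a shell hook might before forwarding."""
    for pattern, replacement in PATTERNS:
        output = pattern.sub(replacement, output)
    return output

raw = (
    "   Compiling serde v1.0.200\n"
    "   Compiling tokio v1.37.0\n"
    "   Compiling lean-ctx v0.1.0\n"
    "\x1b[32mFinished\x1b[0m release in 12.3s\n"
)
lean = compress(raw)
print(f"{len(raw)} chars -> {len(lean)} chars")
```

The real tool applies far richer rules per command, but the principle is the same: deterministic rewrites that preserve the information an LLM needs while discarding repetition and formatting.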

Quick Start & Requirements

  • Primary Install: curl -fsSL https://leanctx.com/install.sh | sh (universal), brew install lean-ctx (macOS/Linux), npm install -g lean-ctx-bin, or cargo install lean-ctx.
  • Setup: Run lean-ctx setup for automatic shell and editor configuration.
  • Prerequisites: Zero external dependencies for the core binary. Supports 14 languages via tree-sitter AST parsing; tree-sitter is an optional feature that can be disabled for a smaller binary.
  • Links: Website: leanctx.com, Docs: leanctx.com/docs/getting-started.
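Putting the documented commands together, a typical first run looks like the following (a sketch of the flow described above; output not verified here):

```shell
# Install the binary (universal script; brew, npm, or cargo also work per the docs)
curl -fsSL https://leanctx.com/install.sh | sh

# One-time automatic shell and editor configuration
lean-ctx setup

# Later: inspect savings, USD cost estimates, and trends in the terminal dashboard
lean-ctx gain
```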

Highlighted Details

  • Achieves 89-99% LLM token consumption reduction through combined shell hook and MCP server strategies.
  • Utilizes tree-sitter AST for accurate parsing and signature extraction across 14 programming languages.
  • Implements Cognitive Efficiency Protocol (CEP) for adaptive LLM communication, task classification, and quality scoring.
  • Features Context Continuity Protocol (CCP) for cross-session memory, persisting context and findings across conversations with LITM-aware positioning.
  • Offers Token Dense Dialect (TDD) for an additional 8-25% savings via symbol shorthand and identifier mapping.
  • Provides a visual terminal dashboard (lean-ctx gain) for real-time savings, USD cost estimates, and historical trends.
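The identifier-mapping idea behind TDD can be sketched as follows. The alias format, thresholds, and legend layout are assumptions for illustration, not lean-ctx's actual dialect:

```python
import re
from collections import Counter

def to_dense(text: str, min_len: int = 12, min_count: int = 3) -> str:
    """Replace long, frequently repeated identifiers with short aliases,
    prepending a legend so an LLM can map aliases back to the originals.
    Thresholds and the '$N' alias scheme are illustrative assumptions."""
    words = re.findall(r"\b[A-Za-z_][A-Za-z0-9_]*\b", text)
    counts = Counter(w for w in words if len(w) >= min_len)
    legend = {}
    for i, (word, n) in enumerate(counts.most_common()):
        if n < min_count:
            break  # most_common is sorted, so no later word qualifies
        alias = f"${i}"
        legend[word] = alias
        text = text.replace(word, alias)
    if not legend:
        return text
    header = " ".join(f"{alias}={word}" for word, alias in legend.items())
    return f"legend: {header}\n{text}"

sample = (
    "UserAuthenticationService handles login. "
    "UserAuthenticationService also refreshes tokens. "
    "Restart UserAuthenticationService nightly."
)
print(to_dense(sample))
```

Savings grow with how often long identifiers repeat, which is why the technique yields a range (the project reports an additional 8-25%) rather than a fixed figure.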

Maintenance & Community

The project maintains an active community presence via Discord. Contributions are welcomed via GitHub issues and pull requests.

Licensing & Compatibility

Licensed under the MIT License, permitting commercial use and integration into closed-source projects. The tool operates locally with zero network requests or telemetry.

Limitations & Caveats

Rust-compiled binaries, including lean-ctx, may occasionally trigger false positives from ML-based heuristic scanners on platforms like VirusTotal. A smaller binary can be built by disabling tree-sitter support using cargo install lean-ctx --no-default-features.

Health Check

  • Last Commit: 1 day ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 6
  • Issues (30d): 69
  • Star History: 546 stars in the last 19 days

Explore Similar Projects

Starred by Jeff Hammerbacher (Cofounder of Cloudera), Jason Knight (Director of AI Compilers at NVIDIA; Cofounder of OctoML), and 1 more.

blt by facebookresearch

Top 0.1% • 2k stars
Code for Byte Latent Transformer research paper
Created 1 year ago • Updated 5 months ago
Starred by Georgios Konstantopoulos (CTO, General Partner at Paradigm), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 5 more.

streaming-llm by mit-han-lab

Top 0.1% • 7k stars
Framework for efficient LLM streaming
Created 2 years ago • Updated 1 year ago
Starred by Wing Lian (Founder of Axolotl AI), Patrick von Platen (Author of Hugging Face Diffusers; Research Engineer at Mistral), and 2 more.

rtk by rtk-ai

Top 28.6% • 22k stars
CLI proxy for massive LLM token reduction
Created 2 months ago • Updated 21 hours ago