rtk by rtk-ai

CLI proxy for massive LLM token reduction

Created 1 month ago
1,363 stars

Top 29.1% on SourcePulse

Project Summary

This project provides a high-performance CLI proxy designed to drastically reduce Large Language Model (LLM) token consumption for developers. By filtering and compressing the output of common development commands before they are sent to an LLM, it offers significant cost savings and faster processing for tasks involving code analysis and generation. The primary benefit is a 60-90% reduction in token usage for typical developer workflows.

How It Works

RTK operates by intercepting and processing command-line outputs using techniques such as smart filtering to remove noise, grouping similar items, truncating redundant information, and deduplicating repeated lines. The recommended "hook-first" installation method integrates with Claude Code via a PreToolUse hook, transparently rewriting commands (e.g., git status to rtk git status) before execution. This ensures that LLMs receive highly condensed, relevant information without the user or the LLM needing to explicitly invoke RTK commands.
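The kind of condensing described above can be illustrated in a few lines of plain shell. This is a toy sketch of the deduplication idea only, not RTK's actual implementation: consecutive duplicate lines are collapsed and annotated with a repeat count, the way repetitive compiler or test noise gets condensed before reaching the LLM.

```shell
# Toy illustration (not RTK's code): collapse consecutive duplicate
# lines and annotate repeats, so three identical warnings become one
# line with a count.
printf 'warning: unused import\nwarning: unused import\nwarning: unused import\nok\n' \
  | uniq -c \
  | awk '{n=$1; $1=""; sub(/^ /,""); if (n>1) printf "%s (x%d)\n", $0, n; else print $0}'
# prints:
# warning: unused import (x3)
# ok
```

Four input lines become two output lines; RTK applies this family of transforms (plus filtering and truncation) transparently, which is where the bulk of the token savings comes from.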

Quick Start & Requirements

  • Primary Install: curl -fsSL https://raw.githubusercontent.com/rtk-ai/rtk/refs/heads/master/install.sh | sh (installs to ~/.local/bin).
  • Verification: After installing, run rtk --version and rtk gain to confirm you have this project ("Rust Token Killer") rather than the unrelated "Rust Type Kit" project, which shares the rtk binary name.
  • Prerequisites: A Rust toolchain is required for manual compilation (cargo install). The primary installation method has zero explicit dependencies.
  • Initialization: For Claude Code integration, run rtk init --global and follow prompts to patch ~/.claude/settings.json to register the RTK hook.
  • Links: Website: https://www.rtk-ai.app, GitHub: https://github.com/rtk-ai/rtk.
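For orientation, the hook that rtk init --global registers in ~/.claude/settings.json should look roughly like the sketch below. The outer structure follows Claude Code's documented PreToolUse hooks format; the command string is a placeholder, since the exact entry RTK writes is not shown in this summary. Inspect the patched file rather than copying this verbatim.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "<command registered by rtk init --global>"
          }
        ]
      }
    ]
  }
}
```

With a matcher of "Bash", the hook fires before every shell command Claude Code runs, which is what lets RTK rewrite e.g. git status to rtk git status transparently.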

Highlighted Details

  • Achieves 60-90% token savings on common developer commands like ls, git, grep, and testing frameworks.
  • Demonstrates an 80% reduction in token usage for a typical 30-minute Claude Code session, dropping from ~118,000 tokens to ~23,900.
  • Features a "tee" mechanism that saves full command output on failure to a file, allowing LLM agents to access details without re-executing commands.
  • Provides detailed token savings analytics via rtk gain and rtk discover commands, including historical data and opportunity scanning.
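The "tee" mechanism in the third bullet can be approximated in a few lines of shell. This is a hypothetical sketch of the idea, not RTK's code; the function name run_condensed and the excerpt lengths are invented for illustration.

```shell
# Hypothetical sketch of the tee-on-failure idea (not RTK's code):
# run a command, keep the full log in a temp file, and print only a
# condensed excerpt -- plus the log path on failure, so an agent can
# read details without re-running the command.
run_condensed() {
  log=$(mktemp)
  if "$@" >"$log" 2>&1; then
    tail -n 3 "$log"                     # success: brief excerpt
  else
    status=$?
    echo "FAILED (exit $status); full output saved to: $log"
    tail -n 5 "$log"                     # short failure context
    return "$status"
  fi
}
```

For example, run_condensed cargo test would leave the full test log on disk while emitting only a few lines, preserving the original exit status for the caller.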

Maintenance & Community

The project maintains a public issue tracker on GitHub for bug reports and feature requests. A detailed security review process is outlined for external contributions, involving automated checks and manual audits. Contact is available via contact@rtk-ai.app.

Licensing & Compatibility

The project is released under the MIT License, which generally permits broad use, modification, and distribution, including for commercial purposes, with minimal restrictions.

Limitations & Caveats

A significant caveat is the name collision with another project also called "rtk" (Rust Type Kit); after installing, verify the correct binary with rtk --version and rtk gain. The recommended Claude Code integration modifies ~/.claude/settings.json, and Claude Code must be restarted after installation for the hook to take effect. Actual savings are project-dependent.

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 180
  • Issues (30d): 79
  • Star History: 1,389 stars in the last 30 days
