distill by samuelfaj

Streamline CLI data for LLM analysis

Created 1 month ago
438 stars

Top 67.9% on SourcePulse

Project Summary

Distill addresses the significant token waste generated by piping large command-line interface (CLI) outputs to Large Language Models (LLMs). It is designed for developers, researchers, and power users who leverage LLMs for analyzing CLI results, offering substantial token savings of up to 99% without losing critical information.

How It Works

The core approach involves piping the output of any non-interactive shell command through the distill agent. distill then uses an LLM to process this output, compressing it into a concise answer based on a precisely worded prompt from the user. This method prioritizes extracting the essential signal from verbose logs, test results, or diffs, drastically reducing the token count required for LLM processing.
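The pattern above can be sketched as a plain shell pipeline. Since distill may not be installed in every environment, `cat` stands in for the agent here, and the `run_with_agent` helper is a hypothetical name used only for illustration:

```shell
# The distill pattern: run a non-interactive command, merge stderr into
# stdout, and pipe the combined stream to the agent with an explicit prompt.
# 'cat' stands in for distill; in real use the final stage would be
#   ... | distill "your explicit prompt"
run_with_agent() {
  "$@" 2>&1 | cat
}

run_with_agent sh -c 'echo "build ok"; echo "warning: deprecated API" >&2'
```

Both the stdout and stderr of the wrapped command reach the pipe, so the agent sees the complete output, not just the success path.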

Quick Start & Requirements

  • Installation: npm i -g @samuelfaj/distill
  • Prerequisites: An LLM accessible via an OpenAI-compatible endpoint or specific supported providers (Ollama, LM Studio, LocalAI, vLLM, SGLang, llama.cpp, MLX-LM, Docker Model Runner).
  • Configuration: Use distill config commands to set provider, model, API keys, host, and other parameters.
  • Usage: Pipe command output to distill with an explicit prompt, e.g., command 2>&1 | distill "What changed? Return only the filenames."
  • Links: No direct links to official quick-start guides or demos were found in the provided README.
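The 2>&1 in the usage example matters: without it, a command's stderr bypasses the pipe and never reaches distill. A minimal demonstration that needs no distill install, with wc -l standing in for the agent to count what actually arrives through the pipe:

```shell
# Without 2>&1, only stdout enters the pipe; stderr goes to the terminal
# (discarded here to keep the demo quiet). With 2>&1, both streams are
# merged before the pipe, so the downstream consumer sees everything.
emit() { echo "stdout line"; echo "stderr line" >&2; }

emit 2>/dev/null | wc -l   # stdout only: 1 line reaches the pipe
emit 2>&1 | wc -l          # merged:      2 lines reach the pipe
```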

Highlighted Details

  • Achieves token savings of up to 99%, demonstrated by an example saving ~98.7% tokens (99 tokens vs. 7648 tokens).
  • Supports a wide array of LLM providers, including local options like Ollama and LM Studio, and cloud-based OpenAI-compatible endpoints.
  • Emphasizes the need for explicit, unambiguous prompts to ensure the LLM returns precisely the requested information (e.g., "Return only the filenames.").
  • Handles interactive prompts and password requests by passing them through when detected.
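The ~98.7% figure follows directly from the token counts in that example (99 tokens compressed vs. 7648 original); a quick arithmetic check:

```shell
# Savings = (1 - compressed/original) * 100, using the example's counts.
awk 'BEGIN { printf "%.1f%%\n", (1 - 99/7648) * 100 }'
```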

Maintenance & Community

No specific details regarding maintainers, community channels (like Discord/Slack), sponsorships, or roadmap were present in the provided README.

Licensing & Compatibility

The license under which this project is distributed is not specified in the provided README. Verify the license directly in the repository before commercial use or integration into closed-source projects.

Limitations & Caveats

distill should be bypassed when the exact, uncompressed output of a command is strictly necessary or when its use would interfere with interactive shell sessions or Text User Interface (TUI) workflows.

Health Check
Last Commit

1 month ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
5
Star History
187 stars in the last 30 days

Explore Similar Projects

Starred by Wing Lian (Founder of Axolotl AI), Patrick von Platen (Author of Hugging Face Diffusers; Research Engineer at Mistral), and 2 more.

rtk by rtk-ai

Top 28.6% on SourcePulse
22k stars
CLI proxy for massive LLM token reduction
Created 2 months ago
Updated 21 hours ago