samuelfaj/distill: Streamline CLI data for LLM analysis
Top 67.9% on SourcePulse
Distill addresses the token waste generated by piping large command-line interface (CLI) outputs directly to Large Language Models (LLMs). It is designed for developers, researchers, and power users who rely on LLMs to analyze CLI results, claiming token savings of up to 99% without losing critical information.
How It Works
The core approach is to pipe the output of any non-interactive shell command through the distill agent. distill then uses an LLM to compress that output into a concise answer, guided by a precisely worded prompt from the user. This prioritizes extracting the essential signal from verbose logs, test results, or diffs, drastically reducing the token count required for downstream LLM processing.
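The pipe pattern and the rough scale of the savings can be sketched as follows. The distill invocation is shown only as a comment because it assumes an installed and configured binary, and the ~4 characters-per-token heuristic is an assumption for illustration, not distill's actual tokenizer:

```shell
#!/usr/bin/env bash
# Hypothetical invocation (requires distill installed and configured):
#   make test 2>&1 | distill "Which tests failed? One line per failure."
#
# Only distill's short answer reaches the model, not the raw log. A rough
# size comparison, using a 500-line stand-in log and an assumed
# ~4 characters-per-token heuristic:
raw_chars=$(printf 'log line\n%.0s' {1..500} | wc -c)   # 4500 chars of "log"
echo "raw log: approx $((raw_chars / 4)) tokens"        # approx 1125 tokens

answer='3 tests failed: auth, cache, io'   # what a distilled reply might look like
echo "distilled answer: approx $((${#answer} / 4)) tokens"
```

The exact ratio depends on the command and the prompt, but the gap between a multi-thousand-token log and a one-line answer is where the claimed savings come from.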
Quick Start & Requirements
- Install: npm i -g @samuelfaj/distill
- Configure: use the distill config commands to set provider, model, API keys, host, and other parameters.
- Run: pipe a command into distill with an explicit prompt, e.g., command 2>&1 | distill "What changed? Return only the filenames."
Highlighted Details
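The 2>&1 in the quick-start example matters: distill reads stdin, but many tools (compilers, test runners) write their diagnostics to stderr, which a plain pipe would not capture. A minimal, self-contained illustration using a stand-in command rather than distill itself:

```shell
#!/usr/bin/env bash
# Stand-in for a tool that writes to both streams, as compilers often do.
emit() { echo "to stdout"; echo "to stderr" >&2; }

tmp=$(mktemp -d)
emit 2>/dev/null | cat > "$tmp/plain.txt"   # plain pipe: stderr bypasses it (silenced here)
emit 2>&1        | cat > "$tmp/merged.txt"  # 2>&1: stderr rides along in the pipe

echo "plain pipe captured:  $(( $(wc -l < "$tmp/plain.txt") )) line(s)"
echo "merged pipe captured: $(( $(wc -l < "$tmp/merged.txt") )) line(s)"
rm -r "$tmp"
```

Without the redirection, whatever part of the output lands on stderr never reaches distill, so the answer would be based on an incomplete log.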
Maintenance & Community
No specific details regarding maintainers, community channels (like Discord/Slack), sponsorships, or roadmap were present in the provided README.
Licensing & Compatibility
The license under which this project is distributed is not specified in the provided README. This omission warrants further investigation before commercial use or integration into closed-source projects.
Limitations & Caveats
distill should be bypassed when the exact, uncompressed output of a command is required, or when its use would interfere with interactive shell sessions or Text User Interface (TUI) workflows.
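When both forms are needed, the exact bytes on disk and a compressed summary, tee can serve both at once. The distill stage below is hypothetical and shown as a comment; in the runnable part, grep stands in for the compression stage:

```shell
#!/usr/bin/env bash
# Hypothetical combined form (requires distill):
#   ./build.sh 2>&1 | tee build.log | distill "List only the errors."
#
# Runnable sketch with grep standing in for the compression stage:
tmp=$(mktemp -d)
printf 'ok\nERROR: widget missing\nok\n' \
  | tee "$tmp/build.log" \
  | grep -c '^ERROR' | xargs echo "errors found:"

# The uncompressed log survives for the cases where exact output is required:
echo "full log kept: $(( $(wc -l < "$tmp/build.log") )) lines"
rm -r "$tmp"
```

This keeps the limitation manageable: compression is applied only to the copy sent downstream, never to the only copy.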
Last updated 1 month ago; status: Inactive.