vava-nessa
CLI tool for finding the fastest coding LLMs
Top 86.2% on SourcePulse
This project is a real-time, terminal-based tool for discovering and selecting the fastest coding-focused Large Language Models (LLMs) across numerous providers. It targets developers and researchers who want to optimize their AI coding assistants, offering live performance data (latency and uptime) for over 100 models across nine services. The primary benefit is letting users make informed decisions about which LLM performs best for their coding tasks, with direct integration into popular tools like OpenCode and OpenClaw.
How It Works
The tool pings a comprehensive list of coding LLMs from providers such as NVIDIA NIM, Groq, Cerebras, and others in parallel. Its Text User Interface (TUI) continuously re-pings these models every two seconds, displaying live "Latest", "Avg" (rolling average), and "Up%" (uptime) columns. This lets users observe real-time performance differences and identify the most responsive models without manual testing or complex setup. It also supports keyless latency testing, showing server reachability even without an API key.
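The parallel ping loop and rolling "Latest"/"Avg"/"Up%" columns described above could be sketched as follows. This is a minimal sketch in Node.js, not the project's actual source: the function names, the sample window size, and the `ping` callback are all assumptions.

```javascript
// Sketch of a parallel-ping loop with rolling stats (assumed design, not the
// project's real implementation).

const WINDOW = 30; // number of recent latency samples kept per model (assumption)

function makeStats() {
  return { samples: [], failures: 0, total: 0 };
}

// Record one ping result: latency in ms, or null when the ping failed.
function record(stats, latencyMs) {
  stats.total += 1;
  if (latencyMs === null) {
    stats.failures += 1;
  } else {
    stats.samples.push(latencyMs);
    if (stats.samples.length > WINDOW) stats.samples.shift(); // keep a rolling window
  }
}

// Derive the three displayed columns from the recorded samples.
function summary(stats) {
  const latest = stats.samples.at(-1) ?? null;
  const avg = stats.samples.length
    ? stats.samples.reduce((a, b) => a + b, 0) / stats.samples.length
    : null;
  const upPct = stats.total
    ? (100 * (stats.total - stats.failures)) / stats.total
    : 0;
  return { latest, avg, upPct };
}

// Ping every model once, concurrently. `ping(model)` is assumed to resolve
// to a latency in ms, or reject when the endpoint is unreachable.
async function pingAll(models, ping, statsByModel) {
  await Promise.all(
    models.map(async (m) => {
      const stats = statsByModel.get(m) ?? makeStats();
      statsByModel.set(m, stats);
      try {
        record(stats, await ping(m));
      } catch {
        record(stats, null);
      }
    })
  );
}
```

A TUI would call `pingAll` on a two-second timer and redraw the table from `summary` after each round; because `Promise.all` fans the requests out concurrently, a round takes roughly as long as the slowest single ping rather than the sum of all of them.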
Quick Start & Requirements
Install globally with npm i -g free-coding-models, or run directly with npx free-coding-models.
Highlighted Details
Maintenance & Community
The project is actively maintained, with contributions welcomed via GitHub issues and pull requests. A Discord server is available for community discussion (https://discord.gg/5MbTnDC3Md). GitHub Actions automate publishing to npm.
Licensing & Compatibility
The project is released under the MIT license, permitting broad use, modification, and distribution, including for commercial purposes, with standard attribution requirements.
Limitations & Caveats
The tool is explicitly labeled as a BETA TUI and may contain bugs or experience crashes. Users should proceed with caution. Integration with OpenClaw requires a specific patching script (patch-openclaw.js) to fully populate the model allowlist due to OpenClaw's strict configuration. Free tiers from API providers have inherent usage limits.