llm-checker by Pavelevich

Local LLM advisor for your hardware

Created 7 months ago
1,012 stars

Top 36.6% on SourcePulse

View on GitHub
Project Summary

LLM Checker is an advanced CLI tool designed to simplify the complex process of selecting optimal Large Language Models (LLMs) for local hardware. It addresses the challenge of choosing among thousands of model variants, quantization levels, and hardware configurations by analyzing the user's system and producing deterministic, hardware-calibrated recommendations. The tool benefits engineers, researchers, and power users who want to run LLMs locally, offering precise compatibility scores and actionable insights.

How It Works

LLM Checker employs a deterministic pipeline that begins with hardware detection (CPU, GPU, RAM, acceleration backends) and integrates with the Ollama catalog. It analyzes a dynamic pool of over 200 models, falling back to a curated catalog if necessary. A core feature is its 4D scoring engine, evaluating models across Quality, Speed, Fit (memory utilization), and Context, weighted by specific use cases like coding or reasoning. Memory estimation is calibrated using a bytes-per-parameter formula, ensuring accurate predictions for various quantization levels, including support for Mixture-of-Experts (MoE) models.
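The bytes-per-parameter estimate described above can be sketched as follows. This is an illustrative approximation only: the function name and the per-quantization multipliers below are common community rules of thumb (e.g. roughly 4.5 bits per parameter for Q4_K_M), not llm-checker's actual calibrated table.

```javascript
// Rough sketch of a bytes-per-parameter memory estimate for a dense model.
// Multipliers are community approximations, NOT the project's calibrated values.
const BYTES_PER_PARAM = {
  F16: 2.0,      // 16 bits/param
  Q8_0: 1.0625,  // ~8.5 bits/param
  Q4_K_M: 0.5625 // ~4.5 bits/param
};

// `overheadGb` stands in for KV cache and runtime buffers (assumed constant).
function estimateMemoryGb(paramsBillions, quant, overheadGb = 1.0) {
  const bytes = BYTES_PER_PARAM[quant];
  if (bytes === undefined) throw new Error(`unknown quantization: ${quant}`);
  return paramsBillions * bytes + overheadGb;
}

// A 7B model at Q4_K_M: 7 * 0.5625 + 1 = 4.9375 GB.
console.log(estimateMemoryGb(7, "Q4_K_M").toFixed(2)); // prints "4.94"
```

In practice the tool also accounts for MoE models, where only a subset of experts is active per token, so the speed-relevant working set is smaller than the raw parameter count suggests.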

Quick Start & Requirements

Installation is straightforward: npm install -g llm-checker or run directly via npx llm-checker. The primary requirement is Node.js 16+. Optional installation of sql.js unlocks advanced database search and recommendation features. The project provides comprehensive documentation, including a Docs Hub, Usage Guide, and Technical Reference.

Highlighted Details

  • Supports a dynamic pool of 200+ Ollama models, with a curated fallback catalog.
  • Features a 4D scoring engine (Quality, Speed, Fit, Context) adaptable to different use cases.
  • Comprehensive hardware detection across Apple Silicon, NVIDIA CUDA, AMD ROCm, Intel Arc, and CPU backends.
  • Calibrated memory estimation for accurate model fitting, including runtime-aware MoE speed calculations.
  • Includes a built-in Model Context Protocol (MCP) server for seamless integration with tools like Claude Code.
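As a rough illustration of how a use-case-weighted 4D score might combine the four axes, here is a minimal sketch. The weights, field names, and normalization are hypothetical assumptions for illustration; the project's real scoring engine is not documented here.

```javascript
// Hypothetical sketch of a use-case-weighted 4D score.
// Axis scores are assumed normalized to 0..1; the per-use-case weights
// are invented for illustration and sum to 1.
const WEIGHTS = {
  coding:    { quality: 0.4, speed: 0.2, fit: 0.2, context: 0.2 },
  reasoning: { quality: 0.5, speed: 0.1, fit: 0.2, context: 0.2 }
};

function score4d(axes, useCase) {
  const w = WEIGHTS[useCase];
  if (!w) throw new Error(`unknown use case: ${useCase}`);
  return (
    w.quality * axes.quality +
    w.speed * axes.speed +
    w.fit * axes.fit +
    w.context * axes.context
  );
}

// A high-quality but slower model ranks higher for "reasoning" than "coding",
// because the reasoning profile discounts speed.
const axes = { quality: 0.9, speed: 0.4, fit: 0.8, context: 0.7 };
console.log(score4d(axes, "coding").toFixed(2));    // prints "0.74"
console.log(score4d(axes, "reasoning").toFixed(2)); // prints "0.79"
```

The design point is that a single weighted sum makes recommendations deterministic and easy to re-rank per use case, which matches the pipeline described under "How It Works".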

Maintenance & Community

The project is actively distributed via npm and GitHub Releases. Community support and interaction are facilitated through a Discord server.

Licensing & Compatibility

LLM Checker is licensed under NPDL-1.0 (No Paid Distribution License). This license permits free use, modification, and redistribution. However, selling the software or offering it as a paid hosted or API service requires a separate commercial license.

Limitations & Caveats

The NPDL-1.0 license imposes restrictions on commercial exploitation. Advanced search functionalities are optional and require additional installation. The tool prioritizes the dynamic Ollama catalog, with the curated fallback used only when the primary source is unavailable.

Health Check

  • Last Commit: 5 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 10
  • Issues (30d): 40
  • Star History: 987 stars in the last 30 days

Explore Similar Projects

Starred by Tobi Lutke (Cofounder of Shopify), Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), and 6 more.

xTuring by stochasticai

  • Top 0.1% on SourcePulse
  • 3k stars
  • SDK for fine-tuning and customizing open-source LLMs
  • Created 2 years ago; updated 5 days ago