Discover and explore top open-source AI tools and projects—updated daily.
Timber (kossisoroyce): Compiles classical ML models into fast, native C inference code
Top 52.4% on SourcePulse
Timber addresses the need for high-performance, portable inference for classical machine learning models by compiling them into optimized native C99 code. It targets teams in fraud detection, edge computing, and regulated industries requiring fast, predictable, and auditable model deployments. The primary benefit is a significant reduction in inference latency and runtime overhead compared to traditional Python-based serving.
How It Works
Timber employs an Ahead-of-Time (AOT) compilation strategy, transforming trained models from frameworks like XGBoost, LightGBM, scikit-learn, CatBoost, and ONNX (specifically TreeEnsemble operators) into standalone C99 inference code. This compiled code is then served via a local HTTP API, following an Ollama-style workflow for loading and querying models. This approach eliminates the Python runtime from the critical inference path, enabling microsecond-level latency.
Quick Start & Requirements
pip install timber-compiler
timber load <model_file> --name <model_name>
timber serve <model_name>

Highlighted Details
Maintenance & Community
For development installs: pip install -e ".[dev]"

Licensing & Compatibility
Limitations & Caveats
Updated 1 week ago · Inactive