ardanlabs/kronk: Go-native engine for local AI model inference
Top 97.7% on SourcePulse
This project addresses the need for efficient, hardware-accelerated local inference of open-source LLMs within Go applications. It targets Go developers seeking to integrate AI capabilities directly into their software without relying on external APIs. Kronk provides a high-level, OpenAI-compatible interface and a model server, simplifying the adoption of local AI models.
How It Works
Kronk embeds llama.cpp into Go applications using the yzma module for efficient, hardware-accelerated GGUF model inference. It exposes a familiar, OpenAI-compatible API for chat completions, embeddings, and reranking. A model server component further simplifies deployment and interaction with local models.
Quick Start & Requirements
Install the CLI with go install github.com/ardanlabs/kronk/cmd/kronk@latest. Start the model server with make kronk-server or kronk server start. Requires the Go toolchain and GGUF model files. Hardware acceleration is supported across Linux, macOS, and Windows. Documentation and examples are available at https://kronkai.com.
Highlighted Details
yzma provides support for over 94% of llama.cpp functionality.
Maintenance & Community
Owned by Ardan Labs (Bill Kennedy); contact hello@ardanlabs.com. Community contributions are encouraged.
Licensing & Compatibility
Copyright 2025-2026 Ardan Labs. No open-source license is stated in the README, so licensing should be verified before commercial use or integration into proprietary applications.
Limitations & Caveats
yzma supports ~94% of llama.cpp features; consult yzma's ROADMAP.md for specifics. The project appears recent, with copyright dates 2025-2026.