Local AI inference engine for text, images, speech, and agent workflows
AI Runner is an open-source, local-first application designed to run various AI models, including LLMs, Stable Diffusion, TTS, and STT, without cloud dependencies. It targets developers and end-users seeking an all-in-one, offline inference engine for prototyping, private data processing, or custom UI development.
How It Works
AI Runner leverages HuggingFace and Llama-index libraries to provide a unified interface for diverse AI tasks. Its architecture supports local LLM inference, Stable Diffusion image generation, text-to-speech, and speech-to-text. The project emphasizes a "local-first" approach, ensuring data privacy and offline functionality, while offering a plugin and extension API for customization and integration into other Python projects.
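As a rough illustration of the local-first pattern described above, the sketch below runs a small HuggingFace text-generation pipeline entirely on the local machine. This is not AI Runner's own API; the model name and prompt are illustrative placeholders, showing only the kind of underlying library call such a unified interface wraps.

```python
from transformers import pipeline

# Downloads and caches a small text-generation model on first run;
# subsequent runs can work offline from the local cache.
generator = pipeline("text-generation", model="distilgpt2")

# Inference happens locally -- no cloud calls are involved.
result = generator("Local-first inference means", max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```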
Quick Start & Requirements
Launch via the provided Docker wrapper script:
./src/airunner/bin/docker.sh airunner
Highlighted Details
The project is also distributed as a Python package: pip install airunner
Maintenance & Community
The project welcomes contributions and provides a Discord server for questions and ideas. Detailed contribution guidelines are available in the repository.
Licensing & Compatibility
The repository does not explicitly state a license in the README; verify the licensing terms before commercial use or closed-source linking.
Limitations & Caveats
The missing license statement is the biggest caveat for adoption, particularly for commercial or closed-source projects. Docker simplifies setup, but GPU acceleration inside the container requires the NVIDIA Container Toolkit.
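For a quick sanity check that GPU acceleration is actually visible to the Python environment, whether on the host or inside a correctly configured container, a PyTorch probe along these lines can help. It assumes PyTorch is installed and is not part of AI Runner itself.

```python
import torch

# Reports whether CUDA is visible. Inside Docker this requires the NVIDIA
# Container Toolkit and GPU access granted to the container; otherwise
# inference falls back to CPU.
if torch.cuda.is_available():
    print(f"CUDA available: {torch.cuda.get_device_name(0)}")
else:
    print("CUDA not available -- models will run on CPU")
```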