Desktop app for local LLM training and inference
Kolosal AI is an open-source desktop application for running large language models (LLMs) offline on personal devices. It targets users seeking a lightweight, privacy-focused alternative to cloud-based AI services, enabling local inference and custom model training on a wide range of hardware.
How It Works
Kolosal AI is built using C++17 and CMake, compiled into a compact ~20 MB executable. It leverages the Genta Personal Engine, which is based on llama.cpp, to support various LLMs like Mistral, LLaMA, and Qwen. The application is designed for universal hardware compatibility, running on CPUs with AVX2 instructions and supporting AMD and NVIDIA GPUs, with an optional Vulkan backend for GPU acceleration.
Quick Start & Requirements
Building from source requires CMake and a C++17 compiler. From a build directory, configure the project (e.g., `cmake -S .. -B . -DCMAKE_BUILD_TYPE=Release`) and build (e.g., `cmake --build . --config Release`).
Highlighted Details
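The configure and build steps above can be collected into one copy-pasteable sequence. The repository URL and directory names below are assumptions for illustration, not taken from the project documentation:

```shell
# Hypothetical walkthrough of the build steps; adjust the URL and paths
# to match the actual repository.
git clone https://github.com/KolosalAI/kolosal.git
cd kolosal
mkdir build && cd build

# Configure with CMake (source in the parent dir, build in the current dir)
cmake -S .. -B . -DCMAKE_BUILD_TYPE=Release

# Compile in Release mode
cmake --build . --config Release
```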
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The build process requires manual management of external dependencies such as OpenSSL and CURL if they are not system-installed. The Windows-specific resource file (`resource.rc`) may require modification for Linux/macOS builds.
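One common way to avoid the `resource.rc` caveat is to compile the resource script only on Windows. A minimal CMake sketch (the target name `kolosal` is an assumption, not taken from the project's build files):

```cmake
# Sketch only -- not from the Kolosal build system. Guard the Windows
# resource script so Linux/macOS configurations never see resource.rc.
if(WIN32)
    target_sources(kolosal PRIVATE resource.rc)
endif()
```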
The repository was last updated about 2 months ago and is currently flagged as inactive.