Self-hosted AI coding assistant for on-prem code completion
Top 1.1% on sourcepulse
Tabby provides a self-hosted, on-premises AI coding assistant as an alternative to cloud-based solutions like GitHub Copilot. It targets developers and teams seeking data privacy and control over their AI tools, offering features like code completion and chat-based assistance integrated into IDEs.
How It Works
Tabby operates as a self-contained service, eliminating the need for external databases or cloud dependencies. It exposes an OpenAPI interface for easy integration with various development environments and infrastructure. The system supports consumer-grade GPUs for inference, making it accessible for individual developers and smaller teams. It leverages Retrieval-Augmented Generation (RAG) for code completion, incorporating repository-level context and locally relevant snippets to enhance accuracy.
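Because Tabby exposes its completion API over HTTP, a client can be sketched with nothing but the standard library. The endpoint path (`/v1/completions`), the payload shape (`language` plus `segments.prefix`/`segments.suffix`), and the response shape are assumptions here; verify them against the OpenAPI docs served by your own Tabby instance before relying on them.

```python
import json
import urllib.request

# Assumed endpoint of a locally running Tabby server started as in Quick Start.
TABBY_URL = "http://localhost:8080/v1/completions"


def build_completion_request(prefix: str, suffix: str = "",
                             language: str = "python") -> dict:
    """Build the (assumed) completion payload.

    Repository-level RAG context is added server-side; the client only
    supplies the code surrounding the cursor.
    """
    return {
        "language": language,
        "segments": {"prefix": prefix, "suffix": suffix},
    }


def complete(prefix: str, suffix: str = "") -> str:
    """POST a completion request and return the first suggestion (assumed shape)."""
    req = urllib.request.Request(
        TABBY_URL,
        data=json.dumps(build_completion_request(prefix, suffix)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response shape is an assumption; inspect a real response first.
    return body["choices"][0]["text"]
```

IDE plugins perform essentially this round trip on each keystroke, which is why the same server can back many editors at once.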
Quick Start & Requirements
```shell
docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby serve --model StarCoder-1B --device cuda \
  --chat-model Qwen2-1.5B-Instruct
```
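For longer-lived deployments, the same invocation can be expressed as a Docker Compose file. This is a hedged sketch of an equivalent service definition, not an official config; the GPU reservation syntax assumes the NVIDIA container toolkit is installed.

```yaml
# docker-compose.yml (sketch, assumptions noted above)
services:
  tabby:
    image: tabbyml/tabby
    command: serve --model StarCoder-1B --device cuda --chat-model Qwen2-1.5B-Instruct
    ports:
      - "8080:8080"
    volumes:
      - "$HOME/.tabby:/data"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Run with `docker compose up -d`; the `/data` volume persists downloaded models across restarts.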
Highlighted Details
Maintenance & Community
Development is active, with frequent releases. Community support is available via Slack, and a public roadmap is maintained.
Licensing & Compatibility
The project is licensed under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
Although consumer-grade GPUs are supported, performance varies significantly with hardware. The project is under active development: features such as LDAP authentication have only recently been introduced, so the feature set is still evolving and earlier versions may be subject to breaking changes.