AI interface for local RAG and LLMs
Chipper offers a modular, containerized web interface and CLI for building and deploying RAG (Retrieval-Augmented Generation) pipelines, document processing, and web scraping. It targets tinkerers, educators, and developers seeking to enhance local or cloud-based LLMs with custom knowledge bases, providing an extensible platform for AI exploration and integration with tools like Ollama and Haystack.
How It Works
Chipper leverages Haystack for RAG pipeline orchestration, Ollama for local LLM inference, and Elasticsearch for vector storage. It functions as a proxy for the Ollama API, enabling third-party clients to access RAG capabilities. The architecture supports document chunking, web scraping, and audio transcription, with a focus on offline functionality via a vanilla JavaScript/TailwindCSS web UI and Edge TTS for client-side speech synthesis.
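Because Chipper presents itself as an Ollama-compatible proxy, a client can talk to it with an ordinary Ollama chat request and transparently get RAG-augmented answers. The sketch below builds such a request in Python; the host, port, and model name are assumptions for illustration, not values taken from the project.

```python
import json
from urllib import request

# Assumed address of a running Chipper proxy; substitute whatever
# host/port your Chipper container actually publishes.
CHIPPER_URL = "http://localhost:21434/api/chat"

def build_chat_request(question: str, model: str = "llama3") -> dict:
    """Assemble a standard Ollama /api/chat payload. The idea is that
    Chipper enriches the prompt with retrieved context from its
    Elasticsearch index before forwarding the call to Ollama."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

payload = build_chat_request("What does the ingested handbook say about onboarding?")
print(json.dumps(payload, indent=2))

# Sending it is a plain HTTP POST (requires a running Chipper instance):
# req = request.Request(
#     CHIPPER_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```

Since the payload is unmodified Ollama wire format, any existing Ollama client should work once pointed at Chipper's address.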
Quick Start & Requirements
With Docker and Docker Compose installed, the full containerized stack starts with a single command: docker compose up
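The compose file shipped with the repository defines the actual stack; purely as an illustration of the architecture described above, a minimal sketch might look like the following (service names, images, and ports here are assumptions, not the project's real configuration):

```yaml
# Illustrative sketch only -- not the project's actual compose file.
services:
  elasticsearch:
    image: elasticsearch:8.17.0      # vector store for document embeddings
    environment:
      - discovery.type=single-node
  ollama:
    image: ollama/ollama             # local LLM inference backend
  chipper:
    build: .                         # web UI + Ollama-compatible proxy
    ports:
      - "8080:8080"                  # assumed published port
    depends_on:
      - elasticsearch
      - ollama
```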
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is explicitly described as a personal project, not designed for commercial or production use; users are advised to perform their own due diligence before deploying it in such scenarios. A React-based web application is listed as an upcoming feature, suggesting the current vanilla-JavaScript UI may be less polished or feature-rich than a modern SPA.