RAG chatbot for querying data, locally or via cloud deployment
Verba is an open-source Retrieval Augmented Generation (RAG) chatbot designed for community use, enabling users to query and gain insights from their own datasets. It offers a streamlined, end-to-end RAG experience, supporting local deployments with Ollama or Hugging Face models as well as cloud-based LLM providers like OpenAI, Anthropic, and Cohere.
How It Works
Verba integrates Weaviate's vector database with various RAG frameworks, data ingestion tools, and LLM providers. It supports flexible data chunking (token, sentence, semantic, recursive) and retrieval methods, allowing users to customize their RAG pipeline for specific use cases. The architecture emphasizes modularity, enabling easy swapping of embedding models, LLMs, and data loaders.
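The chunking strategies mentioned above all reduce to one idea: split source text into overlapping pieces small enough for embedding and retrieval. A minimal sketch of a token-style sliding-window chunker (illustrative only, not Verba's internal API; the function name and parameters are assumptions):

```python
# Illustrative sliding-window chunker: splits whitespace tokens into
# overlapping chunks so a retriever can match individual passages.
# Not Verba's actual implementation; chunk_size/overlap are hypothetical names.

def chunk_tokens(text, chunk_size=256, overlap=32):
    """Return overlapping chunks of `chunk_size` tokens with `overlap` shared tokens."""
    tokens = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```

Sentence, semantic, and recursive chunkers differ only in where the split points fall (sentence boundaries, embedding similarity, or nested separators), which is why the pipeline can swap them behind a common interface.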
Quick Start & Requirements
Install from PyPI:
pip install goldenverba
Or build and run with Docker:
docker compose up -d --build
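After the pip install, a local launch typically looks like the sketch below. The `verba start` command and the `OPENAI_API_KEY` variable are assumptions based on common setups; consult the project README for the exact CLI and environment variable names.

```shell
# Hedged launch sketch, not verified against the current release.
export OPENAI_API_KEY="sk-..."   # only needed when using a cloud LLM provider
verba start                      # serves the Verba web UI locally
```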
Maintenance & Community
This is a community-driven project, and while Weaviate supports it, maintenance urgency may vary. Contributions are welcomed via GitHub issues and discussions.
Licensing & Compatibility
The project is licensed under the MIT License, permitting commercial use and integration with closed-source applications.
Limitations & Caveats
Weaviate Embedded is experimental and not supported on Windows. Verba is designed for single-user use; multi-user support and role-based access control are not currently planned. It does not expose external API endpoints for integration with other applications.