Context-Engine: AI coding assistant retrieval stack with hybrid code search
Top 81.5% on SourcePulse
Context-Engine provides a self-hosted, hybrid code retrieval stack for AI coding assistants, addressing limitations of traditional code search. It offers precise, adaptive, and universally compatible code context retrieval, enhancing AI developer tools without cloud dependencies.
How It Works
The system leverages ReFRAG-inspired micro-chunking to isolate relevant code spans (5-50 lines) and employs a hybrid search strategy combining dense, lexical, and cross-encoder reranking. It runs as a self-contained Docker stack on the user's machine, eliminating cloud dependencies and vendor lock-in. An adaptive learning component continuously improves retrieval accuracy based on usage patterns.
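The fuse-then-rerank flow described above can be sketched in Python. Everything here is an illustrative assumption, not Context-Engine's actual API: the function names, the fusion weights, and the toy scorers (token overlap standing in for a BM25-style lexical scorer, and a caller-supplied callable standing in for the cross-encoder).

```python
# Hypothetical sketch of two-stage hybrid retrieval: dense + lexical
# score fusion to shortlist candidates, then cross-encoder reranking.
from dataclasses import dataclass
from math import sqrt


@dataclass
class Chunk:
    text: str
    embedding: list  # dense vector; in practice produced by an embedding model


def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def lexical_score(query, text):
    # Toy stand-in for a BM25-style lexical scorer: query-token overlap ratio.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0


def hybrid_search(query, query_vec, chunks, rerank, w_dense=0.5, w_lex=0.5, top_k=2):
    # Stage 1: fuse dense and lexical scores to build a candidate shortlist.
    fused = sorted(
        chunks,
        key=lambda c: w_dense * cosine(query_vec, c.embedding)
        + w_lex * lexical_score(query, c.text),
        reverse=True,
    )[: top_k * 2]
    # Stage 2: a cross-encoder (here any callable scoring query/text pairs)
    # reranks only the shortlist, keeping the expensive model off the full corpus.
    return sorted(fused, key=lambda c: rerank(query, c.text), reverse=True)[:top_k]
```

An adaptive component like the one described could, for instance, adjust `w_dense`/`w_lex` from usage feedback; that part is omitted here.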
Quick Start & Requirements
git clone the repo, then run make bootstrap for a one-shot setup, or docker compose up -d followed by docker compose run --rm indexer for step-by-step deployment.
Docs: VS Code extension (docs/vscode-extension.md), General Docs (README, Configuration, IDE Clients, MCP API, Architecture).
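The two setup paths above, collected into a script. The clone URL is a placeholder for the project repository; the make and docker compose commands are taken verbatim from the quick start.

```shell
#!/usr/bin/env sh
set -e

# Path A: one-shot setup.
git clone <repo-url> context-engine   # <repo-url>: the Context-Engine repository
cd context-engine
make bootstrap

# Path B: step-by-step deployment (use instead of `make bootstrap`).
# docker compose up -d                 # start the self-hosted Docker stack
# docker compose run --rm indexer      # build the code index
```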
Highlighted Details
Supports local inference (llama.cpp) or cloud APIs, and adaptive rerank learning.
Maintenance & Community
No specific details on contributors, sponsorships, or community channels (e.g., Discord/Slack) were found in the provided text.
Licensing & Compatibility
Licensed under BUSL-1.1 (Business Source License).
Limitations & Caveats
Deployment is Docker-dependent. The BUSL-1.1 license may restrict commercial use cases without separate arrangements. Published performance benchmarks cover dense retrieval only, without reranking.