Context-Engine  by Context-Engine-AI

AI coding assistant retrieval stack with hybrid code search

Created 4 months ago
340 stars

Top 81.5% on SourcePulse

Project Summary

Context-Engine provides a self-hosted, hybrid code retrieval stack for AI coding assistants, addressing limitations of traditional code search. It offers precise, adaptive, and universally compatible code context retrieval, enhancing AI developer tools without cloud dependencies.

How It Works

The system leverages ReFRAG-inspired micro-chunking to isolate relevant code spans (5-50 lines) and employs a hybrid search strategy combining dense, lexical, and cross-encoder reranking. It runs as a self-contained Docker stack on the user's machine, eliminating cloud dependencies and vendor lock-in. An adaptive learning component continuously improves retrieval accuracy based on usage patterns.
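The two-stage strategy described above can be sketched in a few lines of Python. This is an illustrative toy, not Context-Engine's actual implementation: the "dense" score is a bag-of-words cosine standing in for an embedding model, the "lexical" score is a term-overlap stand-in for BM25, and the reranker stub stands in for a cross-encoder model.

```python
import math
import re
from collections import Counter

def tokens(text: str) -> list[str]:
    """Split identifiers and prose into lowercase alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def lexical_score(query: str, chunk: str) -> float:
    """BM25-like stand-in: fraction of query terms present in the chunk."""
    q, c = set(tokens(query)), set(tokens(chunk))
    return len(q & c) / max(len(q), 1)

def dense_score(query: str, chunk: str) -> float:
    """Embedding stand-in: cosine similarity over token counts."""
    q, c = Counter(tokens(query)), Counter(tokens(chunk))
    dot = sum(q[t] * c[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in c.values()))
    return dot / norm if norm else 0.0

def rerank(query: str, candidates: list[str]) -> list[str]:
    """Cross-encoder stand-in: a real system scores query+chunk pairs jointly
    with a model; here we simply reuse the lexical score."""
    return sorted(candidates, key=lambda c: lexical_score(query, c), reverse=True)

def hybrid_search(query: str, chunks: list[str], alpha: float = 0.5, top_k: int = 3) -> list[str]:
    # Stage 1: fuse dense and lexical scores over all micro-chunks.
    scored = sorted(
        chunks,
        key=lambda c: alpha * dense_score(query, c) + (1 - alpha) * lexical_score(query, c),
        reverse=True,
    )
    # Stage 2: rerank only the top candidates (reranking is the expensive step).
    return rerank(query, scored[:top_k])

chunks = [
    "def parse_config(path): return yaml.safe_load(open(path))",
    "def connect_db(url): return create_engine(url)",
    "def load_yaml_config(path): ...",
]
print(hybrid_search("parse yaml config", chunks)[0])
```

The key design point this illustrates: cheap first-stage scoring narrows thousands of micro-chunks down to a handful, so the slow cross-encoder only runs on top-k candidates.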

Quick Start & Requirements

  • Primary Install:
    • Easiest: Install the "Context Engine Uploader" VS Code extension; opening a project prompts automatic setup.
    • Manual: git clone the repo, then run make bootstrap for a one-shot setup or docker compose up -d followed by docker compose run --rm indexer for step-by-step deployment.
  • Prerequisites: Docker.
  • Supported Languages: Python, TypeScript/JavaScript, Go, Java, Rust, C#, PHP, Shell, Terraform, YAML, PowerShell.
  • Resource Footprint: Runs locally via Docker; consumption depends on the size of the indexed codebase and which services are enabled. No concrete figures are given.
  • Links: VS Code Extension Docs (docs/vscode-extension.md), General Docs (README, Configuration, IDE Clients, MCP API, Architecture).
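The manual install steps above amount to the following shell session. The repository URL is a placeholder (the source text says only "git clone the repo"); the commands themselves are taken from the source.

```shell
# Assumes Docker is installed and running.
git clone <repo-url> context-engine   # substitute the actual repository URL
cd context-engine

# One-shot setup:
make bootstrap

# Or, step by step:
docker compose up -d              # start the stack in the background
docker compose run --rm indexer   # index the current project
```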

Highlighted Details

  • Precision Retrieval: Utilizes ReFRAG-inspired micro-chunking and hybrid search (dense, lexical, reranker) for accurate code span identification.
  • Self-Hosted & Local: Runs entirely on the user's machine via Docker, ensuring no cloud dependency or vendor lock-in.
  • Universal Client Support: Integrates with numerous AI coding assistants and IDEs via the MCP protocol (e.g., Claude Code, Cursor, Windsurf, Cline).
  • Adaptive Learning: Features an adaptive learning system that improves retrieval accuracy with continued use.
  • LLM Integration: Optional integration with local LLMs (e.g., llama.cpp) or cloud APIs, and adaptive rerank learning.
  • Enterprise Features: Includes optional built-in authentication and a unified MCP endpoint combining indexer and memory services.
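As a sketch of what the MCP integration looks like from the client side: MCP-capable clients such as Cursor or Claude Code are typically configured with an `mcpServers` entry pointing at a local server. The server name, port, and path below are hypothetical illustrations, not taken from the project's documentation; consult the MCP API docs linked above for the actual endpoint.

```json
{
  "mcpServers": {
    "context-engine": {
      "url": "http://localhost:8000/mcp"
    }
  }
}
```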

Maintenance & Community

No specific details on contributors, sponsorships, or community channels (e.g., Discord/Slack) were found in the provided text.

Licensing & Compatibility

  • License: Business Source License 1.1 (BUSL-1.1).
  • Compatibility: BUSL-1.1 is a source-available license whose terms convert to an open-source license (typically Apache 2.0) after a specified change date. Until then, it restricts certain commercial uses, notably offering the software as a managed service, which requires a separate commercial license from the copyright holder.

Limitations & Caveats

Deployment is Docker-dependent. The BUSL-1.1 license may restrict commercial use cases without separate arrangements. Performance benchmarks provided are specific to dense retrieval without reranking.

Health Check

  • Last Commit: 1 day ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 23
  • Issues (30d): 2
  • Star History: 42 stars in the last 30 days
