GovcraftAI coding assistants enhanced with up-to-date Rust crate documentation
Top 98.5% on SourcePulse
This project provides an MCP server that combats outdated AI-generated Rust code suggestions. It fetches current crate documentation, then uses embeddings and an LLM to supply accurate context through a query_rust_docs tool, so AI assistants can give relevant, up-to-date answers and speed up development.
How It Works
Each server instance targets a single Rust crate, downloading and parsing its cargo doc output. It generates embeddings with OpenAI's text-embedding-3-small and caches them locally. When a query arrives via the query_rust_docs tool, the server runs a semantic search over the cached embeddings to find the most relevant documentation snippets. Those snippets, together with the question, are passed to gpt-4o-mini-2024-07-18 for summarization, so answers stay grounded in the latest official docs.
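The retrieval step described above can be sketched as follows. This is an illustrative stand-in, not the server's actual Rust code: embed() mimics text-embedding-3-small with a toy bag-of-words vector, and the snippet texts are invented examples; in the real server, the top-ranked snippets would then be sent to gpt-4o-mini-2024-07-18 along with the question.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for text-embedding-3-small: a sparse bag-of-words
    vector. The real server requests a dense OpenAI embedding here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Cached documentation snippets with precomputed embeddings,
# mirroring the server's local embedding cache (contents invented).
snippets = [
    "Serializer trait: converts Rust data structures into a data format",
    "Deserializer trait: parses a data format into Rust data structures",
    "derive macros: #[derive(Serialize, Deserialize)] on structs and enums",
]
cache = [(s, embed(s)) for s in snippets]

def query_rust_docs(question: str, k: int = 2) -> list[str]:
    """Semantic search over the cached embeddings; the top-k snippets
    would then be summarized by the LLM together with the question."""
    qv = embed(question)
    ranked = sorted(cache, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

top = query_rust_docs("How do I derive Deserialize for a struct?")
```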
Quick Start & Requirements
Install via pre-compiled binaries from GitHub Releases or build from source (cargo build --release). Requires an OpenAI API key (OPENAI_API_KEY env var) and network access. Launch with ./rustdocs_mcp_server "crate_name@version_req" (e.g., "serde@^1.0"). The first run for a crate downloads, parses, and embeds its documentation, then caches the embeddings locally; this takes some time and incurs a small OpenAI cost. Subsequent runs load the cache for faster startup.
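Once launched, the server speaks MCP over stdio, so it can be wired into any MCP client that spawns stdio servers. As a sketch only, a Claude Desktop-style client configuration might look like this (the server name, binary path, and key value are placeholders):

```json
{
  "mcpServers": {
    "rust-docs-serde": {
      "command": "/full/path/to/rustdocs_mcp_server",
      "args": ["serde@^1.0"],
      "env": { "OPENAI_API_KEY": "your-key-here" }
    }
  }
}
```

Because each instance serves a single crate, covering several crates means adding one entry per crate, each pointing at the same binary with a different crate spec.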
Highlighted Details
Embeddings generated with text-embedding-3-small.
Answers produced by gpt-4o-mini-2024-07-18 on retrieved context.
Exposes a single query_rust_docs tool via stdio.
Maintenance & Community
The README does not specify maintenance details, contributors, community channels, or a roadmap. Sponsorship is mentioned as a support option.
Licensing & Compatibility
Released under the MIT License, which is permissive for commercial use and closed-source integration.
Limitations & Caveats
Requires OpenAI API access (key and costs). Each server instance supports only one crate; multiple instances are needed for broader coverage. Initial setup for a crate involves downloading, parsing, and embedding, which can be time-consuming.