MCP server for Qdrant vector search
This project provides an official Qdrant implementation of the Model Context Protocol (MCP), enabling LLM applications to seamlessly integrate with Qdrant for semantic memory. It targets developers building AI-powered tools like IDEs or chat interfaces, offering a standardized way to connect LLMs with external data sources.
How It Works
The server acts as a semantic memory layer over Qdrant. It exposes two primary tools: `qdrant-store` for persisting information and metadata into a specified Qdrant collection, and `qdrant-find` for retrieving relevant information based on a query. It leverages `fastembed` for generating embeddings, with `sentence-transformers/all-MiniLM-L6-v2` as the default model. Configuration is primarily managed via environment variables.
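The store/find contract can be pictured with a toy in-memory sketch. The real server embeds text with fastembed and persists it in a Qdrant collection; the bag-of-words "embedding", the in-memory list, and the function names below are illustrative stand-ins, not the server's actual implementation:

```python
# Toy sketch of the qdrant-store / qdrant-find tool contract.
# Stand-ins: Counter-based bag-of-words instead of a real dense
# embedding, a Python list instead of a Qdrant collection.
import math
from collections import Counter

memory = []  # list of (text, metadata, embedding) tuples

def embed(text):
    """Stand-in embedding: token counts instead of a dense vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def qdrant_store(information, metadata=None):
    """Mirrors qdrant-store: persist text plus optional metadata."""
    memory.append((information, metadata or {}, embed(information)))

def qdrant_find(query, limit=3):
    """Mirrors qdrant-find: return stored entries most similar to the query."""
    q = embed(query)
    ranked = sorted(memory, key=lambda e: cosine(q, e[2]), reverse=True)
    return [text for text, _, _ in ranked[:limit]]

qdrant_store("Qdrant is a vector database", {"topic": "db"})
qdrant_store("MCP standardizes LLM tool access", {"topic": "protocol"})
print(qdrant_find("vector database", limit=1))
# → ['Qdrant is a vector database']
```

The point of the sketch is the asymmetry the LLM sees: storing takes free-form text plus metadata, while finding is a pure similarity query with no exact-match requirement.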
Quick Start & Requirements
Run the server with `uvx mcp-server-qdrant` (requires `uvx`, the Python package manager). Key environment variables: `QDRANT_URL`, `COLLECTION_NAME`, and `EMBEDDING_MODEL`.
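These variables are typically supplied by the MCP client that launches the server. A sketch of a Claude Desktop-style `mcpServers` entry; the server name, URL, and collection values are illustrative assumptions, not defaults from this project:

```json
{
  "mcpServers": {
    "qdrant": {
      "command": "uvx",
      "args": ["mcp-server-qdrant"],
      "env": {
        "QDRANT_URL": "http://localhost:6333",
        "COLLECTION_NAME": "my-memories",
        "EMBEDDING_MODEL": "sentence-transformers/all-MiniLM-L6-v2"
      }
    }
  }
}
```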
Highlighted Details
Supports both `stdio` (default) and `sse` (Server-Sent Events) transport protocols.

Maintenance & Community
This is an official Qdrant project. Further community and contribution details are available on the GitHub repository.
Licensing & Compatibility
Limitations & Caveats
Currently, only `fastembed` models are supported for embedding generation. The project notes that tool descriptions may require customization for specific use cases.