End-to-end vector search engine for text and images
Marqo is an end-to-end vector search engine designed to simplify the integration of semantic search into applications. It handles text and image embedding generation, storage, and retrieval through a unified API, eliminating the need for users to manage separate ML models or vector databases. This makes it suitable for developers looking to quickly implement advanced search capabilities.
How It Works
Marqo bundles embedding generation with vector search, offering a "documents in, documents out" approach. It leverages state-of-the-art embedding models from Hugging Face, PyTorch, and OpenAI, and runs on both CPU and GPU. Data is stored in in-memory HNSW indexes for high-speed retrieval. The system handles preprocessing, embedding, and inference, so search behavior can be adjusted without retraining models. It also supports multimodal search, enabling combined text and image indexing and querying.
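To make the "documents in, documents out" idea concrete, here is a minimal sketch using the Python client against a locally running instance; the index name, model, and documents are illustrative and may need adjusting for your Marqo version.

```python
import marqo

# Connect to a local Marqo instance (the Docker quick start exposes port 8882)
mq = marqo.Client(url="http://localhost:8882")

# Create an index; Marqo downloads and serves the embedding model itself
mq.create_index("my-first-index", model="hf/e5-base-v2")

# "Documents in": plain JSON documents, no embeddings supplied by the caller
mq.index("my-first-index").add_documents(
    [
        {
            "Title": "The Travels of Marco Polo",
            "Description": "A 13th-century travelogue describing Polo's journeys",
        },
        {
            "Title": "Extravehicular Mobility Unit",
            "Description": "The EMU is a spacesuit that provides environmental protection on spacewalks",
        },
    ],
    tensor_fields=["Description"],
)

# "Documents out": a semantic query returns the original documents with relevance scores
results = mq.index("my-first-index").search(
    q="What is the best outfit to wear on the moon?"
)
print(results["hits"][0]["Title"])
```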
Quick Start & Requirements
Run the Marqo server with Docker:
docker run --name marqo -it -p 8882:8882 marqoai/marqo:latest
Then install the Python client with pip install marqo.
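For image search, the sketch below creates a CLIP-backed multimodal index, following the pattern in the project's documented examples; the model name, settings keys, and image URL are illustrative assumptions and may differ across Marqo versions.

```python
import marqo

mq = marqo.Client(url="http://localhost:8882")

# Settings modeled on the project's multimodal examples; treat_urls_and_pointers_as_images
# asks Marqo to download and embed image URLs with a CLIP model (assumed to be supported
# by the version you are running)
settings = {
    "treat_urls_and_pointers_as_images": True,
    "model": "open_clip/ViT-L-14/laion2b_s32b_b82k",
}
mq.create_index("my-multimodal-index", **settings)

# A single document mixing an image URL (placeholder) and descriptive text
mq.index("my-multimodal-index").add_documents(
    [
        {
            "My_Image": "https://example.com/images/hippo.png",  # placeholder URL
            "Description": "A hippopotamus standing in shallow water",
            "_id": "hippo-1",
        }
    ],
    tensor_fields=["My_Image", "Description"],
)

# Text queries are embedded with the same CLIP model, so they match image content too
results = mq.index("my-multimodal-index").search("large African animal")
```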
Highlighted Details
Maintenance & Community
Marqo is a community-driven project with active development. Support and discussion are available via their Discourse forum and Slack community.
Licensing & Compatibility
The project appears to be open-source, but a specific license is not explicitly stated in the README. Suitability for commercial use or closed-source linking would require clarification of the license.
Limitations & Caveats
Marqo requires Docker, which may be a barrier for some environments. The README warns against running other applications on Marqo's Vespa cluster due to automatic configuration changes.