Framework for vector search engine benchmarking
This project provides a framework for benchmarking various vector search engines, enabling users to compare their performance under identical hardware and scenario constraints. It targets engineers and researchers needing to select the most efficient vector database for their specific use cases, offering objective performance metrics.
How It Works
The framework operates on a server-client model, where each vector database is run as a server via Docker Compose. A separate client instance then executes benchmark scenarios, which can be configured for single or distributed server modes and varying client loads. This approach ensures a consistent testing environment, allowing for direct comparison of engine capabilities.
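To illustrate the server side of this model, a minimal Docker Compose file for an engine server might look like the sketch below. This is a hypothetical example; the actual compose files ship under engine/servers/<engine-configuration-name> and may define different images, ports, and resource limits.

```yaml
# Hypothetical sketch of an engine server definition; the real compose
# files live under engine/servers/<engine-configuration-name>.
services:
  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"   # API port exposed to the benchmark client
```

Running each engine in its own container keeps the hardware and software environment identical across runs, which is what makes the resulting numbers comparable.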
Quick Start & Requirements
Install dependencies with Poetry:

```bash
pip install poetry
poetry install
```

Start the server for the engine you want to benchmark:

```bash
cd ./engine/servers/<engine-configuration-name>
docker compose up
```

Then run a benchmark scenario from the client:

```bash
poetry shell
python run.py --engines "qdrant-rps-m- -ef- " --datasets "dbpedia-openai-100K-1536-angular"
```

Engine configurations are defined in the configuration/ directory and datasets in datasets/.
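The client's core job is to drive queries against a running server and record throughput and latency. The loop below is a minimal sketch of that idea; `run_scenario`, the stub engine, and the metric names are hypothetical and not the framework's actual API:

```python
import statistics
import time


def run_scenario(search_fn, queries):
    """Time each query against `search_fn` and report summary metrics.

    Illustrative only: the real framework drives engine-specific clients
    and records richer, per-scenario output under ./results/.
    """
    latencies = []
    start = time.perf_counter()
    for q in queries:
        t0 = time.perf_counter()
        search_fn(q)  # one search request against the engine
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start
    return {
        "rps": len(queries) / total,                          # requests per second
        "mean_latency": statistics.mean(latencies),           # seconds
        "p95_latency": statistics.quantiles(latencies, n=20)[18],
    }


# Example with a stub "engine" that just sums the query vector:
stats = run_scenario(lambda q: sum(q), [[0.1] * 8 for _ in range(100)])
```

Because every engine is measured by the same client loop under the same load, differences in the reported metrics can be attributed to the engines themselves rather than to the harness.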
Highlighted Details
Benchmark results are written to the ./results/ directory.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is presented as a framework for benchmarking, but the README does not detail specific benchmarked engines or datasets beyond examples, nor does it provide performance results or comparisons. The absence of a specified license raises concerns about commercial use and compatibility.