AI search engine for local/cloud LLMs
Top 14.7% on sourcepulse
Farfalle is an open-source AI-powered search engine designed to provide an alternative to services like Perplexity. It allows users to self-host their search experience, leveraging either local Large Language Models (LLMs) or cloud-based APIs for question answering and search result summarization. The target audience includes privacy-conscious users, developers, and researchers who want a customizable and self-managed search solution.
How It Works
Farfalle integrates multiple search providers, including SearXNG, Tavily, Serper, and Bing, to gather raw search results. It then uses LLMs to process those results and answer user queries directly, or employs an agent-based approach for more complex search planning and execution. This hybrid model offers flexibility: users can choose the cost-effectiveness and privacy of local LLMs (such as Llama3, Gemma, Mistral, or Phi3 via Ollama) or the power of cloud models (OpenAI, Groq) through LiteLLM integration.
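The choice between local and cloud inference is typically expressed through environment configuration. A minimal sketch of what a .env file might contain — the variable names below are illustrative assumptions, not Farfalle's documented keys:

```shell
# Option A: local inference through Ollama (assumed key name)
OLLAMA_HOST=http://localhost:11434

# Option B: cloud inference via LiteLLM-compatible providers (assumed key names)
OPENAI_API_KEY=sk-...
GROQ_API_KEY=gsk-...

# Search backend selection (assumed key name and values)
SEARCH_PROVIDER=searxng
```

Consult the repository's README for the actual configuration keys and supported values.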
Quick Start & Requirements
Clone the repository and configure a .env file, then start the development stack:
docker-compose -f docker-compose.dev.yaml up -d
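Since a missing configuration value is a common cause of startup failures, it can help to sanity-check the .env file before bringing the stack up. A minimal sketch — the key names checked here are illustrative assumptions, not Farfalle's documented settings:

```shell
# Create a sample .env (assumed key names, for illustration only).
cat > .env <<'EOF'
SEARCH_PROVIDER=searxng
OLLAMA_HOST=http://localhost:11434
EOF

# Fail early if any required key is absent from .env.
for key in SEARCH_PROVIDER OLLAMA_HOST; do
  grep -q "^${key}=" .env || { echo "missing: $key"; exit 1; }
done
echo "env ok"
```

Adapt the key list to whatever your chosen search provider and model backend actually require.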
Maintenance & Community
The project is actively maintained by rashadphz. Community interaction is encouraged via GitHub issues and Twitter. A roadmap is available outlining planned features.
Licensing & Compatibility
The repository does not explicitly state a license in the provided README. Users should verify licensing for commercial use or integration with closed-source projects.
Limitations & Caveats
The project is described as a "clone" of Perplexity, so feature parity should not be assumed. The "Chat with local files" feature is listed as a future enhancement. Users relying solely on local models must handle LLM downloads and Ollama setup themselves.