CLI tool for LLM-powered web search and answer synthesis
Top 79.2% on sourcepulse
This project provides a Python-based web assistant that leverages local Large Language Models (LLMs) via Llama.cpp or Ollama to answer queries, especially those requiring up-to-date information not present in the LLM's training data. It's designed for users who want to combine the power of local LLMs with real-time web search capabilities.
How It Works
The assistant takes user queries, determines if a web search is needed, and formulates search queries with specific timeframes. It retrieves the top 10 search results from DuckDuckGo, scrapes the content of the two most relevant results, and evaluates if the information is sufficient to answer the query. If not, it iteratively refines search terms and timeframes, performing up to five searches, before synthesizing an answer using its LLM knowledge and the gathered web data.
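The loop above can be sketched as follows. This is a minimal illustration of the described flow, not the project's actual code: every helper name (formulate_query, pick_most_relevant, ddg_search, scrape, and so on) is a hypothetical stand-in.

```python
# Sketch of the iterative search-and-refine loop described above.
# All helper names here are illustrative assumptions, not the project's API.

MAX_SEARCHES = 5      # hard cap on search/refinement rounds
TOP_RESULTS = 10      # DuckDuckGo results fetched per round
PAGES_TO_SCRAPE = 2   # most-relevant pages whose content is scraped

def answer(query, llm, ddg_search, scrape):
    """Search, scrape, and refine until the LLM judges the gathered
    material sufficient, then synthesize a final answer."""
    gathered = []
    terms, timeframe = llm.formulate_query(query)
    for _ in range(MAX_SEARCHES):
        results = ddg_search(terms, timeframe, max_results=TOP_RESULTS)
        best = llm.pick_most_relevant(results, n=PAGES_TO_SCRAPE)
        gathered.extend(scrape(url) for url in best)
        if llm.is_sufficient(query, gathered):
            break
        # Not enough information yet: refine terms and timeframe and retry.
        terms, timeframe = llm.refine(query, terms, timeframe)
    return llm.synthesize(query, gathered)
```

The key design point is the bounded loop: the assistant never searches more than five times, and each round feeds the LLM's sufficiency judgment back into query refinement.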
Quick Start & Requirements
pip install -r requirements.txt
python Web-LLM.py
Prefix a query with / to trigger a web search. Edit llm_config.py to set the LLM type, model path, and other parameters.
Highlighted Details
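As a rough illustration of what llm_config.py might contain, here is a hedged sketch. The variable names and values below are assumptions for illustration only; consult the file shipped with the project for its actual settings.

```python
# Hypothetical llm_config.py sketch -- names and values are illustrative
# assumptions, not the project's exact configuration keys.

LLM_TYPE = "ollama"                 # assumed choice: "ollama" or "llama_cpp"
MODEL_PATH = "/path/to/model.gguf"  # placeholder path for a llama.cpp model
MODEL_NAME = "llama3"               # placeholder model name for Ollama
TEMPERATURE = 0.7                   # sampling temperature
MAX_TOKENS = 1024                   # generation length cap
```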
Maintenance & Community
The project welcomes contributions via Pull Requests. The author is responsive to issues and suggestions on GitHub.
Licensing & Compatibility
Licensed under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
The project is primarily for educational purposes. Users must comply with the terms of service for all APIs and services used. The effectiveness of answers depends on the quality of LLM responses and the relevance of scraped web content.