Local web research assistant using local LLMs
This project provides a fully local web research and report-writing assistant for users who want to run their own LLMs. It automates generating search queries, gathering and summarizing web results, identifying knowledge gaps, and iteratively refining the research to produce a comprehensive markdown report with cited sources.
How It Works
Inspired by IterDRAG, the assistant uses a local LLM (via Ollama or LMStudio) to generate web search queries based on a given topic. It then retrieves and summarizes relevant web content. The LLM reflects on the summary to identify knowledge gaps, generating new queries to address them. This iterative process repeats for a configurable number of cycles, progressively enriching the research.
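As a rough illustration of that loop (not the project's actual LangGraph implementation), the sketch below assumes the ollama and duckduckgo_search Python packages and a locally pulled model; the names web_search, ask, and research are hypothetical.

```python
# Minimal sketch of the generate -> search -> summarize -> reflect loop.
# Assumes a running local Ollama server with a pulled model; duckduckgo_search
# stands in for whichever search backend you configure.
import ollama
from duckduckgo_search import DDGS

MODEL = "llama3"  # hypothetical; any locally pulled model works

def ask(prompt: str) -> str:
    # Single-turn call to the local model via Ollama.
    reply = ollama.chat(model=MODEL, messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"]

def web_search(query: str, k: int = 3) -> str:
    # Fetch a few result snippets to feed back into the model.
    with DDGS() as ddgs:
        hits = ddgs.text(query, max_results=k)
    return "\n".join(f"{h['title']}: {h['body']}" for h in hits)

def research(topic: str, max_loops: int = 3) -> str:
    query = ask(f"Write one concise web search query about: {topic}")
    summary = ""
    for _ in range(max_loops):
        results = web_search(query)
        # Fold the new results into the running summary.
        summary = ask(
            f"Update this research summary:\n{summary or '(empty)'}\n"
            f"with these search results:\n{results}"
        )
        # Reflect: name a knowledge gap and phrase it as the next query.
        query = ask(
            f"Given this summary:\n{summary}\n"
            "State one remaining knowledge gap as a single search query."
        )
    return summary
```

Each pass through the loop corresponds to one research cycle; the real assistant additionally tracks cited sources and emits a markdown report.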
Quick Start & Requirements
Copy .env.example to .env and configure the environment variables. Install dependencies (with uvx, or via pip install -e . plus pip install -U "langgraph-cli[inmem]"), then start the agent with langgraph dev.
Maintenance & Community
This project is part of the LangChain AI ecosystem. Further community and development details can be found on the LangChain GitHub and associated channels.
Licensing & Compatibility
The repository does not explicitly state a license in the README. Compatibility for commercial use or closed-source linking would require clarification of the licensing terms.
Limitations & Caveats
Some local LLMs struggle to produce the structured JSON output the agent expects, though fallback mechanisms exist (see the sketch below). Safari users may hit browser compatibility issues with the LangGraph Studio UI. A TypeScript port is available, but it omits Perplexity search.
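For context on the JSON caveat, a fallback of the kind the README alludes to might look like the following. This is a guess at the general pattern, not the project's actual code, and it reuses the hypothetical Ollama setup from the earlier sketch.

```python
# Hedged sketch of a structured-output fallback: request JSON from the
# local model and degrade gracefully when the reply does not parse.
import json
import ollama

def query_for_json(prompt: str, model: str = "llama3") -> dict:
    raw = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt + "\nRespond with JSON only."}],
    )["message"]["content"]
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Fallback: treat the raw text as the payload so the agent can continue.
        return {"text": raw.strip()}
```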