Open-source research assistant for automated deep research, generating comprehensive reports
This project provides an experimental, open-source research assistant designed to automate deep research and generate comprehensive reports on any topic. It offers two distinct implementations—a structured workflow and a parallel multi-agent architecture—allowing users to customize models, prompts, report structure, and search tools for tailored research outcomes.
How It Works
The project offers two primary architectures: a graph-based workflow and a multi-agent system. The workflow implementation follows a plan-and-execute model, with a distinct planning phase, human-in-the-loop review for the report plan, and sequential section generation with reflection. This approach emphasizes user control and report accuracy. The multi-agent implementation uses a supervisor-researcher model where multiple agents work in parallel to research and write sections simultaneously, prioritizing speed and efficiency.
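To make the workflow variant's shape concrete, here is a minimal sketch of a plan-and-execute graph built with langgraph's StateGraph. The node names, state fields, and stubbed logic are illustrative assumptions, not the project's actual graph definition.

```python
# Illustrative sketch of a plan-and-execute report workflow
# (assumed node names and state fields; not the project's real graph).
from typing import TypedDict
from langgraph.graph import StateGraph, START, END


class ReportState(TypedDict):
    topic: str
    plan: list[str]        # planned section titles
    sections: list[str]    # drafted section text


def plan_report(state: ReportState) -> dict:
    # A planner model would propose report sections here.
    return {"plan": [f"Background on {state['topic']}", "Key findings", "Conclusion"]}


def human_review(state: ReportState) -> dict:
    # Placeholder for the human-in-the-loop review of the report plan.
    return {}


def write_sections(state: ReportState) -> dict:
    # The real workflow researches and writes sections sequentially with
    # reflection; here the output is simply stubbed.
    return {"sections": [f"Draft of: {title}" for title in state["plan"]]}


builder = StateGraph(ReportState)
builder.add_node("plan_report", plan_report)
builder.add_node("human_review", human_review)
builder.add_node("write_sections", write_sections)
builder.add_edge(START, "plan_report")
builder.add_edge("plan_report", "human_review")
builder.add_edge("human_review", "write_sections")
builder.add_edge("write_sections", END)
graph = builder.compile()

print(graph.invoke({"topic": "quantum error correction", "plan": [], "sections": []}))
```

The multi-agent variant replaces this sequential chain with a supervisor node that fans work out to researcher agents running in parallel, trading some plan control for speed.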
Quick Start & Requirements
Launch with uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph dev --allow-blocking (Mac), or with pip install -e . followed by langgraph dev (Windows/Linux). Models are configured via init_chat_model(), with support for multiple search tools (Tavily, Perplexity, Exa, ArXiv, PubMed, Linkup, DuckDuckGo, Google Search). Setup involves copying .env.example to .env and configuring API keys and model choices.
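As a sketch of the model-configuration step, the snippet below loads keys from the .env file and instantiates models with LangChain's init_chat_model(); the specific model identifiers and the choice of planner/writer roles are assumptions for illustration, not project defaults.

```python
# Sketch of configuring models after copying .env.example to .env
# (model identifiers below are assumptions, not project defaults).
from dotenv import load_dotenv
from langchain.chat_models import init_chat_model

load_dotenv()  # reads API keys such as OPENAI_API_KEY / TAVILY_API_KEY from .env

# Planner and writer can be different models; both must support structured output.
planner = init_chat_model("openai:gpt-4o", temperature=0)
writer = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)

print(planner.invoke("Outline a report on retrieval-augmented generation.").content)
```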
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The multi-agent implementation is currently limited to Tavily Search. Model selection is critical, as planner and writer models need to support structured outputs, and agent models require robust tool-calling capabilities; models like deepseek-R1 are noted as weak in function calling. Some LLMs may have token-per-minute limits (e.g., Groq on-demand tier).
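To illustrate what "support structured outputs" means in practice, here is a small, hedged check of a candidate planner model using LangChain's with_structured_output(); the ReportPlan schema and model choice are hypothetical examples, not part of the project.

```python
# Quick check that a candidate planner model can emit structured output
# (the ReportPlan schema here is a hypothetical example).
from pydantic import BaseModel
from langchain.chat_models import init_chat_model


class ReportPlan(BaseModel):
    title: str
    sections: list[str]


model = init_chat_model("openai:gpt-4o", temperature=0)
planner = model.with_structured_output(ReportPlan)

plan = planner.invoke("Plan a short report on solid-state batteries.")
print(plan.title, plan.sections)
```

A model that cannot reliably fill such a schema, or that struggles with tool calls (as noted for deepseek-R1), is a poor fit for the planner, writer, or agent roles.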