Full-stack agent quickstart
This project provides a full-stack example of building research-augmented conversational AI agents using Google Gemini 2.5 and LangGraph. It targets developers looking to create sophisticated research tools that can dynamically generate search queries, perform web research, reflect on findings, and synthesize answers with citations. The primary benefit is a working template for complex, multi-step AI workflows.
How It Works
The backend utilizes LangGraph to orchestrate a research agent. This agent iteratively refines its approach by first generating initial search queries with a Gemini model. It then executes these queries via the Google Search API, analyzes the results for knowledge gaps using another Gemini model, and generates follow-up queries if necessary. This reflective, iterative process continues until a comprehensive answer can be synthesized with citations.
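The loop described above can be sketched in plain Python. This is a hypothetical, stubbed-out illustration of the control flow only, not the project's actual LangGraph graph or Gemini calls; all function names and return values here are illustrative.

```python
# Plain-Python sketch of the agent's iterative research loop.
# All functions are hypothetical stubs standing in for Gemini/LangGraph nodes.

def generate_queries(question):
    # In the real app, a Gemini model proposes the initial search queries.
    return [f"background on {question}", f"recent findings: {question}"]

def web_search(queries):
    # Stub for the Google Search API step.
    return [f"result for '{q}'" for q in queries]

def reflect(results, min_needed=3):
    # A second Gemini call would analyze results for knowledge gaps here;
    # this stub declares the research sufficient once enough results exist.
    sufficient = len(results) >= min_needed
    follow_ups = [] if sufficient else ["follow-up query"]
    return sufficient, follow_ups

def research(question, max_loops=3):
    results = []
    queries = generate_queries(question)
    for _ in range(max_loops):
        results.extend(web_search(queries))
        done, queries = reflect(results)
        if done:
            break
    # Final node: synthesize an answer with citations from the gathered results.
    return {"answer": f"Synthesized from {len(results)} sources",
            "sources": results}

report = research("quantum error correction")
print(report["answer"])
```

In the real backend this loop is expressed as a LangGraph state graph, with the reflection step acting as a conditional edge that either routes back to web research or forward to answer synthesis.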
Quick Start & Requirements
cd backend && pip install .
cd frontend && npm install
make dev
(runs both the frontend and backend)
Maintenance & Community
The project is a Google-maintained quickstart example. Further deployment details and LangGraph specifics can be found in the LangGraph Documentation.
Limitations & Caveats
Production deployment requires Redis for pub-sub and PostgreSQL for state persistence and task queue management. The provided Docker Compose example also requires a LangSmith API key.
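A production launch along these lines might look like the following. This is a hedged sketch under the assumption that the repo's Docker Compose file reads these environment variables and provisions Redis and PostgreSQL as services; variable names and flags should be checked against the actual compose file.

```shell
# Hypothetical production launch (variable names are illustrative;
# verify against the repository's docker-compose.yml):
GEMINI_API_KEY="<your-gemini-key>" \
LANGSMITH_API_KEY="<your-langsmith-key>" \
docker compose up --build
# Redis (pub-sub) and PostgreSQL (state persistence, task queue)
# would run as services defined in the compose file.
```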