claude-deep-research-skill by 199-biotechnologies

Enterprise AI research engine

Created 5 months ago
541 stars

Top 58.6% on SourcePulse

Project Summary

This project provides an enterprise-grade deep research skill for Claude Code, automating the generation of citation-backed reports. It features source credibility scoring, multi-provider search, and automated validation, and claims superior quality and verification relative to competing offerings from OpenAI and Gemini.

How It Works

An 8-phase pipeline (Plan → Retrieve → Triangulate → Outline Refinement → Synthesize → Critique → Refine → Package) automates research. It employs parallel search with agents, adaptive quality thresholds ("First Finish Search"), and a critique loop-back mechanism. Multi-persona red teaming enhances rigor in deeper modes. Reports are generated progressively with disk-persisted citations, outputting Markdown, HTML, and PDF.
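The phase flow and critique loop-back described above can be sketched as a simple state machine. The phase names come from the README; everything else here (function names, loop-back target, parameters) is an illustrative assumption, not the skill's actual implementation:

```python
# Illustrative sketch of the 8-phase pipeline with a critique loop-back.
# Phase names are from the README; the control flow is hypothetical.
PHASES = [
    "Plan", "Retrieve", "Triangulate", "Outline Refinement",
    "Synthesize", "Critique", "Refine", "Package",
]

def run_pipeline(critique_passes, max_loopbacks=2):
    """Execute phases in order; a failed Critique loops back to Synthesize."""
    executed, i, loopbacks = [], 0, 0
    while i < len(PHASES):
        phase = PHASES[i]
        executed.append(phase)
        if (phase == "Critique"
                and not critique_passes(loopbacks)
                and loopbacks < max_loopbacks):
            loopbacks += 1
            i = PHASES.index("Synthesize")  # re-synthesize, then re-critique
            continue
        i += 1
    return executed
```

With a critique that only passes on the second attempt, the trace visits Synthesize and Critique twice before proceeding to Refine and Package; a bounded loop-back count keeps the pipeline from cycling indefinitely.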

Quick Start & Requirements

Installation involves cloning the repository into the Claude Code skills directory: git clone https://github.com/199-biotechnologies/claude-deep-research-skill.git ~/.claude/skills/deep-research. Basic usage requires no additional dependencies. Optionally, search-cli adds aggregated search across multiple providers (Brave, Serper, Exa, Jina, Firecrawl); install it via brew tap 199-biotechnologies/tap && brew install search-cli and configure the relevant API keys.
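The two install commands from the README, shown together for convenience (the API keys required by search-cli are provider-specific and not covered here):

```shell
# Install the skill into Claude Code's skills directory
git clone https://github.com/199-biotechnologies/claude-deep-research-skill.git \
  ~/.claude/skills/deep-research

# Optional: aggregated multi-provider search (requires per-provider API keys)
brew tap 199-biotechnologies/tap && brew install search-cli
```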

Highlighted Details

  • Multi-Mode Research: Offers Quick (3 phases, 2-5 min), Standard (6 phases, 5-10 min), Deep (8 phases, 10-20 min), and UltraDeep (8+ phases, 20-45 min) research modes.
  • Advanced Retrieval: Utilizes parallel search (5-10 concurrent searches + agents) with adaptive quality thresholds.
  • Iterative Refinement: Features a critique loop-back mechanism and multi-persona red teaming for robust analysis.
  • Automated Validation: Includes validate_report.py (9 checks) and verify_citations.py (DOI/URL/hallucination detection) with a retry loop.
  • Comprehensive Output: Generates Markdown, auto-opened McKinsey-style HTML, and PDF reports. Handles long reports via recursive agent spawning.
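The validate-then-retry behavior in the Automated Validation bullet can be sketched generically. validate_report.py and verify_citations.py are real script names from the README, but the wrapper below is a hypothetical illustration of a retry loop, not the project's code:

```python
def validate_with_retries(run_checks, repair, max_retries=3):
    """Run a validation suite; on failure, attempt a repair pass and re-check.

    run_checks() returns a list of failure descriptions (empty means pass);
    repair(failures) attempts to fix the report. Both are caller-supplied
    stubs standing in for scripts like validate_report.py / verify_citations.py.
    Returns the number of repair passes that were needed.
    """
    failures = []
    for attempt in range(max_retries + 1):
        failures = run_checks()
        if not failures:
            return attempt
        if attempt < max_retries:
            repair(failures)
    raise RuntimeError(
        f"validation still failing after {max_retries} retries: {failures}"
    )
```

For example, a report with two fixable citation problems passes after two repair passes; a report that never converges raises an error instead of looping forever.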

Maintenance & Community

The project shows recent activity with version 2.3.1 released on March 19, 2026. No specific details regarding maintainers, community channels (e.g., Discord, Slack), or roadmaps are provided in the README.

Licensing & Compatibility

The project is released under the MIT license, allowing for broad use, modification, and distribution, including within commercial and closed-source applications.

Limitations & Caveats

Effectiveness depends on the quality and availability of the underlying search providers and on agent performance. Advanced search requires external API key setup. The README makes performance claims against competitors but provides no independent benchmarks. Deeper research modes demand significant time investment (up to 45 minutes).

Health Check

  • Last Commit: 2 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 193 stars in the last 30 days
