consult7 by szeider

AI agents consult large context LLMs for extensive file analysis

Created 4 months ago
256 stars

Top 98.6% on SourcePulse

Project Summary

Consult7 is an MCP server designed to bridge the gap between AI agents with limited context windows and the need to analyze large codebases or document repositories. It allows agents to leverage models with massive context capabilities, enabling comprehensive analysis of extensive file collections that would otherwise be impossible to process in a single query. This tool is particularly beneficial for developers and researchers working with large datasets or complex code structures who need AI assistance for tasks like summarization, code review, or gap analysis.

How It Works

Consult7 operates by collecting specified files (with wildcard support in filenames) from provided absolute paths. It assembles these files into a unified context and submits them, along with the user's query, to a large-context-window language model, then returns the result to the AI agent. It supports multiple LLM providers, including OpenRouter, OpenAI, and Google, and can utilize models with context windows exceeding 1 million tokens.
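The collect-and-assemble flow described above can be sketched as follows. This is an illustrative minimal example, not consult7's actual implementation; the function name and context format are assumptions:

```python
from pathlib import Path

def assemble_context(directory: str, pattern: str) -> str:
    """Collect files matching a filename wildcard and concatenate them
    into a single context string, each file prefixed with its path."""
    parts = []
    for path in sorted(Path(directory).glob(pattern)):
        parts.append(f"--- {path} ---\n{path.read_text()}")
    return "\n\n".join(parts)

# The assembled context would then be submitted together with the
# user's query to a large-context model and the answer returned
# to the calling agent.
```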

Quick Start & Requirements

Consult7 is run via the uvx command-line tool, which automatically downloads and executes it in an isolated environment, eliminating the need for direct installation. The primary command is uvx consult7 <provider> <api-key>, where <provider> can be openrouter, google, or openai, and <api-key> is the user's respective API key. For users of Claude Code, it can be added via claude mcp add -s user consult7 uvx -- consult7 <provider> <api-key>. Prerequisites include an API key for the chosen LLM provider.
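The commands above, written out (the API key and provider choice are placeholders):

```shell
# Run the server directly; uvx downloads and executes it in an
# isolated environment, so no installation step is needed.
uvx consult7 openrouter your-api-key

# Or register it with Claude Code:
claude mcp add -s user consult7 uvx -- consult7 openrouter your-api-key
```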

Highlighted Details

  • Supports large context window models (e.g., 1M+ tokens) for extensive analysis.
  • Enables "thinking" or deep reasoning modes for models, enhancing analytical depth.
  • Allows output to be saved directly to a file using the output_file parameter, preventing context flooding.
  • Defines specific file path rules, including support for wildcards in filenames and automatic exclusion of common development directories like __pycache__ and node_modules.
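Taken together, the rules above might look like this in a hypothetical tool call. Only `output_file` is named in the README; the `path`, `pattern`, and `query` parameter names are illustrative assumptions:

```json
{
  "path": "/home/user/project/src",
  "pattern": "*.py",
  "query": "Summarize the error-handling strategy across these modules",
  "output_file": "/home/user/project/analysis.md"
}
```

Note that `path` must be absolute, the wildcard applies only to the filename (not directory segments), and writing to `output_file` keeps the long answer out of the agent's context.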

Maintenance & Community

No specific details regarding maintainers, community channels (like Discord/Slack), or roadmap were provided in the README.

Licensing & Compatibility

The README does not explicitly state the software license.

Limitations & Caveats

File paths must be absolute, and wildcards are restricted to filenames only, not directory paths. There are file size limits: 1MB per file and 4MB total, optimized for models with ~1M token context windows. The "thinking" mode for OpenAI models is an informational marker rather than a direct control.

Health Check

  • Last Commit: 2 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 8 stars in the last 30 days

