Opencode-DCP: Dynamic context pruning for AI assistants
This plugin addresses the challenge of escalating token usage and context bloat in conversational AI systems like OpenCode. It intelligently prunes conversation history, specifically targeting obsolete tool outputs, to optimize token consumption and improve performance. The target audience is OpenCode users seeking to manage costs and enhance the efficiency of their AI interactions, particularly in long-running sessions.
How It Works
The plugin employs a multi-pronged approach combining explicit AI-driven tools and automatic background strategies. It exposes discard and extract tools, allowing the AI to actively remove completed or noisy tool content or distill valuable information into concise summaries before pruning. Automatic strategies include deduplication to retain only the most recent output of repeated tool calls (e.g., file reads), supersedeWrites to prune write operations for files that are subsequently read, and purgeErrors to remove inputs of tools that have consistently failed after a set number of turns. These strategies operate automatically on every request with zero direct LLM cost.
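As a rough sketch of what per-strategy configuration might look like in dcp.jsonc (the option names below are hypothetical illustrations, not taken from the README):

```jsonc
{
  // Hypothetical keys for illustration only; check the plugin's README for real option names.
  "strategies": {
    "deduplication": true,    // keep only the most recent output of repeated tool calls
    "supersedeWrites": true,  // prune write operations for files that are subsequently read
    "purgeErrors": {
      "enabled": true,
      "turns": 3              // assumed threshold: prune failed tool inputs after this many turns
    }
  }
}
```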
Quick Start & Requirements
Installation involves adding "@tarquinen/opencode-dcp@latest" to the plugin array in your opencode.jsonc configuration file. If using OAuth plugins, this plugin should be listed last. After configuration, simply restart OpenCode. The plugin automatically manages context pruning. Configuration can be managed via ~/.config/opencode/dcp.jsonc, $OPENCODE_CONFIG_DIR/dcp.jsonc, or project-specific .opencode/dcp.jsonc files, with settings merging hierarchically.
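For example, the opencode.jsonc entry could look like the following (surrounding keys omitted; the package name is the one given above):

```jsonc
{
  "plugin": [
    // Other plugins go first; if you use OAuth plugins, keep this entry last.
    "@tarquinen/opencode-dcp@latest"
  ]
}
```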
Highlighted Details
Automatic strategies (deduplication, purgeErrors) run on every request with zero LLM cost.
Maintenance & Community
No specific details regarding maintainers, community channels (like Discord/Slack), or roadmap were present in the provided README.
Licensing & Compatibility
The project is released under the MIT license, which generally permits commercial use and integration into closed-source projects.
Limitations & Caveats
The primary limitation is the potential invalidation of LLM prompt cache prefixes due to message content changes resulting from pruning. This can lead to increased cache misses, though the plugin's design aims for overall token savings and performance gains to compensate.