research-companion by andrehuang

AI agents for strategic research ideation and validation

Created 1 week ago

438 stars

Top 68.0% on SourcePulse

Project Summary

andrehuang/research-companion provides strategic research-thinking agents for Claude Code and Codex, helping researchers evaluate ideas, triage projects, and structure brainstorming sessions. It aims to surface high-impact research opportunities, acting as a critical AI colleague so that time is invested wisely. The project targets researchers and power users who want to sharpen their research strategy and decision-making.

How It Works

The project employs three core agents: the Idea Critic, which stress-tests research concepts across seven dimensions (novelty, impact, timing, feasibility, competitive landscape, the nugget, and narrative potential) to provide a Pursue/Refine/Kill verdict; the Research Strategist, for project-level triage, comparative advantage mapping, and impact forecasting; and the Brainstormer, focused on generating novel cross-field connections and challenging flawed assumptions. A primary /research-companion skill orchestrates these agents through a structured six-phase ideation session, incorporating Nicholas Carlini's "conclusion-first test" to ensure ideas have a clear, compelling outcome before significant investment.
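As an illustration of the Idea Critic's shape, the sketch below maps scores on the seven dimensions named above to a Pursue/Refine/Kill verdict. This is not the plugin's actual logic: the 1-5 scale, the thresholds, and the "fatal weakness" rule are hypothetical assumptions added for clarity.

```python
# Illustrative sketch only: how a seven-dimension evaluation might
# collapse into the Pursue / Refine / Kill verdict described above.
# The dimension names come from the project; the scale and thresholds
# are assumptions, not the plugin's real rubric.

DIMENSIONS = [
    "novelty", "impact", "timing", "feasibility",
    "competitive_landscape", "nugget", "narrative_potential",
]

def verdict(scores: dict) -> str:
    """Return a verdict from per-dimension scores on a 1-5 scale."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    # A single fatal weakness kills the idea regardless of the average.
    if any(scores[d] <= 1 for d in DIMENSIONS):
        return "Kill"
    avg = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    if avg >= 4:
        return "Pursue"
    return "Refine" if avg >= 2.5 else "Kill"
```

A strong idea with one fatal flaw (e.g. feasibility scored 1) is killed outright, mirroring the tool's stated goal of killing unpromising ideas early.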

Quick Start & Requirements

  • Claude Code:
    1. claude plugin marketplace add https://github.com/andrehuang/research-companion
    2. claude plugin install research-companion@andrehuang-research-companion
  • Codex: Refer to docs/codex-installation.md for detailed instructions, including repo-scoped and local marketplace installation.
  • Prerequisites: Requires a compatible Claude Code or Codex environment. Specific hardware or software dependencies beyond the AI environment are not detailed in the README.

Highlighted Details

  • Agent-Driven Evaluation: Utilizes Idea Critic, Research Strategist, and Brainstormer agents for distinct strategic tasks.
  • Seven Evaluation Dimensions: Ideas are rigorously assessed on novelty, impact, timing, feasibility, competitive landscape, the nugget, and narrative potential.
  • Eight Research Principles: Guides agent evaluations with principles focused on Problem Selection, Execution Strategy, and Strategic Positioning.
  • Persistent Evaluations: Evaluation results are saved to disk (research-evaluations/) to accumulate insights across sessions.
  • Codex Integration: Includes native plugin manifest and interface metadata for seamless integration with Codex.
  • Conclusion-First Test: A core methodology requiring users to articulate a compelling conclusion upfront.
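The persistent-evaluation feature can be pictured with the sketch below, which appends each evaluation as a JSON file under a research-evaluations/ directory and reads them all back. The directory name comes from the project; the file naming and record schema are assumptions, not the plugin's actual format.

```python
# Illustrative sketch: persisting evaluations to disk so insights
# accumulate across sessions. The record schema and file naming are
# hypothetical; only the research-evaluations/ directory name is
# taken from the project README.
import json
import time
from pathlib import Path

def save_evaluation(idea: str, verdict: str, notes: str,
                    root: str = "research-evaluations") -> Path:
    """Write one evaluation record as a JSON file and return its path."""
    out_dir = Path(root)
    out_dir.mkdir(parents=True, exist_ok=True)
    record = {
        "idea": idea,
        "verdict": verdict,
        "notes": notes,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    path = out_dir / f"{int(time.time() * 1000)}.json"
    path.write_text(json.dumps(record, indent=2))
    return path

def load_evaluations(root: str = "research-evaluations") -> list:
    """Read all saved evaluation records back, oldest first."""
    return [json.loads(p.read_text())
            for p in sorted(Path(root).glob("*.json"))]
```

Writing one small file per evaluation keeps past sessions immutable and makes accumulated verdicts easy to grep or review later.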

Maintenance & Community

The provided README does not detail specific contributors, community channels (e.g., Discord, Slack), or roadmap information.

Licensing & Compatibility

The project is released under the MIT License, which is generally permissive for commercial use and integration into closed-source projects.

Limitations & Caveats

The agents are intentionally "opinionated and direct," prioritizing honest feedback over user comfort. The tool focuses on strategic decision-making regarding what research to pursue, rather than assisting with the technical how-to of writing papers. Its bluntness may not suit all users, and it is designed to help kill unpromising ideas early.

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 2
  • Issues (30d): 1
  • Star History: 440 stars in the last 8 days
