AI-research-feedback by claesbackman

AI-powered academic research review suite

Created 1 month ago
263 stars

Top 96.8% on SourcePulse

Project Summary

A collection of Claude Code skills that automate academic research review. Aimed at researchers and academics, it provides AI-powered pre-submission feedback on papers, pre-analysis plans (PAPs), and grant proposals, with the goal of improving quality and adherence to journal and funder standards.

How It Works

The project employs specialized Claude Code "skills," each driven by multiple AI agents simulating distinct reviewer roles (e.g., grammar, statistical rigor, contribution assessment). These skills process research documents (LaTeX papers, PAPs, grant proposals) and analysis code to identify issues and generate structured, constructive reports. This multi-agent, document-aware approach enables rigorous, journal- or funder-specific evaluations.

Quick Start & Requirements

  • Installation: Global install via curl to ~/.claude/commands/ or project-local install into .claude/commands/. Example: curl -o ~/.claude/commands/review-paper.md https://raw.githubusercontent.com/claesbackman/AI-research-feedback/main/Paper-review/review-paper.md.
  • Prerequisites: Requires Claude Code with general-purpose subagent access. Input documents include LaTeX papers (.tex), PAPs (.md, .txt, .tex, .pdf, .docx), grant proposals (.md, .txt, .tex, .pdf, .docx), and optionally Stata, R, or Python analysis code for review-paper-code.
  • Setup: Integrates into an existing Claude Code environment; the excerpt does not detail setup time or resource requirements.
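The two install modes above can be sketched as follows. Only review-paper is shown; the other skills likely follow the same pattern with their own file names (an assumption, since the excerpt gives just this one URL).

```shell
# Sketch of the two install modes for the review-paper skill.
SKILL_URL="https://raw.githubusercontent.com/claesbackman/AI-research-feedback/main/Paper-review/review-paper.md"

# Global install: the skill becomes available in every Claude Code session.
mkdir -p ~/.claude/commands
curl -fsSL -o ~/.claude/commands/review-paper.md "$SKILL_URL"

# Project-local install: the skill ships with a single repository
# (run from the project root, alongside your paper and code).
mkdir -p .claude/commands
curl -fsSL -o .claude/commands/review-paper.md "$SKILL_URL"
```

Project-local installs can be committed to version control so co-authors get the same review setup.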

Highlighted Details

  • review-paper: Simulates journal-specific referee reports with six agents (grammar, consistency, claims, math, figures/tables, contribution). Supports top economics, finance, and macro journals.
  • review-paper-light: Faster, two-agent check for contribution, identification, causal overclaiming, and unsupported claims.
  • review-paper-code: Analyzes reproducibility and code quality by mapping LaTeX paper claims to Stata, R, or Python analysis code.
  • review-pap: Evaluates pre-analysis plans for writing, identification strategy, statistical analysis, and fit to registries/journals.
  • review-grant: Simulates grant proposal review panels for clarity, significance, feasibility, budget, and funder fit (e.g., NSF, NIH).
  • Customization: Allows editing journal/funder lists, adding project context via CLAUDE.md, or adjusting path discovery.
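The CLAUDE.md hook mentioned above is a free-form context file that Claude Code reads at session start. A minimal sketch, where the paper title, journal, and file paths are hypothetical examples rather than a schema from the repo:

```markdown
<!-- CLAUDE.md (project root) — context picked up by Claude Code skills.
     Contents are free-form; everything below is illustrative. -->
## Project context
- Paper: main.tex (difference-in-differences design)
- Target journal: Journal of Public Economics
- Analysis code: code/analysis.do (Stata)
- Note for reviewers: the appendix tables are generated by code/tables.do
```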

Maintenance & Community

Developed by Claes Bäckman. No specific community links or detailed maintenance information (contributors, roadmap) are provided in the excerpt.

Licensing & Compatibility

  • License: MIT License.
  • Compatibility: The permissive MIT license allows commercial use and closed-source linking.

Limitations & Caveats

  • Functionality depends on Claude Code and its underlying AI models.
  • .pdf/.docx inputs for PAPs and grant proposals may not always be fully readable.
  • review-paper-code review depth is configurable ('main' vs. 'full'), which affects how much of the codebase is checked.
  • Auto-detection of main files may require manual path specification for non-standard project structures.

Health Check

  • Last Commit: 1 week ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 1
  • Issues (30d): 0
  • Star History: 84 stars in the last 30 days
