claude-code-my-workflow by pedrohcgs

Agentic system for academic content creation and research

Created 1 month ago
615 stars

Top 53.5% on SourcePulse

Project Summary

This repository provides a ready-to-fork Claude Code template designed for academics, streamlining workflows involving LaTeX/Beamer and R/Quarto. It automates complex tasks such as lecture slide creation, R script development, and Beamer-to-Quarto conversions through a multi-agent system, offering a structured, rigorous, and efficient approach to academic content production and replication.

How It Works

The core of the workflow is "Contractor Mode," where Claude Code autonomously plans, implements, reviews, and verifies tasks. This is facilitated by specialized agents, each focusing on a specific quality dimension like grammar, visual layout, R code correctness, or domain-specific accuracy. The system employs adversarial QA, using critic and fixer agents in a loop to catch subtle errors, and enforces quality gates with scoring thresholds (80/90/95) to ensure outputs meet predefined standards before committing or creating pull requests. Context survival mechanisms, including hooks and persistent memory files, ensure continuity and learning across sessions.
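The critic/fixer loop with a scoring gate described above can be sketched as follows. This is an illustrative sketch only, not the repository's actual implementation: the `critic` and `fixer` callables, the attempt cap, and the function name are assumptions standing in for whatever agents the template actually wires up.

```python
# Illustrative sketch of an adversarial QA loop with a quality gate.
# The critic scores a draft 0-100 and lists issues; the fixer revises.
# Names and the round cap are hypothetical, not taken from the repo.
from typing import Callable, List, Tuple

def quality_gate_loop(
    draft: str,
    critic: Callable[[str], Tuple[int, List[str]]],  # returns (score, issues)
    fixer: Callable[[str, List[str]], str],          # returns a revised draft
    threshold: int = 90,
    max_rounds: int = 5,
) -> Tuple[str, int]:
    """Run critic/fixer rounds until the draft clears the gate."""
    for _ in range(max_rounds):
        score, issues = critic(draft)
        if score >= threshold:
            return draft, score          # gate passed: safe to commit or open a PR
        draft = fixer(draft, issues)     # adversarial pass: address flagged issues
    score, _ = critic(draft)
    return draft, score                  # best effort after max_rounds
```

In this framing, the 80/90/95 gates (or the 60/100 fast-track threshold) would simply be different values of `threshold`.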

Quick Start & Requirements

  • Primary Install: npm install -g @anthropic-ai/claude-code (Claude Code is the only hard requirement).
  • Non-default Prerequisites: XeLaTeX (via TeX Live or MacTeX), Quarto, R, pdf2svg (macOS), and the gh CLI (macOS) are recommended for specific features.
  • Setup: A "Quick Start (5 minutes)" is described, involving forking the repo, cloning it, and then using a specific prompt with Claude Code to configure the project.
  • Links:
    • Live site: psantanna.com/claude-code-my-workflow
    • Quarto Docs: quarto.org/docs/get-started
    • R Project: r-project.org

Highlighted Details

  • Features 10 specialized agents (e.g., proofreader, slide-auditor, r-reviewer, domain-reviewer) and 21 skills (e.g., /create-lecture, /translate-to-quarto, /data-analysis).
  • Implements adversarial QA via critic/fixer loops and quality gates for automated review and scoring.
  • Supports context survival across sessions using hooks and a MEMORY.md file for accumulated learning.
  • Includes a research workflow with an explorations/ folder and a fast-track option using a lower quality threshold (60/100).
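As a rough illustration of the MEMORY.md pattern mentioned above, the sketch below appends dated lessons to a memory file and reloads them at the start of the next session. The bullet format and function names are assumptions for illustration; the template's actual hooks and file layout may differ.

```python
# Illustrative sketch of persisting lessons across sessions in MEMORY.md.
# The bullet format and function names are assumptions, not the repo's.
from datetime import date
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")

def record_lesson(lesson: str, memory: Path = MEMORY_FILE) -> None:
    """Append a dated bullet so the next session can pick it up."""
    if not memory.exists():
        memory.write_text("# MEMORY\n\n", encoding="utf-8")
    with memory.open("a", encoding="utf-8") as f:
        f.write(f"- {date.today().isoformat()}: {lesson}\n")

def load_lessons(memory: Path = MEMORY_FILE) -> list:
    """Read accumulated lessons back, e.g. to prepend to a session prompt."""
    if not memory.exists():
        return []
    return [line[2:].strip()
            for line in memory.read_text(encoding="utf-8").splitlines()
            if line.startswith("- ")]
```

A session-start hook would call something like `load_lessons()` and inject the result into the working context, which is how accumulated learning survives a fresh session.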

Maintenance & Community

This project is described as a "work in progress" and a personal setup shared with colleagues, indicating ongoing development. It was extracted from a PhD course by Pedro Sant'Anna. No specific community channels (like Discord or Slack) are listed, but the project is hosted on GitHub.

Licensing & Compatibility

The project is released under the MIT License, which permits free use for teaching, research, and academic purposes; commercial use is likewise permitted under the standard MIT terms.

Limitations & Caveats

The repository is explicitly stated as a "work in progress" and not a polished guide for general use, requiring customization for specific academic domains (e.g., configuring the domain-reviewer and knowledge-base-template). The effectiveness of the workflow is dependent on the Claude Code model's capabilities and the quality of user prompts.

Health Check

  • Last commit: 2 weeks ago
  • Responsiveness: Inactive
  • Pull requests (30d): 15
  • Issues (30d): 3
  • Star history: 418 stars in the last 30 days
