evo by evo-hq

Code optimization via autonomous experimentation

Created 2 weeks ago

504 stars

Top 61.8% on SourcePulse

Project Summary

Evo is an open-source plugin for Claude Code and Codex that automates code optimization. It turns a codebase into an autonomous research loop: discovering relevant metrics, instrumenting benchmarks, and iteratively improving performance through parallel experimentation. It is aimed at developers seeking automated performance tuning.

How It Works

Evo uses tree search rather than greedy hill-climbing, enabling broader exploration of the optimization space. It spawns multiple parallel subagents, each operating in its own git worktree, to test hypotheses and iterate on code improvements concurrently. A shared state mechanism ensures all agents benefit from collective learnings, while a gating system can automatically discard experiments that fail predefined regression tests or safety checks. This parallel approach accelerates the discovery of effective optimizations.
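The worktree-per-subagent pattern described above can be sketched with plain git. This is an illustration only, not Evo's actual orchestration code; the repository and branch names are hypothetical:

```shell
#!/bin/sh
# Sketch of the isolation pattern: each subagent gets its own git worktree
# on its own branch, so experiments can modify code concurrently without
# interfering with one another. Names below are hypothetical.
set -e

mkdir -p /tmp/evo-demo && cd /tmp/evo-demo
rm -rf repo exp-1 exp-2 exp-3
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
echo 'print("baseline")' > main.py
git add main.py && git commit -qm baseline

# One worktree + branch per experiment; each is a full checkout that
# shares the same object store as the main repository.
for i in 1 2 3; do
  git worktree add -q ../exp-$i -b experiment-$i
done

git worktree list
```

Results from each branch can then be compared and merged back, which is the kind of bookkeeping Evo automates on top of this primitive.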

Quick Start & Requirements

  • Prerequisites: Python 3.12+, git, and uv package manager.
  • Installation:
    • Claude Code: Add plugin via marketplace: /plugin install evo-hq/evo.
    • Codex: Install CLI globally (uv tool install evo-hq-cli or pipx install evo-hq-cli), then add plugin via marketplace (codex marketplace add evo-hq/evo). Requires Codex 0.121.0-alpha.2+.
  • Usage: Initiate discovery with /evo:discover or $evo discover. Launch optimization with /evo:optimize or $evo optimize, configurable via subagents, budget, and stall parameters.
  • Dashboard: An integrated dashboard automatically launches during evo:discover or evo init, providing live monitoring via a local URL (e.g., http://127.0.0.1:8080).
  • Developer Install: Clone the repository and use uv run --project /path/to/evo evo status.

Highlighted Details

  • Automated benchmark discovery and instrumentation.
  • Tree search exploration strategy for optimization.
  • Parallel execution of multiple semi-autonomous subagents.
  • Shared state and gating for robust, collaborative experimentation.
  • Live dashboard for experiment monitoring.

Maintenance & Community

No specific details regarding maintainers, community channels (e.g., Discord, Slack), or a public roadmap are provided in the README.

Licensing & Compatibility

  • License: Apache License 2.0.
  • Compatibility: Apache 2.0 is permissive, generally allowing commercial use and integration within closed-source projects.

Limitations & Caveats

Distributed evaluation via Harbor is a listed TODO, implying that benchmarks currently run locally. Effectiveness depends on the underlying LLM's ability to generate relevant hypotheses and code modifications.

Health Check

  • Last Commit: 18 hours ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 6
  • Issues (30d): 9
  • Star History: 504 stars in the last 16 days

Explore Similar Projects

Starred by Zhiqiang Xie (coauthor of SGLang), Eric Zhu (coauthor of AutoGen; Research Scientist at Microsoft Research), and 3 more.

Trace by microsoft

0%
730
AutoDiff-like tool for end-to-end AI agent training with general feedback
Created 1 year ago
Updated 4 months ago
Starred by Li Jiang (coauthor of AutoGen; Engineer at Microsoft) and Joe Walnes (Head of Experimental Projects at Stripe).

autoresearch by uditgoenka

4.9%
4k
Autonomous iteration engine for Claude Code
Created 1 month ago
Updated 6 days ago
Starred by Shizhe Diao (author of LMFlow; Research Scientist at NVIDIA), Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), and 6 more.

openevolve by algorithmicsuperintelligence

1.6%
6k
Coding agent for scientific/algorithmic discovery, based on AlphaEvolve paper
Created 11 months ago
Updated 1 month ago