HGM by metauto-ai

Self-improving coding agents for human-level development

Created 1 month ago
303 stars

Top 88.2% on SourcePulse

Project Summary

Huxley-Gödel Machine (HGM) is an open-source project developing coding agents that approximate a theoretically optimal self-improving machine. It targets AI researchers and developers who want to build autonomous, evolving coding systems, offering a practical approach to self-modifying AI.

How It Works

HGM implements self-improving coding agents that iteratively rewrite their own code. The core mechanism involves estimating the "promise" of entire subtrees (clades) of potential modifications. This allows the agents to intelligently decide which self-improvement pathways to explore and expand, making the abstract Gödel Machine concept actionable.
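The clade-based selection described above can be illustrated with a small sketch. This is not HGM's actual estimator or data model; the class and function names (`AgentNode`, `clade_promise`, `select_node_to_expand`) and the mean-score heuristic are assumptions chosen purely for illustration.

```python
class AgentNode:
    """A node in the tree of self-modifications (one agent version).

    Hypothetical stand-in for HGM's internal representation.
    """
    def __init__(self, score, parent=None):
        self.score = score          # benchmark score of this agent version
        self.children = []
        if parent is not None:
            parent.children.append(self)

def clade_scores(node):
    """Collect the scores of a node and all its descendants (its clade)."""
    scores = [node.score]
    for child in node.children:
        scores.extend(clade_scores(child))
    return scores

def clade_promise(node):
    """Estimate a clade's promise as the mean score of its members.

    A simple placeholder for whatever estimator HGM actually uses.
    """
    scores = clade_scores(node)
    return sum(scores) / len(scores)

def select_node_to_expand(root):
    """Pick the node whose clade currently looks most promising to modify."""
    def all_nodes(node):
        yield node
        for child in node.children:
            yield from all_nodes(child)
    return max(all_nodes(root), key=clade_promise)
```

Under this toy heuristic, a subtree containing several strong descendants outranks a lone high-scoring node, which is the intuition behind expanding clades rather than individual modifications.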

Quick Start & Requirements

Setup involves the following steps:

  • Configure an OpenAI API key (OPENAI_API_KEY in ~/.bashrc) and verify that Docker is functional.
  • Manage dependencies via Conda: conda create -n hgm, then pip install -r requirements.txt.
  • Clone SWE-bench, check out commit dc4c087c2b9e4cefebf2e3d201d27e36, and install it with pip install -e ..
  • Prepare the Polyglot dataset with python -m polyglot.prepare_polyglot_dataset (requires configured Git).
  • Launch via ./run.sh.
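The steps above might look roughly like the following shell session. This is a sketch, not the project's documented install script: the SWE-bench repository URL and the Python version are assumptions, and the API key value is a placeholder.

```shell
# Persist the API key (placeholder value); also add this line to ~/.bashrc.
export OPENAI_API_KEY="sk-..."

# Conda environment; Python version is an assumption.
conda create -n hgm python=3.11 -y
conda activate hgm
pip install -r requirements.txt

# SWE-bench at the specific commit the project expects
# (repository URL assumed).
git clone https://github.com/princeton-nlp/SWE-bench.git
cd SWE-bench
git checkout dc4c087c2b9e4cefebf2e3d201d27e36
pip install -e .
cd ..

# Polyglot dataset preparation (requires configured Git).
python -m polyglot.prepare_polyglot_dataset

# Run the agent.
./run.sh
```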

Highlighted Details

  • Implements an approximation of the theoretical Gödel Machine for AI self-improvement.
  • Features coding agents capable of iterative self-rewriting.
  • Utilizes clade-based promise estimation to guide self-modification decisions.
  • Builds upon the Darwin-Gödel Machine codebase.
  • Leverages SWE-bench and polyglot-benchmark for evaluation frameworks.

Maintenance & Community

No specific details regarding maintainers, community channels (e.g., Discord, Slack), or project roadmaps were present in the provided README snippet.

Licensing & Compatibility

The licensing terms for this repository were not specified in the provided README content.

Limitations & Caveats

A significant safety consideration is the execution of untrusted, model-generated code. While the default settings aim to mitigate risk, generated code may still behave destructively due to model limitations or alignment issues, and users must acknowledge and accept these risks before use. The setup also involves several specific steps and dependencies, including a particular Git commit for SWE-bench.
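Since the project already relies on Docker, one common way to reduce the blast radius of model-generated code is to tighten the container it runs in. The flags below are a hypothetical hardening sketch, not HGM's own configuration, and the image name and script path are assumptions.

```shell
# Hypothetical hardening for executing a generated candidate script:
# no network, capped resources, read-only root FS, no Linux capabilities.
docker run --rm \
  --network none \
  --memory 1g \
  --cpus 1 \
  --read-only \
  --cap-drop ALL \
  -v "$PWD/workspace:/workspace" \
  python:3.11 python /workspace/candidate.py
```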

Health Check

  • Last commit: 6 days ago
  • Responsiveness: Inactive
  • Pull requests (30d): 0
  • Issues (30d): 1
  • Star history: 94 stars in the last 30 days
