ShinkaEvolve: Automated scientific code discovery and optimization
ShinkaEvolve is a framework that merges Large Language Models (LLMs) with evolutionary algorithms to automate scientific discovery and code optimization. It targets researchers and engineers tackling scientific problems where performance metrics must be improved while preserving code correctness and readability, offering an automated way to explore and enhance scientific code.
How It Works
The core of ShinkaEvolve is an evolutionary process where a population of programs evolves across generations. LLMs function as intelligent mutation operators, proposing code enhancements. This approach, inspired by AI Scientist and Darwin Goedel Machine concepts, leverages LLM creativity and evolutionary search for automated exploration. The framework supports parallel evaluation of candidate solutions and facilitates knowledge transfer between distinct evolutionary "islands" through a shared archive of successful programs.
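To make the loop concrete, the following is a minimal, single-island sketch of the pattern described above, in which an LLM call acts as the mutation operator over a population of programs and successful candidates are kept in a shared archive. All names (Candidate, llm_rewrite, evolve), the stub implementations, and the greedy selection scheme are illustrative assumptions, not ShinkaEvolve's actual API.

```python
import random
from dataclasses import dataclass


@dataclass
class Candidate:
    code: str       # source of the candidate program
    fitness: float  # score returned by the task evaluator


def llm_rewrite(code: str) -> str:
    """Placeholder for an LLM call that proposes an improved program.

    In practice this would prompt a model with the parent code and the
    task description, then return a modified program."""
    return code  # stub: returns the parent unchanged


def evaluate(code: str) -> float:
    """Placeholder for the task-specific scoring and validation step."""
    return random.random()  # stub: random fitness


def evolve(initial_code: str, generations: int = 10, pop_size: int = 8) -> Candidate:
    # Seed the population from the initial solution.
    population = [Candidate(initial_code, evaluate(initial_code))]
    archive = list(population)  # shared archive of successful programs

    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            # Select a parent from the archive (here: greedily, the best so far).
            parent = max(archive, key=lambda c: c.fitness)
            # The LLM acts as the mutation operator.
            child_code = llm_rewrite(parent.code)
            children.append(Candidate(child_code, evaluate(child_code)))
        # Keep the fittest candidates and record them in the archive.
        population = sorted(children + population,
                            key=lambda c: c.fitness, reverse=True)[:pop_size]
        archive.extend(population)

    return max(archive, key=lambda c: c.fitness)
```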
Quick Start & Requirements
To begin, clone the repository, install the `uv` package manager, create a Python 3.11 virtual environment, and install ShinkaEvolve with `uv pip install -e .`. The primary command to launch an experiment is `shinka_launch variant=circle_packing_example`. Essential components include a user-defined `evaluate.py` script for scoring and validation, and an `initial.py` script serving as the starting solution for evolution.
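The summary does not spell out the interface of these two scripts, so the sketch below is only a hypothetical illustration of the division of responsibilities, using the circle-packing example: the function names solve and evaluate and the scoring rule are assumptions, not ShinkaEvolve's actual contract.

```python
# initial.py (hypothetical) -- the seed program that evolution will mutate.
def solve():
    """Return a candidate solution, e.g. circle centres and radii for
    the circle-packing example. Evolution rewrites this function."""
    return [(0.5, 0.5, 0.5)]  # a single circle inscribed in the unit square


# evaluate.py (hypothetical) -- scores and validates a candidate program.
def evaluate(solution) -> float:
    """Check validity (here only that circles stay inside the unit square)
    and return a scalar fitness, such as the sum of radii."""
    for x, y, r in solution:
        if not (r <= x <= 1 - r and r <= y <= 1 - r):
            return 0.0  # invalid: circle leaves the unit square
    return sum(r for _, _, r in solution)


if __name__ == "__main__":
    print(evaluate(solve()))
```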
Highlighted Details
Maintenance & Community
The README mentions related open-source projects (OpenEvolve, LLM4AD) and provides citation details for an arXiv preprint. However, it does not specify community channels (e.g., Discord, Slack), active contributors, or a public roadmap.
Licensing & Compatibility
The README does not specify a software license, nor does it address compatibility for commercial use or integration with closed-source projects.
Limitations & Caveats
The framework's efficacy depends on a robust verifier for the scientific task at hand and on the quality of the LLMs used for code generation. The open-ended nature of the evolution can produce unpredictable results, so careful task definition and monitoring are required.