JoyCode Agent (jd-opensource): LLM-powered agent for automated software repair
Top 98.4% on SourcePulse
JoyCode Agent tackles automated software repair for real-world open-source issues. It provides an LLM-powered pipeline that generates patches, creates and verifies tests, and intelligently retries failed fixes. The system achieves state-of-the-art results on the SWE-bench benchmark, offering developers and researchers a high-performance, cost-efficient way to automate software maintenance tasks.
How It Works
This project employs an end-to-end LLM-driven pipeline for robust code repair. Its core innovation lies in patch-test co-generation, where tests are automatically created alongside patches, enabling comprehensive validation through a closed-loop "Generate → Validate → Refine" cycle. Intelligent failure attribution and targeted retry strategies, powered by a multi-agent architecture (including specialized Testing, Patch, CSR, and Decision agents), allow for precise root cause analysis and optimized repair attempts, mimicking human developer workflows.
Quick Start & Requirements
Installation involves cloning the repository, creating a Conda environment with Python 3.11, and installing dependencies via pip install -r requirements.txt. Key prerequisites include Docker with access to docker.1ms.run and API keys for LLM services (e.g., OpenAI, Anthropic). Users must configure LLM details in llm_server/model_config.json and specify target instances in instance_id.txt. The primary execution command is python run_patch_pipeline.py.
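The setup steps above, condensed into commands. The repository URL, environment name, and instance IDs are placeholders; the configuration files and run command are as described in the README summary.

```shell
# Clone the repository (URL is a placeholder) and enter it.
git clone <repository-url>
cd <repository-dir>

# Create and activate a Conda environment with Python 3.11.
conda create -n joycode python=3.11 -y
conda activate joycode

# Install dependencies.
pip install -r requirements.txt

# Before running, configure:
#   llm_server/model_config.json  — LLM provider details and API keys
#   instance_id.txt               — target instance IDs to repair
# Docker must be running with access to docker.1ms.run.

# Launch the repair pipeline.
python run_patch_pipeline.py
```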
Maintenance & Community
The README does not document community channels (e.g., Discord, Slack), active contributors, sponsorships, or a public roadmap.
Licensing & Compatibility
The project is licensed under the permissive MIT License, allowing for broad compatibility, including commercial use and integration into closed-source projects without copyleft restrictions.
Limitations & Caveats
Successful operation is contingent on correctly configured LLM API access and a functional Docker environment capable of pulling images from docker.1ms.run. Performance is benchmarked on the SWE-bench dataset, and real-world effectiveness may vary. The system's complexity, involving multiple agents and containerized environments, may require significant technical expertise for setup and troubleshooting.