RL for software evolution
This repository provides the official codebase for SWE-RL, a framework for advancing Large Language Model (LLM) reasoning in software engineering tasks by leveraging reinforcement learning on open-source software evolution data. It is designed for researchers and practitioners in AI for software engineering.
How It Works
SWE-RL utilizes a reinforcement learning approach, training LLMs to perform software modifications based on feedback derived from code changes. It employs sequence similarity metrics for reward calculation, comparing predicted edits against oracle changes. The system supports various editing formats, including search/replace and unified diffs, offering flexibility in how code evolution is evaluated.
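As a rough illustration of the reward idea (this is a minimal sketch, not the repository's exact implementation, which lives in src/swerl/core/reward.py and handles edit parsing and error cases not shown here), a sequence-similarity reward over patches can be computed with Python's difflib:

```python
# Illustrative sketch only: score a predicted patch against the oracle patch
# by raw sequence similarity, yielding partial credit for near-miss edits.
import difflib


def similarity_reward(predicted_patch: str, oracle_patch: str) -> float:
    """Reward in [0, 1]: how closely the predicted edit matches the oracle change."""
    return difflib.SequenceMatcher(None, predicted_patch, oracle_patch).ratio()


oracle = "-    return a - b\n+    return a + b\n"
prediction = "-    return a - b\n+    return a + b  # fix sign\n"
print(round(similarity_reward(prediction, oracle), 2))  # close to 1.0 for a near-miss
```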
Quick Start & Requirements
Clone the repository and install it in editable mode:
git clone https://github.com/facebookresearch/swe-rl && cd swe-rl
pip install -e ".[dev]"
Run the test suite with pytest. The prompt templates live in src/swerl/core/prompts.py and the reward functions in src/swerl/core/reward.py.
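If the install succeeded, the package should be importable (assuming it installs under the name swerl, as the src/swerl layout suggests):

```python
# Smoke test for the editable install; the module paths mirror the files
# mentioned above, and the importable package name is an assumption.
import swerl.core.prompts as prompts
import swerl.core.reward as reward

print(prompts.__file__)  # should point into the cloned src/swerl/core/ directory
print(reward.__file__)
```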
Highlighted Details
The reward module exposes calculate_search_replace_reward for search/replace edits, calculate_reward_unidiff for unified diffs, and a general calculate_reward.
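The exact signatures are defined in src/swerl/core/reward.py. As a conceptual sketch of what a search/replace-style reward involves, with hypothetical helper names that are not the repository's API:

```python
# Conceptual sketch only: apply SEARCH/REPLACE blocks from the model output,
# then score the patched file against the oracle file by sequence similarity.
import difflib
import re

SEARCH_REPLACE_RE = re.compile(
    r"<<<<<<< SEARCH\n(.*?)\n=======\n(.*?)\n>>>>>>> REPLACE", re.DOTALL
)


def apply_search_replace(original: str, model_output: str) -> str:
    """Apply every SEARCH/REPLACE block found in the model output."""
    patched = original
    for search, replace in SEARCH_REPLACE_RE.findall(model_output):
        patched = patched.replace(search, replace)
    return patched


def search_replace_reward(original: str, oracle: str, model_output: str) -> float:
    """Score the patched file against the oracle file, in [0, 1]."""
    patched = apply_search_replace(original, model_output)
    return difflib.SequenceMatcher(None, patched, oracle).ratio()


original = "def add(a, b):\n    return a - b\n"
oracle = "def add(a, b):\n    return a + b\n"
model_output = (
    "<<<<<<< SEARCH\n    return a - b\n=======\n    return a + b\n>>>>>>> REPLACE"
)
print(search_replace_reward(original, oracle, model_output))  # 1.0 for a perfect edit
```

The unified-diff variant presumably follows the same pattern, with the model output applied as a diff before comparison.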
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats