peteromallet/desloppify: Agent harness for engineering beautiful codebases
Top 21.0% on SourcePulse
Desloppify is an agent harness that enables AI coding agents to systematically improve codebase quality. It addresses structural code "rot" beyond mechanical defects by identifying and improving abstraction quality, naming, module boundaries, and error handling. It targets developers and AI agents who want highly maintainable code, and provides a quantifiable health score.
How It Works
The system uses dual analysis: "Subjective" LLM evaluation assesses code quality (abstraction, naming, error handling), while "Mechanical" analysis detects common issues (unused imports, dead code, complexity). Findings are prioritized, auto-fixed where possible, and presented for judgment. Desloppify maintains persistent state, uses strict scoring with attestation, and cross-checks assessments to prevent gaming, aiming for a score signifying "beautiful" code.
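To make the dual-analysis idea concrete, here is a minimal sketch of the "mechanical" half: a check that flags unused imports by walking a module's AST. The function name and scoring approach are illustrative assumptions, not desloppify's actual API.

```python
import ast

def find_unused_imports(source: str) -> list[str]:
    """Hypothetical mechanical check: return imported names never referenced.

    This is a simplified illustration of the kind of issue a mechanical
    analysis pass detects; desloppify's real implementation is not shown here.
    """
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import a.b" binds the top-level name "a" unless aliased
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)  # any bare name reference counts as usage
    return sorted(imported - used)

code = "import os\nimport sys\nprint(sys.argv)\n"
print(find_unused_imports(code))  # → ['os']
```

A subjective LLM pass would complement checks like this with judgments that cannot be mechanized, such as whether a name communicates intent or an abstraction carries its weight.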
Quick Start & Requirements
Install with `pip install --upgrade desloppify`. For enhanced coverage, install the extras with `pip install --upgrade "desloppify[full]"`, which adds bandit (Python security checks) and tree-sitter (AST analysis). Integration with an AI coding agent is essential for full functionality. The source can be cloned with `git clone https://github.com/peteromallet/desloppify.git`.
Maintenance & Community
The project encourages community involvement, suggesting users join "vibe engineers" and log issues. The primary development hub is the GitHub repository. Specific details on contributors, sponsorships, or a formal roadmap are absent from the provided README.
Licensing & Compatibility
The provided README does not specify the project's license, so users should verify licensing terms before commercial use or closed-source integration.
Limitations & Caveats
Desloppify is presented as an evolving system, with the goal of defining "good" code quality still under development. Its effectiveness is contingent on the capabilities of the integrated AI coding agent. Importing LLM-generated findings requires strict schema adherence, and invalid findings can cause import failures, potentially introducing friction into the review process.
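The schema-adherence caveat above can be illustrated with a small validator that rejects malformed findings before import. The field names and error format here are invented for illustration and do not reflect desloppify's actual findings schema.

```python
# Hypothetical findings schema: each imported finding must carry these
# fields with these types. The real schema is defined by desloppify.
REQUIRED_FIELDS = {"file": str, "line": int, "category": str, "description": str}

def validate_finding(finding: dict) -> list[str]:
    """Return a list of schema errors; an empty list means the finding imports cleanly."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in finding:
            errors.append(f"missing field: {field}")
        elif not isinstance(finding[field], expected):
            errors.append(f"wrong type for {field}: expected {expected.__name__}")
    return errors

good = {"file": "app.py", "line": 12, "category": "naming", "description": "vague name"}
bad = {"file": "app.py", "line": "12"}  # line is a string, category/description missing
print(validate_finding(good))  # → []
print(validate_finding(bad))
```

Rejecting invalid findings up front, rather than failing mid-import, is one way to reduce the review-process friction the caveat describes.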