LLM-based debugger for Python tests
Leaping is a Python test debugger that helps developers diagnose and fix failing tests. It uses Large Language Models (LLMs) to provide a natural-language debugging interface, letting users ask questions about execution flow and variable state.
How It Works
Leaping traces Python test execution, capturing variable changes and other non-deterministic events. The resulting trace is fed to an LLM (Ollama or GPT-4), which answers natural-language questions about the test's behavior and offers insight into why a test fails and how to correct it.
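The sketch below is a rough illustration of the general technique, not Leaping's actual implementation: it records variable changes with sys.settrace and assembles them into a prompt that could be handed to an LLM. All names here (tracer, buggy_average, the prompt text) are invented for this example.

```python
# Minimal sketch, assuming a trace-based approach: record changed locals per
# executed line, then turn the trace into a natural-language prompt.
import sys

events = []          # recorded (function, line number, changed variables) tuples
_last_locals = {}    # previous snapshot of locals, keyed by frame id


def tracer(frame, event, arg):
    if event == "line":
        fid = id(frame)
        prev = _last_locals.get(fid, {})
        current = dict(frame.f_locals)
        changed = {k: v for k, v in current.items() if prev.get(k) is not v}
        if changed:
            events.append((frame.f_code.co_name, frame.f_lineno, changed))
        _last_locals[fid] = current
    return tracer


def buggy_average(values):
    total = sum(values)
    return total / (len(values) - 1)   # off-by-one bug


sys.settrace(tracer)
try:
    result = buggy_average([2, 4, 6])
finally:
    sys.settrace(None)

# Build a prompt from the trace; in Leaping, context like this would be sent
# to Ollama or GPT-4 along with the user's question.
prompt_lines = ["The test produced this execution trace:"]
for func, lineno, changed in events:
    prompt_lines.append(f"  {func}:{lineno} -> {changed}")
prompt_lines.append("Why does buggy_average([2, 4, 6]) return 6.0 instead of 4.0?")
print("\n".join(prompt_lines))
```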
Quick Start & Requirements
Install with:
pip install leaping
Set the OPENAI_API_KEY environment variable for GPT-4 usage, then run your tests with:
pytest --leaping
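As a concrete (hypothetical) starting point, the test below is the kind of failure you might point Leaping at; the file name, function, and bug are invented for illustration, and only the pytest --leaping invocation comes from the project's instructions.

```python
# test_discount.py -- a deliberately failing test, invented for this example.

def apply_discount(price, percent):
    # Bug: subtracts the raw percent instead of the computed discount amount.
    return price - percent


def test_apply_discount():
    # 10% off 200 should be 180, but the buggy function returns 190.
    assert apply_discount(200, 10) == 180
```

With OPENAI_API_KEY exported (or Ollama available locally), running pytest --leaping against this file would trace the failing test so you could then ask, in plain English, why the assertion failed.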
Highlighted Details
Maintenance & Community
No specific community links or contributor information is provided in the README.
Licensing & Compatibility
The README does not specify a license.
Limitations & Caveats
GPT-4 usage requires an OpenAI API key and incurs API costs. The project appears to be at an early stage, with limited information about stability or support for additional LLMs.
Last updated: 1 year ago. Status: Inactive.