leaping by leapingio

LLM-based debugger for Python tests

Created 1 year ago
262 stars

Top 97.3% on SourcePulse

Project Summary

Leaping is a Python test debugger that helps developers diagnose and fix failing tests efficiently. It leverages Large Language Models (LLMs) for natural-language debugging, letting users ask questions about execution flow and variable state.

How It Works

Leaping traces Python test execution, capturing variable changes and sources of non-determinism along the way. The resulting execution trace is fed to an LLM (Ollama or GPT-4), which answers natural-language questions about the test's behavior, offering insight into why a test failed and how to fix it.
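
The README doesn't show the tracer itself, but as a rough illustration of the general technique (not Leaping's actual implementation), a minimal variable-change tracer built on Python's `sys.settrace` hook might look like the sketch below; all function and variable names are hypothetical:

```python
import sys

def make_tracer(events):
    """Append (function, variable, value) records whenever a local changes."""
    snapshots = {}  # frame id -> last-seen locals

    def tracer(frame, event, arg):
        if event == "call":
            snapshots[id(frame)] = {}
            return tracer  # keep tracing line events inside this frame
        if event == "line":
            prev = snapshots.get(id(frame), {})
            for name, value in frame.f_locals.items():
                if name not in prev or prev[name] is not value:
                    events.append((frame.f_code.co_name, name, repr(value)))
            snapshots[id(frame)] = dict(frame.f_locals)
        return tracer

    return tracer

def buggy_mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)  # off-by-one bug: should divide by len(xs)

events = []
sys.settrace(make_tracer(events))
result = buggy_mean([2, 4, 6])  # returns 6.0 instead of 4.0
sys.settrace(None)

# A trace like this is the kind of context an LLM can answer
# natural-language questions about, e.g. "why is result 6.0, not 4.0?"
for fn, name, value in events:
    print(f"{fn}: {name} = {value}")
```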

Quick Start & Requirements

  • Install via pip install leaping.
  • Set an OPENAI_API_KEY environment variable to use the GPT-4 backend.
  • Run tests with pytest --leaping.
  • Supports both Ollama and GPT-4 models (see the example session below).
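
Put together, a typical session looks like the following (the key value is a placeholder):

```sh
pip install leaping
export OPENAI_API_KEY=<your-key>   # needed for the GPT-4 backend
pytest --leaping
```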

Highlighted Details

  • Natural language debugging queries for test failures.
  • Retroactive inspection of program state.
  • Tracks variable changes and non-deterministic events.
  • Supports both Ollama and GPT-4 LLM backends.

Maintenance & Community

The README provides no community links or contributor information.

Licensing & Compatibility

The README does not specify a license.

Limitations & Caveats

GPT-4 usage requires an API key and incurs API costs. The project appears to be at an early stage, with little information about stability or about support for LLMs beyond Ollama and GPT-4.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 0 stars in the last 30 days
