slopc by shorwood

AI-powered compile-time code generation for Rust

Created 3 weeks ago

303 stars

Top 88.1% on SourcePulse

Project Summary

shorwood/slopc is a Rust procedural macro that automates function body generation at compile time using large language models (LLMs). It targets developers who want to delegate implementation details to AI, aiming to accelerate development by synthesizing code from function signatures and documentation.

How It Works

The #[slop] macro captures function signatures, doc comments, and context from Cargo.toml, feeding this information to a configured LLM API. It then attempts to generate a valid function body. If compilation fails, slopc feeds the rustc errors back to the LLM for iterative refinement and retries generation up to a specified limit. Generated code can be cached to reduce LLM API costs and build times.
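The generate/compile/retry loop described above can be sketched in plain Rust. This is a hypothetical simulation, not slopc's actual internals: `request_body` and `try_compile` are stand-ins for the LLM API call and the rustc invocation, and all names and signatures here are assumptions.

```rust
/// Stand-in for a rustc check of a candidate body: Ok on success, or the
/// error text that would be fed back to the LLM on the next attempt.
/// (A toy heuristic; the real tool invokes the compiler.)
fn try_compile(body: &str) -> Result<(), String> {
    if body.contains("return") {
        Ok(())
    } else {
        Err("error[E0308]: mismatched types".to_string())
    }
}

/// Stand-in for the LLM call: takes the prompt (signature, doc comments,
/// and any previous compiler errors) and returns a candidate body.
fn request_body(prompt: &str) -> String {
    if prompt.contains("error[") {
        "return a + b;".to_string() // "refined" attempt after error feedback
    } else {
        "a + b".to_string() // first attempt (pretend it fails to compile)
    }
}

/// Retry generation up to `max_retries` extra times, appending the rustc
/// errors to the prompt each round, as the summary describes.
fn generate_with_retries(signature: &str, max_retries: usize) -> Option<String> {
    let mut prompt = signature.to_string();
    for _ in 0..=max_retries {
        let body = request_body(&prompt);
        match try_compile(&body) {
            Ok(()) => return Some(body),
            Err(e) => prompt = format!("{prompt}\n{e}"), // feed errors back
        }
    }
    None // the macro "gives up" once retries are exhausted
}
```

In this sketch the first attempt fails, the error text is appended to the prompt, and the second attempt succeeds; with a retry limit of zero the loop exhausts and returns `None`.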

Quick Start & Requirements

  • Installation: Add slopc as a dependency in your Cargo.toml.
  • Prerequisites: Requires an API key for a supported LLM provider (e.g., OpenRouter, Mistral AI), configured via environment variables (e.g., OPEN_ROUTER_API_KEY, MISTRAL_API_KEY) or a slop.toml file.
  • Usage: Apply the #[slop] attribute to functions with todo!() bodies.
  • Configuration: Options include LLM model, provider endpoint, retry count, caching behavior (nocache), doctest execution (run_doctests), and providing additional context files.
  • Links: Example usage and configuration patterns are detailed in the README.
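Per the bullets above, usage looks roughly like the following. This is a sketch based on the summary only: the `use` path and the doctest convention are assumptions; only the `#[slop]` attribute and the `todo!()` body are confirmed here.

```rust
use slopc::slop; // assumed import path; check the crate's README

/// Adds two numbers.
///
/// ```
/// assert_eq!(add(2, 2), 4);
/// ```
#[slop]
fn add(a: i32, b: i32) -> i32 {
    todo!() // replaced at compile time by the LLM-generated body
}
```

With `run_doctests` enabled, the doc example above would also be compiled and run as an assertion against the generated body.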

Highlighted Details

  • Compile-time AI Code Generation: Automates function implementation using LLMs.
  • Iterative Compilation & Feedback Loop: Retries code generation based on rustc compiler errors.
  • Configurable LLM Integration: Supports various models and providers via attributes, environment variables, or slop.toml.
  • Build-time Doctest Execution: Optionally compiles and runs embedded documentation tests as assertions.
  • Caching: Stores generated code in target/slop-cache/ to reduce LLM API costs and build times.
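The configuration surface listed above might look like this in a `slop.toml`. The key names below are illustrative guesses from this summary, not verified against the README; only the options the summary names are shown.

```toml
# Hypothetical slop.toml; key names are assumptions.
model = "mistral-large-latest"              # LLM model
provider = "https://openrouter.ai/api/v1"   # provider endpoint
retries = 3                                 # retry count before giving up
nocache = false                             # caching (target/slop-cache/)
run_doctests = true                         # run doc examples as assertions
context = ["src/lib.rs", "README.md"]       # additional context files
```

API keys would still come from the environment (e.g. `OPEN_ROUTER_API_KEY` or `MISTRAL_API_KEY`), as noted in Quick Start & Requirements.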

Maintenance & Community

No specific details regarding maintainers, community channels (Discord/Slack), or roadmaps are provided in the README.

Licensing & Compatibility

  • License: AGPL-3.0-only. This is a strong copyleft license requiring derivative works to also be open-sourced under the same license.
  • Compatibility: The AGPL-3.0 license may impose restrictions on linking with closed-source or proprietary software. The authors suggest forking and relicensing under MIT for broader compatibility.

Limitations & Caveats

The project's AGPL-3.0-only license presents significant copyleft obligations. Its reliance on external LLM APIs incurs costs and potential unreliability. The satirical tone of the README suggests the project may be experimental or intended for niche use cases rather than robust production environments. Generation success is not guaranteed, and the system may "give up" after retries.

Health Check

  • Last Commit: 3 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 1
  • Issues (30d): 3
  • Star History: 303 stars in the last 23 days
