mellea by generative-computing

Orchestrate complex AI workflows with generative programming

Created 5 months ago
271 stars

Top 95.1% on SourcePulse

Project Summary

Generative programming replaces flaky agents and brittle prompts with structured, maintainable, robust, and efficient AI workflows. Mellea targets engineers and researchers seeking to systematically integrate LLMs into applications, offering structured control over AI outputs and enabling easier migration between models and providers.

How It Works

Mellea shifts AI development from prompt engineering to code-based generation. It provides a standard library of prompting patterns, sampling strategies for inference-time scaling, and clean integration between verifiers and samplers. Key features include training custom verifiers (e.g., using activated LoRAs or proprietary classifier data) and compatibility with diverse inference services and model families. This approach lets users control cost and quality by migrating workloads easily, integrate LLMs into legacy codebases via "mify", and define applications using "generative slots."
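
As a taste of the code-first style, a generative slot is a typed Python function whose implementation the model supplies at call time. The sketch below assumes the @generative decorator and start_session() entry point named in this summary; exact signatures and defaults may differ from the current release.

    import mellea
    from mellea import generative

    @generative
    def summarize_contract(text: str) -> str:
        """Summarize the key obligations in this contract in one sentence."""

    # start_session() is assumed to pick a default backend (e.g., a local
    # Ollama model); passing the session explicitly lets the same slot run
    # against different providers without code changes.
    m = mellea.start_session()
    print(summarize_contract(m, text="The supplier shall deliver 40 units ..."))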

Quick Start & Requirements

Installation is straightforward via uv pip install mellea or pip install mellea. Optional dependencies are available for Huggingface (hf), watsonx (watsonx), Docling (docling), or everything at once (all). Python versions >= 3.13 may encounter outlines installation issues that require a Rust compiler or a downgrade to Python 3.12. Intel Macs may face torch/torchvision conflicts, necessitating a Conda environment setup. Examples often require Ollama and specific models such as IBM's Granite 4 Micro 3B. Colab notebooks are available for a guided experience.
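
For reference, and assuming standard pip "extras" syntax for the optional dependency groups listed above, installation amounts to:

    uv pip install mellea            # or: pip install mellea
    pip install "mellea[hf]"         # Huggingface extras
    pip install "mellea[watsonx]"    # watsonx extras
    pip install "mellea[docling]"    # Docling extras
    pip install "mellea[all]"        # all extras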

Highlighted Details

  • Generative Slots: Define LLM-powered functions using the @generative decorator, allowing LLMs to fill in implementation details.
  • Instruct-Validate-Repair: A core pattern employing rejection sampling and verifiers (including LLM-as-a-judge) to enforce specific output constraints (see the sketch after this list).
  • Legacy Integration ("mify"): Seamlessly integrate LLM capabilities into existing codebases.
  • Model Agnosticism: Easily lift and shift workloads between different inference providers, model families, and sizes.
  • m serve: Deploy generative programs as OpenAI-compatible model endpoints.
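
To make the instruct-validate-repair loop concrete: the sketch below assumes an m.instruct() call that accepts natural-language requirements and a rejection-sampling strategy, matching the pattern described above; the strategy class name and its module path are assumptions rather than confirmed API.

    import mellea
    from mellea.stdlib.sampling import RejectionSamplingStrategy  # assumed path

    m = mellea.start_session()

    # Instruct: state the task. Validate: check each requirement with a
    # verifier (e.g., LLM-as-a-judge). Repair: resample until the
    # requirements pass or the loop budget is exhausted.
    email = m.instruct(
        "Write an email inviting the team to Friday's design review.",
        requirements=[
            "Use a formal greeting.",
            "State the meeting time explicitly.",
        ],
        strategy=RejectionSamplingStrategy(loops=3),
    )
    print(str(email))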

Maintenance & Community

Mellea originated at IBM Research. The README does not specify community channels (like Discord/Slack) or provide links to a roadmap.

Licensing & Compatibility

The project's license is not explicitly stated in the provided README; this needs clarification before adoption, especially regarding commercial use or closed-source linking.

Limitations & Caveats

Users on Python >= 3.13 may face difficulties installing the outlines library due to Rust compiler issues, necessitating either installing Rust or downgrading Python. Intel Mac users might need to use Conda to manage torch and torchvision versions correctly.
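
One conventional way to apply both workarounds is a pinned Conda environment; the commands below are a sketch using standard conda and pip invocations, not steps quoted from the README:

    conda create -n mellea python=3.12   # sidesteps the Python 3.13 outlines issue
    conda activate mellea
    pip install mellea                   # pin torch/torchvision here if conflicts arise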

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 23
  • Issues (30d): 39
  • Star History: 13 stars in the last 30 days
