llmflows by stoyan-stoyanov

SDK for building explicit, transparent LLM apps

Created 2 years ago
701 stars

Top 48.7% on SourcePulse

View on GitHub
Project Summary

LLMFlows is a Python framework designed for building explicit, transparent, and simple LLM applications like chatbots and agents. It targets developers who need fine-grained control over LLM interactions, offering a clear view into prompt construction, model calls, and data flow for easier monitoring and debugging.

How It Works

LLMFlows provides core abstractions: LLM classes that wrap provider APIs such as OpenAI, PromptTemplate for dynamic prompt generation, and MessageHistory for managing chat context. Its central pieces are the Flow and FlowStep classes, which let users define multi-step LLM workflows with explicit dependencies; the framework resolves execution order automatically. AsyncFlowStep runs independent steps in parallel, reducing end-to-end latency.
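
A minimal sketch of a two-step flow, modeled on the example in the project README; the class names match the summary above, but exact signatures may differ between releases:

    from llmflows.flows import Flow, FlowStep
    from llmflows.llms import OpenAI
    from llmflows.prompts import PromptTemplate

    # Two templates; the second consumes the first step's output key.
    title_template = PromptTemplate("What is a good title for a movie about {topic}?")
    song_template = PromptTemplate(
        "What is a good song title for a movie called {movie_title}?"
    )

    step1 = FlowStep(
        name="movie_title_step",
        llm=OpenAI(api_key="<your-api-key>"),
        prompt_template=title_template,
        output_key="movie_title",
    )
    step2 = FlowStep(
        name="song_title_step",
        llm=OpenAI(api_key="<your-api-key>"),
        prompt_template=song_template,
        output_key="song_title",
    )

    # Explicit dependency: step2 runs after step1 and receives {movie_title}.
    step1.connect(step2)

    flow = Flow(step1)
    results = flow.start(topic="friendly robots", verbose=True)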

Quick Start & Requirements

  • Install via pip: pip install llmflows
  • Requires Python 3.7+ and an API key for LLM providers (e.g., OpenAI).
  • Official documentation: https://llmflows.readthedocs.io
  • Live demo and examples are available in the repository; a minimal chat example follows this list.
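
As a first smoke test after installing, a simple chat loop might look like the sketch below. It follows the chatbot example in the docs; generate() is assumed here to return the response text plus call metadata, so check the docs for the exact return shape:

    from llmflows.llms import OpenAIChat, MessageHistory

    llm = OpenAIChat(api_key="<your-api-key>")
    history = MessageHistory()

    while True:
        user_message = input("You: ")
        history.add_user_message(user_message)
        # Assumed return shape: (response text, call metadata, model config)
        response, call_data, model_config = llm.generate(history)
        history.add_ai_message(response)
        print(f"LLM: {response}")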

Highlighted Details

  • Supports explicit LLM calls, prompt templating, and chat history management.
  • Enables building complex, dependency-aware workflows using Flow and FlowStep classes.
  • Offers AsyncFlowStep for parallel execution of independent workflow components (see the async sketch after this list).
  • Integrates with vector stores (e.g., Pinecone) via VectorStoreFlowStep and supports callbacks for custom logic.
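
A hedged sketch of the async pattern, assuming AsyncFlow and AsyncFlowStep mirror the synchronous API and that connect() accepts multiple downstream steps; both are assumptions to verify against the current docs:

    import asyncio

    from llmflows.flows import AsyncFlow, AsyncFlowStep
    from llmflows.llms import OpenAIChat
    from llmflows.prompts import PromptTemplate

    def make_step(name, template, output_key):
        # Helper (not part of llmflows) to cut repetition in this sketch.
        return AsyncFlowStep(
            name=name,
            llm=OpenAIChat(api_key="<your-api-key>"),
            prompt_template=PromptTemplate(template),
            output_key=output_key,
        )

    root = make_step("title", "Suggest a movie title about {topic}.", "movie_title")
    song = make_step("song", "Name a theme song for {movie_title}.", "song_title")
    poster = make_step("poster", "Describe a poster for {movie_title}.", "poster_text")

    # song and poster depend only on root, so the flow can run them in parallel.
    root.connect(song, poster)

    flow = AsyncFlow(root)
    results = asyncio.run(flow.start(topic="friendly robots"))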

Maintenance & Community

The project is maintained by Stoyan Stoyanov, though recent activity has slowed (see the Health Check below). Community engagement is encouraged via GitHub issues and social media (Threads, LinkedIn, Twitter).

Licensing & Compatibility

LLMFlows is released under the MIT license, permitting commercial use and integration with closed-source applications.

Limitations & Caveats

The framework focuses primarily on OpenAI and requires explicit configuration for other LLM providers. While it supports asynchronous operations, the core design emphasizes explicit, sequential control, which can add overhead for highly dynamic or unpredictable LLM interactions.

Health Check

  • Last Commit: 7 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 2 stars in the last 30 days
