SDK for building explicit, transparent LLM apps
LLMFlows is a Python framework designed for building explicit, transparent, and simple LLM applications like chatbots and agents. It targets developers who need fine-grained control over LLM interactions, offering a clear view into prompt construction, model calls, and data flow for easier monitoring and debugging.
How It Works
LLMFlows provides core abstractions for LLMs (wrapping APIs like OpenAI), PromptTemplates for dynamic prompt generation, and MessageHistory for chat contexts. Its key innovation lies in the Flow and FlowStep classes, which allow users to define complex, multi-step LLM workflows with explicit dependencies. The framework automatically manages execution order and supports asynchronous AsyncFlowSteps for parallel processing of independent steps, improving performance.
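A minimal sketch of a two-step flow built from these abstractions. The constructor arguments (name, llm, prompt_template, output_key) and the connect()/start() calls follow the style described above but are shown as assumptions; check the LLMFlows documentation for the exact API.

```python
# Sketch only: argument names and the connect()/start() calls are assumptions
# based on the description above; verify against the LLMFlows docs.
from llmflows.flows import Flow, FlowStep
from llmflows.llms import OpenAI
from llmflows.prompts import PromptTemplate

llm = OpenAI(api_key="<your-openai-api-key>")

# Each FlowStep renders a PromptTemplate, calls the LLM, and stores the result
# under output_key so downstream steps can reference it by name.
title_step = FlowStep(
    name="Movie Title",
    llm=llm,
    prompt_template=PromptTemplate("Suggest a title for a movie about {topic}."),
    output_key="movie_title",
)
song_step = FlowStep(
    name="Song Title",
    llm=llm,
    prompt_template=PromptTemplate(
        "Suggest a theme-song title for a movie called {movie_title}."
    ),
    output_key="song_title",
)

# Explicit dependency: song_step consumes title_step's output.
title_step.connect(song_step)

flow = Flow(title_step)
results = flow.start(topic="friendly robots", verbose=True)
print(results)
```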
Quick Start & Requirements
pip install llmflows
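After installing, a single completion call might look like the snippet below. The OpenAI wrapper's api_key argument and the tuple returned by generate() (result text plus call metadata and model configuration) are assumptions, inferred from the framework's emphasis on transparency.

```python
# Minimal sketch: the api_key argument and the (text, call_data, config)
# return shape of generate() are assumptions; verify against the docs.
from llmflows.llms import OpenAI

llm = OpenAI(api_key="<your-openai-api-key>")
text, call_data, model_config = llm.generate("Write a haiku about explicit LLM apps.")
print(text)
```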
Highlighted Details
- Flow and FlowStep classes for defining explicit, multi-step workflows.
- AsyncFlowStep for parallel execution of independent workflow components (sketched below).
- VectorStoreFlowStep for vector store integration, plus callbacks for custom logic.
Maintenance & Community
The project is actively maintained by Stoyan Stoyanov. Community engagement is encouraged via GitHub issues and social media (Threads, LinkedIn, Twitter).
Licensing & Compatibility
LLMFlows is released under the MIT license, permitting commercial use and integration with closed-source applications.
Limitations & Caveats
The framework primarily focuses on OpenAI and requires explicit configuration for other LLM providers. While it supports asynchronous operations, the core design emphasizes explicit, sequential control, which might add overhead for highly dynamic or unpredictable LLM interactions.