API server for serving LangChain apps/agents via FastAPI
Top 40.1% on sourcepulse
Langcorn provides a streamlined way to deploy LangChain LLM applications and agents as robust, high-performance APIs using FastAPI. It targets developers and MLOps engineers looking to easily serve and manage their language models, offering features like automatic API generation, authentication, and support for complex conversational memory.
How It Works
Langcorn acts as a wrapper around LangChain components, automatically generating RESTful API endpoints for defined chains, agents, or custom run functions. It leverages FastAPI's asynchronous capabilities for efficient request handling and supports passing LLM parameters and conversational history directly through API requests or headers, simplifying the integration of LangChain into larger applications.
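The wrapping pattern described above can be sketched with plain Python (no langcorn or FastAPI required): given a chain-like object that declares its input keys and exposes a run method, a generic factory can turn it into a JSON-in/JSON-out endpoint handler. All names here (`FakeChain`, `make_endpoint`, the `"error"` field) are illustrative assumptions, not langcorn's actual internals.

```python
# Illustrative sketch of the pattern langcorn automates: deriving an
# API handler from a chain's declared inputs. Names are assumptions.

class FakeChain:
    """Stand-in for a LangChain chain: declared inputs plus a run method."""
    input_keys = ["question"]

    def run(self, inputs: dict) -> dict:
        return {"text": f"echo: {inputs['question']}"}

def make_endpoint(chain):
    """Build a JSON-body handler that maps request fields to chain inputs."""
    def handler(body: dict) -> dict:
        # Pick out only the fields the chain declares as inputs.
        inputs = {k: body[k] for k in chain.input_keys}
        out = chain.run(inputs)
        # Merge chain output with a status field for the API response.
        return {**out, "error": ""}
    return handler

handler = make_endpoint(FakeChain())
print(handler({"question": "hi"}))  # → {'text': 'echo: hi', 'error': ''}
```

In the real library, FastAPI supplies the routing and async request handling around handlers like this, one per exported chain or agent.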
Quick Start & Requirements
```shell
pip install langcorn
langcorn server examples.ex1:chain
```
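Once the server is running, each exported chain is reachable as a JSON endpoint. A request body might be assembled like the sketch below; the field names (`"question"`, `"memory"`) are assumptions for illustration — the actual input keys depend on the variables your chain declares.

```python
import json

# Hypothetical request body for a served chain. The chain's own input
# variables become top-level JSON fields; "memory" is shown as the
# assumed carrier for conversational history.
payload = {
    "question": "What does LangCorn do?",
    "memory": [],
}
body = json.dumps(payload)
print(body)
```

The serialized body would then be POSTed to the endpoint generated for `examples.ex1:chain`.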
Maintenance & Community
The last commit was about a year ago and the project is marked inactive, so treat it as unmaintained.
Limitations & Caveats
The repository includes examples for agents and conversational models, but the README says little about production readiness or the extent of community support. The primary dependency is LangChain, whose APIs have evolved considerably; an inactive wrapper may lag behind current LangChain releases.