REST API deployment for LangChain runnables and chains
LangServe provides a framework for deploying LangChain runnables and chains as REST APIs, targeting developers building LLM applications. It simplifies the process of exposing complex LLM logic as scalable web services, offering features like automatic schema generation, interactive playgrounds, and built-in tracing.
How It Works
LangServe leverages FastAPI and Pydantic to create robust APIs. It automatically infers input and output schemas from LangChain objects, enforcing data validation and providing clear error messages. The library exposes standard endpoints (/invoke, /batch, /stream, /stream_log) for interacting with deployed runnables, along with a /playground endpoint for interactive testing and debugging. It is built on efficient asynchronous Python libraries such as uvloop and asyncio.
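For example, the /invoke endpoint expects a JSON envelope whose "input" key holds the runnable's input, with an optional "config" key, and the response wraps the result under "output". A minimal sketch of that wire format using only the standard library (the inner field name "topic" and the sample response text are hypothetical, not part of LangServe's API):

```python
import json

# Request body for POST /invoke: the runnable's input goes under "input";
# "config" (tags, metadata, etc.) is optional. "topic" is a hypothetical
# input field of the deployed chain.
payload = {
    "input": {"topic": "bears"},
    "config": {"tags": ["demo"]},
}
body = json.dumps(payload)

# A /invoke response wraps the runnable's result under "output".
# This response text is a made-up illustration.
response_body = '{"output": "Why do bears have sticky fur?"}'
result = json.loads(response_body)["output"]
```

The /batch endpoint follows the same envelope with a list of inputs, and /stream sends the output incrementally as server-sent events.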
Quick Start & Requirements
Install the server and client packages with pip install "langserve[all]". The LangChain CLI (pip install -U langchain-cli) is recommended for project bootstrapping.
Highlighted Details
Maintenance & Community
LangChain AI maintains the project. The README notes a recommendation to use LangGraph Platform for new projects and that LangServe will only accept bug fixes, not new features.
Licensing & Compatibility
LangServe is distributed under the MIT license, allowing commercial use and integration with closed-source applications.
Limitations & Caveats
The project strongly recommends migrating to LangGraph Platform for new projects. Older versions (<= 0.2.0) have compatibility issues with Pydantic v2 regarding OpenAPI documentation generation. Client callbacks for server-originated events are not yet supported. File uploads via multipart/form-data are also not yet supported; base64-encoding file contents into the runnable's input is the current workaround.
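The base64 workaround can be sketched with the standard library alone: encode the file bytes into a string field of the /invoke payload on the client, and decode them inside the chain on the server. The field name "file_b64" is a hypothetical convention, not part of LangServe's API:

```python
import base64
import json

# Client side: encode raw file bytes so they can travel inside the JSON
# body of a /invoke request (multipart/form-data uploads are unsupported).
file_bytes = b"%PDF-1.4 fake file contents"
payload = {
    "input": {"file_b64": base64.b64encode(file_bytes).decode("ascii")}
}
body = json.dumps(payload)

# Server side (inside the runnable): recover the original bytes.
received = json.loads(body)
decoded = base64.b64decode(received["input"]["file_b64"])
```

Base64 inflates payload size by roughly a third, so this is practical for small files but not for large uploads.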