LLM observability platform for monitoring, evaluating, and experimenting
Helicone provides an open-source platform for LLM observability, enabling developers to monitor, evaluate, and experiment with their AI applications. It targets developers building with large language models, offering a unified solution to track costs, latency, and quality, manage prompts, and integrate with various LLM providers and frameworks.
How It Works
Helicone acts as a proxy and logging layer, intercepting LLM requests. It supports numerous integrations via a simple code modification or header addition, routing requests through its platform. The core architecture includes a web frontend, a Cloudflare Worker for proxy logging, a dedicated server (Jawn) for log collection, Supabase for application data, ClickHouse for analytics, and MinIO for object storage. This distributed setup allows for scalable data ingestion and analysis.
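The proxy-style integration described above typically amounts to pointing an SDK at the Helicone gateway and attaching an auth header. A minimal sketch, assuming the OpenAI-compatible gateway URL and `Helicone-Auth` header from Helicone's docs (treat the exact endpoint and header names as assumptions):

```python
# Build keyword arguments for an OpenAI-compatible client so requests
# route through the Helicone gateway and are logged there.
# The base URL and header name below follow Helicone's documented
# OpenAI integration; verify them against the current docs.
def helicone_client_kwargs(helicone_api_key: str) -> dict:
    return {
        # Swap the provider's API host for the Helicone gateway.
        "base_url": "https://oai.helicone.ai/v1",
        # Authenticate with Helicone so the request is attributed and logged.
        "default_headers": {"Helicone-Auth": f"Bearer {helicone_api_key}"},
    }

kwargs = helicone_client_kwargs("sk-helicone-...")
# These kwargs would be passed to e.g. openai.OpenAI(**kwargs);
# the provider API key itself is supplied to the client as usual.
```

Because only the base URL and a header change, the same pattern applies to other supported providers by swapping the gateway host.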
Quick Start & Requirements
Self-host by cloning the repository, configuring the .env file, and running docker compose up. A cloud offering with a free tier is also available.
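The self-hosting steps above can be sketched as follows; the repository URL and the sample env file name are assumptions and may differ from the current repo:

```shell
# Clone the repository (URL assumed from the Helicone GitHub org).
git clone https://github.com/Helicone/helicone.git
cd helicone

# Create and edit the environment file with your keys and settings.
# ".env.example" is a hypothetical sample name; check the repo for the
# actual template file.
cp .env.example .env

# Start the stack (frontend, Jawn, ClickHouse, MinIO, etc.).
docker compose up
```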
Maintenance & Community
The project is actively maintained, with a public roadmap and a Discord community for support and contributions. Contributions for documentation, integrations, and feature requests are welcomed.
Licensing & Compatibility
Licensed under the Apache v2.0 License, permitting commercial use and integration with closed-source applications.
Limitations & Caveats
The README explicitly recommends against manual self-hosting. It also notes that the integration list may be out of date and suggests contacting the team directly about missing providers.