openllmetry by traceloop

Open-source observability SDK for LLM applications

created 1 year ago
6,149 stars

Top 8.5% on sourcepulse

View on GitHub
Project Summary

OpenLLMetry provides open-source observability for LLM applications by extending OpenTelemetry. It offers instrumentations for popular LLM providers and vector databases, allowing users to integrate LLM-specific tracing into existing observability stacks like Datadog or Honeycomb. The project aims to simplify LLM observability for developers and researchers.

How It Works

OpenLLMetry leverages OpenTelemetry's robust framework, adding custom semantic conventions and instrumentations specifically for LLM interactions. This approach ensures compatibility with a wide range of observability backends that already support OpenTelemetry. The Traceloop SDK simplifies integration by providing a single point of initialization, automatically capturing LLM calls, vector database queries, and other relevant application traces.
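
A minimal sketch of what this looks like in application code (the app_name value, the model, and the OpenAI usage are illustrative assumptions, not project defaults): the SDK is initialized once, LLM client calls are then captured automatically, and the workflow decorator from traceloop.sdk.decorators groups related calls into a single trace.

    from openai import OpenAI
    from traceloop.sdk import Traceloop
    from traceloop.sdk.decorators import workflow

    # One-time setup; instrumentations for installed LLM and vector-DB clients
    # are activated here and export via the configured OpenTelemetry backend.
    Traceloop.init(app_name="joke_service")

    client = OpenAI()

    @workflow(name="tell_joke")
    def tell_joke(topic: str) -> str:
        # The chat completion call below is captured automatically and
        # attached to the "tell_joke" workflow span.
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": f"Tell me a joke about {topic}"}],
        )
        return response.choices[0].message.content

    print(tell_joke("observability"))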

Quick Start & Requirements

  • Install the SDK: pip install traceloop-sdk
  • Initialize in code:
    from traceloop.sdk import Traceloop
    Traceloop.init()
    
  • For immediate trace visibility locally, use Traceloop.init(disable_batch=True).
  • Supported destinations include Datadog, Honeycomb, Grafana, SigNoz, and many others; see the docs for detailed connection instructions, and the sketch after this list for the general shape of the configuration.
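
As an illustrative sketch of the environment-variable route (the variable names and the Honeycomb endpoint are assumptions; consult the docs for the exact settings your backend needs):

    import os

    # Assumed variable names and endpoint -- adjust to your backend per the docs.
    os.environ["TRACELOOP_BASE_URL"] = "https://api.honeycomb.io"
    os.environ["TRACELOOP_HEADERS"] = "x-honeycomb-team=<YOUR_API_KEY>"

    from traceloop.sdk import Traceloop

    # disable_batch=True flushes spans immediately, which is handy while verifying the setup.
    Traceloop.init(disable_batch=True)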

Highlighted Details

  • Instruments LLM providers: OpenAI, Anthropic, Cohere, HuggingFace, Bedrock, SageMaker, and more.
  • Instruments Vector DBs: Chroma, Pinecone, Qdrant, Weaviate, Milvus, and others.
  • Integrates with frameworks: LangChain, LlamaIndex, Haystack, LiteLLM, CrewAI.
  • Semantic conventions are now part of the official OpenTelemetry project.

Licensing & Compatibility

  • Licensed under the Apache 2.0 license.
  • Compatible with commercial and closed-source applications.

Limitations & Caveats

The SDK includes a telemetry feature that collects anonymous usage data to help identify breaking changes in LLM provider APIs. It can be disabled via an environment variable or an initialization parameter.
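
A minimal sketch of disabling it before initialization, assuming a TRACELOOP_TELEMETRY environment variable (verify the exact variable name and the corresponding initialization parameter in the docs):

    import os

    # Assumed environment variable; set it before Traceloop.init() runs.
    os.environ["TRACELOOP_TELEMETRY"] = "FALSE"

    from traceloop.sdk import Traceloop
    Traceloop.init()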

Health Check

  • Last commit: 10 hours ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 175
  • Issues (30d): 30

Star History

  • 482 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (author of AI Engineering and Designing Machine Learning Systems), Sourabh Bajaj (cofounder of Uplimit), and 4 more.

opik by comet-ml

Top 2.5% · 12k stars
Open-source LLM evaluation framework for RAG, agents, and more
created 2 years ago · updated 19 hours ago