Observal by BlazeUp-AI

AI agent registry and observability platform

Created 4 weeks ago

778 stars

Top 44.5% on SourcePulse

View on GitHub

Project Summary

Observal provides a self-hosted registry and observability platform for AI coding agents, addressing the challenges of sharing and measuring agent performance. It targets developers building and deploying AI agents, offering a standardized way to package, distribute, and monitor agent components, thereby improving discoverability and enabling performance analysis.

How It Works

Observal functions as a centralized registry for AI agents, where each agent is defined by a portable YAML configuration that bundles essential components like MCP servers, skills, hooks, prompts, and sandboxes. When an agent is pulled, a transparent shim intercepts its interactions, generating telemetry data (traces, spans, sessions) that streams into a ClickHouse database. An integrated evaluation engine then analyzes these sessions post-hoc, scoring agent performance across key dimensions using LLMs. This approach provides end-to-end observability and a framework for objective performance measurement.
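As a rough illustration of the bundling idea, an agent manifest could look something like the following. This is a hypothetical sketch: the field names and layout are assumptions for illustration, not Observal's actual schema (see SETUP.md in the repository for the real format).

```yaml
# Hypothetical agent manifest -- field names are illustrative,
# not Observal's documented schema.
name: code-reviewer
version: 0.1.0
components:
  mcp_servers:
    - name: filesystem
  skills:
    - name: diff-summarizer
  hooks:
    - on: session_end
      run: upload-telemetry
  prompts:
    - system: prompts/reviewer.md
  sandbox:
    image: python:3.12-slim
```

The point of a single manifest like this is that pulling the agent also pulls every component it depends on, so the telemetry shim can attribute traces to a known, versioned bundle.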

Quick Start & Requirements

  • Primary install/run: clone the repository, copy .env.example to .env, start the services with docker compose up --build -d, and install the CLI from the repo root with uv tool install --editable . (the trailing dot is part of the command).
  • Prerequisites: Docker is required for running the 7 core services (API, web UI, PostgreSQL, ClickHouse, Redis, background worker, OpenTelemetry collector).
  • Resource footprint: local development runs all seven services as Docker containers, so plan for a multi-container footprint.
  • Relevant pages: SETUP.md for detailed setup, CONTRIBUTING.md, SECURITY.md.
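The quick-start steps above can be run end to end as follows. The repository URL is a placeholder inferred from the project name; substitute the actual GitHub URL.

```shell
# Quick start sketch -- repository URL is a placeholder.
git clone https://github.com/BlazeUp-AI/observal.git
cd observal
cp .env.example .env          # fill in secrets before starting
docker compose up --build -d  # starts the 7 core services
uv tool install --editable .  # installs the CLI; the trailing dot is the repo root
```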

Highlighted Details

  • Supported Tools: Fully supports Claude Code, Codex CLI, Gemini CLI, and Kiro CLI, with MCP/rules file support for Cursor and VS Code.
  • Component Types: Manages six component types: Agents, MCP Servers, Skills, Hooks, Prompts, and Sandboxes.
  • Observability Pipeline: Utilizes OpenTelemetry Collector to ingest traces, spans, and sessions into ClickHouse for real-time metrics and traceability.
  • Eval Engine: Scores agent sessions on Goal Completion, Tool Call Efficiency, Tool Call Failures, Factual Grounding, and Thought Process.
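To make the scoring dimensions concrete, here is a minimal Python sketch of the two tool-call dimensions that can be computed without an LLM judge. The Span shape and the metric formulas are assumptions for illustration, not Observal's actual eval engine; the LLM-judged dimensions (Goal Completion, Factual Grounding, Thought Process) are omitted because they require a model call.

```python
from dataclasses import dataclass

@dataclass
class Span:
    """One tool-call span from a session trace (simplified, hypothetical shape)."""
    tool: str
    ok: bool

def tool_call_metrics(spans: list[Span]) -> dict[str, float]:
    """Score the deterministic dimensions: failure rate and a crude efficiency proxy."""
    if not spans:
        return {"tool_call_failure_rate": 0.0, "tool_call_efficiency": 1.0}
    failures = sum(1 for s in spans if not s.ok)
    distinct = len({s.tool for s in spans})
    return {
        # fraction of tool calls that errored
        "tool_call_failure_rate": failures / len(spans),
        # distinct tools per call: 1.0 means no repeated calls to the same tool
        "tool_call_efficiency": distinct / len(spans),
    }

session = [Span("read_file", True), Span("read_file", True), Span("run_tests", False)]
print(tool_call_metrics(session))
```

A real post-hoc evaluator would read these spans out of ClickHouse per session and combine them with the LLM-scored dimensions.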

Maintenance & Community

The project encourages community engagement through GitHub Discussions for questions and Issues for bug reports and feature requests. Development is motivated by community stars and contributions.

Licensing & Compatibility

Observal is licensed under the Apache License 2.0, which permits commercial use and integration into closed-source projects.

Limitations & Caveats

The evaluation engine is marked as "WIP" (Work In Progress). The platform is self-hosted, requiring users to manage the Docker environment and its underlying services.

Health Check

  • Last Commit: 18 hours ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 344
  • Issues (30d): 253
  • Star History: 781 stars in the last 28 days

Explore Similar Projects

Starred by Guy Podjarny (Founder of Tessl; Cofounder of Snyk; Ex-CTO of Akamai), Gabriel Almeida (Cofounder of Langflow), and 12 more.

awesome-ai-agents by e2b-dev

Top 0.4% · 28k stars
AI agent list, open & closed source, for various uses
Created 2 years ago · Updated 1 year ago
Starred by Lilian Weng (Cofounder of Thinking Machines Lab), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 59 more.

AutoGPT by Significant-Gravitas

Top 0.1% · 184k stars
AI agent platform for building, deploying, and running autonomous workflows
Created 3 years ago · Updated 7 hours ago