turnstone by turnstonelabs

Orchestrate multi-node AI agents with tools

Created 1 month ago
304 stars

Top 88.0% on SourcePulse

Project Summary

Turnstone is a multi-node AI orchestration platform designed to deploy and manage tool-using AI agents across server clusters. It enables LLMs to interact with tools like shell commands, files, and web services, facilitating complex, multi-turn investigations and actions. The platform targets engineers and power users seeking robust AI agent deployment with advanced governance and scalability.

How It Works

Turnstone orchestrates LLM interactions with a suite of tools (shell, files, search, web, planning) through a configurable pipeline. It supports interactive CLI/browser interfaces, queue-driven agent execution via message queues, and multi-node cluster deployments with load balancing. A key differentiator is its "Intent Validation" system, where an LLM judge assesses tool call risks before execution, providing users with evidence-based recommendations. Governance features like RBAC, OIDC SSO, tool policies, and audit logs are integrated for enterprise deployments.
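The control flow of the Intent Validation gate can be sketched as follows. This is a hypothetical illustration, not Turnstone's API: the names (`ToolCall`, `judge_risk`, `run_gated`) are invented, and a simple rule-based check stands in for the LLM judge to show only where the risk assessment sits in the pipeline.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str          # e.g. "shell", "files", "web"
    argument: str      # raw argument the agent wants to pass

def judge_risk(call: ToolCall) -> tuple[str, str]:
    """Stand-in for the LLM judge: return (risk_level, recommendation)."""
    destructive = ("rm -rf", "drop table", "mkfs")
    if call.tool == "shell" and any(p in call.argument.lower() for p in destructive):
        return "high", "Block: destructive shell command detected."
    if call.tool == "web":
        return "medium", "Review: outbound network access requested."
    return "low", "Allow: no risk indicators found."

def run_gated(call: ToolCall, execute) -> str:
    """Assess the call before execution; block it when risk is high."""
    risk, why = judge_risk(call)
    if risk == "high":
        return f"blocked ({why})"
    return execute(call)

# A destructive shell call is stopped before the tool ever runs;
# a benign file read passes through to the executor.
blocked = run_gated(ToolCall("shell", "rm -rf /"), lambda c: "ran")
allowed = run_gated(ToolCall("files", "read notes.txt"), lambda c: "ran")
```

In the real system the judge is itself an LLM and its recommendation is surfaced to the user as evidence, rather than decided by fixed string rules.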

Quick Start & Requirements

  • Installation: pip install turnstone for interactive use. pip install turnstone[mq] for queue-driven agents. pip install turnstone[console] for the dashboard. Docker Compose is also supported.
  • Prerequisites: Python 3.11+, an OpenAI-compatible API endpoint (e.g., vLLM, llama.cpp) or Anthropic API key, and Redis for message queue functionality. Git LFS is required to pull the architecture diagrams when cloning the repository.
  • Documentation: Official quick-start guides, architecture diagrams, and detailed configuration options are available within the docs/ directory.

Highlighted Details

  • Multi-Node Orchestration: Distributes AI agent workloads across a cluster with real-time monitoring via a web dashboard.
  • Intent Validation: LLM-powered judge provides risk assessments and recommendations for tool calls, enhancing safety.
  • Enterprise Governance: Features RBAC, OIDC SSO integration (Okta, Azure AD, Google, Keycloak), tool policies, skills with security scanning, and append-only audit logs.
  • Cluster Simulator: Allows testing the stack at scale (up to 1000 nodes) without requiring an LLM backend.
  • Model Context Protocol (MCP): Enables seamless integration of external tool servers.
  • Multi-Provider Support: Works with various OpenAI-compatible APIs and Anthropic's native Messages API, supporting multiple models per instance.
  • Extensive Metrics: Exposes Prometheus-formatted metrics for detailed system and workstream monitoring.
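Since the metrics are exposed in Prometheus text format, they can be consumed with a few lines of standard-library Python. A minimal sketch follows; the metric names and labels in the sample payload are invented for illustration, as the actual series names come from Turnstone's metrics endpoint.

```python
# Hypothetical Prometheus text-format payload (metric names are illustrative).
SAMPLE = """\
# HELP agent_tool_calls_total Tool calls executed per node
# TYPE agent_tool_calls_total counter
agent_tool_calls_total{node="n1",tool="shell"} 42
agent_tool_calls_total{node="n2",tool="web"} 7
"""

def parse_metrics(text: str) -> dict[str, float]:
    """Parse Prometheus text exposition into {series: value}, skipping comments."""
    series = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        name, value = line.rsplit(" ", 1)
        series[name] = float(value)
    return series

metrics = parse_metrics(SAMPLE)
# Aggregate one counter family across all label sets (nodes, tools).
total_calls = sum(v for k, v in metrics.items()
                  if k.startswith("agent_tool_calls_total"))
```

In practice a Prometheus server would scrape the endpoint directly; this parser is only meant to show the shape of the exposed data.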

Maintenance & Community

No specific details regarding maintainers, community channels (e.g., Discord, Slack), or recent activity were found in the provided README.

Licensing & Compatibility

The project is licensed under the Business Source License 1.1 (BSL 1.1). This license permits free use for most purposes, including development and internal deployment, but restricts offering Turnstone as a managed service until March 1, 2030, after which it converts to Apache 2.0.

Limitations & Caveats

The BSL 1.1 license imposes significant restrictions on commercial SaaS offerings until 2030. The platform relies on external LLM providers and Redis for core multi-node and queue-driven functionalities, which must be separately managed. Git LFS is a requirement for accessing certain documentation assets.

Health Check

  • Last Commit: 3 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 276
  • Issues (30d): 11
  • Star History: 160 stars in the last 30 days
