invariant  by invariantlabs-ai

Guardrails for agent security

Created 1 year ago
360 stars

Top 77.7% on SourcePulse

GitHubView on GitHub
Project Summary

Invariant Guardrails provides a rule-based layer for securing AI agent systems, particularly those powered by LLMs or MCPs. It acts as a proxy between applications and AI providers, enabling continuous monitoring and steering of agent behavior without invasive code modifications. The system is designed for AI developers and researchers seeking to prevent malicious agent actions and ensure robust system operation.

How It Works

Invariant Guardrails operates by intercepting and analyzing communication between an application and its AI backend. It uses a Python-inspired rule syntax to define conditions that trigger alerts or block actions. These rules can inspect message content, tool calls, and tool outputs, leveraging a standard library of operations for pattern matching and threat detection. The system integrates as a proxy or gateway, automatically evaluating rules before and after AI requests.

Quick Start & Requirements

  • Install: pip install invariant-ai
  • Prerequisites: Python 3.8+, an Invariant API key for cloud-based analysis (optional, local analysis is available).
  • Resources: Local analysis requires no external services. Cloud analysis requires an API key.
  • Documentation: Getting Started, Playground, Documentation

Highlighted Details

  • Rule syntax supports contextual analysis of agent traces, including tool calls and message content.
  • Offers both local execution via LocalPolicy and cloud-based analysis via the Invariant API.
  • Includes built-in detectors for common vulnerabilities like prompt injection.
  • Integrates with existing systems via a Gateway proxy.

Maintenance & Community

Invariant Guardrails is an open-source project by Invariant Labs. Contributions are welcomed via GitHub issues.

Licensing & Compatibility

The project is licensed under the Apache-2.0 license, which permits commercial use and linking with closed-source applications.

Limitations & Caveats

The effectiveness of guardrails is dependent on the quality and comprehensiveness of the defined rules. Advanced or novel attack vectors may require custom rule development.

Health Check
Last Commit

3 months ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
0
Star History
13 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems").

codegate by stacklok

0%
703
AI agent security and management tool
Created 11 months ago
Updated 5 months ago
Starred by Omar Sanseviero Omar Sanseviero(DevRel at Google DeepMind), Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), and
18 more.

guardrails by guardrails-ai

0.8%
6k
Python framework for adding guardrails to LLMs
Created 2 years ago
Updated 4 days ago
Starred by Dan Guido Dan Guido(Cofounder of Trail of Bits), Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), and
1 more.

cai by aliasrobotics

3.4%
5k
Cybersecurity AI (CAI) is an open framework for building AI-driven cybersecurity tools
Created 7 months ago
Updated 1 day ago
Feedback? Help us improve.