Justin0504: Runtime firewall and audit layer for AI agents
Top 80.9% on SourcePulse
Aegis provides a crucial security layer for AI agents by acting as a pre-execution firewall. It intercepts every tool call, classifies its intent, enforces defined policies, and logs each action in a tamper-evident audit trail. It is designed for developers and organizations deploying AI agents who need to prevent costly or dangerous actions such as data exfiltration, SQL injection, or unauthorized command execution, and it adds this control without requiring modifications to existing agent code.
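The tamper-evident audit trail mentioned above can be illustrated with a minimal hash-chaining sketch. This is not Aegis's actual implementation (the README does not specify its internals); it only shows the general technique: each log entry commits to the previous entry's hash, so altering any record invalidates every later link.

```python
import hashlib
import json


class AuditTrail:
    """Minimal hash-chained audit log sketch: each entry includes the
    previous entry's SHA-256 hash, so tampering breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, action: dict) -> str:
        # Commit to both the action and the previous hash.
        record = {"action": action, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**record, "hash": digest})
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        # Recompute every hash and check the links are unbroken.
        prev = self.GENESIS
        for entry in self.entries:
            record = {"action": entry["action"], "prev": entry["prev"]}
            digest = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True
```

Modifying any recorded action after the fact causes `verify()` to fail, which is the property "tamper-evident" refers to.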
How It Works
Aegis operates by sitting between an AI agent and its available tools. Upon an agent's tool invocation, Aegis intercepts the call, performing real-time analysis: classifying the tool's purpose (e.g., database, file system, network), detecting behavioral anomalies, and evaluating against security policies for risks such as injection or data leakage. Based on this evaluation, Aegis can either allow the tool to execute, block it, or pause the execution for explicit human approval via its Compliance Cockpit. All actions are recorded in a cryptographically secured, hash-chained audit trail.
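The intercept-classify-evaluate flow described above can be sketched as a small policy gateway. All names here (`classify`, `intercept`, the policy table, the `Verdict` enum) are hypothetical illustrations of the described behavior, not Aegis's real API; a real classifier would inspect arguments and learned behavioral baselines, not just tool names.

```python
from enum import Enum
from typing import Callable


class Verdict(Enum):
    ALLOW = "allow"    # let the tool execute
    BLOCK = "block"    # refuse the call outright
    REVIEW = "review"  # pause for explicit human approval


def classify(tool_name: str) -> str:
    """Toy intent classifier keyed on the tool name."""
    if "sql" in tool_name or "db" in tool_name:
        return "database"
    if "http" in tool_name or "fetch" in tool_name:
        return "network"
    return "other"


# Hypothetical policy table: one decision rule per tool category.
POLICIES: dict[str, Callable[[dict], Verdict]] = {
    "database": lambda call: Verdict.BLOCK
    if "drop table" in call["args"].get("query", "").lower()
    else Verdict.ALLOW,
    # Network calls carry exfiltration risk, so route them to a human.
    "network": lambda call: Verdict.REVIEW,
}


def intercept(call: dict) -> Verdict:
    """Intercept a tool call, classify it, and evaluate policy."""
    category = classify(call["tool"])
    policy = POLICIES.get(category, lambda c: Verdict.ALLOW)
    return policy(call)
```

In this sketch a destructive SQL statement is blocked, an ordinary query is allowed, and any network fetch is held for human review, mirroring the allow/block/pause outcomes described above.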
Quick Start & Requirements
Clone the repository and start the stack:

- `git clone https://github.com/Justin0504/Aegis`
- `docker compose up -d`

Key endpoints and distribution channels:

- Compliance Cockpit: `localhost:3000`
- Gateway API: `localhost:8080`
- PyPI: `agentguard-aegis`
- npm: `@justinnn/agentguard`
- Docker Hub: `aegis-gateway`
- arXiv: 2603.12621
- Live Demo Agent: `localhost:8501` (prerequisites apply)
Maintenance & Community
The project is actively maintained by its creator, Justin, with contributions welcomed via issues and pull requests on GitHub. Specific community channels like Discord or Slack are not detailed in the README.
Licensing & Compatibility
Aegis is released under the MIT License, permitting self-hosting and commercial use without significant restrictions.
Limitations & Caveats
Anomaly detection requires an initial learning period (approximately 200 traces) before full effectiveness, meaning new agents are not immediately protected against behavioral anomalies. The system's efficacy relies on accurate tool classification, and the live demo requires an Anthropic API key.