armorclaw by armoriq

Secure AI agent operations

Created 2 months ago
266 stars

Top 96.0% on SourcePulse

View on GitHub
Project Summary

ArmorClaw is an intent-based security enforcement plugin for OpenClaw AI agents, designed to protect against prompt injection, data exfiltration, and unauthorized tool execution. It provides developers and power users with fine-grained control over AI agent actions, ensuring that tool usage aligns strictly with approved plans and policies, thereby enhancing the security and reliability of AI assistants.

How It Works

ArmorClaw operates in two main phases: Intent Planning and Tool Execution Enforcement. Upon receiving a user message, it intercepts the LLM input, parses available tools, and makes a separate LLM call to generate an explicit plan of allowed tool actions. This plan is sent to the ArmorClaw backend, which returns a cryptographically signed intent token. Before each tool execution, ArmorClaw verifies that the intended tool is part of the approved plan, checks the intent token's validity, applies local policy rules, and optionally verifies cryptographic proofs (CSRG Merkle tree proofs) for tamper-proof tracking. Its fail-closed architecture ensures that execution is blocked if any verification step fails.
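The fail-closed enforcement step described above can be sketched as follows. This is an illustrative assumption of how such a check might look, not ArmorClaw's actual API; the `IntentToken` shape and every function name here are hypothetical.

```typescript
// Hypothetical sketch of a fail-closed tool-execution check.
// None of these types or names are the real ArmorClaw API.

interface IntentToken {
  allowedTools: string[];   // tools approved in the signed intent plan
  expiresAt: number;        // epoch milliseconds
  signatureValid: boolean;  // stand-in for real cryptographic verification
}

// Fail-closed: any failed check, missing token, or unexpected error blocks execution.
function mayExecuteTool(tool: string, token: IntentToken | null): boolean {
  try {
    if (!token) return false;                             // no token → block
    if (!token.signatureValid) return false;              // bad signature → block
    if (Date.now() > token.expiresAt) return false;       // expired token → block
    if (!token.allowedTools.includes(tool)) return false; // not in the plan → block
    return true;
  } catch {
    return false; // any unexpected error also blocks
  }
}
```

The key design point is that every branch other than full success returns `false`, so a verification failure can never silently permit a tool call.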

Quick Start & Requirements

The recommended installation is a one-line script: curl -fsSL https://armoriq.ai/install-armorclaw.sh | bash. For OpenClaw 2026.3.x, openclaw plugins install @armoriq/armorclaw is also supported. Older versions (2026.2.x) require manual patching. Prerequisites include Node.js v22+, pnpm, Git, an ArmorClaw API key from claw.armoriq.ai, and an LLM provider key (OpenAI, Anthropic, Gemini, or OpenRouter). Configuration is managed via ~/.openclaw/openclaw.json.
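A plugin entry in ~/.openclaw/openclaw.json might look roughly like this. Only the file path and the @armoriq/armorclaw plugin id come from the source; the field names and structure below are assumptions for illustration, not the documented schema.

```json
{
  "plugins": {
    "@armoriq/armorclaw": {
      "apiKey": "YOUR_ARMORCLAW_API_KEY",
      "llmProvider": "anthropic"
    }
  }
}
```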

Highlighted Details

  • Intent Verification: Ensures every tool execution is part of an explicitly approved plan.
  • Prompt Injection Protection: Actively blocks malicious instructions embedded within user inputs or files.
  • Data Exfiltration Prevention: Prevents unauthorized file uploads or data leaks by agents.
  • Cryptographic Verification: Supports optional CSRG Merkle tree proofs for tamper-proof intent tracking.
  • Fail-Closed Architecture: Blocks execution by default when intent cannot be verified.

Maintenance & Community

ArmorClaw is developed by ArmorIQ. Support is available via GitHub Issues (armoriq/armorclaw/issues) and by email at support@armoriq.ai. Links to the official ArmorClaw/ArmorIQ documentation and the OpenClaw documentation are provided.

Licensing & Compatibility

ArmorClaw is released under the MIT License, permitting broad use and modification. It functions as a plugin for the OpenClaw framework, with specific installation instructions provided for different OpenClaw versions.

Limitations & Caveats

Older OpenClaw versions (2026.2.x) require manual runtime patching. LLM planner outputs, particularly from models like Gemini, may be wrapped in Markdown code fences, so plan parsing must tolerate fenced JSON. Manual reinstallation can also trigger "duplicate plugin id" errors if stale backup directories are not removed first.
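The fenced-output caveat above can be handled with a tolerant parsing step along these lines. This is a sketch of one possible strategy, assuming the planner returns JSON; `parsePlannerOutput` is a hypothetical name, not ArmorClaw's actual code.

```typescript
// Some models (e.g. Gemini) wrap JSON planner output in Markdown code fences,
// which breaks a naive JSON.parse. Strip the fence first if one is present.
function parsePlannerOutput(raw: string): unknown {
  // Match a leading ```json (or bare ```) fence and a trailing ``` fence.
  const fenced = raw.trim().match(/^```(?:json)?\s*\n([\s\S]*?)\n```$/);
  const body = fenced ? fenced[1] : raw.trim();
  return JSON.parse(body);
}
```

Unfenced output passes through unchanged, so the same code path serves models that return bare JSON.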

Health Check

Last Commit: 2 weeks ago
Responsiveness: Inactive
Pull Requests (30d): 7
Issues (30d): 0
Star History: 73 stars in the last 30 days

Explore Similar Projects

Starred by Abubakar Abid (Cofounder of Gradio), Romain Huet (Head of Developer Experience at OpenAI), and 4 more.

NemoClaw by NVIDIA

1.0%
20k
Securely run always-on AI assistants
Created 1 month ago
Updated 19 hours ago