pentest-ai-agents by 0xSteph

AI agents for offensive security and penetration testing

Created 1 month ago
959 stars

Top 38.0% on SourcePulse

Project Summary

This repository provides a suite of 28 specialized AI subagents that turn Claude Code into a powerful offensive security research assistant. Targeting penetration testers and security researchers, it automates complex tasks across the engagement lifecycle, from initial planning and reconnaissance to exploit research, detection engineering, and final reporting, thereby improving both the efficiency and the depth of analysis.

How It Works

The project leverages Anthropic's Claude Code by organizing specialized AI agents as distinct files. Users interact with Claude, describing their security task, and Claude automatically routes the request to the appropriate agent based on its domain expertise. Agents operate in two tiers: Tier 1 provides advisory support, analyzing user-provided tool output and offering methodologies. Tier 2 agents, available for specific functions, can compose and execute commands directly within an authorized scope, with strict validation and user approval at each step.
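Each agent is simply a Markdown file that Claude Code picks up from ~/.claude/agents/. The repository's actual definitions are not reproduced here, but a hypothetical Tier 1 agent, following Claude Code's subagent convention of a YAML frontmatter header over a plain-text system prompt, might look like:

```markdown
---
name: recon-advisor
description: Advises on reconnaissance methodology and interprets scan output the user pastes in.
---

You are a reconnaissance advisor for authorized penetration tests.
Analyze tool output the user provides, suggest next steps, and never
execute commands yourself.
```

The name and description fields are what let Claude route a matching request to the right agent automatically.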

Quick Start & Requirements

The primary installation involves a single command: curl -fsSL https://raw.githubusercontent.com/0xSteph/pentest-ai-agents/main/install.sh | bash. This script clones the repository and installs the agents to ~/.claude/agents/. Prerequisites include a configured Claude Code environment and an active Claude Pro or Max subscription. Detailed setup instructions, including first-time Claude Code configuration, are available in INSTALL.md.
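Since the agents are plain files, the installer's job reduces to cloning the repository and copying agent definitions into place. A rough sketch of that flow, using temporary directories as stand-ins for the real repository checkout and ~/.claude/agents/:

```shell
# Stand-ins for the cloned repo and the Claude Code agents directory
repo=$(mktemp -d)
agents_dir=$(mktemp -d)

# Pretend the clone produced two agent definition files
printf '%s\n' '# recon agent prompt' > "$repo/recon.md"
printf '%s\n' '# reporting agent prompt' > "$repo/reporting.md"

# The installer copies every agent file into the agents directory
mkdir -p "$agents_dir"
cp "$repo"/*.md "$agents_dir"/

# List what was installed
ls "$agents_dir"
```

In a real install, $repo would be the cloned pentest-ai-agents checkout and $agents_dir would be ~/.claude/agents/.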

Highlighted Details

  • 28 Specialized Agents: Covers diverse areas including engagement planning, recon, OSINT, exploit chaining, cloud security, API security, mobile, wireless, social engineering, vulnerability scanning, Active Directory, detection engineering, forensics, and reporting.
  • Tier 2 Execution: Select agents can execute commands directly, with a robust safety model that validates targets against the defined scope and requires user approval before each command runs.
  • Findings Database: Integrates with findings.sh for persistent SQLite storage of engagement data, tracking vulnerabilities, progress, and enabling exports across sessions.
  • Local Model Support: Agents are plain Markdown system prompts, convertible via ./opencode-setup.sh --full to work with local LLM providers like Ollama or LM Studio.
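The findings.sh schema is not documented here, but the idea of persistent SQLite storage for engagement data can be sketched with the sqlite3 CLI (the table and column names below are hypothetical, not the script's actual schema):

```shell
db=$(mktemp)

# Hypothetical findings table: one row per vulnerability
sqlite3 "$db" 'CREATE TABLE findings (
  id INTEGER PRIMARY KEY,
  host TEXT,
  title TEXT,
  severity TEXT
);'

# Record a finding during an engagement session
sqlite3 "$db" "INSERT INTO findings (host, title, severity)
               VALUES ('10.0.0.5', 'Outdated OpenSSH banner', 'medium');"

# Later sessions (or an export step) query the same database file
sqlite3 "$db" 'SELECT host, title, severity FROM findings;'
```

Because the database is a single file, it naturally persists across Claude sessions and can be handed to a reporting agent at the end of an engagement.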

Maintenance & Community

The project is maintained by the author, 0xSteph, with contributions welcomed via pull requests. Specific community channels like Discord or Slack are not detailed in the README.

Licensing & Compatibility

The project is licensed under the MIT License, permitting commercial use and modification. Compatibility is primarily tied to the Anthropic Claude platform, though local model support offers broader integration potential.

Limitations & Caveats

This toolkit is strictly intended for authorized security testing engagements with signed rules of engagement and a clearly defined scope. Users must possess explicit authorization before utilizing these agents. The core functionality relies on Anthropic's Claude platform, representing a dependency on a third-party service.

Health Check

  • Last Commit: 5 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 2
  • Issues (30d): 0
  • Star History: 764 stars in the last 30 days
