PrivHunterAI by Ed1s0nZ

Passive proxy tool that uses AI to detect privilege escalation (authorization) vulnerabilities

created 6 months ago
323 stars

Top 85.3% on sourcepulse

Project Summary

PrivHunterAI is a passive proxy tool that detects authorization vulnerabilities by leveraging large language models (LLMs) such as Kimi, DeepSeek, and GPT. Aimed at security researchers and developers, it automates the comparison of HTTP requests replayed with different user credentials to identify privilege escalation flaws, and it systematically analyzes and reports potential authorization bypasses.

How It Works

PrivHunterAI intercepts HTTP traffic and compares pairs of requests. It first preprocesses each request to identify the operation type (read/write), public interfaces, dynamic fields, and identity fields. The core logic then compares responses from two different user contexts (A and B), prioritizing quick detection based on response status codes, content consistency, and the presence of sensitive data. If quick detection is inconclusive, it falls back to a deep analysis mode that compares field structures, values, and semantic content to determine authorization status.
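
The quick-detection pass can be pictured with a short Go sketch (the project itself is built with Go). This is a minimal illustration under assumed names, not the project's actual code: it shows only the replay-with-B's-credentials step and the fast status/consistency heuristics that run before any LLM call.

```go
// Minimal sketch of the quick-detection pass; function and type names are
// hypothetical, and only the cheap heuristics are shown.
package privhunter

import (
	"io"
	"net/http"
)

type verdict string

const (
	vulnerable verdict = "vulnerable" // user B received user A's data
	safe       verdict = "safe"      // user B was explicitly rejected
	needsLLM   verdict = "unknown"   // inconclusive: escalate to deep LLM analysis
)

// quickCheck replays an intercepted request with user B's identity headers
// and applies fast heuristics before any LLM call is made. It assumes the
// request body (if any) is replayable.
func quickCheck(client *http.Client, reqA *http.Request, headersB map[string]string, bodyA []byte) (verdict, error) {
	reqB := reqA.Clone(reqA.Context())
	for k, v := range headersB {
		reqB.Header.Set(k, v) // swap in B's cookies/tokens
	}
	respB, err := client.Do(reqB)
	if err != nil {
		return needsLLM, err
	}
	defer respB.Body.Close()
	bodyB, err := io.ReadAll(respB.Body)
	if err != nil {
		return needsLLM, err
	}
	// Fast path 1: an explicit denial suggests authorization is enforced.
	if respB.StatusCode == http.StatusUnauthorized || respB.StatusCode == http.StatusForbidden {
		return safe, nil
	}
	// Fast path 2: an identical 200 body means B saw exactly what A saw.
	if respB.StatusCode == http.StatusOK && string(bodyA) == string(bodyB) {
		return vulnerable, nil
	}
	// Anything else falls through to the deep field/semantic comparison.
	return needsLLM, nil
}
```

Running the cheap byte-level checks first keeps most request pairs from ever reaching a paid API, consistent with the cost-first design the README describes.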

Quick Start & Requirements

  • Installation: Download the source and compile with go build, or run a pre-compiled binary from the releases.
  • Configuration: Edit config.json to select the AI model (e.g., kimi, gpt) and set the corresponding API keys; configure headers2 with the second user's (request B's) identity headers (see the sketch after this list).
  • Proxy Setup: Point BurpSuite or another proxy at 127.0.0.1:9080 (configurable).
  • HTTPS Interception: Install the MITM proxy certificate (~/.mitmproxy/mitmproxy-ca-cert.pem) so HTTPS traffic can be decrypted and analyzed.
  • Web UI: View results at 127.0.0.1:8222.
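
For orientation, below is a hedged Go sketch of what loading config.json might look like. The field names mirror the keys the README mentions (AI model, API keys, headers2), but the exact key names and schema are assumptions, not the project's documented format.

```go
// Hedged sketch of config.json loading; key names and schema are assumed.
package privhunter

import (
	"encoding/json"
	"os"
)

type Config struct {
	AIModel  string            `json:"aiModel"`  // e.g. "kimi", "deepseek", "gpt" (key name assumed)
	APIKeys  map[string]string `json:"apiKeys"`  // per-model API keys (shape assumed)
	Headers2 map[string]string `json:"headers2"` // user B's identity headers, per the README
	Proxy    string            `json:"proxy"`    // listen address, e.g. 127.0.0.1:9080 (key assumed)
}

func loadConfig(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```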

Highlighted Details

  • Leverages multiple LLMs for vulnerability analysis.
  • Supports passive proxying for traffic interception.
  • Includes a web interface for viewing scan results with pagination.
  • Implements a retry mechanism for scan failures and API errors.
  • Optimizes cost by pre-filtering responses that contain common authorization-failure keywords, skipping unnecessary LLM calls (both sketched after this list).
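
The retry and pre-filtering behavior can be sketched in a few lines of Go. The keyword list, attempt count, and backoff policy below are illustrative assumptions; the project's actual filter and retry logic may differ.

```go
// Illustrative sketch of the keyword pre-filter and retry wrapper.
package privhunter

import (
	"strings"
	"time"
)

// denyKeywords holds common markers of an enforced authorization check;
// the exact list used by the project is not reproduced here.
var denyKeywords = []string{
	"permission denied",
	"access denied",
	"unauthorized",
	"login required",
}

// skipLLM reports whether a response body can be classified locally,
// saving the cost of an LLM call.
func skipLLM(body string) bool {
	lower := strings.ToLower(body)
	for _, kw := range denyKeywords {
		if strings.Contains(lower, kw) {
			return true // clear denial: no tokens spent on analysis
		}
	}
	return false
}

// withRetry retries a failing scan step (e.g. an AI API call) with
// exponential backoff before giving up.
func withRetry(attempts int, step func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = step(); err == nil {
			return nil
		}
		time.Sleep(time.Second << i) // 1s, 2s, 4s, ...
	}
	return err
}
```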

Maintenance & Community

The project is actively maintained with recent updates focusing on retry mechanisms, response filtering, URL analysis, front-end improvements, and prompt optimization for reduced false positives and cost savings. Community interaction channels are not explicitly mentioned in the README.

Licensing & Compatibility

The README does not specify a license. The project is intended for technical exchange and explicitly warns against illegal use.

Limitations & Caveats

The tool's effectiveness depends on the LLM's interpretation of HTTP semantics and on the model and API keys configured. The README notes that an "unknown" verdict is returned when response similarity falls between 50% and 80% or when responses are malformed, so not every request pair yields a definitive result.
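
That thresholding reads as a simple classifier. In the Go sketch below, only the 50–80% "unknown" band comes from the README; the verdicts outside that band are one plausible reading, not documented behavior.

```go
// Sketch of the similarity thresholds; only the "unknown" band is documented.
package privhunter

func classifyBySimilarity(similarity float64) string {
	switch {
	case similarity >= 0.8:
		return "vulnerable" // near-identical responses across users A and B (assumed mapping)
	case similarity >= 0.5:
		return "unknown" // 50–80%: too ambiguous to call, per the README
	default:
		return "safe" // divergent responses: authorization likely enforced (assumed mapping)
	}
}
```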

Health Check

  • Last commit: 1 month ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
Star History
25 stars in the last 90 days

Explore Similar Projects


rebuff by protectai

SDK for LLM prompt injection detection
Top 0.4% on sourcepulse · 1k stars
created 2 years ago, updated 1 year ago

llm-security by greshake

Research paper on indirect prompt injection attacks targeting app-integrated LLMs
Top 0.1% on sourcepulse · 2k stars
created 2 years ago, updated 2 weeks ago