Prompt injection scanner for LLM apps
Top 41.9% on sourcepulse
Promptmap is a vulnerability scanning tool designed to automatically test custom LLM applications for prompt injection attacks. It assists developers and security professionals in identifying and mitigating risks like system prompt leakage or functional distraction within their LLM-based systems.
How It Works
Promptmap operates as a dynamic analysis tool, akin to SAST and DAST in traditional security. It analyzes provided system prompts, executes them against target LLMs, and then sends crafted attack prompts. By evaluating the LLM's responses against predefined rules, it determines the success of an injection attempt. This approach allows for targeted testing of specific vulnerabilities, such as prompt stealing or functional distraction, using customizable rules.
Quick Start & Requirements
pip install -r requirements.txt
after cloning the repository.Highlighted Details
Maintenance & Community
The project was initially released in 2022 and completely rewritten in 2025. Further community or maintenance details are not specified in the README.
Licensing & Compatibility
Limitations & Caveats
The project is a rewrite from 2025, implying potential for early-stage bugs or incomplete features. The GPL-3.0 license may present compatibility challenges for commercial, closed-source use cases.
1 week ago
Inactive