promptmap  by utkusen

Prompt injection scanner for LLM apps

created 2 years ago
877 stars

Top 41.9% on sourcepulse

GitHubView on GitHub
Project Summary

Promptmap is a vulnerability scanning tool designed to automatically test custom LLM applications for prompt injection attacks. It assists developers and security professionals in identifying and mitigating risks like system prompt leakage or functional distraction within their LLM-based systems.

How It Works

Promptmap operates as a dynamic analysis tool, akin to SAST and DAST in traditional security. It analyzes provided system prompts, executes them against target LLMs, and then sends crafted attack prompts. By evaluating the LLM's responses against predefined rules, it determines the success of an injection attempt. This approach allows for targeted testing of specific vulnerabilities, such as prompt stealing or functional distraction, using customizable rules.

Quick Start & Requirements

  • Install via pip install -r requirements.txt after cloning the repository.
  • Requires Python and API keys for OpenAI or Anthropic models. Local models can be used via Ollama.
  • Ollama must be installed separately for local model execution.
  • Official documentation and examples are available within the repository.

Highlighted Details

  • Supports multiple LLM providers: OpenAI, Anthropic, and open-source models via Ollama.
  • Customizable test rules defined in YAML format.
  • Allows specifying the number of test iterations to uncover latent vulnerabilities.
  • Includes a "firewall testing mode" to assess the efficacy of LLM-based firewalls.

Maintenance & Community

The project was initially released in 2022 and completely rewritten in 2025. Further community or maintenance details are not specified in the README.

Licensing & Compatibility

  • Licensed under GPL-3.0.
  • The GPL-3.0 license is a strong copyleft license, requiring derivative works to also be open-sourced under the same license. This may impose restrictions on integration with closed-source commercial applications.

Limitations & Caveats

The project is a rewrite from 2025, implying potential for early-stage bugs or incomplete features. The GPL-3.0 license may present compatibility challenges for commercial, closed-source use cases.

Health Check
Last commit

1 week ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
0
Star History
98 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen Chip Huyen(Author of AI Engineering, Designing Machine Learning Systems), Carol Willing Carol Willing(Core Contributor to CPython, Jupyter), and
2 more.

llm-security by greshake

0.2%
2k
Research paper on indirect prompt injection attacks targeting app-integrated LLMs
created 2 years ago
updated 2 weeks ago
Feedback? Help us improve.