ps-fuzz  by prompt-security

GenAI security tool to harden system prompts against LLM attacks

created 1 year ago
526 stars

Top 60.9% on sourcepulse

GitHubView on GitHub
Project Summary

This tool assesses and hardens GenAI applications against prompt-based attacks. It's designed for developers and security professionals seeking to secure their LLM integrations by simulating various adversarial prompts and providing an interactive playground for prompt refinement.

How It Works

The Prompt Fuzzer employs a dynamic testing approach, analyzing the application's system prompt to tailor fuzzing processes. It supports multiple LLM providers and attack types, allowing for both interactive and batch mode testing. The tool simulates attacks like jailbreaking, prompt injection, and system prompt extraction to identify vulnerabilities.

Quick Start & Requirements

Highlighted Details

  • Supports 16 LLM providers and 15 different attack types.
  • Offers both interactive and CLI modes, with multi-threaded testing.
  • Includes a "Playground" for iterative prompt refinement.
  • Detailed attack simulations cover jailbreaking, prompt injection, and system prompt extraction.

Maintenance & Community

Licensing & Compatibility

  • Licensed under MIT.
  • Permissive license suitable for commercial use and integration into closed-source projects.

Limitations & Caveats

Using the tool consumes LLM tokens. Docker support is listed as "coming soon."

Health Check
Last commit

3 days ago

Responsiveness

1 day

Pull Requests (30d)
0
Issues (30d)
1
Star History
58 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen Chip Huyen(Author of AI Engineering, Designing Machine Learning Systems), Carol Willing Carol Willing(Core Contributor to CPython, Jupyter), and
2 more.

llm-security by greshake

0.2%
2k
Research paper on indirect prompt injection attacks targeting app-integrated LLMs
created 2 years ago
updated 2 weeks ago
Feedback? Help us improve.