ps-fuzz  by prompt-security

GenAI security tool to harden system prompts against LLM attacks

Created 1 year ago
562 stars

Top 57.2% on SourcePulse

GitHubView on GitHub
Project Summary

This tool assesses and hardens GenAI applications against prompt-based attacks. It's designed for developers and security professionals seeking to secure their LLM integrations by simulating various adversarial prompts and providing an interactive playground for prompt refinement.

How It Works

The Prompt Fuzzer employs a dynamic testing approach, analyzing the application's system prompt to tailor fuzzing processes. It supports multiple LLM providers and attack types, allowing for both interactive and batch mode testing. The tool simulates attacks like jailbreaking, prompt injection, and system prompt extraction to identify vulnerabilities.

Quick Start & Requirements

Highlighted Details

  • Supports 16 LLM providers and 15 different attack types.
  • Offers both interactive and CLI modes, with multi-threaded testing.
  • Includes a "Playground" for iterative prompt refinement.
  • Detailed attack simulations cover jailbreaking, prompt injection, and system prompt extraction.

Maintenance & Community

Licensing & Compatibility

  • Licensed under MIT.
  • Permissive license suitable for commercial use and integration into closed-source projects.

Limitations & Caveats

Using the tool consumes LLM tokens. Docker support is listed as "coming soon."

Health Check
Last Commit

1 month ago

Responsiveness

1 day

Pull Requests (30d)
1
Issues (30d)
0
Star History
25 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), Michele Castata Michele Castata(President of Replit), and
3 more.

rebuff by protectai

0.4%
1k
SDK for LLM prompt injection detection
Created 2 years ago
Updated 1 year ago
Starred by Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), Carol Willing Carol Willing(Core Contributor to CPython, Jupyter), and
3 more.

llm-security by greshake

0.1%
2k
Research paper on indirect prompt injection attacks targeting app-integrated LLMs
Created 2 years ago
Updated 2 months ago
Starred by Dan Guido Dan Guido(Cofounder of Trail of Bits), Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), and
5 more.

PurpleLlama by meta-llama

0.6%
4k
LLM security toolkit for assessing/improving generative AI models
Created 1 year ago
Updated 1 day ago
Feedback? Help us improve.