Collection of resources for GPT prompt security, jailbreaks, and leaks
Top 17.5% on sourcepulse
This repository serves as a curated collection of resources related to advanced prompt engineering techniques for Large Language Models (LLMs), focusing on "jailbreaking" restrictions, prompt injection, and system prompt leaks. It targets AI researchers, security professionals, and power users seeking to explore the capabilities and vulnerabilities of LLMs like ChatGPT, Gemini, and others. The primary benefit is providing a centralized, categorized hub for cutting-edge prompt strategies and security research.
How It Works
The repository organizes a vast array of links to external GitHub repositories, communities, and articles. It categorizes these resources into distinct areas: Jailbreaks, GPT Agents System Prompt Leaks, Prompt Injection, Secure Prompting, GPTs Lists, Prompts Libraries, Prompt Engineering Resources, Prompt Sources, and specialized "Cyber-Albsecop GPT Agents." This structured approach allows users to quickly navigate and discover relevant information on specific LLM interaction techniques.
Quick Start & Requirements
This is a curated list of links, not a runnable software project. No installation or specific requirements are needed beyond a web browser to access the linked resources.
Highlighted Details
microsoft/promptbench
).Maintenance & Community
The repository is actively maintained by CyberAlbSecOP, with a stated goal to keep it updated and "hot." It highlights community contributions through "Hall Of Fame" entries. Links to relevant Reddit communities are provided for ongoing discussion and discovery.
Licensing & Compatibility
The repository itself is a collection of links and does not have a specific license. The licenses of the linked external repositories vary, and users should consult those individual projects for licensing terms and compatibility.
Limitations & Caveats
As a curated list, the repository's content is dependent on the maintenance and availability of the linked external resources. Some linked jailbreak techniques may become outdated as LLM models are updated. The repository also notes a policy to remove prompts upon request if they are considered "secret."
2 weeks ago
1 week