ai-model-bypass by l0gicx

AI model jailbreak prompts and techniques

created 2 months ago · 306 stars · Top 88.6% on sourcepulse

Project Summary

This repository provides a collection of "jailbreak" prompts designed to bypass safety restrictions and elicit unfiltered responses from large language models. It targets users interested in exploring the boundaries of AI behavior and testing model limitations.

How It Works

The project compiles various prompt engineering techniques, primarily focusing on role-playing scenarios and specific instruction sets. These prompts aim to manipulate the AI's context or persona to override its default safety protocols, encouraging it to generate content that would typically be refused.

Quick Start & Requirements

  • Usage: Copy and paste the provided prompts into the respective AI model's chat interface.
  • Prerequisites: Access to the targeted AI models (e.g., ChatGPT, Grok, Gemini, DeepSeek, Meta AI).
  • Note: The README suggests clearing browser cache for optimal results.

Highlighted Details

  • Offers prompts for a wide range of popular LLMs including ChatGPT, Grok, Gemini, DeepSeek, and Meta AI.
  • Includes detailed role-playing scenarios designed to elicit specific behaviors.
  • Features a "Testing Place" section for reporting successful jailbreaks on various models.

Maintenance & Community

  • The repository is maintained by l0gicx.
  • Community contributions are encouraged for reporting successful prompts and suggesting new ones.

Licensing & Compatibility

  • The repository does not explicitly state a license.
  • Compatibility depends on the terms of service of the individual AI models being targeted.

Limitations & Caveats

  • The effectiveness of these prompts can vary significantly between models and may be patched by AI providers.
  • Some prompts are noted to work only for the initial interaction.
  • The project's nature involves attempting to bypass AI safety features, which may violate the terms of service of the AI platforms used.

Health Check

  • Last commit: 6 days ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 1
  • Star History: 320 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems) and Pliny the Liberator (Founder of Pliny).

L1B3RT4S by elder-plinius

  • Top 1.0% · 10k stars
  • AI jailbreak prompts
  • created 1 year ago, updated 1 week ago
  • Starred by Pietro Schirano (Founder of MagicPath), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 3 more.

CL4R1T4S by elder-plinius

  • Top 1.8% · 8k stars
  • Dataset of system prompts for major AI models + agents
  • created 5 months ago, updated 4 days ago