ChatGPT_DAN by 0xk1h0

Collection of prompts for jailbreaking ChatGPT

created 2 years ago
9,815 stars

Top 5.2% on sourcepulse

Project Summary

This repository collects "jailbreak" prompts designed to bypass ChatGPT's safety filters and content policies. It targets users who want to push large language models beyond their intended restrictions and elicit uncensored, opinionated, or potentially harmful output.

How It Works

The core mechanism is instructing ChatGPT to adopt a specific persona (e.g., "DAN", short for "Do Anything Now") that is explicitly told to disregard OpenAI's content policies. The prompts use role-playing and simulated environments to coax the model into responses that would normally be blocked, such as expressing opinions, fabricating information, or engaging with sensitive topics without ethical constraints.

Highlighted Details

  • Persona-Based Evasion: Utilizes detailed persona instructions to override standard AI behavior.
  • Content Policy Bypass: Explicitly instructs the model to ignore OpenAI's content policies, including those related to harmful, unethical, or explicit content.
  • Dual Response Format: Many prompts request both a standard and a "jailbroken" response for comparison.
  • Evolving Prompts: The repository showcases various versions of "DAN" prompts, indicating ongoing efforts to find effective bypass methods.

Maintenance & Community

This is a community-driven collection of prompts, primarily sourced from platforms like Reddit. There is no central maintainer or formal community structure indicated.

Licensing & Compatibility

The repository itself does not specify a license. The prompts are designed to interact with OpenAI's models, and their effectiveness is dependent on the current state of ChatGPT's safety mechanisms.

Limitations & Caveats

The effectiveness of these prompts depends heavily on the specific version and implementation of ChatGPT in use. OpenAI actively updates its models to patch such bypasses, so individual prompts tend to become obsolete quickly. Using them to generate harmful or unethical content also carries risks for the user and may violate OpenAI's usage policies.

Health Check

  • Last commit: 11 months ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 0
  • Issues (30d): 3
  • Star History: 947 stars in the last 90 days
