Research paper code for defending against AI image manipulation
This repository provides code and methods to defend images against malicious AI-powered editing, specifically targeting Stable Diffusion models. It is designed for researchers and developers working with generative AI, offering techniques to deter unwanted alterations and help preserve image integrity.
How It Works
The project implements adversarial attacks on Stable Diffusion's image-editing pipeline. It adds small, carefully crafted perturbations to an image, computed against either the model's image encoder (Encoder Attack) or the full diffusion process (Diffusion Attack), so that subsequent AI-generated edits of the protected image come out degraded or visibly altered. The goal is to raise the computational cost and difficulty for malicious actors attempting to manipulate images with AI.
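For intuition, here is a minimal sketch of an Encoder Attack as a projected-gradient-descent loop over the Stable Diffusion VAE from the diffusers library. The model ID, hyperparameters (eps, step_size, num_steps), and the flat-gray target latent are illustrative assumptions, not the repository's exact implementation.

import torch
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"  # assumed model ID
).to(device)
vae.requires_grad_(False)  # freeze weights; gradients flow to the input only

def encoder_attack(image, eps=0.06, step_size=0.01, num_steps=100):
    # `image`: [1, 3, H, W] tensor scaled to [-1, 1], already on `device`.
    # Push the VAE latent of the perturbed image toward the latent of a
    # flat gray target, so img2img edits built on it come out degraded.
    target = torch.zeros_like(image)  # flat gray in [-1, 1] space (assumed target)
    with torch.no_grad():
        target_latent = vae.encode(target).latent_dist.mean
    adv = image.clone()
    for _ in range(num_steps):
        adv = adv.detach().requires_grad_(True)
        latent = vae.encode(adv).latent_dist.mean
        loss = (latent - target_latent).pow(2).mean()
        grad, = torch.autograd.grad(loss, adv)
        # Signed-gradient step toward the target latent, then project the
        # perturbation back into the L-infinity budget `eps`.
        adv = adv - step_size * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)
        adv = adv.clamp(-1, 1)
    return adv.detach()

The Diffusion Attack follows the same pattern but backpropagates through the full denoising process rather than just the encoder, which is more effective but substantially more expensive.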
Quick Start & Requirements
After cloning the repository and activating a conda environment with Python 3.10:

pip install -r requirements.txt
huggingface-cli login
cd demo && python app.py
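One way to check the effect of immunization (a hedged sketch, not taken from the repository) is to run the same img2img edit on the original and immunized versions of an image and compare the outputs. The model ID, file names, prompt, and strength below are placeholder assumptions.

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model ID
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

original = Image.open("photo.png").convert("RGB")            # hypothetical paths
immunized = Image.open("photo_immunized.png").convert("RGB")

prompt = "a person standing on a beach"  # example malicious edit prompt
for name, img in [("original", original), ("immunized", immunized)]:
    edited = pipe(prompt=prompt, image=img, strength=0.6).images[0]
    edited.save(f"edit_{name}.png")  # the immunized edit should look degraded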
Highlighted Details
Maintenance & Community
The last commit was about 2 years ago and the repository is marked inactive.
Licensing & Compatibility
Limitations & Caveats
The project's reliance on specific versions of Stable Diffusion and Hugging Face libraries may cause compatibility issues with future releases. The effectiveness of the "raising the cost" approach also depends on the computational resources available to potential attackers.