Decrypted Apple Intelligence safety filters
Top 89.6% on SourcePulse
This repository provides decrypted safety filter files for Apple Intelligence's generative models, enabling researchers and power users to examine the mechanisms behind content moderation. It offers tools to extract, decrypt, and consolidate these filters, facilitating analysis of Apple's approach to AI safety.
How It Works
The project uses Python scripts to decrypt proprietary safety override files used by Apple's generative models. Obtaining the encryption key requires attaching Xcode's LLDB debugger to a specific Apple system process (GenerativeExperiencesSafetyInferenceProvider). Once the key is acquired, a Python script decrypts the asset files, and another script combines and deduplicates their metadata into human-readable JSON, categorized by region and locale.
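The consolidation step described above could be sketched as follows; the field names ("locale", "rules") and file layout here are illustrative assumptions, not Apple's actual schema.

```python
import json
from collections import defaultdict

def consolidate(entries):
    """Merge decrypted filter entries, deduplicating rules per locale.

    `entries` is a list of dicts with hypothetical keys "locale" and
    "rules"; the real decrypted metadata may be shaped differently.
    """
    merged = defaultdict(set)
    for entry in entries:
        merged[entry["locale"]].update(entry["rules"])
    # Emit a human-readable structure keyed by locale.
    return {loc: sorted(rules) for loc, rules in merged.items()}

entries = [
    {"locale": "en_US", "rules": ["foo", "bar"]},
    {"locale": "en_US", "rules": ["bar", "baz"]},  # duplicate "bar" is dropped
    {"locale": "de_DE", "rules": ["qux"]},
]
print(json.dumps(consolidate(entries), indent=2, sort_keys=True))
```

The same pattern extends naturally to nesting by region as well as locale.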
Quick Start & Requirements
Requires the Python cryptography package (install via pip: pip install cryptography) and access to the target binary at /System/Library/ExtensionKit/Extensions/GenerativeExperiencesSafetyInferenceProvider.appex/Contents/MacOS/GenerativeExperiencesSafetyInferenceProvider. Step-by-step instructions are in HOW.md.
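With the key extracted, the decryption step might resemble the following sketch using the cryptography package; the AES-GCM mode and 12-byte nonce prefix are assumptions for illustration — the repository's scripts define the real container format.

```python
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_asset(key: bytes, blob: bytes) -> bytes:
    """Decrypt an asset blob, assuming a 12-byte nonce prefix followed by
    AES-GCM ciphertext. Apple's actual file layout may differ."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Round-trip demo with a locally generated key (not one extracted via LLDB).
key = AESGCM.generate_key(bit_length=256)
nonce = b"\x00" * 12
blob = nonce + AESGCM(key).encrypt(nonce, b'{"reject": []}', None)
print(decrypt_asset(key, blob))
```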
Highlighted Details
The decrypted filters define reject, remove, and regexReject rules for generative model outputs, organized into override files such as com.apple.gm.safety_deny.output.code_intelligence.base.
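To illustrate how such rules could act on model output, here is a hypothetical interpreter; the semantics are inferred from the rule names alone, not from Apple's implementation.

```python
import re

def apply_rules(text, rules):
    """Apply hypothetical safety rules to a model output string.

    - "reject": suppress the output if it contains the phrase.
    - "remove": strip the phrase but keep the rest of the output.
    - "regexReject": suppress the output if the pattern matches.
    These behaviors are guesses based on the rule names in the files.
    """
    for phrase in rules.get("reject", []):
        if phrase in text:
            return None  # output suppressed entirely
    for pattern in rules.get("regexReject", []):
        if re.search(pattern, text):
            return None
    for phrase in rules.get("remove", []):
        text = text.replace(phrase, "")
    return text

rules = {
    "reject": ["secret phrase"],
    "remove": ["filler"],
    "regexReject": [r"\d{16}"],  # e.g. a 16-digit number
}
print(apply_rules("a clean response", rules))
```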
Maintenance & Community
No specific community channels, contributors, or roadmap are mentioned in the README.
Licensing & Compatibility
The repository does not explicitly state a license. The presence of Apple's proprietary system files suggests potential legal and compatibility issues for redistribution or commercial use.
Limitations & Caveats
The process requires specific Apple hardware and software (Xcode LLDB), and relies on internal system file paths that may change with OS updates. The licensing status of the decrypted files is unclear, posing a risk for any use beyond personal analysis.
Last updated 3 weeks ago; repository marked inactive.