unslop by mshumer

Detect and mitigate AI model output repetition

Created 2 weeks ago

461 stars

Top 65.7% on SourcePulse

Project Summary

This project addresses the common issue of AI models defaulting to repetitive patterns in generated text and visual content. It provides a method for empirically detecting these defaults and creating reusable instruction files that steer models toward more distinctive, specific outputs. The target audience is engineers, researchers, and power users who want AI-generated content with fewer generic, boilerplate outputs and higher overall quality.

How It Works

Unslop operates by analyzing a specified domain (text or visual) to identify recurring patterns in AI-generated samples. It prompts an AI model (like Claude Code) to produce numerous outputs based on a generated prompt set. For visual domains, it renders screenshots of generated pages. An analysis pass then inspects these samples for commonalities, such as repeated phrasing, structural elements, or design choices. The core output is a skill.md file, which codifies these detected defaults as instructions on what to avoid, thereby forcing the model to generate fresher, less generic content. A before-and-after comparison visually demonstrates the impact of applying the generated skill file.
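The analysis pass described above boils down to cross-sample repetition counting: a phrase that recurs across many independently generated samples is a likely model default worth listing in the skill file. A minimal illustration of that idea (not the project's actual code) using word n-grams:

```python
from collections import Counter
from itertools import islice


def ngrams(text, n=3):
    """Yield word n-grams from one text sample."""
    words = text.lower().split()
    return zip(*(islice(words, i, None) for i in range(n)))


def find_defaults(samples, n=3, min_fraction=0.5):
    """Return n-grams appearing in at least `min_fraction` of samples --
    candidate 'default' phrasings the model keeps reaching for."""
    doc_counts = Counter()
    for sample in samples:
        # Count each n-gram once per sample, not once per occurrence.
        doc_counts.update(set(ngrams(sample, n)))
    threshold = len(samples) * min_fraction
    return [" ".join(g) for g, c in doc_counts.items() if c >= threshold]


samples = [
    "In today's fast-paced world, productivity matters more than ever.",
    "In today's fast-paced world, remote work is here to stay.",
    "Success in today's fast-paced world demands constant learning.",
]
# Flags phrases shared by most samples, e.g. "in today's fast-paced".
print(find_defaults(samples, n=3, min_fraction=0.6))
```

The detected phrases would then be codified in a skill.md as patterns to avoid; the actual tool delegates both generation and analysis to the model itself rather than using a fixed heuristic like this one.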

Quick Start & Requirements

  • Primary install/run command:
    git clone https://github.com/mshumer/unslop.git
    cd unslop
    python3 -m venv .venv
    source .venv/bin/activate
    
    For text/writing domains:
      python3 unslop.py --domain "blog writing"
    For visual domains (install Playwright and its browser first):
      pip install playwright && playwright install chromium
      python3 unslop.py --domain "startup SaaS landing pages" --type visual --count 20 --concurrency 3
  • Non-default prerequisites: Python 3.10+ and a Claude Code installation are required. Visual domains additionally need playwright and its browser binaries (chromium).
  • Links: Repository: https://github.com/mshumer/unslop. Skill file documentation: https://github.com/mshumer/unslop/blob/main/skills/unslop/SKILL.md.

Highlighted Details

  • Empirically measures model defaults rather than relying on guesswork.
  • Generates reusable instruction files (skill.md) compatible with various AI systems (Claude Code, CLAUDE.md, Codex, Cursor).
  • Supports both text-based (writing, code) and visual (websites, HTML) domains.
  • Includes a before/after comparison to objectively assess the effectiveness of the generated skill profile.

Maintenance & Community

No specific details regarding contributors, sponsorships, or community channels (like Discord/Slack) were found in the provided README snippet.

Licensing & Compatibility

The project is licensed under the MIT License. This license is generally permissive and allows for commercial use and integration into closed-source projects without significant restrictions.

Limitations & Caveats

The hard dependency on Claude Code may limit adoption for users without access to, or a preference for, that tool. Because the skill.md files are derived empirically, their effectiveness may vary with the complexity of the domain and the capabilities of the underlying model.

Health Check

  • Last Commit: 2 weeks ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 3
  • Star History: 463 stars in the last 16 days
