Video-Wrapper-Skills by op7418

AI-powered video enhancement for dynamic visual effects

Created 2 months ago
270 stars

Top 95.1% on SourcePulse

Project Summary

This project provides an AI-powered Claude Skill to automatically add variety-show-style visual effects to interview and podcast videos. It targets content creators, educators, and social media managers looking to enhance engagement by transforming raw footage into polished, dynamic videos with minimal manual effort. The primary benefit is the automation of complex visual post-production tasks through intelligent subtitle analysis and one-click rendering.

How It Works

The system employs a smart workflow: an AI analyzes subtitle content to identify key information such as phrases, terminology, quotes, and guest details. Based on this analysis, it auto-generates suggestions for visual effects, which the user can then approve. The project offers two rendering engines: a recommended browser-based engine using Playwright, HTML, CSS, and Anime.js for high-quality, complex animations, and a fallback pure Python PIL engine for faster, browser-less rendering. This dual-engine approach balances visual fidelity with processing speed.
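The analysis step above can be sketched roughly as follows. This is an illustrative Python sketch, not the project's actual code: the parsing regex, function names, and scoring heuristics (quoted speech becomes a Quote Callout candidate, short emphatic lines become Key Phrase candidates) are all assumptions.

```python
import re

# Hypothetical sketch: scan SRT subtitles and flag lines that look like
# effect candidates. Heuristics and names are illustrative only.
SRT_BLOCK = re.compile(
    r"(\d+)\s+(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s+(.+?)(?=\n\n|\Z)",
    re.S,
)

def parse_srt(text):
    """Return a list of (start, end, text) tuples from SRT content."""
    return [(m.group(2), m.group(3), " ".join(m.group(4).split()))
            for m in SRT_BLOCK.finditer(text)]

def suggest_effects(cues):
    """Rough heuristic: quoted speech -> Quote Callout,
    short emphatic lines -> Key Phrase."""
    suggestions = []
    for start, end, line in cues:
        if '"' in line:
            suggestions.append({"component": "quote_callout", "start": start, "text": line})
        elif len(line.split()) <= 5:
            suggestions.append({"component": "key_phrase", "start": start, "text": line})
    return suggestions

sample = """1
00:00:01,000 --> 00:00:03,000
Welcome to the show

2
00:00:04,000 --> 00:00:08,000
She said "scale is a feature, not a goal" early on
"""

print(suggest_effects(parse_srt(sample)))
```

In the real workflow the suggestions generated at this stage are shown to the user for approval before anything is rendered.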

Quick Start & Requirements

Installation can be done via a one-click command: npx skills add https://github.com/op7418/Video-Wrapper-Skills. Alternatively, manual installation involves cloning the repository, activating a Python 3.8+ environment, installing dependencies with pip install -r requirements.txt, and running playwright install chromium. Usage within Claude is initiated with /video-wrapper interview.mp4 subtitles.srt. Command-line rendering requires specifying input files and an optional configuration JSON, with the renderer selectable via -r browser or -r pil.
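The renderer selection described above could be wired up with a standard argument parser along these lines. The -r browser / -r pil flag values follow the README; the positional argument names and the -c/--config flag are assumptions for illustration.

```python
import argparse

# Sketch of the CLI surface described above. Only the -r choices
# (browser, pil) come from the README; other names are hypothetical.
def build_parser():
    p = argparse.ArgumentParser(prog="render")
    p.add_argument("video", help="input video file, e.g. interview.mp4")
    p.add_argument("subtitles", help="subtitle file, e.g. subtitles.srt")
    p.add_argument("-c", "--config", help="optional effects config JSON")
    p.add_argument("-r", "--renderer", choices=["browser", "pil"],
                   default="browser", help="rendering engine to use")
    return p

args = build_parser().parse_args(["interview.mp4", "subtitles.srt", "-r", "pil"])
print(args.renderer)
```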

Highlighted Details

  • Visual Components: Offers 8 distinct components including Key Phrases, Lower Thirds, Chapter Titles, Term Cards, Quote Callouts, Animated Stats, Bullet Points, and Social Bars.
  • Visual Themes: Supports 4 curated themes (Notion, Cyberpunk, Apple, Aurora) with distinct color schemes and styles tailored for different content types.
  • AI-Powered Workflow: Automates effect suggestion generation based on subtitle analysis, streamlining the editing process.
  • Dual Rendering Engines: Provides flexibility with a high-fidelity browser renderer and a faster PIL fallback.
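A per-effect suggestion combining these components and themes might be modeled as a small validated record. This is a sketch only: the field names and identifier spellings are assumptions, not the project's actual config schema; only the component and theme lists come from the README.

```python
from dataclasses import dataclass

# The 8 components and 4 themes listed above, as lowercase identifiers
# (spelling assumed; the README lists them as display names).
COMPONENTS = {"key_phrase", "lower_third", "chapter_title", "term_card",
              "quote_callout", "animated_stat", "bullet_points", "social_bar"}
THEMES = {"notion", "cyberpunk", "apple", "aurora"}

@dataclass
class EffectSuggestion:
    component: str   # one of COMPONENTS
    theme: str       # one of THEMES
    start: float     # seconds into the video
    duration: float  # seconds on screen
    text: str

    def __post_init__(self):
        if self.component not in COMPONENTS:
            raise ValueError(f"unknown component: {self.component}")
        if self.theme not in THEMES:
            raise ValueError(f"unknown theme: {self.theme}")

s = EffectSuggestion("lower_third", "notion", 12.5, 4.0, "Jane Doe - Host")
print(s.component, s.theme)
```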

Maintenance & Community

The project encourages contributions via issues and pull requests. Specific details regarding maintainers, sponsorships, or dedicated community channels (like Discord/Slack) are not explicitly detailed in the README.

Licensing & Compatibility

The project is released under the MIT License, which permits commercial use and integration into closed-source projects.

Limitations & Caveats

Processing can be slow, particularly with the browser renderer; users are advised to optimize by using the PIL renderer, lowering video resolution, processing videos in segments, or reducing the number of components. The system may also consume significant memory, with similar mitigation strategies applicable. Proper font display, especially for CJK characters, requires system-level font installations (e.g., fonts-noto-cjk on Ubuntu). A clear distinction is noted between "Key Phrases" (for short phrases) and "Term Cards" (for single words/definitions) to ensure correct component usage.
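The segment-based mitigation could look like the following sketch: split the timeline into fixed-length windows and render each independently, so a single pass never holds the whole video in memory. The window length and function are illustrative assumptions, not documented project options.

```python
# Hypothetical sketch of segment-based processing for long videos.
def segment_timeline(total_seconds, segment_seconds=300):
    """Yield (start, end) windows covering the whole video."""
    start = 0
    while start < total_seconds:
        yield (start, min(start + segment_seconds, total_seconds))
        start += segment_seconds

# A ~21-minute video in 10-minute windows:
print(list(segment_timeline(1280, 600)))
# [(0, 600), (600, 1200), (1200, 1280)]
```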

Health Check

Last Commit: 2 months ago
Responsiveness: Inactive
Pull Requests (30d): 0
Issues (30d): 0
Star History: 89 stars in the last 30 days
