vibe by thewh1teagle

Offline transcription app using OpenAI Whisper

created 1 year ago
3,076 stars

Top 15.9% on sourcepulse

Project Summary

Vibe is an offline, privacy-focused desktop application for transcribing audio and video using OpenAI's Whisper model. It targets users who need accurate, local transcription without sending data to external servers, offering broad language support and various output formats.
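The output formats mentioned above include subtitle formats, which are built from timestamped transcription segments. As a minimal sketch of the idea (the `(start, end, text)` segment shape here is an assumption for illustration, not Vibe's actual internal representation), segments can be rendered to SRT like this:

```python
# Hypothetical sketch: render Whisper-style timestamped segments as SRT.
# The (start, end, text) tuples are an assumed shape, not Vibe's internal format.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments: list[tuple[float, float, str]]) -> str:
    """Join (start, end, text) segments into an SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Hello world."), (2.5, 5.0, "Offline transcription.")]))
```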

How It Works

Vibe leverages the whisper.cpp project for efficient, local execution of Whisper models, enabling offline transcription. It supports GPU acceleration across macOS, Windows, and Linux via Vulkan, CoreML, and CUDA, aiming for high performance. The application also integrates with Ollama for local AI analysis and summarization, and can optionally use the Claude API for cloud-based summarization.
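Ollama serves a local REST API (by default on port 11434), so a summarization call like the one described above can be sketched against its standard `/api/generate` endpoint. The prompt wording and default model name below are illustrative assumptions, not Vibe's actual values:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_summary_request(transcript: str, model: str = "llama3") -> dict:
    """Build an Ollama /api/generate payload asking for a summary.
    The prompt wording and default model are illustrative assumptions."""
    return {
        "model": model,
        "prompt": f"Summarize the following transcript:\n\n{transcript}",
        "stream": False,  # request one complete JSON response instead of a stream
    }

def summarize(transcript: str, model: str = "llama3") -> str:
    """Send the request to a locally running Ollama server (must be started separately)."""
    payload = json.dumps(build_summary_request(transcript, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything stays on localhost, the transcript never leaves the machine, which matches the app's privacy-focused design.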

Quick Start & Requirements

  • Install by downloading a build from the project's releases page.
  • Requires macOS, Windows, or Linux.
  • GPU acceleration supports Nvidia, AMD, and Intel hardware.
  • See Vibe Docs for detailed setup.

Highlighted Details

  • Transcribes audio/video from popular websites (YouTube, Vimeo, etc.).
  • Supports batch processing of multiple files.
  • Offers real-time preview and speaker diarization.
  • Includes a CLI for command-line usage and an HTTP API.
  • Can transcribe system audio and microphone input.
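Batch processing of the kind listed above typically amounts to walking a folder, filtering by media extension, and deriving an output path per file. A minimal sketch (the extension list and `.srt` naming scheme are assumptions, not Vibe's documented behavior):

```python
from pathlib import Path

# Assumed set of media extensions; Vibe's actual supported list may differ.
MEDIA_EXTENSIONS = {".mp3", ".wav", ".mp4", ".mkv", ".m4a"}

def plan_batch(folder: str) -> list[tuple[Path, Path]]:
    """Pair each media file in `folder` with a sibling .srt output path."""
    pairs = []
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() in MEDIA_EXTENSIONS:
            pairs.append((path, path.with_suffix(".srt")))
    return pairs
```

Each pair would then be fed through the transcription step one file at a time, writing the result next to the source file.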

Maintenance & Community

  • Active development with community contributions welcomed via PRs.
  • Roadmap available at Vibe-Roadmap.
  • Community support via GitHub issues.

Licensing & Compatibility

  • Primarily licensed under MIT.
  • Compatible with commercial and closed-source applications.

Limitations & Caveats

  • iOS & Android support is listed as "coming soon."
  • Summarization via Claude API requires an API key and incurs costs.
Health Check

  • Last commit: 2 weeks ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 3
  • Issues (30d): 18
  • Star History: 762 stars in the last 90 days
