Live recording/uploading tool for Bilibili, with MLLM integration
Top 17.1% on sourcepulse
This project provides an automated Bilibili live stream recording and content creation pipeline for users who want to capture, process, and re-upload live content with minimal manual intervention. It covers the full workflow from recording to uploading, including automatic subtitle generation and clip creation, and is designed to run even on low-spec hardware.
How It Works
The engine uses a pipeline architecture, aiming for near real-time recording and uploading. Key features include automatic conversion and rendering of danmaku (bullet comments) into the video, speech-to-text via OpenAI's Whisper for subtitle generation, and AI-powered automatic clip creation based on danmaku density. It also supports AI-generated video covers and automatic uploading of both full recordings and clips to Bilibili, with options for multi-platform live streaming.
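Density-based clip creation can be pictured as bucketing danmaku timestamps into fixed windows and keeping the busiest ones. The sketch below is a hypothetical illustration of that idea; the function name, window size, and selection heuristic are assumptions, not the project's actual code.

```python
from collections import Counter

def select_clip_windows(danmaku_times, window_s=30, top_k=3):
    """Pick the densest time windows from a list of danmaku timestamps (seconds).

    Hypothetical illustration of density-based clip selection; the project's
    real heuristics may differ.
    """
    buckets = Counter(int(t // window_s) for t in danmaku_times)
    densest = sorted(buckets.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Return (start, end) ranges in chronological order.
    return sorted((b * window_s, (b + 1) * window_s) for b, _ in densest)

# Example: timestamps (in seconds) taken from a recorded danmaku file.
clips = select_clip_windows([12, 14, 15, 15, 16, 300, 301, 302, 303, 900])
print(clips)  # [(0, 30), (300, 330), (900, 930)]
```

The selected ranges would then be cut from the recording (e.g. with FFmpeg) before upload.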
Quick Start & Requirements
Clone the repository with its submodules (git clone --recurse-submodules) and install dependencies via pip install -r requirements.txt. Ensure FFmpeg is installed. Configuration is handled through bilive.toml and settings.toml. Docker images are available for both CPU and GPU (amd64 only) setups.
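For a local (non-Docker) install, a quick pre-flight check along these lines can catch the most common setup problems. The script is an illustrative sketch, not part of the project; it only assumes the FFmpeg dependency and the two config files named above.

```python
import shutil
from pathlib import Path

def check_environment():
    """Rough pre-flight check for a local setup (illustrative only)."""
    problems = []
    if shutil.which("ffmpeg") is None:
        problems.append("FFmpeg not found on PATH")
    for cfg in ("bilive.toml", "settings.toml"):
        if not Path(cfg).is_file():
            problems.append(f"missing config file: {cfg}")
    return problems

if __name__ == "__main__":
    issues = check_environment()
    print("OK" if not issues else "\n".join(issues))
```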
Highlighted Details
Integrates an upload tool (bilitool) for seamless uploading and supports multi-part video uploads.
Maintenance & Community
The project is actively maintained, with ongoing integration of new AI models. Links for issue reporting and a chat group (via image) are provided.
Licensing & Compatibility
The project is released under the MIT License. It is designed for personal learning and exchange; unauthorized commercial use or large-scale recording is discouraged due to potential platform restrictions.
Limitations & Caveats
The arm64 version of the Docker image does not support local Whisper deployment due to Triton library compatibility issues. Users must ensure sufficient GPU VRAM for local Whisper deployment. API-based features are subject to third-party rate limits and costs.
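For the VRAM constraint, the usual lever with OpenAI's Whisper is model size: smaller checkpoints need far less GPU memory than the large ones. A minimal sketch using the open-source whisper package follows; the file path and model choice are placeholders.

```python
import whisper

# Smaller checkpoints ("tiny", "base", "small") trade accuracy for lower VRAM;
# "large" requires the most GPU memory.
model = whisper.load_model("small")

# Transcribe a recorded file; FFmpeg must be installed for audio decoding.
result = model.transcribe("recording.mp4")

# Each segment carries start/end timestamps, which is what subtitle files need.
for seg in result["segments"]:
    print(f"{seg['start']:.1f} --> {seg['end']:.1f}  {seg['text'].strip()}")
```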