Live2D virtual streamer for Bilibili/Douyin
This project provides an AI-powered virtual streamer (VUP) that can interact with viewers on Bilibili and Douyin live streams. It aims to automate viewer engagement, respond to chat messages and donations, and even perform actions based on viewer behavior, enhancing the interactivity of live streams for creators.
How It Works
The system uses a producer-consumer model for efficient event processing. It integrates OpenAI's GPT-3.5 API for natural-language understanding and response generation, and leverages OpenAI embeddings for conversational context. Advanced features such as voice interaction and action triggering rely on libraries like `pyaudio`, `speech_recognition`, and `pyvts`, which connects to VTube Studio for avatar animation and lip-sync.
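The producer-consumer model described above can be sketched in a few lines. This is an illustrative example only, not the project's actual API: one thread enqueues incoming chat events, another dequeues them and generates replies (where a real implementation would call the OpenAI chat API).

```python
import queue
import threading

# Illustrative sketch of the producer-consumer pipeline: all names here
# are hypothetical, not taken from the project's source.
events = queue.Queue()

def produce(raw_messages):
    """Producer: push incoming chat/danmaku events onto the queue."""
    for msg in raw_messages:
        events.put(msg)
    events.put(None)  # sentinel: signal that no more events are coming

def consume(replies):
    """Consumer: pop events and turn each into a reply.
    A real implementation would call the OpenAI chat API here."""
    while True:
        msg = events.get()
        if msg is None:
            break
        replies.append(f"reply to: {msg}")
        events.task_done()

replies = []
producer = threading.Thread(target=produce, args=(["hello", "nice stream!"],))
consumer = threading.Thread(target=consume, args=(replies,))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(replies)  # replies come out in arrival order
```

Decoupling ingestion from response generation this way keeps slow LLM calls from blocking the live-chat listener.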
Quick Start & Requirements
1. Enter the `src` directory and run `pip install -r requirements.txt`.
2. Install the additional dependencies: `pyaudio`, `speech_recognition`, `keyboard`, `pyvts`, Milvus 2.0 (via Docker), and MySQL.
3. Copy `config.sample.ini` to `config.ini` and set `api_key` and `proxy`.
4. Run `python manager.py run bilibili` (for Bilibili) or `python manager.py run douyin` (for Douyin).

Highlighted Details
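The configuration step in Quick Start copies `config.sample.ini` to `config.ini`; only the `api_key` and `proxy` keys are confirmed by the source, and the section name and placeholder values below are assumptions:

```ini
; config.ini — minimal sketch; section name is an assumption
[openai]
api_key = sk-your-key-here
proxy = http://127.0.0.1:7890
```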
The streamer's persona/system prompt is configurable via `system_template`.

Maintenance & Community
The project is marked as "stopped maintenance" by the author, who recommends a new project, Langup. No community links (Discord, Slack) are provided.
Licensing & Compatibility
The README does not explicitly state a license. The project uses libraries with various licenses, and users should verify compatibility for commercial use.
Limitations & Caveats
The project is explicitly marked "stopped maintenance." The context plugin has significant setup requirements, including Dockerized Milvus and a MySQL environment. The action plugin requires manually confirming permission pop-ups in VTube Studio.