Unity-based real-time 3D avatar
This project provides an open-source, real-time 3D digital human powered by Unity, targeting developers and researchers interested in creating interactive AI-driven characters. It integrates speech recognition, LLM-based conversational AI, and text-to-speech with lip-syncing for a lifelike experience.
How It Works
The system processes user microphone input through speech recognition, feeds the text to a chosen LLM API for response generation, and then uses text-to-speech (TTS) for audio output. Lip synchronization is achieved using the uLipSync package, ensuring mouth movements match the synthesized speech. The architecture supports various LLM APIs and is built on Unity's Universal Render Pipeline (URP) for cross-platform compatibility.
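As a rough sketch of this loop (not the project's actual code), the hypothetical Unity C# component below wires the stages together. RecognizeSpeechAsync, QueryLlmAsync, and SynthesizeSpeechAsync are placeholders for whichever external ASR, LLM, and TTS services are configured, and the sketch assumes a uLipSync component on the same GameObject is already driving blend shapes from the AudioSource output.

```csharp
using System.Threading.Tasks;
using UnityEngine;

// Hypothetical sketch of the conversation loop described above.
// The three *Async helpers stand in for external ASR/LLM/TTS services
// and are not part of this project's real API.
public class AvatarConversationLoop : MonoBehaviour
{
    [SerializeField] private AudioSource voiceOutput;  // uLipSync analyses this source's audio
    [SerializeField] private int recordSeconds = 5;
    [SerializeField] private int sampleRate = 16000;

    public async void OnTalkButtonPressed()
    {
        // 1. Capture microphone input.
        AudioClip micClip = Microphone.Start(null, false, recordSeconds, sampleRate);
        await Task.Delay(recordSeconds * 1000);
        Microphone.End(null);

        // 2. Speech recognition -> user text (external service, assumed).
        string userText = await RecognizeSpeechAsync(micClip);

        // 3. LLM API -> reply text (external service, assumed).
        string replyText = await QueryLlmAsync(userText);

        // 4. TTS -> audio clip, played through the AudioSource that uLipSync
        //    reads, so mouth movements follow the synthesized speech.
        AudioClip replyClip = await SynthesizeSpeechAsync(replyText);
        voiceOutput.clip = replyClip;
        voiceOutput.Play();
    }

    // Placeholders for the external services.
    private Task<string> RecognizeSpeechAsync(AudioClip clip) => Task.FromResult(string.Empty);
    private Task<string> QueryLlmAsync(string prompt) => Task.FromResult(string.Empty);
    private Task<AudioClip> SynthesizeSpeechAsync(string text) => Task.FromResult<AudioClip>(null);
}
```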
Quick Start & Requirements
The project is a Unity project built on URP: open it in the Unity Editor, make sure the uLipSync package resolves, and supply credentials for your chosen LLM and TTS services. A microphone is required for speech input.
Highlighted Details
Maintenance & Community
The project is maintained by LKZMuZiLi. Community engagement takes place over WeChat, where group invitations are available.
Licensing & Compatibility
No license is explicitly stated in the README; the project is presented as an open-source version derived from a primary project. Commercial use or closed-source linking would require clarifying the licensing terms with the maintainer.
Limitations & Caveats
Users must configure LLM API details themselves. Older Unity versions may require manually resolving URP package errors. The project relies on external LLM and TTS services, so their availability and performance are dependencies outside the project's control.
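As an illustration of the kind of LLM configuration users have to supply themselves, the ScriptableObject below is a minimal sketch; the field names and menu path are assumptions, not the project's actual settings schema.

```csharp
using UnityEngine;

// Hypothetical container for user-supplied LLM API details.
[CreateAssetMenu(menuName = "Avatar/LLM Settings")]
public class LlmApiSettings : ScriptableObject
{
    public string endpointUrl = "https://api.example.com/v1/chat/completions"; // illustrative endpoint
    public string apiKey = "";                                                 // keep out of version control
    public string model = "gpt-3.5-turbo";                                     // illustrative model name
    [Range(0f, 2f)] public float temperature = 0.7f;
}
```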