Avatar generator for creating animatable 3D Gaussian heads from a single image
Top 50.8% on sourcepulse
LAM is a PyTorch-based framework for generating ultra-realistic, animatable 3D avatars from a single image in seconds. It targets researchers and developers building interactive 3D applications, offering fast cross-platform animation and rendering, plus a low-latency SDK for real-time chat avatars.
How It Works
LAM leverages Gaussian Splatting for high-fidelity 3D avatar representation. The core innovation lies in its "Large Avatar Model" architecture, enabling one-shot creation and efficient animation. This approach allows for rapid generation and real-time performance, making it suitable for interactive experiences.
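LAM does not expose this internal representation directly, but as a rough illustration of the Gaussian Splatting primitive it builds on: each surface point of the avatar is an anisotropic 3D Gaussian with a position, rotation, scale, opacity, and color, and animation amounts to transforming these parameters per frame. A minimal sketch (all names hypothetical, not LAM's API):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    mean: np.ndarray     # center in world space, shape (3,)
    quat: np.ndarray     # orientation as a unit quaternion (w, x, y, z)
    scale: np.ndarray    # per-axis standard deviations, shape (3,)
    opacity: float       # alpha used when compositing splats
    color: np.ndarray    # RGB in [0, 1], shape (3,)

def quat_to_rotmat(q: np.ndarray) -> np.ndarray:
    """Convert a unit quaternion to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance(g: Gaussian3D) -> np.ndarray:
    """Sigma = R S S^T R^T: the anisotropic covariance projected at render time."""
    R = quat_to_rotmat(g.quat)
    S = np.diag(g.scale)
    return R @ S @ S.T @ R.T

# Example: an axis-aligned Gaussian (identity rotation),
# whose covariance is simply the squared scales on the diagonal.
g = Gaussian3D(mean=np.zeros(3), quat=np.array([1.0, 0.0, 0.0, 0.0]),
               scale=np.array([0.1, 0.2, 0.3]), opacity=0.9,
               color=np.array([0.8, 0.6, 0.5]))
cov = covariance(g)
```

Rendering splats these Gaussians onto the image plane and alpha-composites them front to back; LAM's contribution is predicting the full set of such primitives, rigged for animation, from one image in a single forward pass.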
Quick Start & Requirements
```shell
git clone https://github.com/aigc3d/LAM.git && cd LAM
sh ./scripts/install/install_cu121.sh  # or install_cu118.sh for CUDA 11.8
python app_lam.py
```

Highlighted Details
Maintenance & Community
The project is from Tongyi Lab, Alibaba Group. It has active development with recent releases of export features, WebGL SDK, and Audio2Expression. A roadmap is partially outlined with planned releases for larger models and cross-platform rendering.
Licensing & Compatibility
The repository does not explicitly state a license in the README. This requires clarification for commercial use or integration into closed-source projects.
Limitations & Caveats
The project is presented as a SIGGRAPH 2025 submission, so it is research-oriented and may see rapid, breaking changes or experimental-grade stability. Some model weights (LAM-large) are still pending release.