On-device ML and GenAI deployment framework
Top 43.9% on SourcePulse
LiteRT is Google's on-device AI framework for deploying machine learning and generative AI models on edge platforms. It provides model conversion tooling, an efficient runtime, and optimization utilities, building on the legacy of TensorFlow Lite with improved performance and simplified hardware acceleration for developers targeting mobile and embedded systems.
How It Works
LiteRT V2 (Next) introduces a new API designed for streamlined development, featuring automated accelerator selection, true asynchronous execution, and efficient I/O buffer handling. It aims to provide a unified NPU acceleration experience across major chipset providers and best-in-class GPU performance through advanced buffer interoperability for zero-copy operations. The framework also prioritizes superior generative AI inference, simplifying integration and boosting performance for large models.
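The flow described above can be sketched with the Kotlin API shown in public previews of LiteRT Next: compile a model once with an accelerator preference, then run it against framework-managed I/O buffers. Exact class and method names here (CompiledModel, CompiledModel.Options, Accelerator, createInputBuffers) are assumptions based on those previews and may differ in the alpha release.

```kotlin
// Hedged sketch of the LiteRT V2 (Next) flow. API names are assumptions
// from public previews and may change while the release is in alpha.
fun runModel(assets: android.content.res.AssetManager, input: FloatArray): FloatArray {
    // Compile the model; LiteRT selects and configures the accelerator
    // (GPU here), replacing manual delegate setup from TensorFlow Lite.
    val model = CompiledModel.create(
        assets, "model.tflite",
        CompiledModel.Options(Accelerator.GPU),
    )

    // Framework-allocated buffers enable zero-copy interop where the
    // hardware supports it.
    val inputs = model.createInputBuffers()
    val outputs = model.createOutputBuffers()

    inputs[0].writeFloat(input)
    model.run(inputs, outputs)      // execution may be dispatched asynchronously
    return outputs[0].readFloat()
}
```

The key design point is that buffer allocation moves into the framework, which is what makes the zero-copy and asynchronous-execution claims above possible.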
Quick Start & Requirements
Build from source using the provided Docker script (docker_build/build_with_docker.sh).

Highlighted Details
Maintenance & Community
Contribution guidelines are available in CONTRIBUTING.md.

Licensing & Compatibility
Limitations & Caveats
LiteRT V2 is an alpha release, indicating potential instability and ongoing changes. Some hardware acceleration features (e.g., WebGPU, specific NPU support) are marked as "Coming soon."