Curated list of embedded AI resources, tools, and reports
This repository is a curated list of resources, tools, and papers on embedded AI, aimed at researchers, developers, and practitioners working on model compression, quantization, and mobile inference acceleration. It provides an overview of recent advances and practical techniques for running AI models efficiently on edge devices.
How It Works
The project functions as an "awesome list," aggregating links to papers, code repositories, frameworks, benchmarks, and tutorials. It organizes these resources by topic: model compression techniques (quantization, pruning, low-rank approximation, distillation), execution frameworks (ncnn, TensorFlow Lite, CoreML), hardware benchmarks (e.g., Qualcomm Adreno GPUs), and platform-specific implementations (Android, iOS). This structure lets users quickly find material on a specific aspect of embedded AI development.
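To make the compression topics above concrete, here is a minimal sketch of one of the simplest techniques the list covers: symmetric per-tensor int8 post-training quantization of weights. This is an illustrative NumPy example, not code from any linked project; the function names and the round-trip check are our own.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

# Round-trip a tiny weight tensor: 4 bytes/value shrinks to 1 byte/value.
w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Per-element reconstruction error is bounded by half the quantization step.
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-7
```

Real frameworks such as TensorFlow Lite add refinements (per-channel scales, zero points for asymmetric ranges, activation calibration), but the storage/accuracy trade-off is the same idea.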
Quick Start & Requirements
This repository is a curated list and does not have a direct installation or execution command. Users can browse the Markdown file for links to external resources.
Maintenance & Community
The project is actively seeking contributors and encourages submissions via pull requests. The primary contact is via WeChat ID: NeuralTalk.
Licensing & Compatibility
The repository itself is a collection of links and does not appear to have a specific license. The licenses of the linked external resources vary.
Limitations & Caveats
As a curated list, the quality and maintenance of linked external resources are not guaranteed by this repository. The information may become outdated as the field evolves rapidly.