Deep learning inference acceleration framework
Top 43.9% on SourcePulse
Adlik is an end-to-end framework designed to accelerate deep learning inference across cloud and embedded environments. It targets engineers and researchers seeking flexible, high-performance deployment of models developed in popular frameworks like TensorFlow, Keras, and PyTorch. Adlik streamlines the optimization and deployment pipeline, enabling efficient inference on diverse hardware platforms.
How It Works
Adlik operates through three core components: a Model Optimizer that applies techniques such as pruning and quantization, a Model Compiler that converts optimized models into runtime-specific formats, and a Serving Engine that provides optimized runtimes tailored to specific deployment environments. This pipeline lets users take models from various source formats (H5, CheckPoint, FrozenGraph, ONNX, SavedModel), compile them to targets such as OpenVINO, TensorFlow Lite, or TensorRT, and serve them with Adlik's engines for accelerated inference.
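The optimize-compile-serve flow above can be sketched as a simple lookup from source format to target runtime. Everything here (function names, the format-to-target mapping) is illustrative and is not Adlik's actual API; it only mirrors the pipeline's shape.

```python
# Illustrative sketch of Adlik's compile step: a source model format is
# mapped to a target runtime. The mapping below is an example, not the
# real compiler's decision logic.
COMPILE_TARGETS = {
    "h5": "tflite",            # Keras H5 -> TensorFlow Lite
    "checkpoint": "tensorrt",  # TF CheckPoint -> TensorRT plan
    "frozen_graph": "openvino",
    "onnx": "tensorrt",
    "saved_model": "tflite",
}

def compile_model(source_format: str) -> str:
    """Return the target runtime a model would be compiled to (illustrative)."""
    try:
        return COMPILE_TARGETS[source_format.lower()]
    except KeyError:
        raise ValueError(f"unsupported source format: {source_format}")
```

In the real framework, the chosen target depends on the deployment hardware (e.g. TensorRT for NVIDIA GPUs, OpenVINO for Intel CPUs), not only on the source format.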
Quick Start & Requirements
The quickest way to get started is with the pre-built Docker images published on Alibaba Cloud for both model compilation and serving; for example, docker pull registry.cn-beijing.aliyuncs.com/adlik/model-compiler:v1.0 (serving images follow a similar naming scheme). Building from source requires Git, Bazel, and Python 3.x; GPU-enabled builds additionally need matching CUDA, cuDNN, and TensorRT versions, and OpenVINO runtime builds require an OpenVINO installation.
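A minimal shell sketch of the Docker route described above. Only the compiler image name comes from the text; the guard around the pull is an assumption so the snippet degrades gracefully on machines without Docker.

```shell
# Compiler image named in the quick-start text above.
COMPILER_IMAGE=registry.cn-beijing.aliyuncs.com/adlik/model-compiler:v1.0

# Pull only if docker is installed; otherwise skip quietly.
if command -v docker >/dev/null 2>&1; then
    docker pull "$COMPILER_IMAGE" || echo "pull failed (offline?)"
fi
```

Serving images live in the same registry under similar names; consult the registry listing for the exact image name and tag for your runtime.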
Highlighted Details
Maintenance & Community
No specific details regarding maintenance, community channels (like Discord/Slack), or notable contributors were found in the provided text.
Licensing & Compatibility
Adlik is licensed under the Apache License 2.0, which generally permits commercial use and integration into closed-source projects.
Limitations & Caveats
Building Adlik from source involves complex dependency management, particularly for GPU support requiring specific CUDA and TensorRT versions. Additionally, PaddlePaddle models are not supported by the TensorFlow or PyTorch serving engines.
Last updated: 1 year ago (repository inactive).