Compiler for ONNX models using MLIR infrastructure
Top 39.8% on SourcePulse
ONNX-MLIR provides a compiler infrastructure to transform ONNX graphs into optimized code for various targets, including CPUs and specialized accelerators. It targets researchers and developers needing to deploy ONNX models efficiently with minimal runtime dependencies, offering flexibility in output formats like LLVM IR, object files, or shared libraries.
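A minimal end-to-end sketch of that flow, assuming the onnx-mlir driver and its PyRuntime Python bindings are installed: the model file name, input shape, and the OMExecutionSession class name are taken from the project's documentation and may differ across versions.

```python
import subprocess
import numpy as np

# PyRuntime is built alongside onnx-mlir; OMExecutionSession is the class name
# used in its documentation (older releases expose ExecutionSession instead).
from PyRuntime import OMExecutionSession

MODEL = "mnist.onnx"  # placeholder model file

# Compile the ONNX model into a shared library (mnist.so); --EmitLib is the
# documented driver flag for this output format.
subprocess.run(["onnx-mlir", "--EmitLib", MODEL], check=True)

# Load the compiled library and run inference on NumPy inputs.
session = OMExecutionSession("./mnist.so")
inputs = [np.random.rand(1, 1, 28, 28).astype(np.float32)]  # illustrative shape
outputs = session.run(inputs)
print(outputs[0].shape)
```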
How It Works
The project builds on the MLIR (Multi-Level Intermediate Representation) compiler framework and the LLVM backend. It defines an ONNX dialect within MLIR that represents ONNX graphs directly, enabling staged lowering: an ONNX graph is first imported into the ONNX dialect, then progressively lowered through intermediate MLIR dialects (e.g., Affine, SCF, and finally the LLVM dialect) to LLVM IR, which is compiled to native code. This staged approach allows sophisticated compiler optimizations and target-specific code generation.
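The staged lowering can be observed by stopping the driver at successive points in the pipeline and inspecting the emitted IR. A rough sketch follows, assuming the documented --Emit flags and a -o option for naming output files; both should be verified against `onnx-mlir --help` for your build.

```python
import subprocess
from pathlib import Path

MODEL = "mnist.onnx"  # placeholder model file

# Each stage corresponds to a progressively lower representation:
#   --EmitONNXIR : the graph expressed in MLIR's ONNX dialect
#   --EmitMLIR   : after lowering into intermediate MLIR dialects
#   --EmitLLVMIR : the LLVM dialect, one translation step away from LLVM IR
stages = {
    "--EmitONNXIR": "stage1_onnx",
    "--EmitMLIR": "stage2_mid",
    "--EmitLLVMIR": "stage3_llvm",
}

for flag, base in stages.items():
    # -o sets the output base name (assumed driver option); each of these
    # stages writes an .mlir-suffixed file next to it.
    subprocess.run(["onnx-mlir", flag, "-o", base, MODEL], check=True)
    # Print the first few lines of the emitted IR to see how the
    # representation changes from stage to stage.
    for mlir_file in sorted(Path(".").glob(f"{base}*.mlir")):
        print(f"== {flag} -> {mlir_file.name} ==")
        print("\n".join(mlir_file.read_text().splitlines()[:8]))
        print()
```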
Quick Start & Requirements
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats