ByteIR by ByteDance

Model compilation solution for diverse hardware

Created 2 years ago
448 stars

Top 67.2% on SourcePulse

View on GitHub
Project Summary

ByteIR is a ByteDance-developed, end-to-end model compilation solution for deep learning accelerators, CPUs, and GPUs. It targets researchers and developers building custom AI hardware or optimizing models for diverse platforms, offering a flexible, MLIR-based framework to streamline the compilation pipeline.

How It Works

ByteIR leverages MLIR (Multi-Level Intermediate Representation) with Google's Stablehlo dialect as its core IR. The design is modular: the compiler, runtime, and frontends (TensorFlow, PyTorch, ONNX) can each be used independently. The compiler provides generic graph-, loop-, and tensor-level optimizations that are compatible with upstream MLIR and Stablehlo passes, leaving users to focus on backend-specific finalization. Frontends hand results to the compiler as Stablehlo, and the compiler hands results to the runtime in a custom ByRE format (textual or bytecode).
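
To make that hand-off concrete, here is a minimal sketch of the three-stage pipeline described above. The helper names (export_to_stablehlo, compile_stablehlo_to_byre, run_byre) are illustrative placeholders, not ByteIR's actual API; the sketch only assumes the file-based contract the README describes: Stablehlo between frontend and compiler, ByRE between compiler and runtime.

```python
# Illustrative sketch of ByteIR's three-stage hand-off. The helper names below
# are hypothetical placeholders for the frontend, compiler, and runtime entry
# points; they are NOT ByteIR's actual API.
from pathlib import Path


def export_to_stablehlo(model, example_inputs, out_path: Path) -> Path:
    """Frontend stage (TensorFlow/PyTorch/ONNX): lower the model to a
    Stablehlo MLIR module and write it out as text."""
    raise NotImplementedError("placeholder for a ByteIR frontend")


def compile_stablehlo_to_byre(stablehlo_path: Path, byre_path: Path, target: str) -> Path:
    """Compiler stage: apply graph-, loop-, and tensor-level passes to the
    Stablehlo module and serialize the result as ByRE (text or bytecode)."""
    raise NotImplementedError("placeholder for the ByteIR compiler")


def run_byre(byre_path: Path, inputs):
    """Runtime stage: load the ByRE artifact and execute it on the target."""
    raise NotImplementedError("placeholder for the ByteIR runtime")


def compile_and_run(model, example_inputs, inputs, workdir: Path, target: str = "cuda"):
    """End-to-end flow: frontend -> Stablehlo -> compiler -> ByRE -> runtime.
    Each stage consumes only the artifact produced by the previous one."""
    stablehlo = export_to_stablehlo(model, example_inputs, workdir / "model.stablehlo.mlir")
    byre = compile_stablehlo_to_byre(stablehlo, workdir / "model.byre", target)
    return run_byre(byre, inputs)
```

Because each stage consumes only the artifact the previous one produced, any of them can be swapped for an in-house tool that reads or writes the same format, which is the modularity the project emphasizes.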

Quick Start & Requirements

  • Installation: Not explicitly detailed in the README; given the MLIR foundation, it most likely means building from source against LLVM/MLIR.
  • Prerequisites: MLIR, Stablehlo, and the frontend frameworks (TensorFlow, PyTorch, ONNX). Specific version requirements are not listed; a quick import check is sketched after this list.
  • Resources: Expect significant build time and disk space, as is typical for LLVM/MLIR-based projects.
  • Links: ByteIR Project (English, Chinese)
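
Since no versions are pinned, a small check like the following can confirm which frontend prerequisites are already importable in the current environment; the package names come from the prerequisites list above, and nothing ByteIR-specific is assumed.

```python
# Report which frontend prerequisites are importable and their versions.
# Only probes the packages named in the prerequisites list; no version
# constraints are enforced because none are stated in the README.
import importlib

for name in ("torch", "onnx", "tensorflow"):
    try:
        module = importlib.import_module(name)
        print(f"{name}: {getattr(module, '__version__', 'version unknown')}")
    except ImportError:
        print(f"{name}: not installed")
```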

Highlighted Details

  • End-to-end solution with modular compiler, runtime, and frontends.
  • MLIR-based, utilizing upstream dialects and Google Stablehlo for broad compatibility.
  • Supports TensorFlow, PyTorch, and ONNX frontends.
  • Provides generic optimizations reusable by DL ASIC compilers.

Maintenance & Community

  • Developed by ByteDance researchers and interns.
  • Early phase project with a focus on foundational infrastructure.
  • Feedback and contributions for specific architecture prioritization are welcomed.
  • Public talks available: C4ML'23, China SoftCon'23.

Licensing & Compatibility

  • License: Apache License v2.0.
  • Compatibility: Permissive license suitable for commercial use and integration with closed-source projects.

Limitations & Caveats

ByteIR is in its early phase, and highly-tuned kernels for specific architectures are not yet prioritized. While compatible with upstream MLIR and Stablehlo, specific version dependencies between components might require careful management during development.

Health Check

  • Last Commit: 4 weeks ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 1
  • Issues (30d): 0
  • Star History: 8 stars in the last 30 days
