awesome-tensor-compilers by merrymercy

Curated list of tensor compiler projects and papers

Created 5 years ago · 2,661 stars · Top 17.7% on SourcePulse

Project Summary

This repository curates a comprehensive collection of open-source projects, research papers, and tutorials on compilers for tensor computation and deep learning. It is aimed at researchers, engineers, and practitioners who want to understand and apply advanced compilation techniques for AI hardware acceleration and performance optimization. The list aims to provide a structured overview of the rapidly evolving field of deep learning compilers.

How It Works

As a curated list, this repository doesn't have a single operational mechanism. Instead, it categorizes and links to various projects and papers that address tensor compilation. These projects typically involve intermediate representations (IRs), auto-tuning, code generation, and optimization strategies tailored for deep learning workloads across diverse hardware architectures (CPUs, GPUs, NPUs). The underlying goal is to bridge the gap between high-level deep learning models and efficient low-level hardware execution.
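
As a concrete illustration of this pattern, the sketch below uses jax.jit, which hands a high-level tensor computation to XLA (one of the compilers covered in the list) for device-specific compilation. This is a minimal sketch of the general idea, not a summary of any one listed project; it assumes only JAX's public jax.jit API.

    import jax
    import jax.numpy as jnp

    @jax.jit  # trace once; XLA then compiles the graph for the target device
    def dense_relu(x, w, b):
        # High-level ops in, fused low-level kernels out: XLA can combine
        # the matmul, bias add, and ReLU into fewer device kernels than
        # eager execution would launch.
        return jnp.maximum(x @ w + b, 0.0)

    x = jnp.ones((128, 256))
    w = jnp.ones((256, 64))
    b = jnp.zeros((64,))
    y = dense_relu(x, w, b)  # first call compiles; later calls reuse the binary
    print(y.shape)           # (128, 64)

The listed compilers differ in where they sit in this pipeline (graph-level IRs, kernel generators, auto-tuners), but each automates some part of this translation from model to machine code.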

Quick Start & Requirements

This repository is a curated list and does not have a direct installation or execution process. Users are directed to individual projects within the list for their specific setup instructions, requirements (e.g., Python versions, specific hardware like GPUs/CUDA, dependencies), and quick-start guides. Links to official documentation, demos, and tutorials for many listed projects are provided.

Highlighted Details

  • Extensive coverage of foundational and state-of-the-art tensor compilers including TVM, MLIR, XLA, Halide, Glow, NNFusion, and Triton.
  • Categorized lists of research papers covering compiler design, optimization techniques (auto-tuning, cost modeling, CPU/GPU/NPU optimization), and emerging areas like sparse computation and dynamic models.
  • Sections dedicated to specific optimization challenges such as quantization (illustrated in the sketch after this list), sparse tensor algebra, graph-level optimizations, and distributed computing for large-scale AI.
  • Includes resources on verification, testing, and tutorials for machine learning compilation.
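
To make one of those topics concrete, the sketch below shows the basic arithmetic behind symmetric per-tensor int8 quantization. It is a generic textbook illustration rather than the scheme of any particular listed project, and the function names are hypothetical.

    import jax.numpy as jnp

    def quantize_int8(x):
        # Symmetric per-tensor quantization: map the largest magnitude
        # in x onto the signed int8 range [-127, 127].
        scale = jnp.maximum(jnp.max(jnp.abs(x)) / 127.0, 1e-8)  # guard all-zero input
        q = jnp.clip(jnp.round(x / scale), -127, 127).astype(jnp.int8)
        return q, scale

    def dequantize_int8(q, scale):
        # Recover an approximation of the original float values.
        return q.astype(jnp.float32) * scale

    x = jnp.array([0.1, -2.5, 3.7, 0.0])
    q, scale = quantize_int8(x)
    print(dequantize_int8(q, scale))  # ~[0.087, -2.506, 3.7, 0.0]

Real compiler passes layer calibration, per-channel scales, and integer-only fused kernels on top of this core idea.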

Maintenance & Community

The repository actively encourages community contributions through GitHub issues and pull requests, indicating a collaborative development model. Specific details on maintainers, sponsors, or community channels (like Discord/Slack) are not explicitly provided in the README excerpt.

Licensing & Compatibility

The repository itself, being a list, does not impose a license. However, the individual open-source projects and papers referenced within the list will have their own respective licenses, which users must consult for compatibility, especially for commercial use or integration into closed-source systems.

Limitations & Caveats

As an "awesome list," its primary limitation is that it is a pointer to resources rather than a unified tool. The rapid pace of development in the field means the list may require frequent updates to remain fully current. Users must independently evaluate the maturity, stability, and specific requirements of each project linked.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 19 stars in the last 30 days

Explore Similar Projects

Starred by Shengjia Zhao (Chief Scientist at Meta Superintelligence Lab), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 14 more.

BIG-bench by google
Collaborative benchmark for probing and extrapolating LLM capabilities
Created 4 years ago · Updated 1 year ago · 3k stars · Top 0.1% on SourcePulse

Starred by Aravind Srinivas (Cofounder of Perplexity), Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), and 16 more.

text-to-text-transfer-transformer by google-research
Unified text-to-text transformer for NLP research
Created 6 years ago · Updated 5 months ago · 6k stars · Top 0.1% on SourcePulse