allo by cornell-zhang

Composable hardware accelerator design

Created 2 years ago
278 stars

Top 93.4% on SourcePulse

View on GitHub
1 Expert Loves This Project
Project Summary

Allo is a Python-embedded Accelerator Design Language (ADL) and compiler for creating modular, high-performance hardware accelerators. It targets researchers and engineers building complex hardware systems, enabling progressive customization and reusable kernel templates for enhanced productivity and debuggability.

How It Works

Allo leverages MLIR for its intermediate representation, enabling progressive hardware customization by treating each transformation as a primitive that rewrites the program. This approach decouples loop transformations, memory layout, communication, and data types from the algorithm, so each concern can be tuned independently. It supports reusable, parameterized kernel templates with a concise grammar, simplifying the creation of hardware kernel libraries without complex metaprogramming. Composable schedules allow incremental kernel construction and validation through a .compose() primitive, facilitating bottom-up design.
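
As a rough illustration of this workflow, the sketch below defines a kernel in plain Python and then layers customizations onto it one primitive at a time. It is modeled on the examples in the Allo documentation; the exact primitive names and signatures (customize, reorder, pipeline, build) should be checked against the official docs.

```python
# Minimal Allo-style sketch: algorithm first, customizations applied afterwards.
import allo
from allo.ir.types import float32

def gemm(A: float32[32, 32], B: float32[32, 32]) -> float32[32, 32]:
    # Algorithm only: a plain matrix multiply written as ordinary Python.
    C: float32[32, 32] = 0.0
    for i, j, k in allo.grid(32, 32, 32):
        C[i, j] += A[i, k] * B[k, j]
    return C

# Each customization is a separate primitive that rewrites the program,
# so transformations can be applied and inspected one at a time.
s = allo.customize(gemm)
s.reorder("k", "j")   # loop transformation
s.pipeline("j")       # hardware directive, kept out of the algorithm body
mod = s.build()       # build for the default backend (CPU simulation)
```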

Quick Start & Requirements

Installation and tutorials are available in the official Allo documentation.

Highlighted Details

  • Built on MLIR for easier backend targeting.
  • Supports reusable, parameterized kernel templates.
  • Enables incremental, composable schedule construction (see the sketch after this list).
  • Described in a PLDI'24 paper.
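
The sketch below illustrates the bottom-up flow behind the .compose() primitive: schedule a leaf kernel in isolation, then merge that schedule into a larger design. The top/relu kernels and the FPGA target mentioned in the final comment are illustrative assumptions, not the project's canonical example.

```python
# Hedged sketch of composable scheduling with .compose().
import allo
from allo.ir.types import float32

def relu(X: float32[32, 32]) -> float32[32, 32]:
    Y: float32[32, 32] = 0.0
    for i, j in allo.grid(32, 32):
        if X[i, j] > 0.0:
            Y[i, j] = X[i, j]
    return Y

def top(X: float32[32, 32]) -> float32[32, 32]:
    # The top-level design calls the leaf kernel like a normal function.
    return relu(X)

# Schedule and validate the leaf kernel on its own first...
s_relu = allo.customize(relu)
s_relu.pipeline("j")

# ...then merge that schedule into the larger design instead of
# re-deriving the same customizations at the top level.
s_top = allo.customize(top)
s_top.compose(s_relu)
mod = s_top.build()   # e.g. s_top.build(target="vitis_hls") for an FPGA flow
```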

Maintenance & Community

Users can open GitHub issues to report problems. Related projects include Exo, Halide, TVM, Dahlia, HeteroCL, PyLog, Spatial HLS, Stream-HLS, ScaleHLS, and MLIR.

Licensing & Compatibility

The license is not explicitly stated in the provided README.

Limitations & Caveats

The README does not detail specific limitations, unsupported platforms, or known bugs. The project appears to be research-oriented, with a PLDI'24 publication.

Health Check

Last Commit: 18 hours ago
Responsiveness: Inactive
Pull Requests (30d): 24
Issues (30d): 7
Star History: 12 stars in the last 30 days

Explore Similar Projects

Starred by Yaowei Zheng (Author of LLaMA-Factory), Yineng Zhang (Inference Lead at SGLang; Research Scientist at Together AI), and 1 more.

VeOmni by ByteDance-Seed

3.4%
1k
Framework for scaling multimodal model training across accelerators
Created 5 months ago
Updated 3 weeks ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; Author of CS 231n), Stefan van der Walt (Core Contributor to scientific Python ecosystem), and 12 more.

litgpt by Lightning-AI

0.1%
13k
LLM SDK for pretraining, finetuning, and deploying 20+ high-performance LLMs
Created 2 years ago
Updated 5 days ago