MegEngine by MegEngine

Deep learning framework for training and inference

created 5 years ago
4,805 stars

Top 10.6% on sourcepulse

Project Summary

MegEngine is a deep learning framework designed for both training and inference, targeting developers and researchers who need a fast, scalable, and user-friendly solution. It offers a unified model for both stages, simplifying deployment and enabling features like quantization and dynamic shape processing with a single model.
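The quantization capability mentioned above can be illustrated with a toy affine-quantization sketch. This is pure Python with hypothetical helper names, not MegEngine's quantization API (which operates on modules and tensors):

```python
# Toy post-training affine quantization of floats to signed 8-bit integers,
# illustrating the float -> int8 mapping that quantized inference relies on.
# Illustrative sketch only; not MegEngine's actual API.

def quantize(values, num_bits=8):
    """Map floats to signed ints; return (ints, scale, zero_point)."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # avoid zero scale for constants
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from quantized ints."""
    return [(qi - zero_point) * scale for qi in q]

vals = [-1.0, 0.0, 0.5, 1.0]
q, s, zp = quantize(vals)
approx = dequantize(q, s, zp)   # each value recovered to within one scale step
```

Storing int8 weights plus a per-tensor scale and zero point is what lets a single trained model run in a much smaller inference footprint.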

How It Works

MegEngine uses a Pushdown memory planner and the DTR (Dynamic Tensor Rematerialization) algorithm to cut GPU memory usage during training, potentially to one third of the original footprint. The same framework then runs inference efficiently across diverse hardware platforms, including x86, Arm, CUDA, and ROCm, and across operating systems such as Linux, Windows, iOS, and Android.
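The core idea behind DTR — evict intermediate results under a memory budget and recompute them from their recipes when needed again — can be sketched in plain Python. All names here are illustrative, not MegEngine's API:

```python
# Toy sketch of dynamic tensor rematerialization: keep at most `capacity`
# intermediates cached, evict the oldest, and recompute an evicted value
# on demand from its stored recipe. Illustrative only; not MegEngine's API.

class Rematerializer:
    def __init__(self, capacity):
        self.capacity = capacity   # max cached intermediates
        self.cache = {}            # name -> materialized value
        self.recipes = {}          # name -> zero-arg function to recompute it
        self.order = []            # eviction order (oldest first)

    def compute(self, name, fn):
        """Register a recipe for `name` and return its value."""
        self.recipes[name] = fn
        return self.get(name)

    def get(self, name):
        if name not in self.cache:
            self.cache[name] = self.recipes[name]()  # rematerialize
            self.order.append(name)
            if len(self.cache) > self.capacity:
                victim = self.order.pop(0)           # evict oldest entry
                del self.cache[victim]
        return self.cache[name]

rt = Rematerializer(capacity=2)
a = rt.compute("a", lambda: [i * 2 for i in range(4)])          # [0, 2, 4, 6]
b = rt.compute("b", lambda: [x + 1 for x in rt.get("a")])       # [1, 3, 5, 7]
c = rt.compute("c", lambda: [x * x for x in rt.get("b")])       # evicts "a"
recomputed = rt.get("a")   # "a" is rebuilt from its recipe on demand
```

Trading recomputation time for memory in this way is what lets DTR-style training fit larger models into a fixed GPU budget.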

Quick Start & Requirements

  • Installation: python3 -m pip install megengine -f https://megengine.org.cn/whl/mge.html
  • Prerequisites: Python 3.6-3.9. Supports Linux-64bit, Windows-64bit, and macOS 10.14+ (CPU-only). On Windows 10, install either the native Windows package or the Linux package via WSL.
  • Resources: Pre-built binaries via pip. Building from source requires CMake.
  • Documentation: MegEngine Documentation (Chinese)

Highlighted Details

  • Unified framework for training and inference.
  • Low memory usage via DTR algorithm and Pushdown memory planner.
  • Efficient inference across x86, Arm, CUDA, and ROCm, on Linux, Windows, iOS, and Android.
  • Supports quantization, dynamic shapes, and automatic differentiation.
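Automatic differentiation, the last feature listed above, rests on reverse-mode gradient propagation. A minimal scalar sketch of the mechanism (pure Python; MegEngine's real autodiff is tensor-based):

```python
# Minimal reverse-mode automatic differentiation on scalars. Each Var
# records its parents and the local gradient of the op that produced it;
# backward() propagates gradients back through that recorded graph.
# Illustrative sketch only; not MegEngine's autodiff API.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent_var, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        """Accumulate gradients by walking the recorded graph backwards."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x      # z = x*y + x, so dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
```

Note that `x` feeds into `z` along two paths (through the product and through the addition), and the accumulation in `backward` sums both contributions, which is exactly the behavior a framework-level autodiff engine must provide.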

Maintenance & Community

  • Community guidelines follow the Contributor Covenant.
  • Requires signing a Contributor License Agreement (CLA) for contributions.
  • Contact: GitHub Issues, Email: megengine-support@megvii.com, Forum: discuss.megengine.org.cn, QQ Group: 1029741705.

Licensing & Compatibility

  • Licensed under the Apache License, Version 2.0.
  • Permissive license suitable for commercial use and closed-source linking.

Limitations & Caveats

The README specifies Python 3.6-3.9 support, which may be outdated. macOS support is CPU-only, and while inference is supported on many platforms, training support is narrower.

Health Check

  • Last commit: 9 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 1
  • Star History: 22 stars in the last 90 days

Explore Similar Projects

Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Tim J. Baek (Founder of Open WebUI), and 5 more.

gemma.cpp by google
C++ inference engine for Google's Gemma models
7k stars · top 0.1% · created 1 year ago · updated 1 day ago
Starred by Bojan Tunguz (AI Scientist; formerly at NVIDIA), Mckay Wrigley (Founder of Takeoff AI), and 8 more.

ggml by ggml-org
Tensor library for machine learning
13k stars · top 0.3% · created 2 years ago · updated 3 days ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Nat Friedman (former CEO of GitHub), and 32 more.

llama.cpp by ggml-org
C/C++ library for local LLM inference
84k stars · top 0.4% · created 2 years ago · updated 16 hours ago