mindspore by mindspore-ai

Deep learning framework for mobile, edge, and cloud training/inference

Created 5 years ago
4,629 stars

Top 10.7% on SourcePulse

Project Summary

MindSpore is an open-source deep learning framework designed for mobile, edge, and cloud scenarios, targeting data scientists and algorithmic engineers. It offers native support for Ascend AI processors and aims for software-hardware co-optimization, providing a friendly development experience and efficient execution.

How It Works

MindSpore implements automatic differentiation via Source Transformation (ST), in contrast to the Operator Overloading (OO) approach used by frameworks such as PyTorch. Because ST differentiates the program ahead of execution, it enables static compilation optimizations and handles complex control flow natively, which can yield performance gains. Its automatic parallelization supports data, model, and hybrid parallelism through fine-grained operator splitting, abstracting the distribution complexity away from the user.
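To make the ST/OO contrast concrete, here is a minimal, hypothetical sketch in plain Python (not MindSpore code) of the operator-overloading style, using forward-mode dual numbers: derivatives are computed at runtime as each operator executes. ST, by contrast, rewrites the source of `f` into a derivative program ahead of time, so the whole derivative function can be statically compiled and optimized.

```python
# Hypothetical sketch of operator-overloading (OO) autodiff, not MindSpore code.
class Dual:
    """Forward-mode dual number: carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__


def oo_grad(f, x):
    # The derivative falls out of running f on a Dual seeded with dot = 1.0:
    # every operator overload propagates the derivative at runtime.
    return f(Dual(x, 1.0)).dot


def f(x):
    return x * x + 3 * x


print(oo_grad(f, 2.0))  # d/dx (x^2 + 3x) = 2x + 3, so 7.0 at x = 2
```

Note that `oo_grad` must re-trace `f` on every call and only sees the operations actually executed, which is why data-dependent control flow is awkward for OO; an ST system differentiates the program text itself, including its branches and loops.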

Quick Start & Requirements

  • Installation: Primarily via pip with pre-built wheels or by compiling from source. Docker images are also available.
  • Dependencies: Supports Ascend, NVIDIA GPU (CUDA 10.1), and CPU. Specific CUDA versions may be required for GPU builds.
  • Resources: Installation details and build options for various platforms are available in the installation guide.
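As a sketch, a typical CPU-only install looks like the following; the exact wheel and flags for Ascend or GPU builds depend on platform and CUDA version, so consult the installation guide for those. `mindspore.run_check()` is the framework's documented post-install self-check.

```shell
# Install the CPU wheel from PyPI (Ascend/GPU builds need platform-specific
# wheels; see the installation guide for the right package and CUDA pairing).
pip install mindspore

# Verify the installation and report backend status.
python -c "import mindspore; mindspore.run_check()"
```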

Highlighted Details

  • Native support for Ascend AI processors.
  • Automatic differentiation via Source Transformation (ST) for performance and control flow handling.
  • Automatic parallelization for distributed training.
  • Available via pip, source compilation, and Docker images.

Maintenance & Community

  • Active maintenance with multiple branches (e.g., r2.2, r2.1, r2.0 are "Maintained"). Older branches are marked "End Of Life".
  • Community communication via Slack. Contribution guidelines are available.

Licensing & Compatibility

  • Licensed under the Apache License 2.0.
  • Permissive license suitable for commercial use and integration with closed-source projects.

Limitations & Caveats

  • GPU support is specified for CUDA 10.1, which may be outdated. Newer CUDA versions might require source compilation.
  • The maintenance status indicates a clear lifecycle for branches, with older versions reaching End Of Life.
Health Check

  • Last Commit: 1 year ago
  • Responsiveness: 1 week
  • Pull Requests (30d): 0
  • Issues (30d): 1
  • Star History: 16 stars in the last 30 days

Explore Similar Projects

Starred by Tri Dao (Chief Scientist at Together AI), Stas Bekman (Author of "Machine Learning Engineering Open Book"; Research Engineer at Snowflake), and 1 more.

oslo by tunib-ai

  • 0% · 309 stars
  • Framework for large-scale transformer optimization
  • Created 3 years ago · Updated 3 years ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Luis Capelo (Cofounder of Lightning AI), and 3 more.

LitServe by Lightning-AI

  • 1.2% · 4k stars
  • AI inference pipeline framework
  • Created 1 year ago · Updated 8 hours ago
Starred by Chip Huyen (Author of "AI Engineering", "Designing Machine Learning Systems"), Jiayi Pan (Author of SWE-Gym; MTS at xAI), and 20 more.

alpa by alpa-projects

  • 0.0% · 3k stars
  • Auto-parallelization framework for large-scale neural network training and serving
  • Created 4 years ago · Updated 1 year ago
Starred by Tobi Lutke (Cofounder of Shopify), Li Jiang (Coauthor of AutoGen; Engineer at Microsoft), and 27 more.

ColossalAI by hpcaitech

  • 0.0% · 41k stars
  • AI system for large-scale parallel training
  • Created 4 years ago · Updated 3 weeks ago