shumai by facebookresearch

Differentiable tensor library for TypeScript and JavaScript

Created 3 years ago
1,160 stars

Top 33.3% on SourcePulse

Project Summary

Shumai is a differentiable tensor library for JavaScript and TypeScript, leveraging Bun and the Flashlight C++ backend. It aims to provide high-performance tensor operations and automatic differentiation for developers and researchers working with machine learning models in a JavaScript environment.

How It Works

Shumai utilizes Bun's fast Foreign Function Interface (FFI) bindings to connect to the Flashlight C++ library, which is built on top of ArrayFire. This architecture allows it to execute computationally intensive tensor operations efficiently, either on the CPU or GPU (via ArrayFire's backends). The library supports automatic differentiation, enabling gradient computation for training neural networks.
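Shumai's actual gradient machinery lives in the Flashlight C++ backend, but the core idea of reverse-mode automatic differentiation can be sketched in plain TypeScript. The following is a toy scalar implementation for illustration only; it is not shumai's API:

```typescript
// Toy reverse-mode autodiff over scalars: each Value records how to
// propagate gradients back to its inputs via the chain rule. Shumai
// applies the same idea to whole tensors; this is just the concept.
class Value {
  grad = 0;
  constructor(
    public data: number,
    private back: () => void = () => {},
    private parents: Value[] = [],
  ) {}

  add(other: Value): Value {
    const out = new Value(this.data + other.data, () => {
      this.grad += out.grad;  // d(a+b)/da = 1
      other.grad += out.grad; // d(a+b)/db = 1
    }, [this, other]);
    return out;
  }

  mul(other: Value): Value {
    const out = new Value(this.data * other.data, () => {
      this.grad += other.data * out.grad; // d(a*b)/da = b
      other.grad += this.data * out.grad; // d(a*b)/db = a
    }, [this, other]);
    return out;
  }

  // Topologically sort the graph, then apply the chain rule in reverse.
  backward(): void {
    const order: Value[] = [];
    const seen = new Set<Value>();
    const visit = (v: Value) => {
      if (seen.has(v)) return;
      seen.add(v);
      for (const p of v.parents) visit(p);
      order.push(v);
    };
    visit(this);
    this.grad = 1;
    for (const v of order.reverse()) v.back();
  }
}

// y = x * x + x  =>  dy/dx = 2x + 1 = 7 at x = 3
const x = new Value(3);
const y = x.mul(x).add(x);
y.backward();
console.log(y.data, x.grad); // 12 7
```

A real tensor library does the same bookkeeping per operation, but each `back` closure computes gradients for multidimensional arrays on the CPU or GPU.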

Quick Start & Requirements

  • Install: bun install @shumai/shumai
  • Prerequisites: Bun, ArrayFire (CPU backend for macOS, CUDA backend for Linux recommended).
  • OS Support: macOS and Linux. Windows users can use Docker with WSL2.
  • Setup: Requires installing Bun and ArrayFire, which can involve compiling ArrayFire from source on some Linux configurations. See the official ArrayFire installation guide for details.
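On macOS, the setup steps above might look like the following. This is a sketch, not an official install script; the Bun install URL and the Homebrew formula name are assumptions based on those projects' own documentation:

```shell
# Install Bun (official install script from bun.sh)
curl -fsSL https://bun.sh/install | bash

# Install ArrayFire's CPU backend via Homebrew (on Linux, prefer the
# CUDA backend; see the ArrayFire installation guide)
brew install arrayfire

# Add shumai to the project
bun install @shumai/shumai
```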

Highlighted Details

  • Achieves significant performance gains over TF.js on both Apple M1 Pro and Nvidia GP100 hardware for various tensor operations.
  • Offers flexible statistics collection, including console logging, HTTP endpoints for distributed tracing, and custom logger implementations.
  • Supports memory management tuning to optimize performance by controlling garbage collection behavior.
  • Allows for scoped statistics collection to profile specific code segments.

Maintenance & Community

The project is maintained by Facebook Research. Further community engagement details are not explicitly provided in the README.

Licensing & Compatibility

  • License: MIT
  • Compatibility: Permissive MIT license allows for commercial use and integration into closed-source projects.

Limitations & Caveats

This is experimental software. Installation can be complex, especially on Linux, where it may require building ArrayFire from source. CPU computation relies on ArrayFire's CPU backend, which the project notes is not well optimized; OpenCL support is planned. Enabling stack tracing for statistics collection incurs significant performance overhead.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 1 star in the last 30 days
