Arraymancer by mratsim

Nim tensor library for scientific computing and deep learning

created 8 years ago
1,376 stars

Top 29.9% on sourcepulse

View on GitHub
Project Summary

Arraymancer is a Nim-based N-dimensional array (tensor) library designed for high-performance numerical computing and deep learning. It offers an ergonomic syntax inspired by NumPy and PyTorch, targeting researchers and developers who need a fast, portable solution for CPU, CUDA, and OpenCL backends.

How It Works

Arraymancer leverages Nim's fast compilation and metaprogramming capabilities to provide a high-level, Python-like API with C-level performance. It supports multiple backends (CPU with OpenMP, CUDA, OpenCL) and allows for custom BLAS/LAPACK library integration. The library includes automatic differentiation for deep learning tasks, enabling the definition and training of neural networks with a concise syntax.
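The ergonomic, NumPy-like API described above can be sketched as follows. This is a minimal, hedged example based on the project README's documented syntax (toTensor, reshape, and `*` as matrix multiplication); exact operator behavior may vary between releases.

```nim
# Minimal sketch of Arraymancer's tensor API, following README examples.
import arraymancer

let a = [[1.0, 2.0],
         [3.0, 4.0]].toTensor()

echo a + a              # elementwise addition
echo a * a              # `*` is matrix multiplication for 2-D tensors
echo a.reshape(4, 1)    # reshape to a column vector
```

Because Nim compiles ahead of time, this snippet builds into a self-contained native binary, which is the portability property the summary highlights.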

Quick Start & Requirements

  • Install: nimble install arraymancer or nimble install arraymancer@#head for the development version.
  • Prerequisites: A BLAS and LAPACK library (e.g., OpenBLAS, MKL, Apple Accelerate). CUDA and CuDNN are required for GPU acceleration.
  • Documentation: Arraymancer official documentation (Note: Documentation may only be updated for release versions; check the examples folder for the latest features).

Highlighted Details

  • Supports tensors up to 6 dimensions.
  • Includes deep learning primitives like Conv2D, MaxPool2D, Linear, and GRULayer.
  • Offers features like broadcasting, explicit slicing, reshaping, and file I/O (.csv, .npy, HDF5).
  • Can compile to a self-contained binary with minimal dependencies.
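The explicit-slicing feature listed above can be illustrated with a short sketch, assuming the README's slicing syntax (where `_` spans a whole axis); names like arange follow the NumPy-inspired API but signatures may differ between versions.

```nim
import arraymancer

# Build a 3x4 integer tensor with values 0..11.
let t = arange(0, 12).reshape(3, 4)

echo t[1, _]        # second row
echo t[_, 2]        # third column
echo t[0..1, 1..2]  # 2x2 sub-block
```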

Maintenance & Community

The most recent release mentioned in the README is v0.5.1 (July 2019). No community engagement channels (chat, forum, or mailing list) are explicitly listed.

Licensing & Compatibility

The README does not explicitly state a license. Given the lack of a LICENSE file or explicit mention, users should exercise caution regarding commercial use and closed-source linking.

Limitations & Caveats

The deep learning features are noted as unstable and subject to interface changes. CUDA and OpenCL tensor implementations are less feature-complete than CPU tensors, with limitations on data types and operations like iteration or slicing mutations. The project's development activity appears to have slowed significantly since 2019.

Health Check

  • Last commit: 5 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 14 stars in the last 90 days

Explore Similar Projects

Starred by Nat Friedman (Former CEO of GitHub), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 6 more.

FasterTransformer by NVIDIA

Optimized transformer library for inference
Top 0.2% · 6k stars · created 4 years ago · updated 1 year ago
Starred by Bojan Tunguz (AI Scientist; formerly at NVIDIA), Mckay Wrigley (Founder of Takeoff AI), and 8 more.

ggml by ggml-org

Tensor library for machine learning
Top 0.3% · 13k stars · created 2 years ago · updated 3 days ago