PyTorch nn module subset, implemented in Python using Triton
attorch provides a curated subset of PyTorch's neural network modules implemented in Python using OpenAI's Triton. It targets developers seeking a more hackable, efficient, and readable alternative to pure PyTorch for custom deep learning operations, particularly those who find writing raw CUDA kernels challenging. The library supports both forward and backward passes, enabling its use during model training.
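As a rough illustration of the intended workflow, the sketch below treats attorch layers as drop-in replacements for their torch.nn counterparts; the specific module names (attorch.Linear, attorch.ReLU) are assumptions here, so check the repository for the exact modules provided.

```python
# Hypothetical usage sketch: attorch modules standing in for torch.nn ones.
# The module names below are assumptions, not a confirmed list.
import torch
import attorch

model = torch.nn.Sequential(
    attorch.Linear(64, 128),  # assumed drop-in for torch.nn.Linear
    attorch.ReLU(),           # assumed drop-in for torch.nn.ReLU
    attorch.Linear(128, 10),
).cuda()  # Triton kernels run on the GPU

x = torch.randn(32, 64, device="cuda")
loss = model(x).sum()
loss.backward()  # backward passes are implemented, so training works end to end
```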
How It Works
attorch leverages OpenAI's Triton, a language and compiler for writing high-performance GPU kernels that bridges the gap between Python and CUDA. By implementing core neural network operations (such as convolutions, attention, and various activation functions) in Triton, attorch aims to deliver performance improvements over standard PyTorch while retaining a Pythonic interface. This approach also makes customization easier and opens the door to operation fusion, which can further boost efficiency.
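To make the idea concrete, here is a minimal, generic Triton kernel (not taken from attorch) that fuses an element-wise add and a ReLU into a single memory pass, the kind of fusion referred to above.

```python
# Generic Triton example: fuse x + y and ReLU so no intermediate tensor is written.
import torch
import triton
import triton.language as tl


@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    # Addition and ReLU are fused in registers before a single store.
    tl.store(out_ptr + offsets, tl.maximum(x + y, 0.0), mask=mask)


def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    grid = (triton.cdiv(n_elements, 1024),)
    fused_add_relu_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out


x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
assert torch.allclose(fused_add_relu(x, y), torch.relu(x + y))
```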
Quick Start & Requirements
Install the dependencies with pip install torch==2.4.0 triton==3.0.0 and clone the repository. Requirements: torch==2.4.0, triton==3.0.0.
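A quick way to confirm the environment matches the pinned versions (Triton kernels need a CUDA-capable GPU):

```python
# Environment sanity check; no attorch-specific APIs involved.
import torch
import triton

print(torch.__version__)          # expected: 2.4.0
print(triton.__version__)         # expected: 3.0.0
print(torch.cuda.is_available())  # Triton kernels require a CUDA device
```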
Highlighted Details
The attorch.math module supports custom kernel development and operation fusion.
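The snippet below sketches the general pattern such a module enables, wrapping a fused forward/backward pair in torch.autograd.Function; the FusedBiasReLU class and its plain-PyTorch body are illustrative placeholders, not attorch.math's actual API (an attorch-style implementation would launch Triton kernels instead).

```python
# Illustrative only: shows the custom-op / fusion pattern, not attorch.math itself.
import torch


class FusedBiasReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, bias):
        # An attorch-style implementation would launch one fused Triton kernel here.
        out = torch.relu(x + bias)
        ctx.save_for_backward(out)
        return out

    @staticmethod
    def backward(ctx, grad_out):
        (out,) = ctx.saved_tensors
        grad_x = grad_out * (out > 0)  # ReLU gradient
        grad_bias = grad_x.sum(dim=0)  # reduce over the batch dimension
        return grad_x, grad_bias


x = torch.randn(8, 16, requires_grad=True)
bias = torch.zeros(16, requires_grad=True)
FusedBiasReLU.apply(x, bias).sum().backward()
```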
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats