ML implementations from scratch, using NumPy
This repository provides foundational implementations of core machine learning concepts, specifically neural networks and the Transformer architecture, using only NumPy. It's designed for educational purposes, enabling users to understand the underlying mechanics of these powerful models by building them from scratch.
How It Works
The neural network implementation walks through forward propagation, backpropagation, and gradient descent on a simple feed-forward multi-layer perceptron with ReLU activations and Mean Squared Error loss. The Transformer section breaks down embeddings, positional encoding, self-attention, multi-head attention, and softmax, explaining how each contributes to contextualizing token representations. Minimal sketches of both ideas follow.
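A minimal sketch of the feed-forward training loop described above. The layer sizes, learning rate, and random data are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer MLP: input -> hidden (ReLU) -> output, trained with MSE loss.
# Shapes and hyperparameters here are illustrative, not the repository's.
X = rng.normal(size=(64, 3))                  # 64 samples, 3 features
y = rng.normal(size=(64, 1))                  # regression targets
W1 = rng.normal(size=(3, 16)) * 0.1; b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)) * 0.1; b2 = np.zeros(1)
lr = 0.01

for step in range(500):
    # Forward propagation
    z1 = X @ W1 + b1
    a1 = np.maximum(z1, 0.0)                  # ReLU
    y_hat = a1 @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)          # Mean Squared Error

    # Backpropagation: apply the chain rule layer by layer
    d_yhat = 2.0 * (y_hat - y) / len(y)       # dLoss/dy_hat
    dW2 = a1.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_a1 = d_yhat @ W2.T
    d_z1 = d_a1 * (z1 > 0)                    # ReLU gradient mask
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

And a corresponding sketch of scaled dot-product self-attention with sinusoidal positional encoding. The function names, dimensions, and single attention head are assumptions for illustration; the repository also covers multi-head attention, which runs several such heads in parallel and concatenates their outputs:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: shift by the max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def positional_encoding(seq_len, d_model):
    # Sinusoidal encodings: each position gets a unique pattern of waves.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a sequence of token vectors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token similarities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # context-weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) * 0.1 for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)           # (5, 8) contextualized tokens
```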
Quick Start & Requirements
Limitations & Caveats
The project is purely educational and lacks optimizations, advanced layer types, or robust error handling typically found in production ML libraries. It is not intended for practical deployment or large-scale training.