HappyTorch by Rivflyyy

PyTorch coding practice platform for deep learning

Created 1 month ago
403 stars

Top 71.9% on SourcePulse

Project Summary

HappyTorch is a self-hosted PyTorch coding practice platform that builds deep understanding of deep learning components through hands-on implementation. It targets deep learning learners and engineers preparing for ML interviews, offering instant auto-grading and feedback. Users practice implementing core components, from LLMs to diffusion models, without requiring specialized hardware.

How It Works

HappyTorch functions as a "LeetCode for tensors," offering 36 curated coding problems that require manual implementation of PyTorch components. It provides two primary interfaces: a LeetCode-style Web UI featuring the Monaco Editor and traditional Jupyter notebooks. Users implement solutions using basic PyTorch operations, which are then automatically judged via an in-notebook API (check, hint, status). This approach emphasizes correctness and understanding of algorithms and numerical stability over raw performance, with a key advantage being that no GPU is required for any of the exercises.
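To give a flavor of the exercises: Softmax is one of the fundamentals problems, and the numerically stable formulation is exactly what stability-aware grading rewards. A minimal sketch in plain Python (the platform itself expects basic PyTorch ops; the function name and signature here are illustrative, not the platform's template):

```python
import math

def softmax(xs):
    """Numerically stable softmax: subtract the max before exponentiating
    so large inputs don't overflow exp(). Shifting by a constant does not
    change the result, since the factor exp(-m) cancels in the ratio."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

The same max-subtraction trick carries over directly to the torch version; without it, `softmax([1000.0, 1000.0])` would overflow to `inf / inf` instead of returning `[0.5, 0.5]`.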

Quick Start & Requirements

Setup involves creating a Conda environment with Python 3.11, installing dependencies (torch CPU version, jupyterlab, numpy, fastapi, uvicorn, python-multipart), and then installing HappyTorch in editable mode (pip install -e .). Running python prepare_notebooks.py is necessary before launching. The Web UI is started with python start_web.py (accessible at http://localhost:8000), and Jupyter mode with python start_jupyter.py (accessible at http://localhost:8888). Docker support is also available via make run or docker compose up -d. No GPU is required.
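The steps above can be collected into a single setup script. The environment name is arbitrary, and the CPU-only torch index URL is the standard PyTorch convention rather than something quoted from the README:

```shell
# Create and activate a Conda environment (Python 3.11 per the README)
conda create -n happytorch python=3.11 -y
conda activate happytorch

# Install dependencies (CPU-only torch wheel index is an assumption)
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install jupyterlab numpy fastapi uvicorn python-multipart

# Install HappyTorch in editable mode, then prepare the notebooks
pip install -e .
python prepare_notebooks.py

# Launch one of the two interfaces
python start_web.py        # Web UI at http://localhost:8000
# python start_jupyter.py  # or Jupyter at http://localhost:8888
```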

Highlighted Details

  • Features 36 curated problems spanning fundamentals (ReLU, Softmax), attention mechanisms (MHA, GQA), modern activations (GELU, SwiGLU), parameter-efficient fine-tuning (LoRA, DoRA), diffusion components (AdaLN), LLM inference (RoPE, KV Cache), and RLHF algorithms.
  • Provides instant, detailed auto-grading feedback on correctness, numerical stability, and gradient flow for each test case.
  • Offers dual interfaces: a feature-rich Web UI with a Monaco editor and standard Jupyter notebooks.
  • Includes helpful hints and access to reference solutions for learning.
  • Tracks user progress locally in data/progress.json.
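Because progress lives in a plain local JSON file, it is easy to inspect or back up by hand. A minimal sketch, assuming a hypothetical layout in which data/progress.json maps problem names to a solved flag (the real schema is not documented here):

```python
import json
from pathlib import Path

def summarize_progress(progress: dict) -> str:
    """Count solved problems, assuming a {name: {"solved": bool}} layout
    (hypothetical; adjust to the actual schema of data/progress.json)."""
    solved = sum(1 for entry in progress.values() if entry.get("solved"))
    return f"{solved}/{len(progress)} problems solved"

path = Path("data/progress.json")  # local progress file per the README
if path.exists():
    print(summarize_progress(json.loads(path.read_text())))
```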

Maintenance & Community

Recent updates (March 2026) include bug fixes for notebook matching and class-based tasks, enhanced Docker image support, improved Web UI organization, and the addition of new community-contributed problems like MLP XOR training and ML/RLHF exercises. The project actively acknowledges community contributions.

Licensing & Compatibility

HappyTorch is released under the MIT License, which is permissive and generally suitable for commercial use and integration into closed-source projects.

Limitations & Caveats

The platform focuses on correctness and understanding of individual deep learning components rather than performance benchmarking or throughput optimization. Progress data is stored locally as a JSON file.

Health Check

  • Last Commit: 1 week ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 4
  • Issues (30d): 1
  • Star History: 191 stars in the last 30 days
