distributed-training-guide by LambdaLabsML

PyTorch guide for distributed training of large language models

created 1 year ago · 460 stars · Top 66.8% on sourcepulse

Project Summary

This repository provides a comprehensive guide to distributed PyTorch training, targeting ML engineers and researchers working with large neural networks and clusters. It offers best practices for scaling single-GPU training scripts to multi-GPU and multi-node setups, diagnosing common errors, and optimizing memory usage with techniques like FSDP and Tensor Parallelism.

How It Works

The guide progresses through sequential chapters, each building on the previous one. It starts with a basic single-GPU causal LLM training script and incrementally introduces distributed training concepts and their PyTorch implementations, including Distributed Data Parallel (DDP), Fully Sharded Data Parallel (FSDP), and Tensor Parallelism (TP). Throughout, the distributed logic is written in minimal, standard PyTorch, avoiding external libraries for core distributed operations.
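To make that starting point concrete, here is a minimal DDP training loop in the spirit of the guide's data-parallel chapter. It is not the repository's exact code: the toy model, synthetic dataset, and hyperparameters are placeholders.

    # Minimal DDP loop; launch with: torchrun --nproc_per_node <num_gpus> train_ddp.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    # torchrun sets MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE/LOCAL_RANK for us.
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and data; a real run would build the causal LLM here.
    model = DDP(torch.nn.Linear(128, 128).cuda(), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    dataset = TensorDataset(torch.randn(1024, 128), torch.randn(1024, 128))
    # DistributedSampler gives each rank a disjoint shard of the dataset.
    loader = DataLoader(dataset, batch_size=32, sampler=DistributedSampler(dataset))

    for x, y in loader:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x.cuda()), y.cuda())
        loss.backward()  # DDP all-reduces gradients across ranks here
        opt.step()

    dist.destroy_process_group()

The same script runs on one node or several; only the torchrun launch arguments change, which is the property the guide's chapter structure relies on.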

Quick Start & Requirements

  • Install: Clone the repository, create and activate a virtual environment, install dependencies with pip install -r requirements.txt, then pip install flash-attn --no-build-isolation (flash-attn must be built with pip's build isolation disabled), and log in with wandb login.
  • Prerequisites: Python 3.x, PyTorch, Transformers, Datasets, flash-attn, and wandb for experiment tracking; a wandb login is required. A quick environment sanity check is sketched after this list.
  • Setup Time: Minimal, primarily dependency installation.
  • Resources: Requires access to multi-GPU/multi-node clusters for full utilization.
  • Docs: NeurIPS 2024 presentation slides
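Before launching multi-GPU jobs, it can help to verify the environment. The snippet below is an illustrative check, not part of the repository:

    # Quick environment sanity check (illustrative; not from the repository).
    import torch

    assert torch.distributed.is_available(), "this PyTorch build lacks torch.distributed"
    print("CUDA available:", torch.cuda.is_available())
    print("GPUs visible:  ", torch.cuda.device_count())
    print("NCCL available:", torch.distributed.is_nccl_available())

    try:
        import flash_attn  # the guide's training scripts expect this
        print("flash-attn:", flash_attn.__version__)
    except ImportError:
        print("flash-attn missing; run: pip install flash-attn --no-build-isolation")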

Highlighted Details

  • Step-by-step progression from a single-GPU script to 2D parallelism (FSDP + TP); see the sketch after this list.
  • Focus on diagnosing common cluster training errors and best practices for logging.
  • Demonstrates training large models like Llama-405b.
  • Covers alternative PyTorch-based distributed frameworks.
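For a sense of what that 2D endpoint looks like, here is a hedged sketch of composing TP with FSDP over a PyTorch DeviceMesh. It assumes a recent PyTorch (2.2+) and 8 GPUs launched via torchrun; the tiny MLP and its parallelization plan are placeholders, not the guide's Llama training code.

    # 2D parallelism sketch: 2 data-parallel replicas x 4 tensor-parallel shards.
    # Launch with: torchrun --nproc_per_node 8 sketch_2d.py  (assumes 8 GPUs)
    import os
    import torch
    import torch.distributed as dist
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
    from torch.distributed.tensor.parallel import (
        ColwiseParallel, RowwiseParallel, parallelize_module,
    )

    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    mesh = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))

    # Placeholder MLP standing in for a transformer block.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
    ).cuda()

    # Shard the two Linear layers across the "tp" mesh dimension (Megatron-style)...
    model = parallelize_module(
        model, mesh["tp"], {"0": ColwiseParallel(), "2": RowwiseParallel()}
    )
    # ...then shard parameters across the "dp" dimension with FSDP.
    model = FSDP(model, device_mesh=mesh["dp"], use_orig_params=True)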

Maintenance & Community

The project is from Lambda Labs, and the README links to other Lambda ML projects: ML Times, Text2Video, and GPU Benchmark.

Licensing & Compatibility

The repository does not explicitly state a license in the provided README. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The guide focuses exclusively on PyTorch for distributed training and does not cover other frameworks like TensorFlow or JAX. While it aims for minimal dependencies, flash-attn is a significant external requirement for optimal performance. The guide's primary focus is on causal language models.

Health Check

  • Last commit: 5 months ago
  • Responsiveness: 1+ week
  • Pull Requests (30d): 0
  • Issues (30d): 1

Star History

57 stars in the last 90 days

Explore Similar Projects

fms-fsdp by foundation-model-stack
Efficiently train foundation models with PyTorch
0.4% · 258 stars · created 1 year ago · updated 1 week ago
Starred by Stas Bekman (Author of Machine Learning Engineering Open Book; Research Engineer at Snowflake).

veScale by volcengine
PyTorch-native framework for LLM training
0.1% · 839 stars · created 1 year ago · updated 3 weeks ago
Starred by Stas Bekman (Author of Machine Learning Engineering Open Book; Research Engineer at Snowflake) and Zhiqiang Xie (Author of SGLang).

InternEvo by InternLM
Lightweight training framework for model pre-training
1.0% · 402 stars · created 1 year ago · updated 1 week ago
Starred by Jeff Hammerbacher (Cofounder of Cloudera) and Stas Bekman (Author of Machine Learning Engineering Open Book; Research Engineer at Snowflake).

lingua by facebookresearch
LLM research codebase for training and inference
0.1% · 5k stars · created 9 months ago · updated 2 weeks ago
Starred by Stas Bekman (Author of Machine Learning Engineering Open Book; Research Engineer at Snowflake) and Travis Fischer (Founder of Agentic).

torchtitan by pytorch
PyTorch platform for generative AI model training research
0.9% · 4k stars · created 1 year ago · updated 22 hours ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Zhuohan Li (Author of vLLM), and 6 more.