LLM SDK for pretraining, finetuning, and deploying 20+ high-performance LLMs
LitGPT provides over 20 high-performance Large Language Models (LLMs) with ready-to-use recipes for pretraining, finetuning, and deployment. It targets developers and researchers who need efficient, scalable, and customizable LLM workflows, combining a low-abstraction, beginner-friendly codebase with enterprise-grade capabilities.
How It Works
LitGPT implements LLMs from scratch with minimal abstractions and a focus on performance. It uses PyTorch Lightning Fabric for distributed training across GPUs and TPUs and supports techniques such as Flash Attention v2, Fully Sharded Data Parallelism (FSDP), and parameter-efficient finetuning methods (LoRA, QLoRA, Adapters). Quantization (4-bit, 8-bit) and mixed-precision training (FP16, BF16) reduce memory usage, so models can be trained and served on lower-memory GPUs as well as at scale.
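As a concrete illustration of the memory-reduction options, the sketch below loads a model with 4-bit (NF4) quantization and bfloat16 precision through the Python API. This is a minimal sketch, assuming the LLM.load(..., distribute=None) / llm.distribute(..., quantize=..., precision=...) interface described in the LitGPT Python API docs; exact argument names may differ between versions.

from litgpt import LLM

# Assumption: distribute=None defers device placement so quantization and
# precision can be chosen explicitly in a separate distribute() call.
llm = LLM.load("microsoft/phi-2", distribute=None)

# Assumption: "bnb.nf4" selects 4-bit NF4 quantization via bitsandbytes and
# "bf16-true" selects pure bfloat16 precision, as described in the docs.
llm.distribute(devices=1, quantize="bnb.nf4", precision="bf16-true")

print(llm.generate("Summarize quantization in one sentence."))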
Quick Start & Requirements
pip install 'litgpt[all]'
from litgpt import LLM; llm = LLM.load("microsoft/phi-2"); llm.generate(...)
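Expanded into a minimal runnable script (the prompt string below is illustrative, not from the project docs):

from litgpt import LLM

# Downloads the checkpoint on first use, then loads it for inference.
llm = LLM.load("microsoft/phi-2")

# Generate a completion for a prompt; the prompt text here is illustrative.
text = llm.generate("What do Llamas eat?")
print(text)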
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
Some model downloads (for example, gated Hugging Face checkpoints) require an additional access token, as detailed in the documentation. The project builds on Lightning Fabric and extends nanoGPT and Lit-LLaMA.
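For gated checkpoints, the following is a hedged sketch of supplying the access token from Python, assuming LitGPT's downloader honors the standard Hugging Face HF_TOKEN environment variable (the model identifier below is illustrative):

import os
from litgpt import LLM

# Assumption: LitGPT downloads weights via huggingface_hub, which reads the
# access token from the HF_TOKEN environment variable; see the LitGPT docs
# for the exact flag or configuration option.
os.environ["HF_TOKEN"] = "hf_..."  # placeholder, not a real token

# Illustrative gated model identifier; access must be granted on Hugging Face.
llm = LLM.load("meta-llama/Llama-3.2-1B")
print(llm.generate("Hello!"))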