CoLLiE by OpenMOSS

LLM training toolkit for efficient collaborative tuning

created 2 years ago
417 stars

Top 71.3% on sourcepulse

Project Summary

CoLLiE is a comprehensive toolkit for training large language models (LLMs) from scratch, designed for researchers and practitioners. It streamlines the entire LLM training pipeline, from data preprocessing and fine-tuning to model saving and metric monitoring, aiming to accelerate training, improve quality, and reduce costs.

How It Works

CoLLiE builds on DeepSpeed and PyTorch, integrating parallelization strategies (data, pipeline, and tensor parallelism, plus ZeRO) with memory-efficient fine-tuning methods such as LOMO and LoRA, and with Flash Attention for faster attention computation. The combination enables collaborative, efficient LLM tuning through a user-friendly interface that remains highly customizable for both beginners and experienced users.
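
As an illustration, the sketch below shows how the parallelism degrees and ZeRO stage might be set through CoLLiE's CollieConfig. The field names (dp_size, tp_size, pp_size, ds_config) follow the README's MOSS example and should be treated as assumptions that may differ between versions, not a verbatim API reference.

```python
# Hedged sketch: setting CoLLiE's parallelism degrees and ZeRO stage.
# Field names (dp_size, tp_size, pp_size, ds_config) follow the README's MOSS example
# and are assumptions that may differ between CoLLiE versions.
from collie.config import CollieConfig

config = CollieConfig.from_pretrained("fnlp/moss-moon-003-sft", trust_remote_code=True)
config.dp_size = 1   # data parallelism: replicate the model, shard the batches
config.tp_size = 2   # tensor parallelism: split individual weight matrices across GPUs
config.pp_size = 1   # pipeline parallelism: split layers into sequential stages
config.ds_config = {
    "fp16": {"enabled": True},           # mixed-precision training
    "zero_optimization": {"stage": 3},   # ZeRO-3: partition parameters, gradients, optimizer states
}
```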

Quick Start & Requirements

  • Install: pip install collie-lm
  • Prerequisites: PyTorch >= 1.13, CUDA >= 11.6, Linux OS.
  • Setup: Installation is straightforward via pip. The README walks through training the MOSS model with LOMO and ZeRO-3, launched with torchrun for distributed training; a hedged sketch of such a script follows this list.
  • Docs: https://github.com/OpenMOSS/CoLLiE (Examples and tutorials are linked within the README).
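
The sketch below outlines that MOSS + LOMO + ZeRO-3 recipe end to end. The import paths, class names (Moss003MoonForCausalLM, Lomo, CollieDatasetForTraining, Trainer), and argument names follow the README example but are assumptions that may change between versions; the dataset here is a one-line placeholder.

```python
# Hedged sketch of a LOMO + ZeRO-3 fine-tuning run, modeled on the README's MOSS example.
# Class names, import paths, and Trainer arguments are assumptions; check the current docs.
# Assumed launch command (single node, 8 GPUs):
#   torchrun --nnodes=1 --nproc_per_node=8 train_moss.py
from transformers import AutoTokenizer
from collie.config import CollieConfig
from collie.data import CollieDatasetForTraining
from collie.optim.lomo import Lomo
from collie.controller.trainer import Trainer
from collie.models.moss_moon import Moss003MoonForCausalLM

config = CollieConfig.from_pretrained("fnlp/moss-moon-003-sft", trust_remote_code=True)
config.ds_config = {
    "fp16": {"enabled": True},
    "zero_allow_untested_optimizer": True,   # LOMO is not a DeepSpeed-validated optimizer
    "zero_optimization": {"stage": 3},       # ZeRO-3 partitioning
}

tokenizer = AutoTokenizer.from_pretrained("fnlp/moss-moon-003-sft", trust_remote_code=True)
train_dataset = CollieDatasetForTraining(
    [{"text": "CoLLiE stands for Collaborative Training of Large Language Models."}],
    tokenizer=tokenizer,
)

model = Moss003MoonForCausalLM.from_pretrained("fnlp/moss-moon-003-sft", config=config)
optimizer = Lomo(model, lr=1e-3)   # fused full-parameter updates with minimal optimizer state

trainer = Trainer(
    model=model,
    optimizer=optimizer,
    config=config,
    train_dataset=train_dataset,
)
trainer.train()
```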

Highlighted Details

  • Supports major LLM architectures including MOSS, InternLM, LLaMA, and ChatGLM.
  • Integrates efficient techniques like LOMO, LoRA, and Flash Attention.
  • Offers robust monitoring tools for step time, token generation speed, memory usage, and loss.
  • Includes evaluators for perplexity and generation metrics; see the sketch below.
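
As an illustration of those hooks, the sketch below wires a few monitors and a perplexity evaluator into a run. The class names (StepTimeMonitor, TGSMonitor, MemoryMonitor, LossMonitor, EvaluatorForPerplexity, PPLMetric) follow the README example and are assumptions that may differ by version; it reuses the model and config objects from the Quick Start sketch above.

```python
# Hedged sketch: attaching CoLLiE monitors and a perplexity evaluator.
# Class names follow the README's MOSS example and are assumptions. `model`, `config`,
# and `eval_dataset` are the objects from the Quick Start sketch (eval_dataset built
# the same way as train_dataset).
from collie.utils.monitor import StepTimeMonitor, TGSMonitor, MemoryMonitor, LossMonitor
from collie.controller.evaluator import EvaluatorForPerplexity
from collie.metrics import PPLMetric

monitors = [
    StepTimeMonitor(config),   # wall-clock time per optimization step
    TGSMonitor(config),        # tokens per GPU per second
    MemoryMonitor(config),     # GPU memory usage
    LossMonitor(config),       # training loss
]

evaluator_ppl = EvaluatorForPerplexity(
    model=model,
    config=config,
    dataset=eval_dataset,
    metrics={"ppl": PPLMetric()},
)

# Passed to the Trainer from the Quick Start sketch:
# trainer = Trainer(..., monitors=monitors, evaluators=[evaluator_ppl])
```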

Maintenance & Community

The project was accepted to the EMNLP 2023 System Demonstrations track (December 2023). The README contains a "Community" section, but no explicit community links are provided.

Licensing & Compatibility

The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The README's example DeepSpeed configuration sets "zero_allow_untested_optimizer", indicating that custom optimizers such as LOMO are not validated by DeepSpeed's ZeRO engine and may be unstable in some configurations. Hardware-specific benchmarks are provided, but general performance claims are not quantified across all supported models and configurations.

Health Check

  • Last commit: 11 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 2 stars in the last 90 days

Explore Similar Projects

Starred by Stas Bekman (author of the Machine Learning Engineering Open Book; Research Engineer at Snowflake) and Travis Fischer (founder of Agentic).

lingua by facebookresearch

LLM research codebase for training and inference

Top 0.1% on sourcepulse
5k stars
created 9 months ago
updated 2 weeks ago