mixtral-offloading by dvmazur

Inference optimization for Mixtral-8x7B models

created 1 year ago · 2,317 stars · Top 20.1% on sourcepulse

Project Summary

This project enables efficient inference of Mixtral-8x7B models on consumer hardware, such as Colab or desktop machines, by offloading model experts between GPU and CPU memory. It targets researchers and developers who need to run large language models in resource-constrained environments.

How It Works

The core approach combines mixed quantization using HQQ and a Mixture-of-Experts (MoE) offloading strategy. Different quantization schemes are applied to attention layers and experts to minimize memory footprint. Experts are offloaded individually and brought back to the GPU only when required, utilizing an LRU cache for active experts to reduce GPU-CPU communication during activation computation.
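The caching idea described above can be sketched in plain Python. This is a minimal illustration, not the repository's actual API: the names `LRUExpertCache` and `cpu_store` are hypothetical, and device transfers are simulated with dictionary moves rather than real GPU uploads. Experts stay in CPU memory and are brought "onto the GPU" only when a token routes to them; when the cache is full, the least-recently-used expert is offloaded first.

```python
# Hypothetical sketch of MoE expert offloading with an LRU cache.
# In the real project, cache entries would be expert weight tensors
# moved between CPU and GPU; here strings stand in for weights.
from collections import OrderedDict

class LRUExpertCache:
    def __init__(self, capacity):
        self.capacity = capacity     # max experts resident "on GPU"
        self.on_gpu = OrderedDict()  # expert_id -> expert weights (LRU order)

    def get(self, expert_id, cpu_store):
        if expert_id in self.on_gpu:
            # Cache hit: mark as most recently used; no transfer needed.
            self.on_gpu.move_to_end(expert_id)
        else:
            # Cache miss: evict the least-recently-used expert first.
            if len(self.on_gpu) >= self.capacity:
                self.on_gpu.popitem(last=False)  # "offload" back to CPU
            self.on_gpu[expert_id] = cpu_store[expert_id]  # "upload" to GPU
        return self.on_gpu[expert_id]

# Toy usage: 8 experts, room for only 3 on the GPU.
cpu_store = {i: f"expert-{i}-weights" for i in range(8)}
cache = LRUExpertCache(capacity=3)
for eid in [0, 1, 2, 0, 3]:          # per-token routing decisions
    cache.get(eid, cpu_store)
print(list(cache.on_gpu))            # -> [2, 0, 3]; expert 1 was evicted
```

Because MoE routers tend to reuse the same experts across nearby tokens, keeping recently used experts resident avoids most CPU-to-GPU transfers.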

Quick Start & Requirements

  • Run the demo notebook: ./notebooks/demo.ipynb
  • No command-line script is currently available; the demo notebook serves as a reference for local setup.
  • Requires Python and relevant ML libraries (specific versions not detailed in README).

Highlighted Details

  • Efficient inference of Mixtral-8x7B on consumer hardware.
  • Combines HQQ quantization with MoE offloading.
  • Utilizes an LRU cache for active experts to optimize GPU-CPU communication.

Maintenance & Community

  • Actively under development with plans to add support for more quantization methods and speculative expert prefetching.
  • Contributions are welcomed.

Licensing & Compatibility

  • License type not specified in the README.
  • Compatibility for commercial or closed-source use is not detailed.

Limitations & Caveats

Some techniques described in the project's technical report are not yet implemented in the repository. The project is a work in progress, and a command-line interface for local execution is not yet available.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 1
  • Issues (30d): 0
  • Star History: 15 stars in the last 90 days

Explore Similar Projects

Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Jaret Burkett (Founder of Ostris), and 1 more.

nunchaku by nunchaku-tech

Top 2.1% · 3k stars
High-performance 4-bit diffusion model inference engine
created 9 months ago · updated 1 day ago
Starred by Andrej Karpathy (Founder of Eureka Labs; formerly at Tesla, OpenAI; author of CS 231n), Jeff Hammerbacher (Cofounder of Cloudera), and 2 more.

gemma_pytorch by google

Top 0.1% · 6k stars
PyTorch implementation for Google's Gemma models
created 1 year ago · updated 2 months ago