lm.rs  by samuel-vitorino

Minimal LLM inference in Rust

created 1 year ago
1,010 stars

Top 37.7% on sourcepulse

Project Summary

This project provides a minimal, CPU-only inference engine for large language models (LLMs) written in Rust. It targets developers and researchers who want to run LLMs locally without heavy ML dependencies, offering support for Gemma 2, Llama 3.2, and PHI-3.5 (including multimodal capabilities), with quantized models for improved performance.

How It Works

The engine implements LLM inference directly in Rust, avoiding external ML libraries such as PyTorch or TensorFlow. Custom conversion scripts transform Hugging Face models into the project's own .lmrs format, with support for several quantization levels (e.g., Q8_0, Q4_0) that reduce memory footprint and speed up inference. The core design prioritizes minimal dependencies and direct CPU execution, inspired by projects such as llama2.c.
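The summary does not spell out what Q8_0 quantization involves. As a rough illustration only, the Rust sketch below shows block-wise int8 quantization in the style popularized by llama2.c and llama.cpp: values are grouped into fixed-size blocks, each stored as int8 with a single f32 scale. The block size, struct layout, and scale choice are assumptions for illustration, not lm.rs's actual .lmrs layout.

```rust
// Block-wise int8 quantization sketch (Q8_0-style).
const BLOCK: usize = 32;

struct Q8Block {
    scale: f32,        // per-block scale factor
    vals: [i8; BLOCK], // quantized values; short final blocks are zero-padded
}

fn quantize_q8(data: &[f32]) -> Vec<Q8Block> {
    data.chunks(BLOCK)
        .map(|chunk| {
            // Scale so the largest magnitude in the block maps to 127.
            let max = chunk.iter().fold(0f32, |m, &v| m.max(v.abs()));
            let scale = if max == 0.0 { 1.0 } else { max / 127.0 };
            let mut vals = [0i8; BLOCK];
            for (i, &v) in chunk.iter().enumerate() {
                vals[i] = (v / scale).round() as i8;
            }
            Q8Block { scale, vals }
        })
        .collect()
}

fn dequantize_q8(blocks: &[Q8Block], len: usize) -> Vec<f32> {
    blocks
        .iter()
        .flat_map(|b| b.vals.iter().map(move |&v| v as f32 * b.scale))
        .take(len)
        .collect()
}

fn main() {
    let data: Vec<f32> = (0..64).map(|i| (i as f32 - 32.0) * 0.1).collect();
    let q = quantize_q8(&data);
    let back = dequantize_q8(&q, data.len());
    let max_err = data
        .iter()
        .zip(&back)
        .map(|(a, b)| (a - b).abs())
        .fold(0f32, f32::max);
    assert!(max_err < 0.05);
}
```

Storing one f32 scale per 32 int8 values costs roughly 8.25 bits per weight instead of 32, which is where the memory savings come from.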

Quick Start & Requirements

  • Install Python dependencies: pip install -r requirements.txt
  • Convert models using python export.py and python tokenizer.py.
  • Compile Rust code: RUSTFLAGS="-C target-cpu=native" cargo build --release [--features multimodal]
  • Run inference: ./target/release/chat --model [model weights file]
  • WebUI backend: Compile with --features backend and run ./target/release/backend.
  • Requires Hugging Face model files (.safetensors, config.json, and the CLIP config for vision models).

Highlighted Details

  • Supports Gemma 2, Llama 3.2, and PHI-3.5 (text and vision) models.
  • Achieves up to 50 tok/s on a 16-core AMD Epyc for Llama 3.2 1B Q8_0.
  • Batch processing delivers up to 3x faster image encoding.
  • Offers quantization support (int8, int4) for reduced model size.
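To illustrate where the int4 savings in the list above come from, the sketch below packs two 4-bit values per byte and sign-extends them on unpacking. The nibble order and the value range (-8..=7) are assumptions chosen for illustration, not the project's actual encoding.

```rust
// Pack values already quantized to -8..=7 into nibbles, two per byte.
fn pack_int4(vals: &[i8]) -> Vec<u8> {
    vals.chunks(2)
        .map(|pair| {
            let lo = (pair[0] & 0x0F) as u8;
            let hi = (*pair.get(1).unwrap_or(&0) & 0x0F) as u8;
            lo | (hi << 4)
        })
        .collect()
}

fn unpack_int4(bytes: &[u8]) -> Vec<i8> {
    bytes
        .iter()
        .flat_map(|&b| {
            // Sign-extend each nibble from 4 to 8 bits.
            let lo = ((b & 0x0F) as i8) << 4 >> 4;
            let hi = ((b >> 4) as i8) << 4 >> 4;
            [lo, hi]
        })
        .collect()
}

fn main() {
    let vals: Vec<i8> = vec![-8, -1, 0, 3, 7, -5];
    let packed = pack_int4(&vals);
    assert_eq!(packed.len(), 3); // half the storage of int8
    assert_eq!(unpack_int4(&packed), vals);
}
```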

Maintenance & Community

  • Project is primarily maintained by a single author, with a disclaimer about code optimization.
  • Links to a WebUI, Hugging Face collection, and demo videos are provided.

Licensing & Compatibility

  • MIT License.
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

The project is presented as an experimental learning exercise by the author, with some code potentially requiring optimization. Support for larger models (e.g., 27B) is noted as too slow for practical use on the author's hardware.

Health Check

  • Last commit: 9 months ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star history: 29 stars in the last 90 days

Explore Similar Projects

Starred by Tobi Lutke (Cofounder of Shopify), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 10 more.

qlora by artidoro

  • Top 0.2% on sourcepulse · 11k stars
  • Finetuning tool for quantized LLMs
  • created 2 years ago · updated 1 year ago
  • Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems) and Jiayi Pan (Author of SWE-Gym; AI Researcher at UC Berkeley).

DeepSeek-Coder-V2 by deepseek-ai

  • Top 0.4% on sourcepulse · 6k stars
  • Open-source code language model comparable to GPT-4 Turbo
  • created 1 year ago · updated 10 months ago