llm by rustformers

Rust ecosystem for LLM inference (unmaintained)

created 2 years ago
6,124 stars

Top 8.6% on sourcepulse

Project Summary

This project provides an ecosystem of Rust libraries for working with large language models (LLMs), built on the GGML tensor library. It targets developers and end-users seeking efficient, Rust-native LLM inference, offering a CLI for direct interaction and a crate for programmatic use.

How It Works

The core of the project leverages the GGML tensor library, aiming to bring Rust's robustness and ease of use to LLM inference. It supports various model architectures and quantization methods, with an initial focus on CPU inference, though GPU acceleration (CUDA, Metal) was a planned feature.

Quick Start & Requirements

  • Install CLI from source: cargo install --git https://github.com/rustformers/llm llm-cli
  • Project requires Rust v1.65.0+ and a modern C toolchain.
  • GPU support (CUDA, OpenCL, Metal) requires specific build configurations; see the project documentation for details.
  • Links: Docs.rs, GitHub Releases
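As a sketch, installing the CLI from source and running a one-off inference could look like the following. The model path is a placeholder, and the `infer` invocation is illustrative rather than authoritative: subcommand names and flags varied between releases of this (now archived) project, so check `llm --help` on the version you build:

```shell
# Install the CLI from the repository (requires Rust 1.65+ and a modern C toolchain)
cargo install --git https://github.com/rustformers/llm llm-cli

# Illustrative one-off inference against a local quantized LLaMA-family model;
# the model path and exact flags are assumptions -- verify with `llm --help`
llm infer -a llama -m ./models/model-q4_0.bin -p "Tell me about Rust."
```

Because the project is archived, building from the default branch may fail against newer toolchains or model formats (see Limitations & Caveats below).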

Highlighted Details

  • Supports BLOOM, GPT-2, GPT-J, GPT-NeoX (StableLM, RedPajama, Dolly 2.0), LLaMA (Alpaca, Vicuna, Koala, GPT4All, Wizard), and MPT models.
  • The CLI offers inference, REPL, and chat modes, plus model serialization, quantization, and perplexity computation.
  • Supports remote fetching of tokenizers from Hugging Face.
  • Bindings available for Python and Node.js.

Maintenance & Community

  • ARCHIVED: The project is unmaintained due to lack of time and resources.
  • Recommendations are provided for active alternatives like Ratchet, Candle-based libraries (mistral.rs, kalosm, candle-transformers), and llama.cpp wrappers.
  • Community contact: Discord.

Licensing & Compatibility

  • The README does not explicitly state a license. The project's nature suggests it would likely be MIT or Apache 2.0, but this requires verification.
  • Compatibility for commercial use is not specified.

Limitations & Caveats

The project is archived and no longer actively maintained. The released version (0.1.1) is significantly out of date. The main and gguf branches are also outdated and do not support GGUF or the latest GGML versions. The develop branch, intended to sync with the latest GGML and support GGUF, was not completed.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: Inactive
  • Pull requests (30d): 0
  • Issues (30d): 0
  • Star history: 43 stars in the last 90 days

Starred by Jeff Hammerbacher (Cofounder of Cloudera), Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), and 2 more.
