Rust crates for local/remote AI model applications
Top 22.9% on sourcepulse
Floneum provides a Rust-based ecosystem for developing applications with local or remote AI models, targeting developers and researchers. It simplifies interaction with various AI modalities (text, audio, image) and offers a graphical editor for AI workflows, enabling efficient local AI application development.
How It Works
Floneum leverages the `candle` machine learning library for pure-Rust model execution, supporting quantized and accelerated models. Its core library, Kalosm, offers a unified interface for diverse models such as Llama, Mistral, and Whisper. A key innovation is structured generation: Rust types annotated with `#[derive(Parse, Schema)]` constrain model output to specific formats (JSON, regex patterns), improving data integrity and control.
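The actual derive macros live in the `kalosm` crate; as a self-contained sketch of the underlying idea, here is a hand-written parser standing in for the derived one (the `Character` type and `name=...;age=...` format are hypothetical, chosen only for illustration). Output that matches the schema becomes a typed value; anything else is rejected rather than propagated:

```rust
// Conceptual sketch: a struct whose fields a model's output must fill.
// In Kalosm this parser would be derived via #[derive(Parse, Schema)];
// here we hand-roll a minimal check so the example is self-contained.
#[derive(Debug, PartialEq)]
struct Character {
    name: String,
    age: u32,
}

// Parse "name=<str>;age=<u32>". Rejecting anything that does not match
// plays the role of constraining generation to the schema.
fn parse_character(output: &str) -> Option<Character> {
    let mut name = None;
    let mut age = None;
    for field in output.split(';') {
        match field.split_once('=')? {
            ("name", v) => name = Some(v.to_string()),
            ("age", v) => age = v.parse().ok(),
            _ => return None,
        }
    }
    Some(Character { name: name?, age: age? })
}

fn main() {
    // Well-formed output parses into the typed struct...
    assert_eq!(
        parse_character("name=Merlin;age=700"),
        Some(Character { name: "Merlin".into(), age: 700 })
    );
    // ...while malformed output is rejected instead of leaking through.
    assert_eq!(parse_character("name=Merlin;age=old"), None);
    println!("ok");
}
```

Kalosm goes further than post-hoc validation: the derived parser can steer token sampling so the model can only emit output the schema accepts.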
Quick Start & Requirements
`cargo new floneum-app && cd floneum-app`
`cargo add kalosm --features language` (add `metal`, `cuda`, or `mkl` for acceleration)
`cargo add tokio --features full`
`cargo run --release`
Highlighted Details
Performance vs. `llama.cpp` on Metal (M2): 39 t/s vs. 27 t/s for Mistral 7b.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats