Rust framework for LLM orchestration
Top 93.7% on sourcepulse
Orca is a Rust-based framework for orchestrating Large Language Models (LLMs), targeting developers building portable, efficient, and potentially edge-deployed LLM applications. It aims to simplify LLM pipeline creation and extend LLM capabilities through features like WebAssembly deployment and memory-safe distributed systems.
How It Works
Orca leverages Rust's performance and memory safety for building LLM applications. It supports prompt templating using a Handlebars-like syntax, enabling dynamic prompt generation. The framework facilitates loading various data sources, including HTML from URLs/files and PDFs, and integrates with vector stores like Qdrant. It currently supports OpenAI Chat and offers pipeline execution for sequential LLM calls.
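The Handlebars-style templating idea can be sketched in plain Rust, independent of Orca's actual API (the `render` helper below is hypothetical, written for illustration only; real Handlebars engines support much more than simple `{{name}}` substitution):

```rust
use std::collections::HashMap;

/// Minimal stand-in for Handlebars-style `{{name}}` substitution.
/// Illustrative sketch only -- not Orca's real templating API.
fn render(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // Replace every occurrence of `{{key}}` with its value.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("country", "France");
    let prompt = render("What is the capital of {{country}}?", &vars);
    println!("{}", prompt); // What is the capital of France?
}
```

A dynamic prompt built this way can then be sent to the configured LLM backend (currently OpenAI Chat) as one stage of a pipeline.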
Quick Start & Requirements
Add Orca as a git dependency in `Cargo.toml`:

```toml
orca = { git = "https://github.com/scrippt-tech/orca", package = "orca-core" }
```

Building and testing use `cargo-make`:

```shell
$ cargo install cargo-make
$ makers build
$ makers test
```
Maintenance & Community
The project is under active development, and contributions are welcome via issues or pull requests. Proposed feature additions should be discussed in an issue first.
Licensing & Compatibility
The repository does not explicitly state a license in the README.
Limitations & Caveats
The framework is in active development and may contain bugs or incomplete functionality. Its future direction is still being explored, and suggestions are welcome.