Discover and explore top open-source AI tools and projects—updated daily.
santiagomed: Rust framework for LLM orchestration
Top 92.0% on SourcePulse
Orca is a Rust-based framework for orchestrating Large Language Models (LLMs), targeting developers building portable, efficient, and potentially edge-deployed LLM applications. It aims to simplify LLM pipeline creation and extend LLM capabilities through features like WebAssembly deployment and memory-safe distributed systems.
How It Works
Orca leverages Rust's performance and memory safety for building LLM applications. It supports prompt templating using a Handlebars-like syntax, enabling dynamic prompt generation. The framework facilitates loading various data sources, including HTML from URLs/files and PDFs, and integrates with vector stores like Qdrant. It currently supports OpenAI Chat and offers pipeline execution for sequential LLM calls.
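To illustrate the two ideas above, here is a minimal stdlib-only sketch of Handlebars-style prompt templating ({{name}} placeholders filled from a map) and a sequential pipeline step where one rendered prompt feeds the next. The `render` helper is hypothetical and written for illustration; Orca's actual template and pipeline APIs may differ.

```rust
use std::collections::HashMap;

/// Minimal Handlebars-style substitution: replaces each `{{key}}`
/// placeholder in `template` with its value from `vars`.
/// A concept sketch only, not Orca's real templating engine.
fn render(template: &str, vars: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in vars {
        // Build the literal placeholder "{{key}}" and substitute it.
        out = out.replace(&format!("{{{{{}}}}}", key), value);
    }
    out
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("topic", "Rust");
    vars.insert("style", "concise");

    // Dynamic prompt generation from a template.
    let prompt = render("Write a {{style}} summary of {{topic}}.", &vars);
    assert_eq!(prompt, "Write a concise summary of Rust.");

    // A sequential "pipeline" step: the first prompt's output (here, the
    // prompt itself, standing in for an LLM response) feeds the next call.
    let step2 = render(
        "Critique this draft: {{draft}}",
        &HashMap::from([("draft", prompt.as_str())]),
    );
    println!("{}", step2);
}
```

In a real Orca pipeline the intermediate value would be an LLM response (e.g. from the OpenAI Chat integration) rather than the rendered prompt itself.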
Quick Start & Requirements
Add the dependency to Cargo.toml:

orca = { git = "https://github.com/scrippt-tech/orca", package = "orca-core" }

Build and test with cargo-make:

$ cargo install cargo-make
$ makers build
$ makers test

Highlighted Details
Maintenance & Community
The project is under active development, and contributions are welcomed via issues or pull requests. Contributors are encouraged to discuss feature additions in an issue before opening a pull request.
Licensing & Compatibility
The repository does not explicitly state a license in the README.
Limitations & Caveats
The framework is in active development, so it may contain bugs and currently offers limited functionality. Future directions are still being explored, and suggestions are welcome.
Last updated about 1 year ago; the repository is marked inactive.