Edge LLM platform for building private AI assistants
HAL-9100 is an edge-focused, full-stack LLM platform designed for building private, fast, and cost-effective AI assistants. It targets developers and organizations requiring high customization, data sensitivity, or offline capabilities, enabling LLMs to interact with the digital world autonomously.
How It Works
The platform leverages Rust for its core, emphasizing an "edge-first" philosophy by supporting local, open-source LLMs without requiring internet access. It integrates features like code interpretation, knowledge retrieval, function calling, and API actions, all designed to work with an OpenAI-compatible API layer. This approach aims to provide a flexible and reliable infrastructure for "Software 3.0," where LLMs perform digital tasks with minimal user intervention.
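Because the server speaks the OpenAI API, function calling follows the standard OpenAI tool-definition shape. A minimal sketch of such a request body (the tool name `get_weather` and model name are illustrative placeholders, not part of HAL-9100):

```javascript
// Illustrative tool definition in the OpenAI function-calling format,
// which an OpenAI-compatible server such as HAL-9100 is expected to accept.
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather", // hypothetical function name
      description: "Look up the current weather for a city",
      parameters: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  },
];

// The tools array is sent alongside the chat messages:
const requestBody = {
  model: "open-llm", // placeholder model name
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools,
};

console.log(requestBody.tools[0].function.name); // → get_weather
```

The same payload shape works against any OpenAI-compatible backend, which is what lets the platform swap in local, open-source models.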
Quick Start & Requirements
docker compose --profile api -f docker/docker-compose.yml up
npm i openai
Requirements include Node.js and npm (for the openai client), Docker, and an OpenAI-compatible LLM API endpoint (e.g., Ollama, Anyscale, vLLM).
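Once the server is running, requests are plain OpenAI-style chat completions. A sketch of building such a request against a local endpoint (the base URL and model name are assumptions; check the compose file for the actual port):

```javascript
// Hypothetical local endpoint; HAL-9100's actual port/path may differ.
const BASE_URL = "http://localhost:3000/v1";

// Build an OpenAI-compatible chat-completion request for the local server.
function buildChatRequest(model, userMessage) {
  return {
    url: `${BASE_URL}/chat/completions`,
    body: {
      model,
      messages: [{ role: "user", content: userMessage }],
    },
  };
}

const req = buildChatRequest("open-llm", "Summarize this repository.");

// With the server up, the request would be sent like so:
// await fetch(req.url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(req.body),
// });
console.log(req.url);
```

The official `openai` client installed above can be pointed at the same URL via its `baseURL` option, which is how OpenAI-compatible backends are typically swapped in.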
Maintenance & Community
The project is under active development, with a large refactor underway. Community support channels are not explicitly listed in the README.
Licensing & Compatibility
The README does not specify a license. Compatibility with commercial or closed-source projects is not detailed.
Limitations & Caveats
The README explicitly states it is "outdated (undergoing large refactor)". While it mentions supporting local LLMs, the quick start example relies on Anyscale API keys, and the primary focus is on OpenAI compatibility rather than fully open-source LLM integration.