hal-9100 by llm-edge

Edge LLM platform for building private AI assistants

created 1 year ago
380 stars

Top 76.1% on sourcepulse

Project Summary

HAL-9100 is an edge-focused, full-stack LLM platform designed for building private, fast, and cost-effective AI assistants. It targets developers and organizations with high customization needs, sensitive data, or offline requirements, and its goal is to let LLMs interact with the digital world autonomously.

How It Works

The platform leverages Rust for its core, emphasizing an "edge-first" philosophy by supporting local, open-source LLMs without requiring internet access. It integrates features like code interpretation, knowledge retrieval, function calling, and API actions, all designed to work with an OpenAI-compatible API layer. This approach aims to provide a flexible and reliable infrastructure for "Software 3.0," where LLMs perform digital tasks with minimal user intervention.
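
To make the OpenAI-compatible layer concrete, here is a minimal sketch (not taken from the project docs) of registering an assistant with a function-calling tool through the standard OpenAI Node SDK pointed at a locally running HAL-9100 server. The base URL, port, model name, and the get_weather schema are illustrative assumptions.

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:3000", // assumption: local HAL-9100 API endpoint
  apiKey: "not-used-locally",       // assumption: key is not validated locally
});

async function main() {
  // Register an assistant that can call a hypothetical weather lookup function.
  const assistant = await client.beta.assistants.create({
    model: "mistral", // assumption: whichever model your OpenAI-compatible backend serves
    instructions: "Answer questions, calling tools when they help.",
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather", // hypothetical function, for illustration only
          description: "Get the current weather for a city.",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  });
  console.log("Assistant registered:", assistant.id);
}

main().catch(console.error);
```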

Quick Start & Requirements

  • Install: Clone the repository and run docker compose --profile api -f docker/docker-compose.yml up.
  • Prerequisites: Docker, the OpenAI SDK (npm i openai), and an OpenAI-compatible LLM API endpoint (e.g., Ollama, Anyscale, vLLM); see the client sketch after this list.
  • Setup: Can be run in GitHub Codespaces or locally.
  • Docs: https://github.com/llm-edge/hal-9100
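
Once the stack is up, a typical interaction follows the Assistants-style thread/run flow. The sketch below is illustrative rather than taken from the project docs: it assumes openai-node v4-style calls, a server on localhost:3000, and an assistant ID created as in the earlier example.

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:3000", // assumption: local HAL-9100 API endpoint
  apiKey: "not-used-locally",       // assumption: key is not validated locally
});

async function ask(assistantId: string, question: string) {
  // Open a thread and post the user's question to it.
  const thread = await client.beta.threads.create();
  await client.beta.threads.messages.create(thread.id, {
    role: "user",
    content: question,
  });

  // Start a run against the assistant and poll until it finishes.
  let run = await client.beta.threads.runs.create(thread.id, {
    assistant_id: assistantId,
  });
  while (run.status === "queued" || run.status === "in_progress") {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    run = await client.beta.threads.runs.retrieve(thread.id, run.id);
  }

  // Print every message in the thread, newest first.
  const messages = await client.beta.threads.messages.list(thread.id);
  for (const message of messages.data) {
    console.log(message.role, JSON.stringify(message.content));
  }
}

ask("asst_replace_with_your_assistant_id", "Use Python to compute 2 ** 16.").catch(console.error);
```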

Highlighted Details

  • Supports local LLMs, enabling 100% privacy and offline operation.
  • Claims significant cost reduction and speed improvements over cloud-based solutions.
  • Features autonomous code interpretation (Python), knowledge retrieval, function calling, and API actions.
  • Offers an OpenAI-compatible API for flexible LLM integration.

Maintenance & Community

The project is under active development, and the maintainers state that a large refactor is underway. Community support channels are not explicitly listed in the README.

Licensing & Compatibility

The README does not specify a license. Compatibility with commercial or closed-source projects is not detailed.

Limitations & Caveats

The README explicitly states that it is "outdated (undergoing large refactor)". Although local LLMs are supported, the quick-start example relies on an Anyscale API key, and the primary focus is OpenAI API compatibility rather than fully open-source LLM integration.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 2 stars in the last 90 days
