Local LLM inference via CLI tool and Node.js API
Top 3.9% on sourcepulse
Dalai provides a simplified interface for running LLaMA and Alpaca large language models locally. It targets developers and power users seeking an easy-to-use, cross-platform solution for local LLM inference, offering a web UI and a JavaScript API.
How It Works
Dalai leverages llama.cpp and alpaca.cpp for efficient model execution, abstracting away complex setup. It manages model downloads and provides a Socket.IO-based API for remote interaction, alongside a direct Node.js API for local execution. This approach aims for broad compatibility and ease of use across Linux, macOS, and Windows.
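For local execution, the Node.js API amounts to constructing a Dalai instance and calling request() with a model identifier and a prompt; generated tokens are streamed to a callback. The sketch below is a minimal example based on the project's documented API, assuming the dalai npm package and a 7B model are already installed; the exact model string (e.g. "7B" vs. "alpaca.7B") and the available option names vary between versions.

```javascript
// Minimal local-inference sketch (assumes `npm install dalai` and a model
// previously installed with e.g. `npx dalai alpaca install 7B`).
const Dalai = require("dalai");

const dalai = new Dalai(); // optional argument: custom home directory for models/binaries

dalai.request(
  {
    model: "7B", // model identifier; may need to be "alpaca.7B" depending on the installed version
    prompt: "Explain in one sentence what a llama is:",
    n_predict: 128, // assumed option name: maximum number of tokens to generate
  },
  (token) => {
    // Tokens arrive incrementally as the model generates them.
    process.stdout.write(token);
  }
);
```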
Quick Start & Requirements
Docker quick start: docker compose build && docker compose run dalai npx dalai alpaca install 7B && docker compose up -d
macOS requires cmake and pkg-config via Homebrew. Windows requires Visual Studio with C++, Node.js, and Python development workloads, and commands must be run in cmd. Linux requires build-essential and python3-venv (Debian/Ubuntu), or make, automake, gcc, gcc-c++, kernel-devel, and python3-virtualenv (Fedora).
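With the requirements in place and a model installed (the Docker one-liner above handles both), the same package can also run the Socket.IO server mentioned under How It Works, which is what the web UI and remote clients talk to. This is a hedged sketch assuming the documented serve(port) method and a request() url option for targeting a remote server; both names are taken from the project's documentation but may differ across versions.

```javascript
// Sketch: serve Dalai over Socket.IO and query it from another process.
// Assumes `npm install dalai` and at least one installed model.
const Dalai = require("dalai");

// Server side: roughly equivalent to `npx dalai serve` or `docker compose up`.
new Dalai().serve(3000);

// Client side (run in a separate process): send the prompt to the running
// server instead of loading weights locally. The `url` field is an assumption
// based on the project's remote-request documentation.
new Dalai().request(
  {
    url: "ws://localhost:3000",
    model: "7B",
    prompt: "Write a haiku about running models locally:",
  },
  (token) => process.stdout.write(token)
);
```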
Highlighted Details
Model weights are downloaded from the llama-dl CDN.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
Last updated: 1 year ago. Status: Inactive.