biniou by Woolverine94

Self-hosted web UI for generative AI multimedia content creation and chatbot use

Created 2 years ago
615 stars

Top 53.5% on SourcePulse

Project Summary

Biniou is a self-hosted web UI for over 30 generative AI models, enabling users to create multimedia content and use chatbots on their own hardware, even with limited resources like 8GB RAM. It supports offline use after initial deployment and model downloads, targeting users who want a comprehensive, local generative AI suite.

How It Works

Biniou integrates its models through Hugging Face libraries and uses Gradio for the web UI. It runs on CPU alone for broad compatibility, with optional CUDA and ROCm acceleration for NVIDIA and AMD GPUs respectively. The architecture lets one module's output be fed as input to another, enabling chained workflows, and it relies on optimized libraries such as llama-cpp-python for efficient GGUF model inference.
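
For the chatbot side, the sketch below shows the kind of CPU-only GGUF inference llama-cpp-python provides; the model path and parameters are illustrative assumptions, not biniou's own code or defaults.

```python
# Minimal CPU-only GGUF chat completion with llama-cpp-python, the library
# biniou relies on for its chatbot module. Model path and settings are
# illustrative, not biniou's own configuration.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # any local GGUF file
    n_ctx=2048,   # context window
    n_threads=4,  # CPU threads; biniou runs CPU-only by default
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain what a GGUF model is in one sentence."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```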

Quick Start & Requirements

  • Installation: One-click installers for Linux distributions (OpenSUSE, RHEL-based, Arch-based, Debian-based), a Windows installer (install_win.cmd), and a macOS Homebrew install are provided. Docker images (CPU and CUDA) are also available.
  • Prerequisites: Python 3.10 or 3.11 (newer versions are not supported), Git, pip, venv, GCC, Perl, Make, FFmpeg, OpenSSL. Windows requires additional build tools. Minimum 8GB RAM (16GB+ recommended) and 20GB+ of disk space (200GB+ with all default models). AMD64 architecture is required. A minimal pre-flight check sketch follows this list.
  • Documentation: written documentation and a video presentation are available.
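
As a purely hypothetical illustration of those prerequisites (not a script shipped with biniou), a pre-flight check might look like this:

```python
# Hypothetical pre-flight check against the stated prerequisites
# (Python 3.10/3.11, 8GB+ RAM, 20GB+ free disk); not part of biniou itself.
import os
import shutil
import sys

GIB = 1024 ** 3

if sys.version_info[:2] not in ((3, 10), (3, 11)):
    sys.exit(f"Python 3.10 or 3.11 required, found {sys.version.split()[0]}")

free = shutil.disk_usage(".").free
print(f"Free disk: {free / GIB:.1f} GiB (20+ GiB needed, 200+ GiB with all default models)")

try:  # POSIX only; RAM detection differs on Windows
    ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    print(f"Total RAM: {ram / GIB:.1f} GiB (8 GiB minimum, 16 GiB or more recommended)")
except (AttributeError, ValueError, OSError):
    print("Could not determine total RAM on this platform")
```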

Highlighted Details

  • Supports over 30 generative AI models across text, image, audio, and video generation.
  • Features include chatbots (GGUF, Llava), image generation (Stable Diffusion, Kandinsky, LCM), audio synthesis (MusicGen, Bark), and video generation (Modelscope, AnimateDiff); a minimal image-generation sketch follows this list.
  • Offers advanced image manipulation like ControlNet, IP-Adapter, and face swapping (Insight Face).
  • Includes a user-friendly control panel for updates, restarts, and network sharing.
  • Supports various model formats including GGUF, safetensors, and LoRAs.
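
The image modules wrap Hugging Face pipelines. As a rough sketch of the kind of call involved (assuming the diffusers library and an illustrative checkpoint, not biniou's own code):

```python
# Minimal Stable Diffusion text-to-image call with Hugging Face diffusers,
# the kind of pipeline biniou's image modules wrap. Model ID and settings
# are illustrative, not biniou's defaults.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # any compatible checkpoint
    torch_dtype=torch.float32,           # float32 on CPU; float16 with CUDA/ROCm
).to("cpu")                              # biniou defaults to CPU; use "cuda" if available

image = pipe("a lighthouse at dawn, watercolor", num_inference_steps=25).images[0]
image.save("lighthouse.png")
```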

Maintenance & Community

The project is updated weekly, indicating active development. The README links to a video presentation but lists no explicit community channels (Discord/Slack) or roadmap.

Licensing & Compatibility

Licensed under GNU General Public License v3.0 (GPL-3.0). This is a strong copyleft license, requiring derivative works to also be open-sourced under GPL-3.0. Model licenses vary and must be checked individually.

Limitations & Caveats

macOS support is experimental and currently incompatible with Apple Silicon (a workaround via OrbStack is mentioned). The Windows installer makes significant changes to the system, and backing up beforehand is recommended. The project describes itself as being in an "early stage of development," and many of the underlying open-source components are experimental as well. Insufficient RAM is a common cause of crashes.

Health Check

  • Last commit: 1 day ago
  • Responsiveness: 1 day
  • Pull requests (30d): 0
  • Issues (30d): 2
  • Star history: 12 stars in the last 30 days
