everything-ai by AstraBert

Local AI assistant for diverse generative and analytical tasks

Created 1 year ago
250 stars

Top 100.0% on SourcePulse

Project Summary

AstraBert/everything-ai provides a comprehensive, locally run AI assistant framework designed for developers and power users. It integrates diverse AI capabilities, from text and image generation to complex RAG pipelines, enabling offline, customizable AI workflows without relying on cloud services for core inference.

How It Works

This project leverages a multi-container Docker architecture. Core components include llama.cpp for efficient local LLM inference and qdrant for building retrieval-augmented generation (RAG) systems. It dynamically integrates models from Hugging Face Hub and supports external APIs (OpenAI, Anthropic, etc.) for advanced tasks, orchestrating data flow between these services based on user-selected assistant configurations.
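The multi-container layout described above can be sketched as a hypothetical compose file. This is an illustration under assumptions, not the project's actual configuration: service names, port mappings, and volume paths here are placeholders, and only the three image names come from the README.

```yaml
# Hypothetical docker-compose.yaml sketch of the three-container stack.
# Service names, ports, and volume paths are illustrative; consult the
# repository's real compose file for the actual values.
services:
  everything-ai:
    image: astrabert/everything-ai
    ports:
      - "8670:8670"   # assistant-selection UI
      - "7860:7860"   # chat interface
    env_file: .env    # volume mounts, model paths, chosen models
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"   # Qdrant's default REST port
  llama-server:
    image: ggerganov/llama.cpp:server
    volumes:
      - ./models:/models   # GGUF models mounted for local inference
```

The key design point is that inference (llama.cpp), vector storage (Qdrant), and the orchestrating UI run as separate containers, so each can be swapped or scaled independently.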

Quick Start & Requirements

Setup involves cloning the repository, configuring a .env file to specify volume mounts, model paths, and desired models, then pulling necessary Docker images (astrabert/everything-ai, qdrant, ggerganov/llama.cpp:server). Running docker compose up initiates the multi-container application. Users access the interface via localhost:8670 to select an assistant and localhost:7860 to interact. Prerequisites include Docker and downloaded GGUF models for llama.cpp tasks.
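As a rough sketch, the setup steps above look like the following. The `.env` variable names and the model filename are placeholders (the repository's own `.env` template defines the real keys); the image names and ports are those stated above.

```shell
# Clone the repository and enter it
git clone https://github.com/AstraBert/everything-ai.git
cd everything-ai

# Create a .env file. VOLUME and MODEL_PATH are placeholder variable
# names -- check the repository's .env template for the actual keys.
cat > .env <<'EOF'
VOLUME=./models
MODEL_PATH=/models/model.gguf
EOF

# Pull the three images used by the stack
docker pull astrabert/everything-ai
docker pull qdrant/qdrant
docker pull ggerganov/llama.cpp:server

# Start the multi-container application
docker compose up -d

# Then open http://localhost:8670 to select an assistant,
# and http://localhost:7860 to interact with it.
```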

Highlighted Details

  • Supports a wide array of AI tasks: retrieval-augmented text generation, agnostic text generation, summarization, image generation (Stable Diffusion, Pollinations), image classification, image-to-text, audio classification, speech recognition, video generation, and protein folding.
  • Offers flexible LLM integration, including local llama.cpp, Hugging Face Hub models, and external APIs (OpenAI, Anthropic, Cohere, Groq).
  • Features advanced RAG capabilities with Qdrant and llama.cpp, alongside Langfuse integration for observability in customizable chat LLMs.
  • Includes specialized modules like autotrain for model fine-tuning and image retrieval search.

Maintenance & Community

No specific details regarding maintainers, community channels (like Discord/Slack), or project roadmap were found in the provided README content.

Licensing & Compatibility

The README does not explicitly state the project's license type or provide compatibility notes for commercial use.

Limitations & Caveats

Some functionalities, such as image classification and video generation, are explicitly marked as English-only. Protein folding tasks require a GPU. The setup relies heavily on Docker and requires manual configuration of environment variables and model downloads for local inference.

Health Check

  • Last Commit: 1 year ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 3 stars in the last 30 days
