AstraBert/everything-ai: Local AI assistant for diverse generative and analytical tasks
Summary
AstraBert/everything-ai provides a comprehensive, locally-run AI assistant framework designed for developers and power users. It integrates diverse AI capabilities, from text and image generation to complex RAG pipelines, enabling offline, customizable AI workflows without relying on cloud services for core inference.
How It Works
This project leverages a multi-container Docker architecture. Core components include llama.cpp for efficient local LLM inference and qdrant for building retrieval-augmented generation (RAG) systems. It dynamically integrates models from Hugging Face Hub and supports external APIs (OpenAI, Anthropic, etc.) for advanced tasks, orchestrating data flow between these services based on user-selected assistant configurations.
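The orchestration described above can be sketched as a simple task router that maps a user-selected assistant configuration to the backend container or API that serves it. The service hostnames, ports, and task labels below are illustrative assumptions, not taken from the project's actual code:

```python
# Hypothetical task router illustrating how a multi-container setup might
# dispatch a user-selected assistant task to the service that handles it.
# Hostnames, ports, and task names here are assumptions for illustration.

BACKENDS = {
    "text-generation": "http://llamacpp:8080/completion",    # local GGUF model via llama.cpp
    "rag": "http://qdrant:6333",                             # qdrant vector store for RAG
    "image-generation": "https://api.openai.com/v1/images",  # external API fallback
}

def route_task(task: str) -> str:
    """Return the backend endpoint for a user-selected assistant task."""
    try:
        return BACKENDS[task]
    except KeyError:
        raise ValueError(f"Unsupported task: {task!r}")
```

In practice the orchestrator would also forward the request payload and stream the response back to the web UI, but the routing decision itself reduces to a lookup like this.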
Quick Start & Requirements
Setup involves cloning the repository, configuring a .env file to specify volume mounts, model paths, and desired models, then pulling the necessary Docker images (astrabert/everything-ai, qdrant, ggerganov/llama.cpp:server). Running docker compose up starts the multi-container application. Users access localhost:8670 to select an assistant and localhost:7860 to interact with it. Prerequisites include Docker and, for llama.cpp tasks, locally downloaded GGUF models.
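A .env file along these lines would cover the volume mounts, model paths, and model selection mentioned above. The variable names and values are illustrative assumptions; consult the project's example .env for the actual keys:

```shell
# Hypothetical .env sketch -- variable names are assumptions, not the
# project's documented keys. Adjust paths and model names to your setup.
VOLUME=/home/user/models          # host directory mounted into the containers
MODEL_PATH=/models/llama-3-8b.Q4_K_M.gguf   # GGUF model for llama.cpp tasks
MODEL=llama-3-8b                  # model identifier shown in the assistant UI
```

With the file in place, docker compose up reads these values and wires them into the llama.cpp and application containers.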
Highlighted Details
Supported inference backends include llama.cpp, Hugging Face Hub models, and external APIs (OpenAI, Anthropic, Cohere, Groq). Langfuse integration provides observability for customizable chat LLMs.
Maintenance & Community
No specific details regarding maintainers, community channels (like Discord/Slack), or project roadmap were found in the provided README content.
Licensing & Compatibility
The README does not explicitly state the project's license type or provide compatibility notes for commercial use.
Limitations & Caveats
Some functionalities, such as image classification and video generation, are explicitly marked as English-only. Protein folding tasks require a GPU. The setup relies heavily on Docker and requires manual configuration of environment variables and model downloads for local inference.