LLM assistant with permanent memory
Top 40.8% on sourcepulse
Sebastian is an LLM assistant designed to overcome the context limitations of traditional chatbots by providing permanent memory. It targets users who need a conversational AI that can recall past interactions and evolving user preferences across extended periods, offering a more personalized and context-aware experience akin to a digital Jarvis.
How It Works
Sebastian employs Retrieval Augmented Generation (RAG) and embeddings to manage memory, rather than relying on context window tokens. This approach allows it to store and retrieve information indefinitely, enabling features like automatic memory discovery, iteration (updating stored information), association (linking related facts), and consolidation (periodically reviewing and optimizing stored knowledge).
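The embed-store-retrieve loop described above can be sketched as follows. This is an illustrative toy, not Sebastian's actual implementation: the bag-of-words vector stands in for a real embedding model, and the MemoryStore class and its method names are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Minimal RAG-style memory: store facts, retrieve top-k by similarity."""
    def __init__(self):
        self.memories = []  # list of (text, vector) pairs

    def remember(self, fact):
        self.memories.append((fact, embed(fact)))

    def recall(self, query, k=2):
        qv = embed(query)
        ranked = sorted(self.memories, key=lambda m: cosine(qv, m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.remember("User's favorite language is Python")
store.remember("User lives in Berlin")
store.remember("User dislikes early meetings")
print(store.recall("favorite programming language", k=1))
# → ["User's favorite language is Python"]
```

Iteration, association, and consolidation would layer on top of this core: rewriting stored entries, adding links between them, and periodically merging near-duplicates.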
Quick Start & Requirements
Edit docker-compose.yml to set LANGUAGE, DASHSCOPE_API_KEY (for Chinese) or OPENAI_API_KEY (for English), and a TOKEN. Run docker compose up -d, then use curl commands to interact via the text or audio endpoints.
Highlighted Details
Maintenance & Community
The project welcomes contributions via issues and pull requests. Sponsorship is available via Afdian.net. Acknowledgments are given to Gitee AI, Alibaba Cloud, and JetBrains.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.
Limitations & Caveats
Currently relies on cloud-based LLM inference services (Qwen2-Max, GPT-4o), requiring API keys and incurring potential costs. Local Ollama deployment is planned but not yet available. The specific license is not mentioned, which may impact commercial adoption.
The repository was last updated about 5 months ago and is currently marked inactive.