C.O.R.E. (Contextual Observation & Recall Engine) provides a private, portable, and user-owned memory layer for Large Language Models (LLMs). It addresses the need for persistent, traceable context across different AI applications, enabling personalized and auditable interactions. The target audience includes developers and power users seeking to enhance LLM applications with dynamic, temporal knowledge graphs.
How It Works
C.O.R.E. functions as a temporal knowledge graph, storing facts as "Statements" with rich metadata including source, timestamp, and rationale. This contrasts with simpler memory systems by providing full transparency and auditability, allowing users to trace the origin and evolution of information. This temporal approach supports complex queries about how knowledge changed over time and about the provenance of individual facts.
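The exact schema of a Statement is not given here, but the description above (a fact plus source, timestamp, and rationale, retained rather than overwritten) suggests a record along these lines. This is a minimal illustrative sketch; all names and fields are assumptions, not C.O.R.E.'s actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical shape of a C.O.R.E. "Statement": a fact plus the
# provenance metadata that makes it traceable and auditable.
@dataclass
class Statement:
    fact: str            # the remembered fact itself
    source: str          # where the fact came from (app, conversation, document)
    timestamp: datetime  # when the fact was recorded
    rationale: str       # why the engine stored it
    superseded_by: "Statement | None" = None  # set when a newer fact replaces this one

# Recording a fact, then superseding it later, preserves the full history:
old = Statement(
    fact="User prefers dark mode",
    source="chat-session-42",
    timestamp=datetime(2024, 1, 5, tzinfo=timezone.utc),
    rationale="User explicitly asked to switch to dark mode",
)
new = Statement(
    fact="User prefers light mode",
    source="chat-session-87",
    timestamp=datetime(2024, 6, 1, tzinfo=timezone.utc),
    rationale="User changed their stated preference",
)
old.superseded_by = new  # the old statement is kept, not overwritten
```

Because superseded statements are linked rather than deleted, a query can walk the chain to answer both "what is true now?" and "what was believed on a given date, and why?"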
Quick Start & Requirements
Copy .env.example to .env, then run docker-compose up. Open http://localhost:3000 and sign in with a Magic Link.
Highlighted Details
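The quick-start steps above amount to the following commands, assuming you are in the repository root and it contains the docker-compose.yml and .env.example files the steps imply:

```shell
# Copy the example environment file and adjust any required values.
cp .env.example .env

# Build and start the services defined in docker-compose.yml.
docker-compose up

# Then open http://localhost:3000 in a browser and log in via Magic Link.
```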
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The project is in early stages, with Llama model support being actively developed and currently suboptimal. Features like user-controlled sharing, granular API permissions, and role-based access control are still in progress.