chrishayuk/LARQL
The model is the database: query neural network weights directly
Top 71.4% on SourcePulse
Summary
LARQL enables direct querying and manipulation of transformer neural network weights by decompiling models into a queryable vindex format and using the Lazarus Query Language (LQL). This approach treats model knowledge as a graph database, allowing users to browse, edit, and recompile weights without traditional fine-tuning or GPU requirements for basic operations, targeting researchers and power users.
How It Works
LARQL's core concept is "the model IS the database." It decompiles transformer weights into a vindex, in which gate vectors become KNN-searchable entries, embeddings become token lookup tables, and down projections become edge labels. LQL provides a SQL-like interface for querying, browsing, and mutating this vindex, offering direct interaction with the model's learned knowledge.
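The graph-database analogy can be illustrated with a minimal sketch. All names, shapes, and the cosine-KNN choice here are illustrative assumptions, not LARQL's actual implementation: embedding rows act as a token lookup table (the "primary key" side), while other weight rows are ranked by nearest-neighbor similarity.

```python
# Conceptual sketch only (assumed names and shapes, not LARQL's code):
# treat weight matrices as a queryable index.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "car"]
embeddings = rng.normal(size=(len(vocab), 8))  # token -> vector lookup table
gate_vectors = rng.normal(size=(5, 8))         # e.g. MLP gate rows, KNN-searchable

def lookup(token: str) -> np.ndarray:
    """Embedding lookup: the 'primary key' query in the database analogy."""
    return embeddings[vocab.index(token)]

def knn_gates(query: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k gate vectors most cosine-similar to `query`."""
    q = query / np.linalg.norm(query)
    g = gate_vectors / np.linalg.norm(gate_vectors, axis=1, keepdims=True)
    return np.argsort(-(g @ q))[:k]

neighbors = knn_gates(lookup("cat"))
print(neighbors)  # indices of the 2 nearest gate vectors
```

In this framing, editing a row of `gate_vectors` is a database mutation, which is the kind of direct weight manipulation LQL exposes.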
Quick Start & Requirements
- Build: `cargo build --release`.
- Extract an index: `larql extract-index <model> -o <vindex> [--f16] [--level <browse|inference|all>]`. Browse-only vindexes are ~3 GB (f16); inference-enabled ones are ~6 GB (f16).
- GPU acceleration is optional (enabled via `--features metal`); no GPU is needed for browse/query.
- See `docs/lql-guide.md` for the LQL reference.

Highlighted Details
- `.vlp` files enable incremental, read-only knowledge edits.

Maintenance & Community
No explicit details on maintainers, community channels, or roadmap were found in the provided README.
Licensing & Compatibility
Limitations & Caveats
DESCRIBE/WALK results; INFER is recommended.