Reasoning AI via differentiable lambda calculus
This project provides a fully differentiable implementation of lambda calculus and other neurosymbolic data structures (Stacks, Queues, Trees, etc.) designed to imbue AI models, particularly LLMs, with reasoning capabilities. It targets researchers and engineers aiming to bridge the gap between connectionist AI and symbolic computation, offering a path to models that can infer knowledge beyond their training data.
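To make "differentiable data structures" concrete, here is a minimal PyTorch sketch in the general style of the differentiable-stack literature — illustrative only, not this library's code. Push, pop, and no-op are blended by soft action weights, so gradients flow through the data-structure operation itself:

```python
import torch

def soft_stack_step(stack, value, action):
    """stack: (depth, dim); value: (dim,); action: soft (push, pop, nop) weights summing to 1."""
    push_w, pop_w, nop_w = action
    pushed = torch.cat([value.unsqueeze(0), stack[:-1]])             # write on top, shift down
    popped = torch.cat([stack[1:], torch.zeros(1, stack.shape[1])])  # shift up, pad the bottom
    return push_w * pushed + pop_w * popped + nop_w * stack          # convex combination of outcomes

stack = torch.zeros(8, 4)                         # depth 8, symbol dimension 4
value = torch.randn(4, requires_grad=True)
logits = torch.randn(3, requires_grad=True)       # in practice, produced by a network
stack = soft_stack_step(stack, value, torch.softmax(logits, dim=0))
stack.sum().backward()                            # gradients reach both value and the action logits
```

Because every step is a weighted sum rather than a discrete branch, the whole structure can sit inside a model trained end-to-end.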
How It Works
The core innovation is representing symbolic programs and data structures as tensors within a neural network architecture. Lambda calculus operations, like beta reduction, are implemented through differentiable tensor manipulations, enabling end-to-end training with gradient descent. This approach allows AI models to potentially "compile" and execute programs directly within their latent space, moving beyond pattern matching to principled reasoning.
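For intuition, here is a hedged, self-contained sketch of how substitution — the workhorse of beta reduction — can be made differentiable. This is an illustration of the general technique, not the project's actual implementation: symbols are embedded as vectors, and "replace x with N" becomes a similarity-weighted blend over term positions instead of a discrete rewrite.

```python
import torch

def soft_substitute(term, var, arg):
    """term: (n_positions, dim) stacked symbol vectors; var: (dim,) variable; arg: (dim,) replacement."""
    sim = torch.cosine_similarity(term, var.unsqueeze(0), dim=1)  # how "var-like" each position is
    w = torch.sigmoid(10.0 * sim)                                 # sharp but differentiable match gate
    return (1 - w).unsqueeze(1) * term + w.unsqueeze(1) * arg.unsqueeze(0)

dim = 16
x, y = torch.randn(dim), torch.randn(dim)             # symbol embeddings for two variables
body = torch.stack([x, y, x])                         # the term "x y x" as stacked symbols
reduced = soft_substitute(body, x, torch.randn(dim))  # roughly: beta-reduce (λx. x y x) N
```

Since the match gate is soft, substitution is approximate rather than exact; sharpening it (here, the factor of 10) trades gradient flow against symbolic precision.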
Quick Start & Requirements
Install from PyPI:

```bash
pip install neurallambda
```

or clone the repo for research-grade access. The demo/d01_neurallambda.py script provides an example of differentiable lambda calculus execution.
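To run that demo from a checkout (the checkout directory name is assumed here to match the package; substitute the project's actual repository URL):

```bash
git clone <repo-url>   # the project's GitHub repository
cd neurallambda
python demo/d01_neurallambda.py
```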
Maintenance & Community
The project is actively under development by a single primary contributor. Collaboration is encouraged via GitHub Issues and direct contact. A roadmap is available in TODO.md.
Licensing & Compatibility
The project is currently unlicensed, with all rights retained by the author. The author intends to open-source the work but has not yet specified a license. Because all rights are retained, there is no explicit grant for commercial use or closed-source linking at this time.
Limitations & Caveats
The library is research-grade and under active construction (V2 is in progress). Variable scoping in the lambda calculus implementation is not yet robust, leading to potential issues with recursion. The current implementation of substitution may be "heavy-handed" compared to theoretical ideals.
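To see why fragile scoping matters, here is a tiny Python model of naive substitution — hypothetical helper code, not the library's — exhibiting the classic variable-capture bug that breaks recursion schemes built on top of it:

```python
def subst(term, var, val):
    """Naive substitution over terms: ('var', name) | ('lam', name, body) | ('app', f, a)."""
    kind = term[0]
    if kind == 'var':
        return val if term[1] == var else term
    if kind == 'lam':
        return ('lam', term[1], subst(term[2], var, val))  # BUG: ignores shadowing / capture
    return ('app', subst(term[1], var, val), subst(term[2], var, val))

# (λy. x)[x := y] should leave y free, but naive substitution captures it:
print(subst(('lam', 'y', ('var', 'x')), 'x', ('var', 'y')))
# -> ('lam', 'y', ('var', 'y')), i.e. the identity function, which is wrong
```

A capture-avoiding version must rename bound variables before substituting, which is exactly the kind of bookkeeping that is awkward to express in purely tensorial form.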