Deep learning framework for symbolic optimization tasks
Top 49.9% on SourcePulse
Deep Symbolic Optimization (DSO) is a deep learning framework for symbolic optimization tasks, primarily symbolic regression and discovering symbolic policies for reinforcement learning. It targets researchers and practitioners seeking to recover mathematical expressions from data or learn control policies, offering state-of-the-art performance on symbolic regression benchmarks and a flexible architecture for custom tasks.
How It Works
DSO employs a deep reinforcement learning approach, framing symbolic optimization as a sequential decision-making problem. It uses a policy gradient method to learn a sequence of operations and operands that form a symbolic expression. The framework supports various policy optimizers, including risk-seeking policy gradients and Proximal Policy Optimization (PPO), allowing for efficient exploration and optimization of the symbolic search space.
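The risk-seeking objective can be illustrated with a toy sketch: sample token sequences from a categorical policy, keep only the top (1 - epsilon) quantile by reward, and apply a REINFORCE-style update weighted by how far each elite reward exceeds the quantile threshold. The vocabulary and reward below are hypothetical stand-ins, not DSO's actual operator library or fitting reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policy": a categorical distribution over 4 tokens, parameterized by logits.
n_tokens, seq_len = 4, 5
logits = np.zeros(n_tokens)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(seqs):
    # Hypothetical reward: fraction of positions holding token 2.
    return (seqs == 2).mean(axis=1)

eps = 0.95   # risk quantile: only the top 5% of samples drive learning
lr = 0.5
for _ in range(50):
    probs = softmax(logits)
    seqs = rng.choice(n_tokens, size=(256, seq_len), p=probs)
    R = reward(seqs)
    r_eps = np.quantile(R, eps)   # reward threshold R_eps
    elite = R > r_eps             # risk-seeking: ignore everything below R_eps
    grad = np.zeros_like(logits)
    for s, r in zip(seqs[elite], R[elite]):
        for tok in s:
            # REINFORCE term, weighted by (R - R_eps) instead of a mean baseline
            grad += (r - r_eps) * (np.eye(n_tokens)[tok] - probs)
    logits += lr * grad / max(elite.sum(), 1)
```

After training, the policy concentrates on the reward-maximizing token. DSO applies the same idea with an autoregressive sequence model over expression tokens rather than a single categorical distribution.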
Quick Start & Requirements
pip install -e ./dso            # core package (editable install)
pip install -e ./dso[control]   # optional: control/RL task extras
pip install -e ./dso[all]       # optional: all extras

python -m dso.run path/to/config.json   # run from the command line

# Or use the Python API:
from dso import DeepSymbolicOptimizer
model = DeepSymbolicOptimizer("path/to/config.json")
model.train()
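Both entry points are driven by a JSON config file. A minimal illustrative example follows; the key names mirror DSO's documented regression-task layout but should be treated as assumptions and checked against the example configs shipped in the repository.

```python
import json

# Minimal illustrative config for a symbolic regression run.
# Key names and values below are assumptions; verify them against the
# example config files bundled with the dso package.
config = {
    "task": {
        "task_type": "regression",
        "dataset": "Nguyen-1",   # assumed name of a built-in benchmark
        "function_set": ["add", "sub", "mul", "div", "sin", "cos"],
    },
    "training": {
        "n_samples": 200000,   # total expressions to sample
        "batch_size": 500,
        "epsilon": 0.05,       # risk factor for the risk-seeking objective
    },
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```

The resulting `config.json` can then be passed to `python -m dso.run` or to the `DeepSymbolicOptimizer` constructor.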
Maintenance & Community
The project is associated with multiple publications at ICLR, ICML, and NeurIPS, indicating active research. No community channels (Discord/Slack) or maintainer contact information is listed in the README.
Licensing & Compatibility
The README does not explicitly state a license. The project carries the Lawrence Livermore National Laboratory release number LLNL-CODE-647188, which identifies the release but does not by itself specify license terms. Commercial use or linking with closed-source projects would require reviewing the actual license file in the repository.
Limitations & Caveats
The README mentions that constant optimization significantly increases runtime. For multi-dimensional control tasks, a pre-trained "anchor" policy is required, adding a dependency to the workflow. The project appears to be research-oriented, and long-term maintenance or support is not detailed.
Last updated 8 months ago; marked Inactive.