deep-symbolic-optimization by dso-org

Deep learning framework for symbolic optimization tasks

Created 5 years ago
677 stars

Top 49.9% on SourcePulse

View on GitHub
Project Summary

Deep Symbolic Optimization (DSO) is a deep learning framework for symbolic optimization tasks, primarily symbolic regression and discovering symbolic policies for reinforcement learning. It targets researchers and practitioners seeking to recover mathematical expressions from data or learn control policies, offering state-of-the-art performance on symbolic regression benchmarks and a flexible architecture for custom tasks.

How It Works

DSO employs a deep reinforcement learning approach, framing symbolic optimization as a sequential decision-making problem. It uses a policy gradient method to learn a sequence of operations and operands that form a symbolic expression. The framework supports various policy optimizers, including risk-seeking policy gradients and Proximal Policy Optimization (PPO), allowing for efficient exploration and optimization of the symbolic search space.
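
As a rough illustration of the risk-seeking idea described above (a minimal sketch, not DSO's actual implementation): in each batch, only expressions whose reward exceeds the empirical (1 - epsilon) quantile contribute to the policy update, biasing the search toward best-case rather than average-case performance.

    import numpy as np

    def risk_seeking_weights(rewards, epsilon=0.05):
        # Sketch of the risk-seeking weighting step: samples below the batch's
        # (1 - epsilon) reward quantile get zero weight; the rest are weighted
        # by how far they exceed that threshold (a baseline subtraction).
        rewards = np.asarray(rewards, dtype=float)
        threshold = np.quantile(rewards, 1.0 - epsilon)
        mask = rewards >= threshold
        return (rewards - threshold) * mask

    # With epsilon=0.25, only the best ~25% of sampled expressions get nonzero weight.
    print(risk_seeking_weights([0.1, 0.4, 0.7, 0.95], epsilon=0.25))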

Quick Start & Requirements

  • Installation:
    • Core package: pip install -e ./dso
    • With control task: pip install -e ./dso[control]
    • All tasks: pip install -e ./dso[all]
  • Prerequisites: Python 3.6+ on Unix/OSX. NumPy is required, with a specific CFLAGS export needed on Mac.
  • Running:
    • CLI: python -m dso.run path/to/config.json
    • Python: from dso import DeepSymbolicOptimizer; model = DeepSymbolicOptimizer("path/to/config.json"); model.train()
  • Configuration: Runs are configured via JSON files specifying the task, hyperparameters, and function set (a hedged example follows this list).
  • Resources: Symbolic regression runs typically finish in minutes; enabling constant optimization can extend runtime to hours. Control tasks may require pre-trained anchor policies.
  • Docs: No documentation site is linked in the README; usage is documented in the README itself and the bundled example configs.
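
For orientation, the sketch below builds a minimal regression config and writes it to JSON. The key names ("task", "task_type", "dataset", "function_set", "training") mirror the example configs shipped with the repository as best recalled and may differ by version; treat them as assumptions and check the repo's example regression config for the authoritative schema.

    import json

    # Hypothetical minimal config; verify key names against the repo's examples.
    config = {
        "task": {
            "task_type": "regression",
            "dataset": "path/to/data.csv",  # CSV of input columns followed by the target
            "function_set": ["add", "sub", "mul", "div", "sin", "cos", "exp", "log"],
        },
        "training": {
            "n_samples": 200000,  # total expressions sampled during the search
            "batch_size": 500,
        },
    }

    with open("my_config.json", "w") as f:
        json.dump(config, f, indent=2)

    # Run from the CLI: python -m dso.run my_config.json
    # Or in Python:
    # from dso import DeepSymbolicOptimizer
    # DeepSymbolicOptimizer("my_config.json").train()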

Highlighted Details

  • State-of-the-art results on SRBench symbolic regression benchmarks, winning the 2022 SRBench competition.
  • Supports learning symbolic policies for reinforcement learning environments, including multi-dimensional action spaces.
  • Introduces the "LINEAR" (poly) token for efficient polynomial optimization within symbolic regression.
  • Offers an sklearn-like interface for easy integration with custom datasets (usage sketch after this list).
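
The usage sketch below illustrates the sklearn-like interface on a toy dataset. The class name DeepSymbolicRegressor and the fit/predict/program_ usage follow the README's example as best recalled; verify against the installed version before relying on them.

    import numpy as np
    from dso import DeepSymbolicRegressor

    # Toy problem: recover y = sin(x0) + x1^2 from sampled data.
    X = np.random.random((100, 2))
    y = np.sin(X[:, 0]) + X[:, 1] ** 2

    model = DeepSymbolicRegressor()  # a config JSON path can reportedly be passed here
    model.fit(X, y)                  # runs the symbolic search

    print(model.program_.pretty())   # best expression found (attribute per README example)
    print(model.predict(2 * X))      # evaluate the expression on new inputs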

Maintenance & Community

The project is associated with multiple publications from ICLR, ICML, and NeurIPS, indicating active research. No specific community channels (Discord/Slack) or active maintainer information are provided in the README.

Licensing & Compatibility

The README does not state the license terms directly; it cites the Lawrence Livermore National Laboratory release number LLNL-CODE-647188, which identifies the software release rather than a specific license. Review the repository's LICENSE file before commercial use or linking with closed-source projects.

Limitations & Caveats

The README mentions that constant optimization significantly increases runtime. For multi-dimensional control tasks, a pre-trained "anchor" policy is required, adding a dependency to the workflow. The project appears to be research-oriented, and long-term maintenance or support is not detailed.

Health Check

  • Last commit: 8 months ago
  • Responsiveness: Inactive
  • Pull requests (30d): 0
  • Issues (30d): 0
  • Star history: 3 stars in the last 30 days
