Python sandbox for decision making in dynamics
WhyNot is a Python sandbox for developing, testing, and benchmarking causal inference and sequential decision-making tools within dynamic environments. It targets researchers, practitioners, and students seeking a flexible platform to explore and evaluate methods in complex, simulated scenarios. The primary benefit is providing a unified interface to diverse simulators and experimental designs, facilitating robust evaluation and pedagogical use.
How It Works
WhyNot integrates causal inference and reinforcement learning techniques with dynamic simulators. It offers a structured approach to generating datasets for causal analysis, including randomized control trials, confounding, and mediation scenarios. For sequential decision-making, it leverages the OpenAI Gym interface, enabling experimentation with reinforcement learning agents in simulated environments. This dual focus allows for comprehensive evaluation of methods across different problem types.
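The sequential decision-making side follows the standard OpenAI Gym interaction pattern. The sketch below illustrates that reset()/step() loop with a small mock environment; the environment's dynamics and names here are invented for illustration and are not part of WhyNot's API.

```python
import random

class MockEnv:
    """Minimal stand-in for an OpenAI Gym environment, showing the
    reset()/step() interface that Gym-compatible simulators expose.
    The dynamics are invented purely for illustration."""

    def __init__(self, horizon=10, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        return 0.0

    def step(self, action):
        """Advance one step; return Gym's (obs, reward, done, info)."""
        self.t += 1
        reward = 1.0 if action == 1 else 0.0  # toy reward: acting pays off
        obs = self.rng.random()
        done = self.t >= self.horizon
        return obs, reward, done, {}

def run_episode(env, policy):
    """Standard Gym-style rollout loop, usable with any env
    that implements reset() and step()."""
    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = policy(obs)
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward

total = run_episode(MockEnv(), policy=lambda obs: 1)
```

Because WhyNot's environments implement this same interface, any agent written against a loop like `run_episode` can be pointed at a WhyNot simulator without modification.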
Quick Start & Requirements
pip install whynot
The companion package whynot_estimators provides additional causal estimators and requires R.
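To make the causal workflow concrete, the self-contained sketch below generates a toy randomized experiment and estimates the average treatment effect by difference in means. The (covariates, treatments, outcomes) triple mirrors the shape of the datasets WhyNot's experiments produce, but the function names and data-generating process here are illustrative assumptions, not WhyNot's API.

```python
import random
from statistics import mean

def run_synthetic_rct(num_samples=500, true_effect=2.0, seed=0):
    """Generate a toy randomized control trial. The returned
    (covariates, treatments, outcomes) triple mimics the shape of a
    causal-inference dataset; the dynamics are invented for illustration."""
    rng = random.Random(seed)
    covariates, treatments, outcomes = [], [], []
    for _ in range(num_samples):
        x = rng.gauss(0, 1)            # baseline covariate
        t = int(rng.random() < 0.5)    # randomized treatment assignment
        y = x + true_effect * t + rng.gauss(0, 0.1)  # observed outcome
        covariates.append(x)
        treatments.append(t)
        outcomes.append(y)
    return covariates, treatments, outcomes

def difference_in_means(treatments, outcomes):
    """Estimate the average treatment effect; unbiased under randomization."""
    treated = [y for t, y in zip(treatments, outcomes) if t == 1]
    control = [y for t, y in zip(treatments, outcomes) if t == 0]
    return mean(treated) - mean(control)

_, treatments, outcomes = run_synthetic_rct()
estimate = difference_in_means(treatments, outcomes)
# estimate should land near the true effect used to generate the data
```

In WhyNot proper, the experiment designs (RCT, confounding, mediation) play the role of `run_synthetic_rct`, producing datasets against which estimators like the one above can be benchmarked with a known ground-truth effect.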
Maintenance & Community
The project is under active development, with contributions welcomed via GitHub issues for bugs and feature requests. The README does not specify community channels like Discord or Slack.
Licensing & Compatibility
MIT License. This permissive license allows for commercial use and integration into closed-source projects.
Limitations & Caveats
The simulators are designed to pose technical challenges for causal inference and dynamic decision-making methods, not to serve as faithful models of the real world, and should not be used for direct policy design. The project is still under active development, so breaking changes and incomplete features are possible.