Deep reinforcement learning framework for fast prototyping
Top 71.5% on sourcepulse
Huskarl is a modular deep reinforcement learning framework designed for rapid prototyping and efficient parallelization of environment interactions. It targets researchers and practitioners working with computationally intensive environments, offering a streamlined way to implement and test various RL algorithms.
How It Works
Built on TensorFlow 2.0 and tf.keras, Huskarl emphasizes modularity for easy algorithm and agent customization. Its core advantage lies in its ability to parallelize environment dynamics computation across multiple CPU cores, significantly accelerating on-policy learning algorithms like A2C and PPO, especially in demanding simulations.
Quick Start & Requirements
Install from PyPI:
pip install huskarl
Or install from source:
git clone https://github.com/danaugrs/huskarl.git && cd huskarl && pip install -e .
The bundled examples additionally require matplotlib and gym.
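The gym dependency reflects the environment interface the framework drives. As an illustration (this toy environment is not part of Huskarl), the gym-style contract is a reset() method returning an initial observation and a step() method returning an (observation, reward, done, info) tuple:

```python
# Illustrative stub: the minimal gym-style interface (reset/step) that
# frameworks like Huskarl step in parallel. Not part of Huskarl itself.
import random

class CoinFlipEnv:
    """Toy environment: reward of +1 or -1 per step, episode ends after 10 steps."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # initial observation (trivial here)

    def step(self, action):
        self.t += 1
        reward = 1.0 if self.rng.random() < 0.5 else -1.0
        done = self.t >= 10
        return 0, reward, done, {}  # obs, reward, done, info

env = CoinFlipEnv(seed=42)
obs = env.reset()
total = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(action=0)
    total += reward
print(env.t)  # 10 steps per episode
```

Any environment exposing this interface (as gym environments do) can be swapped into the interaction loop that the framework parallelizes.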
Highlighted Details
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The framework is still under active development, with several key algorithms like PPO not yet implemented. The README does not specify a license, which could impact commercial adoption. Community support channels and detailed documentation beyond the README are not readily apparent.