deer by VinF

Deep reinforcement learning framework

created 9 years ago
488 stars

Top 64.0% on sourcepulse

Project Summary

DeeR is a modular Python library for deep reinforcement learning, designed to be easily adapted to different needs. It provides implementations of algorithms such as Double Q-learning, prioritized experience replay, DDPG, and CRAR, and targets RL researchers and practitioners.

How It Works

DeeR is built with a focus on modularity, allowing users to easily swap components and experiment with different RL algorithms and configurations. It leverages Keras for its deep learning backend, providing a flexible foundation for building and training RL agents.
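
As a rough illustration of that modularity, the sketch below plugs a hypothetical custom environment into the framework. It assumes the Environment interface described in the documentation (deer.base_classes.Environment); the class TwoStateEnv is invented for this example, and method names, shape conventions, and module paths may differ between DeeR versions.

    # Minimal sketch of a custom environment, assuming the Environment interface
    # documented for deer.base_classes.Environment. Method names and shape
    # conventions are assumptions and may differ between DeeR versions.
    import numpy as np
    from deer.base_classes import Environment

    class TwoStateEnv(Environment):
        """Hypothetical toy environment: two states, two actions."""

        def __init__(self):
            self._state = 0

        def reset(self, mode=-1):
            # Begin every episode in state 0 and return the initial observation.
            self._state = 0
            return [np.array([self._state])]

        def act(self, action):
            # Action 1 moves to state 1 (reward 1); action 0 stays in state 0 (reward 0).
            self._state = 1 if action == 1 else 0
            return float(self._state)

        def inputDimensions(self):
            # One input stream; the tuple encodes history length / observation
            # shape (convention assumed from the docs).
            return [(1,)]

        def nActions(self):
            return 2

        def observe(self):
            return [np.array([self._state])]

        def inTerminalState(self):
            # This toy task never terminates; epochs are cut off by the agent.
            return False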

Quick Start & Requirements

  • Install via pip: pip install deer (a sketch of a first training run follows this list)
  • Dependencies: NumPy >= 1.10, joblib >= 0.9, and Keras >= 2.6; Matplotlib >= 1.1.1 for the examples; ALE >= 0.4 for the Atari environments.
  • Tested with Python 3.6.
  • Full documentation: http://deer.readthedocs.io/
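
After installing, a first training run might look like the sketch below, which loosely follows the toy example in the documentation. The module paths (deer.agent, deer.learning_algos.q_net_keras, deer.experiment.base_controllers) and argument names are assumptions that may differ between releases; Toy_env stands in for the toy environment shipped with the examples.

    # Sketch of a first training run, loosely following the toy example in the
    # DeeR documentation. Module paths and argument names may differ by version.
    import numpy as np
    import deer.experiment.base_controllers as bc
    from deer.agent import NeuralAgent
    from deer.learning_algos.q_net_keras import MyQNetwork
    from Toy_env import MyEnv as Toy_env  # toy environment shipped with the examples

    rng = np.random.RandomState(123456)

    env = Toy_env(rng)                                        # environment
    qnetwork = MyQNetwork(environment=env, random_state=rng)  # learning algorithm
    agent = NeuralAgent(env, qnetwork, random_state=rng)      # agent tying them together

    # Controllers customize the experiment: logging, training schedule, exploration.
    agent.attach(bc.VerboseController())
    agent.attach(bc.TrainerController())
    agent.attach(bc.EpsilonController())

    # Train for 100 epochs of 1000 steps each.
    agent.run(n_epochs=100, epoch_length=1000)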

Highlighted Details

  • Implements Double Q-learning, prioritized Experience Replay, DDPG, and CRAR.
  • Includes several example environments, some built on OpenAI Gym.
  • Modular design for easy adaptation and experimentation.

Maintenance & Community

No specific information on contributors, sponsorships, or community channels is provided in the README.

Licensing & Compatibility

  • License: MIT
  • Compatible with commercial use and closed-source linking.

Limitations & Caveats

The project is tested with Python 3.6, and compatibility with newer Python versions is not explicitly stated. The README also displays a Python 2.7 badge, suggesting that some of the stated support information may be outdated.

Health Check

  • Last commit: 1 month ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 4 stars in the last 90 days
