Simple DQN: Deep Q-learning agent for replicating DeepMind's Atari results
This repository provides a simplified implementation of Deep Q-Learning (DQN) for replicating DeepMind's Atari results. It targets researchers and developers interested in understanding or extending DQN, offering a fast, Python-based agent with OpenAI Gym integration and efficient convolution via the Neon library.
How It Works
The agent uses the Arcade Learning Environment (ALE) and OpenAI Gym to interact with Atari games. It relies on Neon for fast GPU-accelerated convolutions and samples minibatches from replay memory via NumPy array slicing, which avoids per-sample copies and data-format conversions for speed.
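The slice-based replay memory can be sketched roughly as follows. This is a minimal illustration, not the repository's actual API: the class and method names are hypothetical, and buffer wrap-around and episode boundaries are ignored for brevity.

```python
import numpy as np

class ReplayMemory:
    """Sketch: frames stored once in a uint8 array; states are slices."""

    def __init__(self, capacity, history_len=4, frame_shape=(84, 84)):
        self.capacity = capacity
        self.history_len = history_len
        # One contiguous uint8 array of raw frames; stacked states are
        # built with slices, so no per-sample conversion is needed.
        self.frames = np.zeros((capacity,) + frame_shape, dtype=np.uint8)
        self.actions = np.zeros(capacity, dtype=np.uint8)
        self.rewards = np.zeros(capacity, dtype=np.float32)
        self.count = 0

    def add(self, frame, action, reward):
        i = self.count % self.capacity
        self.frames[i] = frame
        self.actions[i] = action
        self.rewards[i] = reward
        self.count += 1

    def _state(self, index):
        # Stack the last `history_len` frames ending at `index` via a slice.
        return self.frames[index - self.history_len + 1:index + 1]

    def sample(self, batch_size):
        high = min(self.count, self.capacity) - 1
        idx = np.random.randint(self.history_len, high, size=batch_size)
        prestates = np.stack([self._state(i - 1) for i in idx])
        poststates = np.stack([self._state(i) for i in idx])
        return prestates, self.actions[idx], self.rewards[idx], poststates
```

Because each state is a view into the shared frame array, a transition with four stacked 84x84 frames costs only one frame of new storage per step.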
Quick Start & Requirements
Install OpenAI Gym with Atari support:

pip install gym[atari]
Maintenance & Community
The project is presented as a personal learning resource and is noted by its author as outdated, with a suggestion to explore more current codebases instead. No active community channels or recent updates are indicated.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility with commercial or closed-source projects is not specified.
Limitations & Caveats
The repository is explicitly described as outdated and is not recommended for production use. Its implementation differs from DeepMind's original paper in known ways, such as the RMSProp formulation and the use of frame averaging instead of max-pooling over consecutive frames. Installation instructions are written primarily for Ubuntu.
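The RMSProp difference mentioned above can be made concrete. DeepMind's Atari paper used the centered variant from Graves (2013), which also tracks a running mean of the gradient and divides by an estimate of the gradient's standard deviation, whereas standard RMSProp divides by the root mean square alone. The sketch below is illustrative, with hypothetical function names and hyperparameters chosen only for demonstration:

```python
import numpy as np

def rmsprop_step(theta, grad, n, lr=2.5e-4, decay=0.95, eps=0.01):
    # Standard RMSProp: scale by the root of the running mean of grad^2.
    n = decay * n + (1 - decay) * grad ** 2
    theta = theta - lr * grad / np.sqrt(n + eps)
    return theta, n

def graves_rmsprop_step(theta, grad, n, g, lr=2.5e-4, decay=0.95, eps=0.01):
    # Graves (2013) variant: also track the running mean of the gradient g,
    # and normalize by sqrt(n - g^2 + eps), a variance-like estimate.
    n = decay * n + (1 - decay) * grad ** 2
    g = decay * g + (1 - decay) * grad
    theta = theta - lr * grad / np.sqrt(n - g ** 2 + eps)
    return theta, n, g
```

The two updates agree when the running mean g is near zero but diverge for gradients with a persistent bias, which is one reason results can differ from the original paper even with matched hyperparameters.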