PyTorch implementations of DQN variants
This repository provides PyTorch implementations of Vanilla DQN, Double DQN, and Dueling DQN, targeting researchers and practitioners in deep reinforcement learning. It offers a clear path to experimenting with these foundational algorithms for Atari games, enabling comparative analysis of their performance and stability.
How It Works
The project implements three core Deep Q-Network (DQN) variants. Vanilla DQN uses a convolutional neural network with an experience replay buffer and a separate target network to stabilize training. Double DQN addresses value overestimation by decoupling action selection and value estimation in the Q-target calculation. Dueling DQN further refines the architecture by splitting the network into streams for state-value and advantage estimation, combining them for the final Q-value.
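To make the two refinements concrete, here is a minimal sketch of the dueling architecture and the double-DQN target rule. This is illustrative code written against a modern PyTorch API, not the repository's implementation (which targets PyTorch 0.2.0); the class name, layer sizes, and function signature are assumptions.

```python
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling head on a small Atari-style CNN (layer sizes are assumptions)."""

    def __init__(self, num_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Two streams: a scalar state value and a per-action advantage.
        self.value = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, num_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps the value/advantage split identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online_net, target_net, reward, next_state, done, gamma=0.99):
    """Double DQN target: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        best_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
        return reward + gamma * (1.0 - done) * next_q
```

By contrast, the vanilla DQN target takes the max over the target network's own Q-values (target_net(next_state).max(dim=1)), and it is this self-evaluation that causes the overestimation the double variant's decoupling mitigates.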
Quick Start & Requirements
Install via pip (requires PyTorch 0.2.0). Train with python main.py train --task-id $TASK_ID; pass --gpu for GPU usage, and use the --double-dqn and --dueling-dqn flags to enable those variants.
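A typical invocation combining the flags above ($TASK_ID is left to the user, as in the README; whether the two variant flags can be combined is not documented):

```bash
# Train Double DQN on GPU (flags as listed in the README)
python main.py train --task-id $TASK_ID --gpu --double-dqn
```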
Highlighted Details
Maintenance & Community
No specific information on contributors, sponsorships, or community channels is provided in the README.
Licensing & Compatibility
The README does not explicitly state a license. Compatibility with modern PyTorch versions (beyond 0.2.0) or commercial use is not specified.
Limitations & Caveats
The project relies on outdated dependencies, specifically PyTorch 0.2.0 and Python 2.7, which may pose significant challenges for setup and compatibility with current hardware and software ecosystems. The lack of explicit licensing information also raises concerns for commercial adoption.
Last commit: 7 years ago. Status: Inactive.