StarCraft II reinforcement learning examples
This repository provides example implementations for deep reinforcement learning agents within the StarCraft II environment using the PySC2 library. It targets researchers and developers looking to experiment with RL algorithms in a complex, real-time strategy game setting, offering a practical starting point for building and training agents.
How It Works
The project leverages the PySC2 API to interact with StarCraft II, enabling agents to observe game states and execute actions. It integrates with the OpenAI Baselines library for common RL algorithm implementations, specifically demonstrating Deep Q-Networks (DQN) and Advantage Actor-Critic (A2C). This combination allows for structured experimentation with established RL techniques in a challenging domain.
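As a rough illustration of that observe-then-act loop, the sketch below mirrors PySC2's bundled random agent; the class name is illustrative, and the repo's DQN/A2C agents replace the random choice with a policy learned via Baselines.

import numpy as np
from pysc2.agents import base_agent
from pysc2.lib import actions

class ScriptedRandomAgent(base_agent.BaseAgent):
    """Chooses a random action that StarCraft II currently allows."""
    def step(self, obs):
        super().step(obs)
        # Observe: the feature-layer observation lists the legal action ids.
        function_id = np.random.choice(obs.observation["available_actions"])
        # Act: sample valid arguments (e.g. screen coordinates) for that function.
        args = [[np.random.randint(0, size) for size in arg.sizes]
                for arg in self.action_spec.functions[function_id].args]
        return actions.FunctionCall(function_id, args)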
Quick Start & Requirements
Install PySC2 and OpenAI Baselines from GitHub:
pip install git+https://github.com/deepmind/pysc2
pip install git+https://github.com/openai/baselines
A local StarCraft II installation is required, with the mini-game maps placed under StarcraftII/Maps/. Training is then launched with, for example:
python train_mineral_shards.py --algorithm=a2c
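The training scripts configure the environment themselves; as a hypothetical smoke test that the maps are installed correctly, something like the following launches the CollectMineralShards mini-game with PySC2's bundled random agent (constructor arguments follow recent PySC2 releases, and the screen sizes and step_mul are illustrative, not the repo's exact settings).

from absl import app
from pysc2.agents import random_agent
from pysc2.env import run_loop, sc2_env
from pysc2.lib import features

def main(unused_argv):
    # Illustrative settings: 64x64 feature layers, one action every 8 game steps.
    with sc2_env.SC2Env(
            map_name="CollectMineralShards",
            players=[sc2_env.Agent(sc2_env.Race.terran)],
            agent_interface_format=features.AgentInterfaceFormat(
                feature_dimensions=features.Dimensions(screen=64, minimap=64)),
            step_mul=8,
            visualize=False) as env:
        # Play a single episode with PySC2's bundled random agent.
        run_loop.run_loop([random_agent.RandomAgent()], env, max_episodes=1)

if __name__ == "__main__":
    app.run(main)  # absl parses PySC2's flags before the environment starts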
Highlighted Details
Training and evaluation focus on the CollectMineralShards mini-game.
Maintenance & Community
This repository appears to be a personal project with no explicit mention of active maintenance, contributors, or community channels in the provided README.
Licensing & Compatibility
The README does not explicitly state a license for this repository. It relies on dependencies with their own licenses: PySC2 (Apache 2.0), Baselines (MIT), s2client-proto (Blizzard, likely proprietary), and TensorFlow 1.3 (Apache 2.0). Compatibility for commercial use would depend on the licensing of StarCraft II itself and any unstated license for this code.
Limitations & Caveats
The project is built on TensorFlow 1.3, which is significantly outdated and no longer supported. The README does not mention support for newer RL algorithms or PySC2 features, and the absence of maintenance or community signals suggests the project may be abandoned.