AI research project: mini-scale AlphaStar reproduction for StarCraft II
This project provides a mini-scale reproduction of DeepMind's AlphaStar AI for StarCraft II, targeting researchers and practitioners interested in large-scale reinforcement learning and game AI. It offers a simplified, from-scratch implementation focused on core technologies, enabling training on common server hardware with reduced dependencies.
How It Works
The mini-AlphaStar (mini-AS) is built from scratch, adhering to the "Occam's Razor Principle" by omitting non-essential features for speed and performance. It relies primarily on PyTorch, minimizing external dependencies. The architecture supports supervised learning (SL) from expert replays and subsequent reinforcement learning (RL) for agent improvement, with components for data transformation, SL training, RL evaluation, and multi-agent league training.
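The SL stage described above can be sketched as a standard imitation-learning loop in PyTorch: a policy network maps replay observations to action logits and is trained with cross-entropy against the expert's actions. All class names, tensor shapes, and dimensions below are illustrative assumptions, not mini-AlphaStar's actual interfaces.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Toy policy: observation features -> action logits (shapes are assumed)."""
    def __init__(self, obs_dim=64, num_actions=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, obs):
        return self.net(obs)  # unnormalized action logits

def sl_step(model, optimizer, obs, expert_actions):
    """One supervised step: push the policy toward the expert's actions."""
    logits = model(obs)
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = PolicyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs = torch.randn(32, 64)            # a batch of replay observations (random stand-in)
acts = torch.randint(0, 10, (32,))   # expert action labels extracted from replays
loss = sl_step(model, opt, obs, acts)
```

In the full pipeline, the SL-trained weights then initialize the agents that the RL and league-training stages improve further.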
Quick Start & Requirements
Use conda to create an environment with PyTorch (e.g., conda create -n th_1.5 python=3.7 pytorch=1.5 -c pytorch), activate it (conda activate th_1.5), then install the remaining dependencies with pip install -r requirements.txt. To run the pipeline, execute python run.py and uncomment the specific sections for replay transformation, SL training, RL evaluation, or RL league training. Detailed guides are available for replay downloading and usage.
Maintenance & Community
The project is a research initiative, with the latest release being v_1.09. Community interaction is encouraged via GitHub issues.
Licensing & Compatibility
The repository is available under an unspecified license. The README does not explicitly state licensing terms or restrictions for commercial use or closed-source linking.
Limitations & Caveats
Training mini-AlphaStar is resource-intensive and not recommended for laptops. While simplified, it still requires significant computational power and disk space, and multi-GPU training can be unstable. The project is a research reproduction, not an official DeepMind product.