mini-AlphaStar by liuruoze

AI research project: mini-scale AlphaStar reproduction for StarCraft II

created 4 years ago
337 stars

Top 82.8% on sourcepulse

Project Summary

This project provides a mini-scale reproduction of DeepMind's AlphaStar AI for StarCraft II, targeting researchers and practitioners interested in large-scale reinforcement learning and game AI. It offers a simplified, from-scratch implementation focused on core technologies, enabling training on common server hardware with reduced dependencies.

How It Works

mini-AlphaStar (mini-AS) is built from scratch and follows an "Occam's Razor" principle: anything not essential to the core method, including speed and performance optimizations, is omitted. It relies primarily on PyTorch, keeping external dependencies to a minimum. The architecture supports supervised learning (SL) from expert replays followed by reinforcement learning (RL) for agent improvement, with components for data transformation, SL training, RL evaluation, and multi-agent league training.
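The staged workflow described above can be sketched as a minimal, stdlib-only Python outline. All function and stage names here are illustrative stand-ins, not the project's actual API; a lookup table stands in for the real PyTorch model, and the RL stages are stubbed because they require the StarCraft II client.

```python
def run_pipeline(replays):
    """Illustrative stand-in for the staged mini-AS workflow (not the real API)."""
    stages = []

    # 1. Data transformation: raw expert replays -> (observation, action) pairs.
    pairs = [(obs, act) for obs, act in replays]
    stages.append("transform")

    # 2. Supervised learning: imitate expert actions. A dict lookup stands in
    #    for fitting the real PyTorch model to the pairs.
    policy = {obs: act for obs, act in pairs}
    stages.append("sl")

    # 3-4. RL improvement and league training would refine `policy` through
    #      self-play against the game environment; stubbed here since they
    #      need the StarCraft II client.
    stages.append("rl")
    stages.append("league")

    return policy, stages

policy, stages = run_pipeline([("scout", "move"), ("enemy", "attack")])
```

The point of the staging is that each phase consumes the previous phase's output: the SL policy bootstraps RL, and league training pits RL agents against each other.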

Quick Start & Requirements

  • Install: Use conda to create an environment with PyTorch (e.g., conda create -n th_1.5 python=3.7 pytorch=1.5 -c pytorch), activate it (conda activate th_1.5), then pip install -r requirements.txt.
  • Prerequisites: PyTorch >= 1.5, StarCraft II game client, and expert replays.
  • Usage: Run python run.py and uncomment specific sections for replay transformation, SL, RL evaluation, or RL training. Detailed guides are available for replay downloading and usage.
  • Resources: Training requires significant resources, recommended on a commercial server with a GPU, ample memory, and disk space.
  • Links: code location, result video location
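Taken together, the steps above amount to roughly the following shell session. The environment name and version pins come from the bullets; adjust them to your setup.

```shell
# Create and activate a PyTorch 1.5 environment, then install project deps.
conda create -n th_1.5 python=3.7 pytorch=1.5 -c pytorch
conda activate th_1.5
pip install -r requirements.txt

# Run the entry script; edit run.py first to uncomment the stage you want
# (replay transformation, SL, RL evaluation, or RL training).
python run.py
```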

Highlighted Details

  • Achieved a win rate of 0.85 against the level-1 bot.
  • Provides pre-trained SL and RL models for easier reproduction.
  • Supports single-GPU training and multi-GPU training (the latter less recommended due to instability).
  • Offers different data processing pipelines (pickle vs. tensor) impacting speed and disk usage.
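The pickle-vs-tensor trade-off can be illustrated with a stdlib-only sketch. The feature layout below is invented for illustration (it is not the project's actual replay format): pickling a nested Python structure is flexible but bulkier on disk than packing the same values into a flat float32 buffer, which here stands in for a saved preprocessed tensor.

```python
import array
import os
import pickle
import tempfile

# Hypothetical per-frame features: 100 frames x 8 float values each.
frames = [[float(i + j) for j in range(8)] for i in range(100)]

# Pipeline A ("pickle"): serialize the nested Python structure directly.
with tempfile.NamedTemporaryFile(delete=False, suffix=".pkl") as f:
    pickle.dump(frames, f)
    pkl_path = f.name

# Pipeline B ("tensor"): flatten to a packed float32 buffer, a stand-in for
# saving a preprocessed torch tensor. 100 * 8 values * 4 bytes = 3200 bytes.
flat = array.array("f", (v for row in frames for v in row))
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    flat.tofile(f)
    bin_path = f.name

pkl_size = os.path.getsize(pkl_path)
bin_size = os.path.getsize(bin_path)

# Loading the packed form is also cheaper: one fromfile call, no unpickling.
loaded = array.array("f")
with open(bin_path, "rb") as f:
    loaded.fromfile(f, len(flat))

os.unlink(pkl_path)
os.unlink(bin_path)
```

The general shape of the trade-off is the same as described in the bullet: the pickle path keeps arbitrary Python structure at the cost of disk space and load time, while the packed/tensor path is compact and fast but fixes the layout in advance.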

Maintenance & Community

The project is a research initiative, with the latest release being v_1.09. Community interaction is encouraged via GitHub issues.

Licensing & Compatibility

The repository is available under an unspecified license. The README does not explicitly state licensing terms or restrictions for commercial use or closed-source linking.

Limitations & Caveats

Training mini-AlphaStar is resource-intensive and not recommended for laptops. While simplified, it still requires significant computational power and disk space, and multi-GPU training can be unstable. The project is a research reproduction, not an official DeepMind product.

Health Check

  • Last commit: 2 years ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 7 stars in the last 90 days
