mini-AlphaStar by liuruoze

AI research project: mini-scale AlphaStar reproduction for StarCraft II

Created 4 years ago
341 stars

Top 81.0% on SourcePulse

Project Summary

This project provides a mini-scale reproduction of DeepMind's AlphaStar AI for StarCraft II, targeting researchers and practitioners interested in large-scale reinforcement learning and game AI. It offers a simplified, from-scratch implementation focused on core technologies, enabling training on common server hardware with reduced dependencies.

How It Works

mini-AlphaStar (mini-AS) is built from scratch, following an "Occam's Razor" principle: features that are not essential to the core method are omitted to keep the codebase small and trainable. It relies primarily on PyTorch, with few external dependencies. Training proceeds in two stages: supervised learning (SL) on expert replays, then reinforcement learning (RL) to improve the agent further, with components for replay-data transformation, SL training, RL evaluation, and multi-agent league training.
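
The two stages map onto standard PyTorch workflows. Below is a minimal, hypothetical sketch of the SL stage only; ReplayDataset, AgentNet, and supervised_train are placeholder names rather than mini-AS's actual classes, and the real network follows AlphaStar's much larger encoder/LSTM/action-head design.

```python
# Hypothetical sketch of the SL stage described above (placeholder names,
# not mini-AS's actual modules): imitate expert actions extracted from
# replays with a cross-entropy loss, then hand the weights to the RL stage.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class ReplayDataset(Dataset):
    """Toy dataset yielding (observation, expert_action) pairs from replays."""

    def __init__(self, samples):
        self.samples = samples  # list of (obs_tensor, action_id) tuples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]


class AgentNet(nn.Module):
    """Toy stand-in for the policy network; the real architecture uses
    AlphaStar-style entity/spatial encoders, an LSTM core, and action heads."""

    def __init__(self, obs_dim=128, num_actions=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU())
        self.action_head = nn.Linear(256, num_actions)

    def forward(self, obs):
        return self.action_head(self.body(obs))


def supervised_train(dataset, epochs=3):
    """Behavior cloning on expert replays; RL later fine-tunes these weights."""
    model = AgentNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for obs, action in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(obs), action)
            loss.backward()
            optimizer.step()
    return model
```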

Quick Start & Requirements

  • Install: Use conda to create an environment with PyTorch (e.g., conda create -n th_1.5 python=3.7 pytorch=1.5 -c pytorch), activate it (conda activate th_1.5), then pip install -r requirements.txt.
  • Prerequisites: PyTorch >= 1.5, StarCraft II game client, and expert replays.
  • Usage: Edit run.py to uncomment the section you need (replay transformation, SL training, RL evaluation, or RL training), then run python run.py; a hypothetical sketch of this flow appears after this list. Detailed guides are available for replay downloading and usage.
  • Resources: Training requires significant resources, recommended on a commercial server with a GPU, ample memory, and disk space.
  • Links: code location, result video location.
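
For orientation, here is a hypothetical sketch of run.py's "uncomment the stage you need" flow; the module and function names in the comments are placeholders, not the project's actual imports.

```python
# Hypothetical layout of run.py (placeholder names): each stage is left
# commented out, and you enable exactly the one you want before running
# `python run.py`.

def main():
    # Stage 1: transform downloaded expert replays into training data.
    # from mini_as import transform_replay_data
    # transform_replay_data.run()

    # Stage 2: supervised learning (SL) on the transformed replays.
    # from mini_as import sl_train
    # sl_train.run()

    # Stage 3: evaluate the SL (or RL) model against built-in bots.
    # from mini_as import rl_eval
    # rl_eval.run()

    # Stage 4: reinforcement learning / league training on top of SL weights.
    # from mini_as import rl_train
    # rl_train.run()
    pass


if __name__ == "__main__":
    main()
```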

Highlighted Details

  • Achieved a win rate of 0.85 against the level-1 bot.
  • Provides pre-trained SL and RL models for easier reproduction.
  • Supports single-GPU training; multi-GPU training is available but less recommended because it can be unstable.
  • Offers two data processing pipelines (pickle vs. tensor) that trade off processing speed against disk usage (a rough illustration follows this list).
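
As a rough illustration of that last point (not mini-AS code): pickled feature dictionaries tend to be smaller on disk but must be converted to tensors at load time, whereas pre-saved tensors load faster at the cost of more disk space.

```python
# Illustrative only: two ways to persist replay features, trading disk
# space for loading speed.
import pickle
import torch


def save_as_pickle(features, path):
    # Compact on disk; tensor conversion is deferred to training time.
    with open(path, "wb") as f:
        pickle.dump(features, f)


def save_as_tensor(features, path):
    # Larger files, but batches can be fed to the model with no conversion.
    tensors = {key: torch.as_tensor(value) for key, value in features.items()}
    torch.save(tensors, path)
```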

Maintenance & Community

The project is a research initiative, with the latest release being v_1.09. Community interaction is encouraged via GitHub issues.

Licensing & Compatibility

The repository is available under an unspecified license. The README does not explicitly state licensing terms or restrictions for commercial use or closed-source linking.

Limitations & Caveats

Training mini-AlphaStar is resource-intensive and not recommended for laptops. While simplified, it still requires significant computational power and disk space, and multi-GPU training can be unstable. The project is a research reproduction, not an official DeepMind product.

Health Check

  • Last Commit: 2 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 4 stars in the last 30 days

Explore Similar Projects

agent-lightning by microsoft

  • Train any AI agent with rollouts and feedback
  • 2k stars · top 6.0%
  • Created 3 months ago; updated 2 days ago
  • Starred by Eric Zhu (coauthor of AutoGen; Research Scientist at Microsoft Research) and Will Brown (Research Lead at Prime Intellect)

street-fighter-ai by linyiLYi

  • AI agent for Street Fighter II using deep reinforcement learning
  • 6k stars · top 0.1%
  • Created 2 years ago; updated 1 year ago
  • Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Mckay Wrigley (founder of Takeoff AI), and 1 more