DI-star by opendilab

AI platform for StarCraft II, enabling large-scale distributed training

created 4 years ago
1,286 stars

Top 31.6% on sourcepulse

Project Summary

DI-star is a distributed AI training platform for StarCraft II, designed to enable the development of Grandmaster-level agents. It provides tools for supervised learning (SL) from human replays and reinforcement learning (RL) through self-play or against bots, targeting AI researchers and competitive gaming enthusiasts.

How It Works

DI-star employs a modular, distributed architecture for both SL and RL training. For SL, it decodes human replays and trains models, with options for distributed training across multiple processes (coordinator, learner, replay actor). RL training leverages SL models as a starting point and supports self-play or agent-vs-bot configurations, also with distributed capabilities. This approach allows for large-scale data processing and efficient training of complex game AI.
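
The process split described above maps onto per-role launch commands. A minimal sketch follows, assuming hypothetical distar.bin.sl_train and distar.bin.rl_train entry points with --type/--task flags; only distar.bin.download_model and distar.bin.play are confirmed by this summary, so treat the rest as illustrative and check the repository docs for the actual options.

    # Sketch of per-role launch commands for the architecture described above.
    # The sl_train/rl_train module names and the --type/--task flags are
    # assumptions, not verified CLI options.

    # Supervised learning from decoded human replays (single process)
    python -m distar.bin.sl_train --data /path/to/decoded_replays

    # Distributed SL: one coordinator plus learner and replay-actor processes
    python -m distar.bin.sl_train --type coordinator
    python -m distar.bin.sl_train --type learner
    python -m distar.bin.sl_train --type replay_actor

    # RL on top of the SL model, e.g. via self-play
    python -m distar.bin.rl_train --task selfplay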

Quick Start & Requirements

  • Install StarCraft II: Download the retail version.
  • Set Environment Variable: Point SC2PATH to the StarCraft II installation directory.
  • Install DI-star: git clone https://github.com/opendilab/DI-star.git && cd DI-star && pip install -e . (an end-to-end sketch combining these steps follows this list).
  • Install PyTorch: Version 1.7.1 with CUDA is recommended.
  • Prerequisites: Python 3.6-3.8 and the StarCraft II client (version 4.10.0 recommended for the pre-trained models). A GPU with CUDA is necessary for acceptable real-time agent performance.
  • Pre-trained Models: Download via python -m distar.bin.download_model --name rl_model.
  • Play Demo: python -m distar.bin.play (human vs agent), python -m distar.bin.play --game_type agent_vs_agent (agent vs agent), python -m distar.bin.play --game_type agent_vs_bot (agent vs bot).
  • Docs: https://github.com/opendilab/DI-star
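
Putting the steps above together, a minimal end-to-end setup looks like this (the SC2PATH value is a placeholder for your own installation directory):

    # Point DI-star at the StarCraft II install (placeholder path; adjust to yours)
    export SC2PATH=/path/to/StarCraftII

    # Install DI-star from source
    git clone https://github.com/opendilab/DI-star.git
    cd DI-star
    pip install -e .

    # Fetch the pre-trained RL model and start a human-vs-agent game
    python -m distar.bin.download_model --name rl_model
    python -m distar.bin.play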

Highlighted Details

  • Trained Grandmaster-level agents.
  • Supports both Supervised Learning (SL) and Reinforcement Learning (RL).
  • Offers pre-trained agents (Zerg vs Zerg) for SL (Diamond level) and RL (Master/Grandmaster level).
  • Includes specific RL models like Abathur (Mutalisk), Brakk (Lingbane rush), Dehaka (Roach Ravager), and Zagara (Roach rush).
  • Provides guidance for training with limited resources.
  • Agents have competed against professional players (e.g., Harstem).

Maintenance & Community

  • README changelog notes updates from January-April 2022; see the Health Check below for current activity.
  • Community links provided for Slack and Discord.
  • Citation available in LaTeX format.

Licensing & Compatibility

  • Released under the Apache 2.0 license.
  • Permissive license suitable for commercial use and integration with closed-source projects.

Limitations & Caveats

The project's pre-trained models and testing are tied to specific StarCraft II client versions (4.10.0 recommended), and newer game patches may affect agent performance. A GPU with CUDA is strongly recommended, especially for real-time inference.

Health Check

  • Last commit: 4 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 19 stars in the last 90 days
