AI platform for StarCraft II, enabling large-scale distributed training
DI-star is a distributed AI training platform for StarCraft II, designed to enable the development of Grandmaster-level agents. It provides tools for supervised learning (SL) from human replays and reinforcement learning (RL) through self-play or against bots, targeting AI researchers and competitive gaming enthusiasts.
How It Works
DI-star employs a modular, distributed architecture for both SL and RL training. For SL, it decodes human replays and trains models, with options for distributed training across multiple processes (coordinator, learner, replay actor). RL training leverages SL models as a starting point and supports self-play or agent-vs-bot configurations, also with distributed capabilities. This approach allows for large-scale data processing and efficient training of complex game AI.
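The actor/learner split described above can be sketched with a toy loop. This is not DI-star's actual API; the `Actor`, `Learner`, and `ReplayBuffer` classes below are illustrative stand-ins for the coordinator/learner/replay-actor roles, assuming a simple "actors roll out, learner updates, new weights are broadcast" cycle.

```python
# Toy sketch of a distributed RL actor/learner cycle (illustrative only;
# class and method names here are NOT DI-star's real interfaces).
import random

class ReplayBuffer:
    """Collects trajectories produced by actors for the learner."""
    def __init__(self):
        self.trajectories = []

    def push(self, traj):
        self.trajectories.append(traj)

    def sample(self, n):
        return random.sample(self.trajectories, min(n, len(self.trajectories)))

class Actor:
    """Plays games with the current policy and emits trajectories."""
    def __init__(self, policy_version=0):
        self.policy_version = policy_version

    def rollout(self):
        # In DI-star this would be a full StarCraft II episode.
        return {"policy_version": self.policy_version, "reward": random.random()}

class Learner:
    """Consumes trajectory batches and produces new policy versions."""
    def __init__(self):
        self.version = 0

    def update(self, batch):
        self.version += 1  # stand-in for a gradient step
        return self.version

buffer = ReplayBuffer()
actors = [Actor() for _ in range(4)]
learner = Learner()

for step in range(3):
    for actor in actors:
        buffer.push(actor.rollout())
    new_version = learner.update(buffer.sample(8))
    for actor in actors:  # coordinator role: broadcast updated weights
        actor.policy_version = new_version

print(learner.version)  # 3
```

In the real system the actors, learner, and coordinator run as separate processes communicating over the network, which is what lets data generation scale independently of the update step.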
Quick Start & Requirements
Set the SC2PATH environment variable to point to the StarCraft II installation directory, then install from source:

git clone https://github.com/opendilab/DI-star.git && cd DI-star && pip install -e .

Download the pre-trained RL model:

python -m distar.bin.download_model --name rl_model

Run a game:

python -m distar.bin.play (human vs agent)
python -m distar.bin.play --game_type agent_vs_agent (agent vs agent)
python -m distar.bin.play --game_type agent_vs_bot (agent vs bot)
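Since the play commands depend on SC2PATH being set correctly, a small pre-flight check can save a confusing failure. The helper below is illustrative and not part of DI-star itself; the example path is an assumption, not a guaranteed default.

```python
# Illustrative pre-flight check for the SC2PATH environment variable
# (not part of DI-star's codebase).
import os

def check_sc2path(env=None):
    """Return a short status message describing the SC2PATH setting."""
    env = os.environ if env is None else env
    path = env.get("SC2PATH")
    if not path:
        return "SC2PATH is not set"
    return f"SC2PATH={path}"

# Example path shown for illustration; actual install locations vary by OS.
print(check_sc2path({"SC2PATH": "/Applications/StarCraft II"}))
```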
Limitations & Caveats
The project's pre-trained models and testing are tied to specific StarCraft II client versions (4.10.0 recommended), and newer patches may impact performance. GPU and CUDA are strongly recommended for optimal performance, especially for real-time inference.