Multi-agent pathfinding via distributed RL/IL
PRIMAL provides a distributed reinforcement and imitation learning framework for training multiple agents to collaboratively plan paths in 2D grid environments. It is designed for researchers and practitioners in multi-agent pathfinding (MAPF) and reinforcement learning, offering a solution for complex coordination challenges.
How It Works
The core of PRIMAL utilizes a distributed Actor-Critic approach, specifically A3C (Asynchronous Advantage Actor-Critic), adapted for multi-agent scenarios. It trains agents to learn policies that minimize collisions and reach goals efficiently. The framework includes a custom OpenAI Gym environment for MAPF, allowing for flexible scenario definition and agent interaction.
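The summary does not include PRIMAL's actual training code. The sketch below uses hypothetical names and a stub environment to illustrate the pattern described above: a single shared actor-critic policy queried independently by every agent in a gym-style MAPF loop, followed by the n-step advantage computation that A3C-style updates rely on.

```python
# Illustrative sketch only: hypothetical names and a stub environment,
# not PRIMAL's actual classes or training code.
import numpy as np

NUM_AGENTS = 4
NUM_ACTIONS = 5   # e.g. stay, up, down, left, right
GAMMA = 0.95
rng = np.random.default_rng(0)

def policy_and_value(observation):
    """Stand-in for the shared actor-critic network: returns an action
    distribution and a scalar value estimate for one agent's observation."""
    logits = rng.normal(size=NUM_ACTIONS)
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs, float(rng.normal())

def stub_env_reset():
    return [rng.normal(size=8) for _ in range(NUM_AGENTS)]

def stub_env_step(actions):
    """Dummy transition: new observations, a small per-step penalty, not done."""
    observations = [rng.normal(size=8) for _ in range(NUM_AGENTS)]
    return observations, np.full(NUM_AGENTS, -0.1), False

def rollout(horizon=8):
    """Every agent queries the same shared policy (decentralized execution)."""
    observations = stub_env_reset()
    rewards, values = [], []
    for _ in range(horizon):
        actions, step_values = [], []
        for obs in observations:
            probs, value = policy_and_value(obs)
            actions.append(rng.choice(NUM_ACTIONS, p=probs))
            step_values.append(value)
        observations, step_rewards, done = stub_env_step(actions)
        rewards.append(step_rewards)
        values.append(step_values)
        if done:
            break
    return np.array(rewards), np.array(values)

def advantages(rewards, values):
    """Discounted returns minus the critic's estimates (the A3C advantage)."""
    returns = np.zeros_like(rewards)
    running = np.zeros(rewards.shape[1])
    for t in reversed(range(len(rewards))):
        running = rewards[t] + GAMMA * running
        returns[t] = running
    return returns - values

r, v = rollout()
print("per-step, per-agent advantages:\n", advantages(r, v))
```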
Quick Start & Requirements
Install the dependencies:
pip install -r requirements.txt
The cpp_mstar module requires compilation within the od_mstar3 directory.
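The summary does not spell out the build step. Assuming od_mstar3 ships a standard setup.py for its C++/Cython extension (not confirmed here), compilation usually amounts to something like the following; check the upstream README for the exact command:
cd od_mstar3
python3 setup.py build_ext --inplace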
Highlighted Details
- Custom OpenAI Gym environment for MAPF (mapf_gym.py).
- Map generation (mapgenerator.py) and systematic testing (primal_testing.py); an illustrative map-generation sketch follows below.
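mapgenerator.py's actual interface is not shown in this summary; the snippet below is a purely illustrative stand-in for the kind of random grid-world instance generation such a script performs (random obstacles plus distinct start and goal cells per agent):

```python
# Purely illustrative: not the actual mapgenerator.py code.
# Builds a small random grid with obstacles and per-agent start/goal cells.
import numpy as np

def random_mapf_instance(size=10, num_agents=4, obstacle_prob=0.2, seed=0):
    rng = np.random.default_rng(seed)
    grid = (rng.random((size, size)) < obstacle_prob).astype(int)  # 1 = obstacle
    free_cells = list(zip(*np.nonzero(grid == 0)))
    chosen = rng.choice(len(free_cells), size=2 * num_agents, replace=False)
    starts = [free_cells[i] for i in chosen[:num_agents]]
    goals = [free_cells[i] for i in chosen[num_agents:]]
    return grid, starts, goals

grid, starts, goals = random_mapf_instance()
print(grid, starts, goals, sep="\n")
```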
Maintenance & Community
The last commit was made about a year ago and the repository is currently inactive.
Licensing & Compatibility
No explicit license is stated; the codebase targets Python 3.4 and TensorFlow 1.3.1.
Limitations & Caveats
The project relies on old versions of Python (3.4) and TensorFlow (1.3.1), which may pose compatibility challenges with modern systems and libraries. The absence of an explicit license could be a concern for commercial adoption.