Gym environment for single/multi-agent RL research
This repository provides a simple, lightweight OpenAI Gym environment for Slime Volleyball, targeting reinforcement learning researchers and practitioners. It facilitates testing single-agent, multi-agent, and self-play RL algorithms with both state-space and pixel-based observations, offering a fast iteration loop for developing and evaluating agents.
How It Works
The environment simulates a 2D physics-based volleyball game where agents aim to score by grounding the ball on the opponent's side. It offers state-space observations (12-dimensional vector) and pixel observations (84x168x3 RGB frames), mimicking Atari environments. The core advantage lies in its minimal dependencies (Gym, NumPy) and efficient implementation, allowing rapid experimentation and straightforward integration with standard RL algorithms and multi-agent setups.
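The two-agent pattern below sketches how self-play evaluation can look. It assumes the "SlimeVolley-v0" environment id, the pre-0.20 Gym step API, and the upstream convention of passing a second action to step() and reading the opponent's view from info["otherObs"]; these names should be verified against the installed version.

```python
# Two-agent episode sketch. Assumes the pre-0.20 Gym API, the
# "SlimeVolley-v0" id, and the upstream convention that env.step()
# accepts an optional second action and reports the other side's view
# in info["otherObs"]; verify these against the installed version.
import gym
import slimevolleygym  # importing registers the SlimeVolley environments

env = gym.make("SlimeVolley-v0")
obs_right = env.reset()   # 12-dim state for the controlled (right) agent
obs_left = obs_right      # both sides see the same initial observation
done = False
while not done:
    action_right = env.action_space.sample()  # MultiBinary(3) buttons
    action_left = env.action_space.sample()
    obs_right, reward, done, info = env.step(action_right, action_left)
    obs_left = info["otherObs"]  # observation from the left agent's view
env.close()
```

Replacing the random samples with two trained policies turns the same loop into a head-to-head evaluation.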
Quick Start & Requirements
Install from PyPI:

```
pip install slimevolleygym
```

Or install from source:

```
git clone https://github.com/hardmaru/slimevolleygym.git && cd slimevolleygym && pip install -e .
```

Verify the installation with the bundled test scripts:

```
python test_state.py
python test_pixel.py
```

The only required dependencies are gym and numpy. The pixel version may additionally require pyglet (tested with versions <0.15.7).
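As a quick sanity check after installation, the sketch below instantiates both observation variants; the ids "SlimeVolley-v0" and "SlimeVolleyPixel-v0" are taken from the upstream project and should be confirmed against the installed package.

```python
# Post-install sanity check; assumes the "SlimeVolley-v0" (state) and
# "SlimeVolleyPixel-v0" (pixel) environment ids from the upstream project.
import gym
import slimevolleygym  # importing registers the environments with Gym

state_env = gym.make("SlimeVolley-v0")
print(state_env.observation_space)   # 12-dimensional Box
print(state_env.action_space)        # MultiBinary(3)

pixel_env = gym.make("SlimeVolleyPixel-v0")
print(pixel_env.observation_space)   # Box with shape (84, 168, 3)
```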
Highlighted Details
Supports both MultiBinary(3) and Discrete(6) action spaces for compatibility with different RL frameworks.
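Because both spaces encode the same three buttons (forward, backward, jump), a Discrete(6) policy can drive the MultiBinary(3) environment through a small wrapper. The sketch below is illustrative; the six-way mapping is an assumed enumeration, not the project's documented table.

```python
# Illustrative wrapper mapping a Discrete(6) choice onto MultiBinary(3)
# buttons (forward, backward, jump). The six combinations below are an
# assumed enumeration, not the project's documented table.
import gym
import numpy as np

DISCRETE_TO_BINARY = [
    [0, 0, 0],  # no-op
    [1, 0, 0],  # forward
    [0, 1, 0],  # backward
    [0, 0, 1],  # jump
    [1, 0, 1],  # forward + jump
    [0, 1, 1],  # backward + jump
]

class DiscreteActionWrapper(gym.ActionWrapper):
    """Expose a Discrete(6) interface over a MultiBinary(3) environment."""

    def __init__(self, env):
        super().__init__(env)
        self.action_space = gym.spaces.Discrete(len(DISCRETE_TO_BINARY))

    def action(self, act):
        # Translate the discrete choice into the underlying button array.
        return np.array(DISCRETE_TO_BINARY[act])
```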
Maintenance & Community
The last commit was about a year ago and the repository is marked inactive.

Licensing & Compatibility
Compatibility constraints are detailed under Limitations & Caveats below.
Limitations & Caveats
The environment's API was developed for older Gym versions (0.19.0 or earlier), and compatibility with newer versions is not guaranteed due to potential API-breaking changes. The pyglet dependency for pixel rendering was tested only with versions prior to 0.15.7.
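If newer releases break imports or rendering, pinning to the versions named above is one workaround; the pin below is an illustration derived from those constraints, not an officially documented requirement set.

```
pip install "gym<=0.19.0" "pyglet<0.15.7" slimevolleygym
```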