RL environment for NetHack game research
The NetHack Learning Environment (NLE) provides a standardized Reinforcement Learning (RL) interface to the classic roguelike game NetHack. It aims to establish NetHack as a challenging benchmark for decision-making and machine learning research, offering a rich, procedurally generated environment that is more computationally efficient than other complex RL testbeds.
How It Works
NLE is a direct fork of NetHack 3.6.6, integrating a language wrapper that translates game observations into text-based representations, allowing for natural language processing approaches. Actions can also be provided as text, which are then converted to NetHack's discrete action space. The environment is designed to be compatible with the OpenAI Gym API, facilitating integration with existing RL frameworks.
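A minimal sketch of the text interface described above is given below. The package and class names (`nle_language_wrapper`, `NLELanguageWrapper`) and the "wait" action string are assumptions based on the companion language-wrapper project, not code from this README:

```python
import gym
import nle  # registers the NetHack* environments with Gym
from nle_language_wrapper import NLELanguageWrapper  # assumed companion package

# Wrap a standard NLE task so observations are rendered as text
# and actions can be submitted as text strings.
env = NLELanguageWrapper(gym.make("NetHackScore-v0"))
obs = env.reset()                           # textual description of the initial state
obs, reward, done, info = env.step("wait")  # text action, mapped to a discrete NetHack action
```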
Quick Start & Requirements
Installation from PyPI is available via `pip install nle`. Installation also requires CMake and system dependencies such as `build-essential`, `autoconf`, `libtool`, `pkg-config`, `python3-dev`, `python3-pip`, `python3-numpy`, `git`, `flex`, `bison`, and `libbz2-dev`. Specific CMake installation instructions are provided for Ubuntu and macOS. For a development install: `git clone --recursive https://github.com/facebookresearch/nle && pip install -e ".[dev]" && pre-commit install`.
Basic usage follows the Gym API: `import gym; import nle; env = gym.make("NetHackScore-v0"); env.reset(); env.step(1)`.
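For context, a typical episode loop under the Gym API might look like the sketch below; the random-action policy is an illustrative placeholder, not part of the README:

```python
import gym
import nle  # registers the NetHack* environments

env = gym.make("NetHackScore-v0")
obs = env.reset()  # dict of NumPy arrays (e.g. map glyphs, status line, in-game message)

done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()          # placeholder random policy
    obs, reward, done, info = env.step(action)  # standard Gym step tuple
    episode_return += reward

print("Episode return:", episode_return)
env.close()
```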
Maintenance & Community
The project is actively maintained by the original authors from Facebook AI Research and associated institutions. Contributions are welcome, and links to relevant papers and interviews are provided.
Licensing & Compatibility
The project is licensed under the MIT License, permitting commercial use and integration with closed-source projects.
Limitations & Caveats
The project README indicates that the primary repository has moved to github.com/heiner/nle. While the core NetHack environment is complex, the RL interface abstracts much of this complexity. Specific performance characteristics or limitations of the RL agent implementations are not detailed in the README.