Hierarchical Actor-Critic (HAC) algorithm implementation
This repository implements the Hierarchical Actor-Critic (HAC) algorithm, which accelerates reinforcement learning by letting agents decompose complex tasks into sequences of simpler sub-goals. It targets researchers and practitioners in reinforcement learning who want to improve sample efficiency and learning speed on complex robotic manipulation and control tasks.
How It Works
HAC employs a hierarchical reinforcement learning approach in which multiple actor-critic agents are stacked in a hierarchy. Higher-level agents set goals for lower-level agents, which then learn policies to achieve those goals. This structured decomposition allows more efficient exploration and faster learning than traditional flat RL methods. The implementation features bounded Q-values for improved critic stability and offers the option to disable target networks, a deviation from standard DDPG that has been observed to improve performance.
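The control loop described above can be sketched in miniature. This is an illustrative toy, not the repository's code: the agent names and the integer-line environment are invented for clarity, and learned neural policies are replaced with greedy rules. It shows the two ideas from the paragraph: a high level proposing reachable subgoals for a low level, and Q-values bounded to [-H, 0] (with -1 reward per step, an achievable subgoal costs at most H steps).

```python
# Toy sketch of a two-level HAC-style loop (hypothetical example, not the
# repository's implementation). State is a position on an integer line; the
# low level takes primitive +/-1 steps toward subgoals set by the high level.

H = 5  # low-level horizon: max primitive steps per subgoal attempt


def clip_q(q):
    # HAC-style bounded Q-values: with -1 reward per step, the critic's
    # output for an achievable subgoal lies in [-H, 0], so clip it there.
    return max(-float(H), min(0.0, q))


def low_level_policy(state, subgoal):
    # Stand-in for the learned low-level actor: greedy step toward subgoal.
    return 1 if subgoal > state else -1


def high_level_policy(state, goal):
    # Stand-in for the learned high-level actor: propose a subgoal at most
    # H primitive steps away, in the direction of the final goal.
    delta = max(-H, min(H, goal - state))
    return state + delta


def run_episode(state, goal, max_subgoals=10):
    # Hierarchical rollout: each subgoal gives the low level H attempts.
    steps = 0
    for _ in range(max_subgoals):
        subgoal = high_level_policy(state, goal)
        for _ in range(H):
            if state == subgoal:
                break
            state += low_level_policy(state, subgoal)
            steps += 1
        if state == goal:
            break
    return state, steps
```

For example, reaching goal 12 from state 0 takes three subgoals (5, 10, 12) and 12 primitive steps in total; the flat alternative would have to explore the full 12-step horizon at once.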
Quick Start & Requirements
python3 initialize_HAC.py --retrain
Highlighted Details
Includes a configuration script (design_agent_and_env.py) for setting agent and environment hyperparameters.
Maintenance & Community
Licensing & Compatibility
Limitations & Caveats
The code for the ant domains is described as temporarily added, with full integration planned for the future. The README does not specify a license, which could impact commercial adoption.
Last updated 5 years ago; the project appears inactive.