Meta-learning code for RL experiments
This repository provides the code for the Model-Agnostic Meta-Learning (MAML) paper, focusing on few-shot reinforcement learning experiments. It enables researchers and practitioners to quickly adapt deep learning models to new tasks with minimal data, a key challenge in RL.
How It Works
MAML is a meta-learning algorithm that learns a model's initial parameters such that the model can be rapidly fine-tuned for new tasks. The core idea is to optimize for an initialization that is sensitive to task changes, so that a few gradient steps on data from a new task produce large performance gains; the meta-update differentiates through these inner gradient steps, yielding a second-order meta-gradient. This approach allows for fast adaptation without requiring task-specific architectures.
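To make the two-level optimization concrete, the sketch below shows a minimal MAML update on a toy sine-regression problem. It is illustrative only and written in PyTorch; the repository itself implements MAML for reinforcement learning on top of rllab (TRPO) in TensorFlow, so none of the names below come from this codebase.

```python
# Minimal MAML sketch on toy sine regression (supervised, not this repo's RL setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
inner_lr, n_tasks, loss_fn = 0.01, 4, nn.MSELoss()

def forward(x, params):
    # Functional forward pass so the network can be evaluated with adapted weights.
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1.t() + b1) @ w2.t() + b2

def sample_task(k=10):
    # Each task: regress y = a * sin(x + b) from k support and k query points.
    a, b = torch.rand(1) * 4 + 0.1, torch.rand(1) * 3.14
    xs, xq = torch.rand(k, 1) * 10 - 5, torch.rand(k, 1) * 10 - 5
    return (xs, a * torch.sin(xs + b)), (xq, a * torch.sin(xq + b))

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(n_tasks):
        (xs, ys), (xq, yq) = sample_task()
        params = list(net.parameters())
        # Inner loop: one gradient step on the support set; create_graph=True
        # keeps second-order terms so the meta-gradient can flow through it.
        grads = torch.autograd.grad(loss_fn(forward(xs, params), ys),
                                    params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: evaluate the adapted parameters on the query set and
        # accumulate the meta-gradient into the original parameters.
        (loss_fn(forward(xq, adapted), yq) / n_tasks).backward()
    meta_opt.step()
```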
Quick Start & Requirements
The code is built on the rllab framework. Follow the rllab installation instructions to set up rllab before running the experiments.
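This summary does not list the repository's own launch scripts, so as a quick sanity check that the rllab installation works, the snippet below adapts rllab's standard getting-started TRPO example. The module paths and arguments are assumed from rllab's documentation and are not one of this repository's MAML scripts.

```python
# Sanity check of an rllab install: a short TRPO run on the bundled cartpole
# environment (adapted from rllab's getting-started example, not a MAML script).
from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.box2d.cartpole_env import CartpoleEnv
from rllab.envs.normalized_env import normalize
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

env = normalize(CartpoleEnv())
policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)
algo = TRPO(env=env, policy=policy, baseline=baseline,
            batch_size=4000, max_path_length=100, n_itr=40, discount=0.99)
algo.train()
```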
Highlighted Details
Maintenance & Community
Licensing & Compatibility
The rllab framework, which this code is based on, is compatible with OpenAI Gym. Licensing terms for maml_rl itself are not explicitly stated in the README, but rllab's licensing should be considered.
Limitations & Caveats
The code is noted to be particularly slow; contributions that improve parallelization and speed up graph computation are encouraged.