supervised-reptile by openai

Meta-learning code for Omniglot and Mini-ImageNet image datasets

Created 7 years ago
1,019 stars

Top 36.7% on SourcePulse

Project Summary

This repository provides the implementation for the Reptile meta-learning algorithm, focusing on finding good initializations for few-shot learning tasks. It is intended for researchers and practitioners in meta-learning and few-shot learning who want to reproduce or extend the results from the paper "On First-Order Meta-Learning Algorithms."

How It Works

Reptile is a first-order meta-learning algorithm that operates by repeatedly sampling a task, training on that task for a few steps, and then updating the meta-model's initialization towards the task-specific weights. This process aims to find an initialization that is broadly applicable across a distribution of tasks, enabling rapid adaptation to new, unseen tasks with minimal data.
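The loop described above can be sketched on a toy task family. The sketch below is a minimal illustration, not the repository's actual code: each "task" is a one-dimensional quadratic whose optimum is drawn from a shared distribution (a stand-in for Omniglot/Mini-ImageNet classification tasks), and the `sample_task`, `inner_steps`, `inner_lr`, and `meta_lr` names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Draw a task: here, the optimal weight of a 1-D quadratic loss.
    Task optima cluster around 2.0, so a good initialization sits nearby."""
    return rng.normal(loc=2.0, scale=0.5)

def reptile(meta_iters=1000, inner_steps=5, inner_lr=0.1, meta_lr=0.1):
    w_meta = 0.0                           # the meta-initialization being learned
    for _ in range(meta_iters):
        target = sample_task()             # 1. sample a task
        w = w_meta
        for _ in range(inner_steps):       # 2. train on that task for a few SGD steps
            grad = 2.0 * (w - target)      #    gradient of (w - target)^2
            w -= inner_lr * grad
        w_meta += meta_lr * (w - w_meta)   # 3. move the initialization toward the adapted weights
    return w_meta

print(reptile())  # ends up near 2.0, the center of the task distribution
```

Note that step 3 is not a gradient step on any single task's loss; it interpolates toward the task-adapted weights, which is what makes Reptile first-order and cheap compared to methods that differentiate through the inner loop.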

Quick Start & Requirements

  • Install/Run: Clone the repository and run the provided Python scripts (e.g., run_omniglot.py, run_miniimagenet.py).
  • Prerequisites: Python; the datasets (~5 GB) must be downloaded first using the fetch_data.sh script.
  • Setup Time: The data download takes 10-20 minutes. Training runs can be lengthy; the paper's experiments use up to 100,000 meta-iterations.
  • Links: the paper "On First-Order Meta-Learning Algorithms" (implied by the repository name and description; no direct link in the README).

Highlighted Details

  • Implements Reptile, a first-order meta-learning algorithm.
  • Supports transductive and non-transductive evaluation modes.
  • Includes scripts for reproducing paper results on Omniglot and Mini-ImageNet datasets.
  • Allows comparison of different inner-loop gradient update strategies.

Maintenance & Community

  • Status: Archived (code is provided as-is, no updates expected).
  • No community links (Discord, Slack, etc.) are provided in the README.

Licensing & Compatibility

  • The README does not explicitly state a license.

Limitations & Caveats

The repository is archived and will not receive further updates. The current implementation does not support resuming training from checkpoints.

Health Check

  • Last Commit: 2 years ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 3 stars in the last 30 days
