SFTvsRL by LeslieTrue

Research paper comparing SFT and RL for foundation model post-training

created 6 months ago
287 stars

Top 92.3% on sourcepulse

View on GitHub
Project Summary

This repository provides the official implementation for the paper "SFT Memorizes, RL Generalizes," comparing Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) for post-training foundation models. It targets researchers and engineers working on LLM alignment and generalization, offering tools to reproduce study findings and evaluate API-based models.

How It Works

The project implements two primary post-training paradigms: SFT and RL (specifically PPO). It leverages Llama-3.2-Vision-Instruct as the base model, initializing RL experiments with SFT-tuned checkpoints to ensure baseline instruction-following. The codebase includes custom gym environments for evaluation and specific scripts for training and evaluating both "GeneralPoints" and "V-IRL" tasks, supporting both language-only and vision-language modalities.
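
Below is a minimal sketch of a Gymnasium-style evaluation loop over such an environment. The environment id (GeneralPoints-v0) and the random policy are placeholders for illustration; the repository's own GeneralPoints and V-IRL environments may register different ids and expose a different interface.

    import gymnasium as gym

    def evaluate(env_id: str = "GeneralPoints-v0", episodes: int = 10) -> float:
        # Roll out a few episodes and report the mean return.
        # The env id is hypothetical; substitute the repo's registered environments.
        env = gym.make(env_id)
        total_return = 0.0
        for _ in range(episodes):
            obs, info = env.reset()
            done = False
            while not done:
                action = env.action_space.sample()  # stand-in for the model's policy
                obs, reward, terminated, truncated, info = env.step(action)
                total_return += reward
                done = terminated or truncated
        env.close()
        return total_return / episodes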

Quick Start & Requirements

  • Install: Clone the repo, create a conda env (conda create -n SFTvsRL python==3.13), activate it, run pip install -r requirements.txt, then cd gym && pip install -e . to install the custom gym environments.
  • Prerequisites: Python 3.13, PyTorch 2.5.1+cu124, H800 servers (or equivalent 8x 80GB GPU nodes for training).
  • Checkpoints: SFT-initialized checkpoints can optionally be downloaded with huggingface-cli download (a scripted alternative is sketched after this list).
  • Data: V-IRL requires downloading specific datasets and updating paths in shell scripts.
  • Docs: Paper
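
The optional checkpoint download can also be scripted from Python via huggingface_hub; the repo_id below is a placeholder rather than the actual checkpoint name, so substitute the identifier given in the README.

    from huggingface_hub import snapshot_download

    # Placeholder repo_id; replace with the SFT-initialized checkpoint named in the README.
    local_path = snapshot_download(
        repo_id="<org>/<sft-init-checkpoint>",
        local_dir="checkpoints/sft_init",
    )
    print(f"Checkpoint files downloaded to {local_path}")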

Highlighted Details

  • Comparative study of SFT vs. RL post-training methodologies.
  • Supports both language-only and vision-language tasks.
  • Includes custom gym environments for evaluation.
  • Scripts are compatible with SLURM clusters.

Maintenance & Community

The README acknowledges RL4VLM, Llama-3.2-Vision-Instruct, Llama-3.2-Vision-Finetune, and V-IRL: Grounding Virtual Intelligence in Real Life as upstream projects it builds on. No community links (Discord/Slack) or roadmap are provided.

Licensing & Compatibility

The repository's license is not explicitly stated in the README. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

Reproducing training experiments requires a high-end compute setup (8x 80GB GPUs). The project is based on Llama-3.2-Vision-Instruct, and performance with other models may vary. Some components are still being updated.

Health Check

  • Last commit: 3 months ago
  • Responsiveness: Inactive
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 20 stars in the last 90 days

Explore Similar Projects

Starred by Ross Taylor (Cofounder of General Reasoning; Creator of Papers with Code), Daniel Han (Cofounder of Unsloth), and 4 more.

open-instruct by allenai

Top 0.2% · 3k stars
Training codebase for instruction-following language models
created 2 years ago · updated 14 hours ago
Starred by Chip Huyen (Author of AI Engineering, Designing Machine Learning Systems), Jeff Hammerbacher (Cofounder of Cloudera), and 10 more.

open-r1 by huggingface

Top 0.2% · 25k stars
SDK for reproducing DeepSeek-R1
created 6 months ago · updated 3 days ago