Collection of papers on diffusion model alignment
This repository is a curated collection of research papers focused on the alignment of diffusion models, primarily for text-to-image generation. It serves as a valuable resource for researchers and practitioners interested in making diffusion models adhere to human preferences and instructions. The collection aims to be comprehensive, covering various alignment techniques, benchmarks, and fundamental concepts.
How It Works
The repository organizes papers into categories such as "Alignment Techniques," "Benchmarks and Evaluation," and "Fundamentals." It highlights key methods like Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), and various prompt optimization strategies. The collection also includes foundational statistical and machine learning concepts relevant to preference modeling.
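To give a concrete sense of the preference-based objectives surveyed here, the sketch below writes out the standard DPO loss as introduced for language models (Rafailov et al., 2023); the diffusion-specific variants in this collection, such as Diffusion-DPO, adapt it, roughly by substituting the denoising ELBO for the intractable model likelihood. Here c is the prompt, x^w and x^l are the preferred and rejected samples, pi_ref is a frozen reference model, beta is a temperature, and sigma is the logistic function.

    \mathcal{L}_{\mathrm{DPO}}(\theta)
      = -\,\mathbb{E}_{(c,\,x^w,\,x^l)\sim\mathcal{D}}\left[
          \log \sigma\!\left(
            \beta \log \frac{\pi_\theta(x^w \mid c)}{\pi_{\mathrm{ref}}(x^w \mid c)}
            - \beta \log \frac{\pi_\theta(x^l \mid c)}{\pi_{\mathrm{ref}}(x^l \mid c)}
          \right)
        \right]

Minimizing this loss pushes the model to assign relatively higher likelihood to preferred samples than the reference model does, without training an explicit reward model.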
Quick Start & Requirements
This is a curated list of papers; there is no code to install or run. The only requirement is access to the cited research papers, many of which link directly to PDFs.
Maintenance & Community
The project is maintained by the xie-lab-ml group, and contributions in the form of corrections and suggestions are welcome. The accompanying survey paper lists authors from multiple institutions, reflecting a collaborative research effort.
Licensing & Compatibility
The repository itself is a collection of links to research papers. The licensing of each paper is determined by its publisher or preprint server, so suitability for commercial or proprietary use depends on the license of the individual cited work.
Limitations & Caveats
This repository is a literature survey; it provides no executable code or pre-trained models. Given the rapid pace of research in this area, the collection may not be exhaustive or fully up to date with the most recent publications.