Evaluation resources for visual generation models
This repository is a curated list of academic papers, metrics, and systems for evaluating visual generation models. It serves researchers and practitioners in AI, computer vision, and graphics who need to assess the quality, consistency, and trustworthiness of generated images and videos. The project aims to consolidate the rapidly evolving landscape of evaluation methodologies for generative AI.
How It Works
The repository categorizes evaluation approaches into metrics (e.g., FID, CLIP Score), systems (e.g., benchmarks, evaluation frameworks), and specific application areas (e.g., text-to-image, video generation, image editing). It provides links to papers and, where available, code, offering a comprehensive overview of the state-of-the-art in evaluating generative models. The structure allows users to quickly find relevant evaluation techniques for their specific needs.
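As a concrete illustration, two of the most widely used metrics covered in the list, FID and CLIP Score, can be computed in a few lines with the torchmetrics package. The sketch below is a minimal example, not an endorsement of any one implementation: the package choice, the CLIP checkpoint name, and the random tensors standing in for real and generated images are all assumptions, and a stable FID estimate in practice requires thousands of samples rather than the handful shown here.

```python
# Minimal sketch of two common evaluation metrics via torchmetrics.
# Assumes `pip install torch torchmetrics[multimodal]`; tensor shapes,
# prompts, and the CLIP checkpoint name are illustrative placeholders.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

# FID compares Inception feature statistics of real vs. generated images.
# Inputs are uint8 image batches of shape (N, 3, H, W); lower is better.
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real, real=True)
fid.update(fake, real=False)
print(f"FID: {fid.compute():.2f}")

# CLIP Score measures image-text alignment for text-to-image models;
# higher is better.
clip = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
images = torch.randint(0, 256, (2, 3, 224, 224), dtype=torch.uint8)
prompts = ["a photo of a cat", "a painting of a city at night"]
print(f"CLIP Score: {clip(images, prompts):.2f}")
```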
Quick Start & Requirements
This is a curated list of resources, not a software package. No installation or execution is required.
Maintenance & Community
The repository is updated periodically. Suggestions for additional resources, updates, or fixes can be submitted via Issues or Pull Requests, and the maintainers can also be reached by email.
Licensing & Compatibility
The repository itself is a collection of links and information, not software. The licensing of the linked papers and code varies by their respective sources.
Limitations & Caveats
As a curated list, the repository depends on the availability and accuracy of the linked external resources. Some links may become outdated, and given the rapid pace of research in this area, the list may not always be exhaustive.