T3Bench by THU-LYJ-Lab

Text-to-3D generation benchmark

created 1 year ago
1,099 stars

Top 35.3% on sourcepulse

Project Summary

T3Bench provides a comprehensive benchmark for evaluating text-to-3D generation models. It addresses the need for standardized assessment by offering 300 diverse text prompts across three complexity levels, along with novel automatic metrics for quality and text alignment. This benchmark is designed for researchers and developers in the 3D generation field.

How It Works

T3Bench leverages multi-view images generated from 3D content to assess quality and text alignment. The quality metric combines multi-view text-image scores with regional convolution to detect inconsistencies. The alignment metric uses multi-view captioning and LLM evaluation to measure text-3D consistency, aiming for efficient and reliable evaluation.

Quick Start & Requirements

  • Installation: Requires setting up the ThreeStudio environment first, then installing additional packages via pip install -r requirements.txt.
  • Prerequisites: A GPU is required for generation and evaluation.
  • Usage: Scripts are provided for running generation (run_t3.py), extracting meshes (run_mesh.py), quality evaluation (run_eval_quality.py), and alignment evaluation (run_caption.py, run_eval_alignment.py).
  • Resources: Links to the paper, project page, and acknowledgments of supporting projects are available.
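Assembled from the script names listed above, an end-to-end run might look like the following. The ordering and comments are inferred from the script names; consult the repository README for the arguments each script actually accepts.

```shell
# Hypothetical pipeline sketch; requires the T3Bench repo, a configured
# ThreeStudio environment, and a GPU.
pip install -r requirements.txt   # after setting up ThreeStudio

python run_t3.py                  # generate 3D content from the prompts
python run_mesh.py                # extract meshes from the generated scenes
python run_eval_quality.py        # quality metric (multi-view scoring)
python run_caption.py             # caption multi-view renders
python run_eval_alignment.py      # alignment metric (LLM evaluation)
```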

Highlighted Details

  • First comprehensive benchmark for text-to-3D generation.
  • Includes 300 diverse text prompts across three complexity levels.
  • Proposes two automatic metrics (quality and alignment) correlating with human judgments.
  • Supports evaluation of multiple text-to-3D methods including latent-NeRF, Magic3D, and DreamFusion.

Maintenance & Community

The project acknowledges contributions from open-source works like ThreeStudio, Cap3D, Stable-DreamFusion, ImageReward, and LAVIS. Further community or maintenance details are not explicitly provided in the README.

Licensing & Compatibility

The README does not explicitly state a license. Compatibility for commercial use or closed-source linking is not specified.

Limitations & Caveats

The benchmark depends on the ThreeStudio implementation, which brings its own dependencies and constraints. The metrics are fully automatic; although they are reported to correlate closely with human judgment, they may miss nuances that a human evaluation would capture.

Health Check

  • Last commit: 1 year ago
  • Responsiveness: 1+ week
  • Pull Requests (30d): 0
  • Issues (30d): 0
  • Star History: 3 stars in the last 90 days
