paulengstler / SynCity: 3D world generator from text prompts, no training
SynCity generates complex, navigable 3D worlds from text prompts without requiring any training or optimization. It targets researchers and developers interested in procedural content generation for virtual environments, offering a novel approach to creating detailed scenes by combining pre-trained 2D and 3D generative models.
How It Works
SynCity generates the world tile by tile. Each tile is first produced as a 2D image with the Flux generator, conditioned on adjacent, already-generated tiles so the scene stays artistically coherent. The 2D tile is then lifted into a 3D model by the TRELLIS generator for accurate geometry, and adjacent 3D tiles are seamlessly blended into a complete, navigable environment. Because the method composes existing pre-trained generative models, no custom training is required.
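The loop below is a minimal sketch of this tile-by-tile scheme. All function and parameter names (generate_tile_image, lift_to_3d, blend_tiles, rows, cols) are hypothetical placeholders standing in for the Flux, TRELLIS, and blending stages, not the repository's actual API.

```python
# Minimal sketch of a tile-by-tile generate-then-blend loop.
# Every function here is an illustrative stub, not SynCity's real code.

def generate_tile_image(prompt, context):
    """Placeholder for the 2D step (Flux): render one tile as an image,
    conditioned on already-generated neighbouring tiles."""
    return f"2d-tile({prompt}, neighbours={sorted(context)})"

def lift_to_3d(tile_2d):
    """Placeholder for the 3D step (TRELLIS): turn a 2D tile into geometry."""
    return f"3d({tile_2d})"

def blend_tiles(tiles_3d):
    """Placeholder for blending adjacent 3D tiles into one navigable scene."""
    return list(tiles_3d.values())

def generate_world(prompt, rows=3, cols=3):
    tiles_2d, tiles_3d = {}, {}
    for r in range(rows):
        for c in range(cols):
            # Condition each new tile on its already-generated neighbours
            # so the 2D imagery stays coherent across tile boundaries.
            context = {pos: tiles_2d[pos]
                       for pos in [(r - 1, c), (r, c - 1)]
                       if pos in tiles_2d}
            tiles_2d[(r, c)] = generate_tile_image(prompt, context)
            tiles_3d[(r, c)] = lift_to_3d(tiles_2d[(r, c)])
    return blend_tiles(tiles_3d)

world = generate_world("a snowy mountain village")
```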
Quick Start & Requirements
Install dependencies with source ./setup.sh --new-env --basic --xformers --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast; the CUDA_HOME environment variable must be set. Start the inpainting server (./inpainting_server.sh --run), then run python run_pipeline.py for tile generation and python blend_gaussians.py for blending.
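As a rough illustration, the snippet below chains those steps from Python. It assumes setup.sh has already been sourced in the active environment and that the scripts named above sit in the repository root; it is a sketch, not part of the project.

```python
# Hypothetical driver mirroring the Quick Start commands above.
# Assumes ./setup.sh has already been sourced and CUDA_HOME is set.
import os
import subprocess

if "CUDA_HOME" not in os.environ:
    raise RuntimeError("CUDA_HOME must point to your CUDA installation")

# Launch the inpainting server in the background.
server = subprocess.Popen(["bash", "./inpainting_server.sh", "--run"])
try:
    subprocess.run(["python", "run_pipeline.py"], check=True)     # tile-by-tile generation
    subprocess.run(["python", "blend_gaussians.py"], check=True)  # blend adjacent tiles
finally:
    server.terminate()
```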
Maintenance & Community
The project is associated with the Visual Geometry Group at the University of Oxford. Further community or maintenance details are not explicitly provided in the README.
Licensing & Compatibility
The project's licensing is not specified in the README. Compatibility for commercial use or closed-source linking is not detailed.
Limitations & Caveats
Requires substantial GPU memory (48 GB+). Setup involves multiple external dependencies and model agreements. Prompt engineering is crucial for good results; the README provides specific guidance on writing prompts for better world generation.