Code release for real-time interactive environment creation from video
This project provides a framework for generating interactive, browser-compatible 3D environments from single videos. It targets researchers and developers in computer vision and graphics, offering mesh extraction, collision model generation, and texture baking to produce realistic, playable 3D scenes.
How It Works
The system leverages Neural Radiance Fields (NeRF) for scene representation, extracting meshes and generating priors like depth and normals using external models (e.g., Omnidata). It supports semantic and instance segmentation for detailed mesh separation and texture completion, culminating in the generation of game-ready assets.
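As a rough illustration, these stages form a sequential pipeline. The sketch below is purely illustrative; the script names, paths, and flags are hypothetical placeholders, not the repository's actual entry points.

```bash
# Illustrative pipeline only -- script names and flags are hypothetical placeholders.

# Stage 1: fit the NeRF scene representation to the input video frames.
python train_nerf.py --data data/my_scene --out ckpts/my_scene

# Stage 2: generate monocular priors (depth, normals, instance masks) with
# external models such as Omnidata, filling the normals/, depth/, and
# instance/ directories used later for mesh separation and texture completion.
python extract_priors.py --images data/my_scene/images --out data/my_scene

# Stage 3: extract meshes, bake textures, and export collision models as
# browser-ready game assets.
python bake_assets.py --ckpt ckpts/my_scene --out assets/my_scene
```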
Quick Start & Requirements
Create a conda environment (conda create -n video2game python=3.7), activate it, and install dependencies including PyTorch (tested with CUDA 11.6, Torch 1.12.0), torch-scatter, tiny-cuda-nn, nvdiffrast, and pymesh. Use git clone --recursive for tiny-cuda-nn, and add mmsegmentation if you work with KITTI-360 data. Data preparation places the generated priors in the normals/, depth/, and instance/ directories.
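A condensed sketch of these setup steps follows; the repository URL is a placeholder, and exact package versions should be verified against the project's installation instructions.

```bash
# Setup sketch -- verify versions and URLs against the official README.
git clone --recursive <repository-url>   # --recursive pulls submodules such as tiny-cuda-nn
cd video2game

conda create -n video2game python=3.7
conda activate video2game

# PyTorch build matching the tested CUDA 11.6 / Torch 1.12.0 combination
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 \
    --extra-index-url https://download.pytorch.org/whl/cu116
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.0+cu116.html

# CUDA extensions for hash-grid encoding and differentiable rasterization
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
pip install git+https://github.com/NVlabs/nvdiffrast

# pymesh typically needs to be built from source; see the PyMesh installation docs.
```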
Highlighted Details
Maintenance & Community
The codebase builds on ngp-pl, instant-ngp-pp, nerf2mesh, and sketchbook.
Licensing & Compatibility
Limitations & Caveats
Last updated roughly 1 year ago; the repository appears inactive.