๐Ÿ› ๏ธ How to Use This Demo

  1. Upload a front-facing video or a folder of images of a static scene.
  2. Use the sliders to configure the number of reference views, correspondences, and optimization steps.
  3. Click 🚀 Start Reconstruction to launch the pipeline.
  4. Watch the training visualization and explore the 3D model. ‼️ If you see nothing in the 3D model viewer, try rotating or zooming — sometimes the initial camera orientation is off.

✅ Best for scenes with small camera motion. ❗ For full 360° or large-scale scenes, we recommend the Colab version (see project page).

โณ Processing, please wait...

๐Ÿ“ฅ Upload Input

4 32
5000 20000
200 5000
Select Method

๐Ÿ‹๏ธ Training Visualization

๐ŸŒ Final 3D Model

๐Ÿ“ฆ Output Files


📖 Detailed Overview

If you uploaded a video, it will automatically be subsampled into a small number of evenly spaced frames (default: 16).
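The frame subsampling can be sketched as picking evenly spaced indices across the video. This is a minimal illustration, not the demo's actual code; the function name and logic are assumptions.

```python
def sample_frame_indices(total_frames: int, num_frames: int = 16) -> list[int]:
    """Pick `num_frames` evenly spaced frame indices from a video.

    If the video has fewer frames than requested, keep them all.
    """
    if total_frames <= num_frames:
        return list(range(total_frames))
    step = total_frames / num_frames
    return [int(i * step) for i in range(num_frames)]
```

For example, a 160-frame clip would keep frames 0, 10, 20, …, 150.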

The model pipeline:

  1. 🧠 Runs PyCOLMAP to estimate camera intrinsics & poses (~3–7 seconds for <16 images).
  2. 🔍 Computes 2D-2D correspondences between views. More correspondences generally improve quality.
  3. 🔧 Optimizes a 3D Gaussian Splatting model for the configured number of steps.

🎥 Training Visualization

You will see a visualization of the entire training process in the "Training Video" pane.

🌀 Rendering & 3D Model

  • Render the scene from a circular path of novel views.
  • Render from camera views close to the original inputs.
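The circular path amounts to placing camera centers on a ring around the scene. Below is a minimal sketch of that idea; the function name, ring axis, and parameters are illustrative assumptions, not the demo's actual rendering code.

```python
import numpy as np

def circular_camera_path(n_views: int = 60, radius: float = 2.0, height: float = 0.0) -> np.ndarray:
    """Return an (n_views, 3) array of camera centers on a circle around the z-axis.

    Each camera would then be oriented to look at the scene center
    (the look-at rotation is omitted here for brevity).
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    return np.stack(
        [radius * np.cos(angles), radius * np.sin(angles), np.full(n_views, height)],
        axis=1,
    )
```

Rendering the splat model once per position along this path produces the smooth orbit video shown after training.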

The 3D model is shown in the right viewer. You can explore it interactively:

  • On PC: WASD keys, arrow keys, and mouse clicks
  • On mobile: pan and pinch to zoom

🕒 Note: the 3D viewer takes a few extra seconds (~5s) to display after training ends.


Preloaded models coming soon. (TODO)