EDGS: Eliminating Densification for Efficient Convergence of 3DGS
Project Page
How to Use This Demo
- Upload a front-facing video or a folder of images of a static scene.
- Use the sliders to configure the number of reference views, correspondences, and optimization steps.
- Click Start Reconstruction to launch the pipeline.
- Watch the training visualization and explore the 3D model. If you see nothing in the 3D model viewer, try rotating or zooming; sometimes the initial camera orientation is off.
Best for scenes with small camera motion. For full 360° or large-scale scenes, we recommend the Colab version (see project page).
Processing, please wait...
Upload Input
Slider ranges:
- Number of reference views: 4–32
- Correspondences: 5,000–20,000
- Optimization steps: 200–5,000
Training Visualization
Final 3D Model
Output Files
Detailed Overview
If you uploaded a video, it will be automatically cut into a smaller number of frames (default: 16).
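One common way to cut a video down to a fixed number of frames is to pick evenly spaced frame indices. The sketch below illustrates that idea; the function name and exact rounding are assumptions, not the demo's actual implementation.

```python
def sample_frame_indices(n_frames, k=16):
    """Pick k evenly spaced frame indices from a video with n_frames frames.

    If the video already has k or fewer frames, keep them all.
    """
    if n_frames <= k:
        return list(range(n_frames))
    step = (n_frames - 1) / (k - 1)  # spacing that includes first and last frame
    return [round(i * step) for i in range(k)]

indices = sample_frame_indices(300)  # 16 indices from frame 0 to frame 299
```

The selected indices always include the first and last frame, so the sampled set spans the whole clip.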
The model pipeline:
- Runs PyCOLMAP to estimate camera intrinsics and poses (~3–7 seconds for fewer than 16 images).
- Computes 2D-2D correspondences between views. More correspondences generally improve quality.
- Optimizes a 3D Gaussian Splatting model for the configured number of steps.
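To make the correspondence step concrete, here is a minimal sketch of one standard way to establish 2D-2D correspondences between two views: mutual nearest-neighbour matching of per-keypoint descriptors. This is an illustrative stand-in, not the matcher EDGS actually uses.

```python
def mutual_nearest_matches(desc_a, desc_b):
    """Return index pairs (i, j) where descriptor a[i] and b[j] are each
    other's nearest neighbour under squared Euclidean distance."""
    def nearest(src, dst):
        out = []
        for v in src:
            dists = [sum((x - y) ** 2 for x, y in zip(v, w)) for w in dst]
            out.append(dists.index(min(dists)))
        return out

    a_to_b = nearest(desc_a, desc_b)
    b_to_a = nearest(desc_b, desc_a)
    # Keep only matches that agree in both directions (mutual check).
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

# Toy 2-D descriptors for two views (hypothetical values).
A = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
B = [(0.9, 1.1), (4.8, 5.2), (0.1, -0.1)]
matches = mutual_nearest_matches(A, B)
```

The mutual check discards one-sided matches, which is a cheap way to filter outliers before geometry estimation.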
Training Visualization
You will see a visualization of the entire training process in the "Training Video" pane.
Rendering & 3D Model
- Render the scene from a circular path of novel views.
- Or render from camera views close to the original input.
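A circular novel-view path can be generated by placing camera centres evenly around the scene. The sketch below shows the idea under simple assumptions (horizontal circle around the origin, fixed radius); the real demo's path parameters may differ, and camera orientations, not shown here, would typically be chosen to look at the scene centre.

```python
import math

def circular_path(n_views=60, radius=3.0, height=0.0):
    """Camera centres evenly spaced on a horizontal circle around the origin."""
    centres = []
    for i in range(n_views):
        theta = 2.0 * math.pi * i / n_views  # angle of the i-th view
        centres.append((radius * math.cos(theta),
                        height,
                        radius * math.sin(theta)))
    return centres

path = circular_path(n_views=4)  # four centres, one per quadrant
```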
The 3D model is shown in the right viewer. You can explore it interactively:
- On PC: WASD keys, arrow keys, and mouse clicks
- On mobile: pan and pinch to zoom
Note: the 3D viewer takes a few extra seconds (~5 s) to display after training ends.
Preloaded models coming soon. (TODO)