
DropGaussian: Structural Regularization for Sparse-view Gaussian Splatting

Official PyTorch implementation of "DropGaussian: Structural Regularization for Sparse-view Gaussian Splatting" by Hyunwoo Park, Gun Ryu, and Wonjun Kim (corresponding author).
🎸 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2025. 🎸

[Video: training pipeline]

👀 Overview

We propose a simple yet powerful regularization technique, DropGaussian, for neural rendering with sparse input views.

By randomly eliminating Gaussians during training, DropGaussian gives the remaining Gaussians the opportunity to become more visible and to receive larger gradients, allowing them to contribute meaningfully to the optimization of 3DGS. This is particularly effective at alleviating the overfitting that occurs in sparse-view conditions.

We provide:

  • 🚀 Minimal plug-and-play code snippet for quick integration
  • Full implementation of DropGaussian

🚀 Quick Snippet

Here's a minimal example of how to use DropGaussian in your training loop:

import torch

# (Assume the rest of the 3DGS pipeline is already set up, so that
#  `opacity` is the (N, 1) tensor of per-Gaussian opacities.)

# Create the initial compensation factor (1 for each Gaussian).
compensation = torch.ones(opacity.shape[0], dtype=torch.float32, device="cuda")

# Apply DropGaussian: in training mode, Dropout zeroes each entry with
# probability p and rescales the survivors by 1 / (1 - p).
drop_rate = 0.1
d = torch.nn.Dropout(p=drop_rate)
compensation = d(compensation)

# Apply the rescaled drop mask to the opacities.
opacity = opacity * compensation[:, None]
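
Note that torch.nn.Dropout (in its default training mode) zeroes each element with probability p and rescales the surviving elements by 1 / (1 - p); this built-in rescaling serves as the compensation factor, keeping the expected total opacity unchanged after Gaussians are dropped.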

✅ Full implementation

📦 Installation

We provide an installation via Conda for package and environment management:

git clone https://github.com/DCVL-3D/DropGaussian_release
cd DropGaussian_release
conda env create --file environment.yaml
conda activate DropGaussian

Note: This Conda environment assumes that CUDA 12.1 is already installed on your system.
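
If you are unsure which CUDA toolkit and driver are installed, you can check with the standard tools:

nvcc --version   # CUDA toolkit version
nvidia-smi       # GPU driver status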

🗂️ Data Preparation

In the data preparation stage, we first reconstruct sparse-view inputs using Structure-from-Motion (SfM) with the provided camera poses from the datasets. Then, we perform dense stereo matching using COLMAP’s patch_match_stereo function, followed by stereo_fusion to generate the dense stereo point cloud.
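
For reference, the dense-stereo step corresponds roughly to the following COLMAP commands (a sketch with illustrative paths; the provided tools/colmap_llff.py and tools/colmap_360.py scripts handle the dataset-specific details):

# Undistort images for an existing sparse SfM model (paths are illustrative)
colmap image_undistorter \
    --image_path dataset/scene/images \
    --input_path dataset/scene/sparse/0 \
    --output_path dataset/scene/dense

# Dense stereo matching
colmap patch_match_stereo \
    --workspace_path dataset/scene/dense \
    --workspace_format COLMAP \
    --PatchMatchStereo.geom_consistency true

# Fuse per-view depth maps into a dense point cloud
colmap stereo_fusion \
    --workspace_path dataset/scene/dense \
    --workspace_format COLMAP \
    --input_type geometric \
    --output_path dataset/scene/dense/fused.ply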

Setup Instructions
mkdir dataset
cd dataset

# Download LLFF dataset
gdown 16VnMcF1KJYxN9QId6TClMsZRahHNMW5g

# Generate sparse point cloud using COLMAP (limited views) for LLFF
python tools/colmap_llff.py

# Download MipNeRF-360 dataset
wget http://storage.googleapis.com/gresearch/refraw360/360_v2.zip
unzip -d mipnerf360 360_v2.zip

# Generate sparse point cloud using COLMAP (limited views) for MipNeRF-360
python tools/colmap_360.py

We also provide preprocessed sparse and dense point clouds for convenience. You can download them via the link below:

👉 Download Preprocessed Point Clouds

🧪 Training

🔹 LLFF Dataset

To train on a single LLFF scene, use the following command:

python train.py -s ${DATASET_PATH} -m ${OUTPUT_PATH} --eval -r 8 --n_views {3 or 6 or 9}
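
For example, to train the LLFF fern scene with 3 input views (the paths below are illustrative and depend on where you placed the dataset):

python train.py -s dataset/nerf_llff_data/fern -m output/fern_3views --eval -r 8 --n_views 3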

To train and evaluate on all LLFF scenes, simply run the script below:

bash scripts/train_llff.sh

🔹 MipNeRF-360 Dataset

To train on a single MipNeRF-360 scene, use the following command:

python train.py -s ${DATASET_PATH} -m ${OUTPUT_PATH} --eval -r 8 --n_views {12 or 24}

To train and evaluate on all MipNeRF-360 scenes, simply run the script below:

bash scripts/train_mipnerf360.sh

🎬 Rendering & Evaluation

You can perform rendering and evaluation in a single step using the following command:

🔹 LLFF Dataset

python render.py -s ${DATASET_PATH} -m ${MODEL_PATH} --eval -r 8

🔹 MipNeRF-360 Dataset

python render.py -s ${DATASET_PATH} -m ${MODEL_PATH} --eval -r 8

License

This project is licensed under the Apache License 2.0, with the exception of certain components derived from the Gaussian Splatting project.

  • Apache License 2.0: All original code written for DropGaussian is released under the Apache 2.0 license. See LICENSE.
  • Non-commercial License (Inria & MPII): Some parts of the code are based on Gaussian Splatting, which is licensed for non-commercial research use only. See LICENSE_GAUSSIAN_SPLATTING.md for full terms.

Please ensure that you comply with both licenses when using this repository.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-NR076462) and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00207, RS-2018-II180207, Immersive Media Research Laboratory).

Our implementation and experiments are built on top of open-source GitHub repositories. We thank all the authors who made their code public, which tremendously accelerated our progress. If you find these works helpful, please consider citing them as well.

GraphDeco-INRIA/gaussian-splatting
VITA-Group/FSGS

Citation

If you find our work useful for your project, please consider citing the following paper.

@misc{park2025dropgaussianstructuralregularizationsparseview,
      title={DropGaussian: Structural Regularization for Sparse-view Gaussian Splatting}, 
      author={Hyunwoo Park and Gun Ryu and Wonjun Kim},
      year={2025},
      eprint={2504.00773},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.00773}, 
}
