@article{GaussianMove2025,
author = {Peizhen Zheng and Longfei Wei and Dongjing Jiang and Jianfei Zhang},
title = {{3D Gaussian} Splatting against Moving Objects for High-Fidelity Street Scene Reconstruction},
journal = {arXiv preprint arXiv:2503.12001},
year = {2025},
doi = {10.48550/arXiv.2503.12001}
}
The accurate reconstruction of dynamic street scenes is critical for applications in autonomous driving, augmented reality, and virtual reality. Traditional methods relying on dense point clouds and triangular meshes struggle with moving objects, occlusions, and real-time processing constraints, limiting their effectiveness in complex urban environments. While multi-view stereo and neural radiance fields have advanced 3D reconstruction, they face challenges in computational efficiency and handling scene dynamics. This paper proposes a novel 3D Gaussian point distribution method for dynamic street scene reconstruction. Our approach introduces an adaptive transparency mechanism that eliminates moving objects while preserving high-fidelity static scene details. Additionally, iterative refinement of the Gaussian point distribution enhances geometric accuracy and texture representation. We integrate directional encoding with spatial position optimization to improve storage and rendering efficiency, reducing redundancy while maintaining scene integrity. Experimental results demonstrate that our method achieves high reconstruction quality, improved rendering performance, and adaptability in large-scale dynamic environments. These contributions establish a robust framework for real-time, high-precision 3D reconstruction, advancing the practicality of dynamic scene modeling across multiple applications.
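The adaptive transparency mechanism and moving-object removal are only summarized above. Below is a minimal sketch of one way such a step could look, assuming per-Gaussian opacities stored as a tensor, projected 2D centers, and a per-frame binary mask of detected moving objects; the names (`suppress_dynamic_gaussians`, `moving_mask`) are illustrative and are not this repository's API.

```python
import torch

def suppress_dynamic_gaussians(opacities, means2d, moving_mask, decay=0.5, floor=1e-3):
    """Hedged sketch of an 'adaptive transparency' step.

    opacities   : (N,) tensor of per-Gaussian opacities in [0, 1]
    means2d     : (N, 2) projected Gaussian centers in pixel coordinates
    moving_mask : (H, W) bool tensor, True where a moving object was detected
    Gaussians whose projected center lands on a moving-object pixel have their
    opacity decayed toward zero; Gaussians on static pixels are left untouched.
    """
    H, W = moving_mask.shape
    x = means2d[:, 0].round().long().clamp(0, W - 1)
    y = means2d[:, 1].round().long().clamp(0, H - 1)
    hits = moving_mask[y, x]                      # (N,) True for "dynamic" Gaussians
    new_opacities = opacities.clone()
    new_opacities[hits] = (new_opacities[hits] * decay).clamp(min=floor)
    return new_opacities

# Illustrative usage with random data (shapes only; not the project's pipeline):
if __name__ == "__main__":
    N, H, W = 1000, 480, 640
    opac = torch.rand(N)
    xy = torch.stack([torch.rand(N) * W, torch.rand(N) * H], dim=1)
    mask = torch.zeros(H, W, dtype=torch.bool)
    mask[100:200, 300:400] = True                 # pretend a car occupies this region
    opac = suppress_dynamic_gaussians(opac, xy, mask)
```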
[2025.03.15] Many thanks to Longfei Wei and Jianfei Zhang for the improvements they made to the code
[2024.10.10] Many thanks to GaussianPro for providing code for this project.
Some amazing enhancements will also come out this year.
- [✔] Code pre-release -- Beta version.
- [✔] Demo Scenes.
- [✔] Pybinding & CUDA acceleration.
- [ ] Support for unordered sets of images.
Some amazing enhancements are under development. We warmly welcome anyone to collaborate on improving this repository. Please send me an email if you are interested!
Windows 11, GeForce RTX 4070, CUDA 12.1 (tested), C++17
git clone https://github.com/ThinkXca/3DGS.git --recursive
conda env create --file environment.yml
pip install ./submodules/Propagation_SSIM
# Before running, update the mask path in cameras.py (line 54); see the mask-loading sketch below.
python train.py -s data/streeview --eval
python train.py -s data/peoplecarstreeview --eval
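The note about `cameras.py` (line 54) refers to a hard-coded mask path that must be adapted to your data. The repository's exact variable is not reproduced here, so the following is only an illustrative pattern for locating a per-image binary mask; the `MASK_DIR` path and the file naming convention are assumptions, not the project's actual configuration.

```python
import os
import numpy as np
from PIL import Image

# Hypothetical mask directory -- replace with the path expected at cameras.py line 54.
MASK_DIR = "data/streeview/masks"

def load_mask_for(image_name, height, width):
    """Load a binary moving-object mask matching an input image, or an empty mask."""
    mask_path = os.path.join(MASK_DIR, os.path.splitext(image_name)[0] + ".png")
    if not os.path.exists(mask_path):
        return np.zeros((height, width), dtype=bool)
    mask = Image.open(mask_path).convert("L").resize((width, height))
    return np.array(mask) > 127
```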
# Public dataset link:
# Nagoya:
https://drive.google.com/file/d/1rblrxazeeSCfnQ7QAUrK7_lLZVu5q54C/view?usp=sharing
# Quebec:
https://drive.google.com/file/d/1XbEOvhHi-3tWbAkUeg2Ecyi8zHMvHsbr/view?usp=drive_link
# Run the codes:
# The detailed parameter configuration can be found in the corresponding section of the paper.
# If you want to try your own scenes, make sure the images are sorted in temporal order (i.e., video-style data); the current version does not support unordered image sets. Then run the commands in demo.sh on your scenes (a small ordering check is sketched below).
# Please ensure that your neighboring images have sufficient overlap.
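As a quick sanity check before running demo.sh on a custom capture, a small helper like the one below can confirm that the image filenames sort into the same order as their timestamps. It is only a sketch; the `data/<scene>/images` layout in the usage comment is an assumption based on the commands above.

```python
import os
import sys

def check_time_order(image_dir):
    """Verify that images sort into a consistent capture order by filename and mtime."""
    names = sorted(f for f in os.listdir(image_dir)
                   if f.lower().endswith((".jpg", ".jpeg", ".png")))
    if len(names) < 2:
        sys.exit(f"Need at least two images in {image_dir}")
    mtimes = [os.path.getmtime(os.path.join(image_dir, n)) for n in names]
    ordered = all(a <= b for a, b in zip(mtimes, mtimes[1:]))
    print(f"{len(names)} images; filename order matches modification-time order: {ordered}")

if __name__ == "__main__":
    # Example: python check_order.py data/streeview/images
    check_time_order(sys.argv[1])
```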
This project builds largely on 3D Gaussian Splatting and ACMH/ACMM. Thanks for their amazing work!