
3. Photogrammetry


Photogrammetry Overview

Photogrammetry plays a crucial role in creating highly detailed and accurate 3D models of individual buildings, which can significantly enhance the spatial visualization capabilities of web-based GIS platforms. Unlike traditional photogrammetry applications focused on creating wide-scale terrain maps, this project leverages photogrammetry to reconstruct intricate 3D models of buildings on institutional campuses. These models can then be integrated into the web map to provide users with an interactive and realistic representation of their surroundings.

In this system, the photogrammetry process begins with capturing high-resolution aerial images using a drone equipped with a GPS module and a high-quality camera. These images serve as the foundational dataset for generating 3D spatial models. For this project, we will be working in Meshroom, an open-source photogrammetry application designed for structure-from-motion (SfM) workflows.

Workflow

Steps in the Photogrammetry Workflow with Meshroom:

  • Image Acquisition: The autonomous drone surveys the designated area along a pre-planned flight path, capturing high-resolution images with 60-70% overlap between consecutive shots. This ensures thorough coverage with sufficient detail for later reconstruction; the sketch below shows how the overlap target translates into photo spacing.
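
To make the overlap requirement concrete, the spacing between photo triggers can be derived from the camera's ground footprint. Below is a minimal Python sketch; the altitude, sensor, and focal-length values are hypothetical placeholders, not the project's actual drone configuration.

```python
# A minimal sketch (with hypothetical camera parameters) showing how the
# photo trigger distance follows from the desired forward overlap.

def ground_footprint_m(altitude_m: float, sensor_mm: float, focal_mm: float) -> float:
    """Ground distance covered by one image along a sensor axis."""
    return altitude_m * sensor_mm / focal_mm

def trigger_distance_m(footprint_m: float, overlap: float) -> float:
    """Distance the drone travels between shots for a given overlap fraction."""
    return footprint_m * (1.0 - overlap)

# Example: 60 m altitude, 8.8 mm sensor height, 8.8 mm focal length (all assumed).
footprint = ground_footprint_m(60.0, 8.8, 8.8)   # 60.0 m covered along track
print(trigger_distance_m(footprint, 0.65))        # ~21 m between photos at 65% overlap
```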

  • Data Preprocessing: Poor-quality and blurry images are manually excluded to keep the final model precise; a simple automated pass to pre-flag candidates for review is sketched below.
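
Manual review remains the final word, but a short script can pre-flag likely blurry images for inspection. The sketch below uses OpenCV's variance-of-the-Laplacian sharpness measure; the folder name and threshold are assumptions to be tuned for the actual camera.

```python
# Pre-flag likely blurry images for manual review (assumes OpenCV is installed).
import cv2
from pathlib import Path

BLUR_THRESHOLD = 100.0  # assumed cutoff; tune per camera and scene

def is_blurry(image_path: Path, threshold: float = BLUR_THRESHOLD) -> bool:
    gray = cv2.imread(str(image_path), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return True  # unreadable files also warrant review
    # Low Laplacian variance means few sharp edges, i.e. a likely blurry image.
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

for path in sorted(Path("survey_images").glob("*.jpg")):
    if is_blurry(path):
        print(f"Review for exclusion: {path.name}")
```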

  • 3D Model Generation: The curated images are then loaded into Meshroom, which builds a dense 3D point cloud by identifying and matching feature points across overlapping images. The dense point cloud is refined into a triangulated mesh that represents the area in three dimensions. The mesh is used to generate an orthomosaic, and textures are applied using data from the original image dataset. A headless invocation of this pipeline is sketched below.
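
For batch processing, Meshroom can also be driven without the GUI. The sketch below assumes a recent Meshroom release whose bundled meshroom_batch executable is on the PATH (older releases shipped the equivalent tool as meshroom_photogrammetry); the folder names are placeholders.

```python
# A minimal sketch driving Meshroom's default photogrammetry pipeline headlessly.
import subprocess

subprocess.run(
    [
        "meshroom_batch",
        "--input", "survey_images",   # folder of curated drone photos
        "--output", "model_output",   # receives the textured mesh and textures
    ],
    check=True,
)
```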

  • Postprocessing in Blender: If required, the generated 3D model is imported into Blender for post-processing to address any inaccuracies or inconsistencies. Blender is then used to refine the model by (see the scripting sketch after this list):

    - Eliminating stray points or unnecessary artifacts.
    - Repairing holes or incomplete areas in the mesh.
    - Applying smoothing tools to refine jagged or uneven areas.
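
These cleanup steps can also be scripted with Blender's Python API rather than performed by hand. The sketch below is one possible pass, assuming a Blender 3.x+ build (earlier versions import OBJ via bpy.ops.import_scene.obj instead) and a hypothetical Meshroom output path.

```python
# A Blender scripting sketch (run inside Blender's Python console, or via
# blender --background --python clean_mesh.py) covering the cleanup steps above.
import bpy

# Import the Meshroom result; the file path here is an assumed example.
bpy.ops.wm.obj_import(filepath="model_output/texturedMesh.obj")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

bpy.ops.object.mode_set(mode="EDIT")
bpy.ops.mesh.select_all(action="SELECT")
bpy.ops.mesh.delete_loose()                   # eliminate stray floating geometry
bpy.ops.mesh.select_all(action="SELECT")
bpy.ops.mesh.remove_doubles(threshold=0.001)  # merge near-duplicate vertices
bpy.ops.mesh.fill_holes(sides=0)              # close gaps of any size in the mesh
bpy.ops.object.mode_set(mode="OBJECT")

bpy.ops.object.shade_smooth()                 # smooth shading over jagged faces
```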
    
  • Integration with the web map: The final 3D model is integrated into the web map, letting users click any building on the 2D map and view its full 3D model. One possible way to serve the models to the map client is sketched below.
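
The click-to-model wiring depends on the chosen web-map library, but one possible backend piece is a small endpoint that serves each building's processed model by ID. The Flask sketch below is illustrative only; the route, folder layout, and glTF format are assumptions rather than the project's actual API.

```python
# A hypothetical sketch of a model-serving endpoint: the web map's click
# handler requests /models/<building_id> and loads the returned file into
# its 3D viewer.
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/models/<building_id>")
def get_building_model(building_id: str):
    # Each building's processed model is stored as <building_id>.glb (assumed layout).
    return send_from_directory("models", f"{building_id}.glb")

if __name__ == "__main__":
    app.run(port=8000)
```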
