yonglinZ/CropSight-SAM
Introduction

Collecting accurate ground truth data on crop types is a crucial challenge for agricultural research and development. The CropSight framework is an open-source toolkit designed to automate the retrieval of object-based crop type information from massive collections of Google Street View (GSV) images. Scalable and efficient, CropSight automatically identifies crop types in GSV images and delineates cropland boundaries to generate in-situ, object-based crop type labels over large areas.

Key Components

  • Large-Scale Operational Cropland Field-View Imagery Collection Method: Systematically acquires representative geotagged cropland field-view images.
  • Uncertainty-Aware Crop Type Image Classification Model (UncertainFusionNet): Retrieves high-quality crop type labels with quantified uncertainty (see the sketch after this list).
  • Cropland Boundary Delineation Model (SAM): Delineates cropland boundaries using PlanetScope satellite imagery.
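
As a minimal, illustrative sketch of what "quantified uncertainty" can mean in practice, the snippet below uses Monte Carlo dropout to attach an uncertainty estimate to a classification. It is not the actual UncertainFusionNet implementation; the tiny model, tensor shapes, and function names here are placeholders (see the repository code for the real architecture).

```python
# A minimal sketch of uncertainty-aware classification via Monte Carlo
# dropout. The model below is a placeholder stand-in, NOT UncertainFusionNet.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Placeholder classifier for illustration only."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.5),             # kept active at inference for MC dropout
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.backbone(x)

def predict_with_uncertainty(model, image, n_samples: int = 20):
    """Run several stochastic forward passes and summarize the spread."""
    model.train()  # keep dropout layers active
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(image), dim=-1) for _ in range(n_samples)
        ])
    mean = probs.mean(dim=0)           # average class probabilities
    uncertainty = probs.std(dim=0)     # per-class spread as an uncertainty proxy
    return mean, uncertainty

if __name__ == "__main__":
    model = TinyClassifier()
    gsv_image = torch.rand(1, 3, 224, 224)   # dummy GSV crop image
    mean, unc = predict_with_uncertainty(model, gsv_image)
    label = mean.argmax(dim=-1).item()
    print(f"predicted class {label}, uncertainty {unc[0, label]:.3f}")
```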

Workflow


Figure 1: CropSight Flowchart.

Dataset

  • UncertainFusionNet


Figure 2: Crop type ground-level view dataset (CropGSV) used to train UncertainFusionNet.

  • SAM


Figure 3: Cropland boundary ground-truth dataset (CropBoundary) used to fine-tune SAM.
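
For orientation, the sketch below shows what prompt-based mask prediction looks like with the publicly released segment-anything package. The checkpoint path, input image, and prompt point are illustrative placeholders; in CropSight, SAM is fine-tuned on CropBoundary and applied to PlanetScope imagery.

```python
# A minimal sketch of prompt-based segmentation with segment-anything.
# The checkpoint path, image, and prompt point are placeholders.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a (hypothetically fine-tuned) ViT-B SAM checkpoint.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_cropboundary.pth")
predictor = SamPredictor(sam)

# Placeholder RGB array standing in for a PlanetScope chip (H, W, 3, uint8).
image = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# Prompt SAM with a point, e.g. the geotagged GSV location projected
# into image coordinates.
point = np.array([[256, 256]])   # (x, y) pixel location inside the field
label = np.array([1])            # 1 = foreground point
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,
)
best_mask = masks[scores.argmax()]  # keep the highest-scoring field mask
```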

Application

Using the CropSight framework, we collected crop type ground truth data from Google Street View and PlanetScope satellite imagery. The examples below show CropSight applied in Brazil and the United States.

  • Example 1: Brazil


Figure 4: Object-based crop type ground truth map produced by CropSight using the latest available images (2023) in Brazil. Crop type labels are overlaid on Google Earth imagery. Classification and boundary delineation accuracy were assessed on a random sample of fields compared against visually interpreted, GSV-based ground truth data.

  • Example 2: United States


Figure 5: Object-based crop type ground truth maps produced by CropSight using the latest images (2023). These maps represent four distinct study areas in the United States (A-D). (a) Overlay of crop type labels on Google Maps. (b) Overlay of crop type labels on off-season PlanetScope images.

Example of Retrieving One Ground Truth

For an example of retrieving a single ground truth label with the CropSight framework, see CropSight.ipynb.
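
For orientation before opening the notebook, here is a hypothetical, self-contained outline of the retrieval flow. None of these function names are the repository's real API; they are stand-ins for the three steps the notebook walks through.

```python
# A hypothetical outline of retrieving one ground truth label with
# CropSight. Every function here is a placeholder, not the real API.
from dataclasses import dataclass

@dataclass
class GroundTruth:
    crop_type: str       # label predicted by UncertainFusionNet
    uncertainty: float   # quantified prediction uncertainty
    boundary: list       # field polygon delineated by fine-tuned SAM

def collect_gsv_image(lat: float, lon: float) -> bytes:
    """Step 1: fetch a geotagged cropland field-view image (placeholder)."""
    return b""

def classify_crop_type(image: bytes) -> tuple[str, float]:
    """Step 2: classify the image and quantify uncertainty (placeholder)."""
    return "corn", 0.08

def delineate_boundary(lat: float, lon: float) -> list:
    """Step 3: delineate the field on PlanetScope imagery (placeholder)."""
    return [(lon, lat)]

def retrieve_ground_truth(lat: float, lon: float) -> GroundTruth:
    image = collect_gsv_image(lat, lon)
    crop_type, uncertainty = classify_crop_type(image)
    boundary = delineate_boundary(lat, lon)
    return GroundTruth(crop_type, uncertainty, boundary)

print(retrieve_ground_truth(40.1106, -88.2073))
```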

Author

Yin Liu ([email protected])

Chunyuan Diao ([email protected])

Remote Sensing Space-Time Innovation Lab

Department of Geography & GIScience, University of Illinois at Urbana-Champaign

Acknowledgement

This project is supported by the National Science Foundation’s Office of Advanced Cyberinfrastructure under grant 2048068.

Citation

If you use this work, please cite:

@article{liu2024semantic,
  title   = {Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images},
  author  = {Liu, Yin and Diao, Chunyuan and Mei, Weiye and Zhang, Chishan},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  year    = {2024},
  volume  = {216},
  pages   = {66--89},
  doi     = {10.1016/j.isprsjprs.2024.07.025}
}
