CropSight: towards a large-scale operational framework for object-based crop type ground truth retrieval using street view and PlanetScope satellite imagery
Collecting accurate ground-truth data on crop types is a crucial challenge for agricultural research and development. CropSight is an open-source toolkit designed to automate the retrieval of object-based crop type information from massive collections of Google Street View (GSV) images. Scalable and efficient, CropSight automatically identifies crop types in GSV images and delineates cropland boundaries, generating in-situ object-based crop type labels over large areas. It consists of three components:
- Large-Scale Operational Cropland Field-View Imagery Collection Method: Systematically acquires representative geotagged cropland field-view images (see the GSV download sketch after this list).
- Uncertainty-Aware Crop Type Image Classification Model (UncertainFusionNet): Retrieves high-quality crop type labels with quantified uncertainty (see the uncertainty sketch after this list).
- Cropland Boundary Delineation Model (SAM): Delineates cropland boundaries on PlanetScope satellite imagery with a fine-tuned Segment Anything Model (see the prompting sketch after this list).
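As an illustration of the collection step, here is a minimal sketch that queries the Google Street View Static API for a geotagged field-view image at a candidate roadside point. The endpoints and parameters follow Google's public API; the sample coordinates, heading, and output path are illustrative assumptions, not CropSight's actual sampling strategy.

```python
import requests

# Google Street View Static API endpoints (public, API key required).
METADATA_URL = "https://maps.googleapis.com/maps/api/streetview/metadata"
IMAGE_URL = "https://maps.googleapis.com/maps/api/streetview"
API_KEY = "YOUR_API_KEY"  # placeholder

def fetch_gsv_image(lat, lon, heading, out_path):
    """Download one geotagged GSV image if imagery exists at this point."""
    params = {"location": f"{lat},{lon}", "key": API_KEY}
    meta = requests.get(METADATA_URL, params=params).json()
    if meta.get("status") != "OK":  # no panorama at this location
        return None
    params.update({"size": "640x640", "heading": heading, "pitch": 0, "fov": 90})
    img = requests.get(IMAGE_URL, params=params)
    with open(out_path, "wb") as f:
        f.write(img.content)
    return meta["location"]  # actual geotag of the returned panorama

# Example: a roadside point with the camera facing the adjacent field.
fetch_gsv_image(40.1106, -88.2073, heading=90, out_path="gsv_sample.jpg")
```

Checking the free metadata endpoint before requesting the image avoids paying for requests at points where no panorama exists.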
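The published UncertainFusionNet architecture is not reproduced here. As a stand-in, the sketch below shows one common way to attach quantified uncertainty to a crop type classifier, Monte Carlo dropout over a toy CNN; the label set, model, and number of passes are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

CLASSES = ["corn", "soybean", "other"]  # illustrative label set

class SmallCropNet(nn.Module):
    """Tiny stand-in classifier with dropout so MC sampling has an effect."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.dropout = nn.Dropout(p=0.5)
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(self.dropout(x))

def mc_dropout_predict(model, image, n_passes=30):
    """Average softmax over stochastic forward passes; report
    predictive entropy as the uncertainty score."""
    model.eval()
    for m in model.modules():  # keep dropout active at inference time
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(image), dim=1)
                             for _ in range(n_passes)]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=1)
    return CLASSES[probs.argmax(dim=1).item()], entropy.item()

net = SmallCropNet(len(CLASSES))
label, uncertainty = mc_dropout_predict(net, torch.randn(1, 3, 224, 224))
```

An uncertainty score like this lets a pipeline discard low-confidence labels before they enter the ground-truth set.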
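For the boundary step, the sketch below prompts a Segment Anything Model on a PlanetScope image chip via Meta's segment-anything package. The stock ViT-B checkpoint, chip file name, and point prompt are assumptions; in CropSight the model is fine-tuned on the CropBoundary dataset (Figure 3).

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Stock ViT-B checkpoint; CropSight would load fine-tuned weights here.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# A 3-band PlanetScope chip exported as RGB (file name is illustrative).
chip = np.array(Image.open("planetscope_chip.png").convert("RGB"))
predictor.set_image(chip)

# Prompt SAM with one point inside the target field (pixel coords assumed).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[128, 128]]),
    point_labels=np.array([1]),  # 1 = foreground point
    multimask_output=False,
)
field_mask = masks[0]  # boolean mask of the delineated cropland parcel
```

A natural point prompt is the GSV geotag projected into the chip's pixel grid, so the delineated parcel is the one the street-level photo actually depicts.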
Figure 1: CropSight Flowchart.
Figure 2: Crop type ground-level view dataset (CropGSV) used to train UncertainFusionNet.
Figure 3: Cropland boundary ground-truth dataset (CropBoundary) used to fine-tune SAM.
Using the CropSight framework, we collected crop type ground truth data from Google Street View and PlanetScope satellite imagery. Below are example applications of CropSight in the United States and Brazil.
Figure 4: Object-based crop type ground truth map produced by CropSight from the most recent (2023) imagery in Brazil. Crop type labels are overlaid on Google Earth imagery. The accuracy of crop type classification and boundary delineation is assessed by random sampling against visually interpreted GSV-based ground truth data.
Figure 5: Object-based crop type ground truth maps produced by CropSight from the most recent (2023) imagery, covering four distinct study areas in the United States (A–D). (a) Crop type labels overlaid on Google Maps. (b) Crop type labels overlaid on off-season PlanetScope images.
For an example of retrieving a single ground-truth record with the CropSight framework, see CropSight.ipynb.
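As rough orientation only, the hypothetical glue below chains the sketches above; it assumes those blocks have been run in the same session and is not CropSight's actual API. The authoritative end-to-end walkthrough remains CropSight.ipynb.

```python
# Hypothetical pipeline glue (reuses fetch_gsv_image, mc_dropout_predict,
# and net from the sketches above; not CropSight's actual API).
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])

geotag = fetch_gsv_image(40.1106, -88.2073, heading=90,
                         out_path="gsv_sample.jpg")
if geotag is not None:
    image = to_tensor(Image.open("gsv_sample.jpg").convert("RGB")).unsqueeze(0)
    label, uncertainty = mc_dropout_predict(net, image)
    print(label, uncertainty)  # e.g. keep the label only if uncertainty is low
```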
Yin Liu ([email protected])
Chunyuan Diao ([email protected])
Remote Sensing Space-Time Innovation Lab
Department of Geography & GIScience, University of Illinois at Urbana-Champaign
This project is supported by the National Science Foundation’s Office of Advanced Cyberinfrastructure under grant 2048068.
If you use this work, please cite:
```bibtex
@article{liu2024semantic,
  title   = {Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images},
  author  = {Liu, Yin and Diao, Chunyuan and Mei, Weiye and Zhang, Chishan},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  volume  = {216},
  pages   = {66--89},
  year    = {2024},
  doi     = {10.1016/j.isprsjprs.2024.07.025}
}
```