This document lists resources for performing deep learning (DL) on satellite imagery. To a lesser extent, classical machine learning (ML, e.g. random forests) is also discussed, as are classical image processing techniques.
- Top links
- Datasets
- Interesting deep learning projects
- Techniques
- Image formats, data management and catalogues
- State of the art
- Online platforms for Geo analysis
- Free online computing resources
- Production
- Useful open source software
- Movers and shakers on Github
- Companies on Github
- Courses
- Online communities
- Jobs
- Neural nets in space
- About the author
- awesome-satellite-imagery-datasets
- awesome-earthobservation-code
- awesome-sentinel
- A modern geospatial workflow
- geospatial-machine-learning
- Long list of satellite missions with example imagery
- AWS datasets
- Warning: satellite image files can be LARGE, even a small dataset may comprise 50 GB of imagery
- Various datasets listed here and at awesome-satellite-imagery-datasets
- A commercial satellite owned by DigitalGlobe
- https://en.wikipedia.org/wiki/WorldView-3
- 0.3m PAN, 1.24m MS, 3.7m SWIR. Off-nadir (stereo) available.
- Owned by DigitalGlobe
- Getting Started with SpaceNet
- Dataset on AWS -> see this getting started notebook and this notebook on the off-Nadir dataset
- cloud_optimized_geotif here used in the 3D modelling notebook here.
- Package of utilities to assist working with the SpaceNet dataset.
- WorldView cloud optimized geotiffs used in the 3D modelling notebook here.
- For more Worldview imagery see Kaggle DSTL competition.
- As part of the EU Copernicus program, multiple Sentinel satellites are capturing imagery -> see wikipedia.
- 13 bands, spatial resolution of 10 m, 20 m and 60 m, 290 km swath, 5 day revisit period
- awesome-sentinel - a curated list of awesome tools, tutorials and APIs related to data from the Copernicus Sentinel Satellites.
- Sentinel-2 Cloud-Optimized GeoTIFFs and Sentinel-2 L2A 120m Mosaic
- Open access data on GCP
- Paid access via sentinel-hub and python-api.
- Example loading sentinel data in a notebook
- so2sat on Tensorflow datasets - So2Sat LCZ42 is a dataset consisting of co-registered synthetic aperture radar and multispectral optical image patches acquired by the Sentinel-1 and Sentinel-2 remote sensing satellites, and the corresponding local climate zones (LCZ) label. The dataset is distributed over 42 cities across different continents and cultural regions of the world.
- eurosat - EuroSAT dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes with 27000 labeled and geo-referenced samples. Dataset and usage in EuroSAT: Land Use and Land Cover Classification with Sentinel-2, where a CNN achieves a classification accuracy of 98.57%.
- bigearthnet - The BigEarthNet is a new large-scale Sentinel-2 benchmark archive, consisting of 590,326 Sentinel-2 image patches. The image patch size on the ground is 1.2 x 1.2 km with variable image size depending on the channel resolution. This is a multi-label dataset with 43 imbalanced labels.
- Jupyter Notebooks for working with Sentinel-5P Level 2 data stored on S3. The data can be browsed here
- Sentinel NetCDF data
- Analyzing Sentinel-2 satellite data in Python with Keras
- Long running US program -> see Wikipedia and read the official webpage
- 8 bands, 15 to 60 meter resolution, 185 km swath, 16 day revisit period
- DECEMBER 2020: USGS publishes Landsat Collection 2 Dataset with 'significant geometric and radiometric improvements'. COG and STAC data format. Announcement and website. Beware data on Google and AWS (below) may be in different formats.
- Landsat 4, 5, 7, and 8 imagery on Google, see the GCP bucket here, with Landsat 8 imagery in COG format analysed in this notebook
- Landsat 8 imagery on AWS, with many tutorials and tools listed
- https://github.com/kylebarron/landsat-mosaic-latest -> Auto-updating cloudless Landsat 8 mosaic from AWS SNS notifications
- Visualise landsat imagery using Datashader
- Landsat-mosaic-tiler -> This repo hosts all the code for the landsatlive.live website and APIs.
- Spacenet is an online hub for data, challenges, algorithms, and tools.
- spacenet.ai website covering the series of SpaceNet challenges, lots of useful resources (blog, video and papers)
- The SpaceNet 7 Multi-Temporal Urban Development Challenge: Dataset Release
- SpaceNet - WorldView-3 article here, and semantic segmentation using Raster Vision
- Planet’s high-resolution, analysis-ready mosaics of the world’s tropics, supported through Norway’s International Climate & Forests Initiative. BBC coverage
- Shuttle Radar Topography Mission: data - open access
- Copernicus Digital Elevation Model (DEM) on S3, represents the surface of the Earth including buildings, infrastructure and vegetation. Data is provided as Cloud Optimized GeoTIFFs. link
Kaggle hosts over 60 satellite image datasets, search results here. The kaggle blog is an interesting read.
- https://www.kaggle.com/c/planet-understanding-the-amazon-from-space/data
- 3-5 meter resolution GeoTIFF images from Planet's Dove satellite constellation
- 12 classes including - cloudy, primary + waterway etc
- 1st place winner interview - used 11 custom CNNs
- FastAI Multi-label image classification
- Multi-Label Classification of Satellite Photos of the Amazon Rainforest
- https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection
- Rating - medium, many good examples (see the Discussion as well as kernels), but as this competition was run a couple of years ago many examples use python 2
- WorldView 3 - 45 satellite images covering 1km x 1km in both 3-band (i.e. RGB) and 16-band (400nm - SWIR) formats
- 10 Labelled classes include - Buildings, Road, Trees, Crops, Waterway, Vehicles
- Interview with 1st place winner who used segmentation networks - 40+ models, each tweaked for particular target (e.g. roads, trees)
- Deepsense 4th place solution
- My analysis here
- https://www.kaggle.com/c/airbus-ship-detection/overview
- Rating - medium, most solutions using deep-learning, many kernels, good example kernel.
- I believe there was a problem with this dataset, which led to many complaints that the competition was ruined.
- https://www.kaggle.com/c/draper-satellite-image-chronology/data
- Rating - hard. Not many useful kernels.
- Images are grouped into sets of five, each of which have the same setId. Each image in a set was taken on a different day (but not necessarily at the same time each day). The images for each set cover approximately the same area but are not exactly aligned.
- Kaggle interviews for entrants who used XGBOOST and a hybrid human/ML approach
Not satellite but airborne imagery. Each sample image is 28x28 pixels and consists of 4 bands - red, green, blue and near infrared. The training and test labels are one-hot encoded 1x6 vectors. Data is in .mat Matlab format.
- Imagery source
- Sat4 500,000 image patches covering four broad land cover classes - barren land, trees, grassland and a class that consists of all land cover classes other than the above three
- Sat6 405,000 image patches each of size 28x28 and covering 6 landcover classes - barren land, trees, grassland, roads, buildings and water bodies.
- Deep Gradient Boosted Learning article
In this challenge, you will build a model to classify cloud organization patterns from satellite images.
- https://www.kaggle.com/c/understanding_cloud_organization/
- 3rd place solution on Github by naivelamb
- https://www.kaggle.com/reubencpereira/spatial-data-repo -> Satellite + loan data
- https://www.kaggle.com/towardsentropy/oil-storage-tanks -> Image data of industrial tanks with bounding box annotations, estimate tank fill % from shadows
- https://www.kaggle.com/rhammell/ships-in-satellite-imagery -> Classify ships in San Francisco Bay using Planet satellite imagery
- https://www.kaggle.com/rhammell/planesnet -> Detect aircraft in Planet satellite image chips
There are a variety of datasets suitable for land classification problems.
- There are a number of remote sensing datasets
- resisc45 - RESISC45 dataset is a publicly available benchmark for Remote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU). This dataset contains 31,500 images, covering 45 scene classes with 700 images in each class.
- eurosat - EuroSAT dataset is based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes with 27000 labeled and geo-referenced samples.
- bigearthnet - The BigEarthNet is a new large-scale Sentinel-2 benchmark archive, consisting of 590,326 Sentinel-2 image patches. The image patch size on the ground is 1.2 x 1.2 km with variable image size depending on the channel resolution. This is a multi-label dataset with 43 imbalanced labels.
- http://weegee.vision.ucmerced.edu/datasets/landuse.html
- Available as a Tensorflow dataset -> https://www.tensorflow.org/datasets/catalog/uc_merced
- This is a 21 class land use image dataset meant for research purposes.
- There are 100 RGB TIFF images for each class
- Each image measures 256x256 pixels with a pixel resolution of 1 foot
- Image classification of UCMerced using Keras or alternatively fastai
- Earth on AWS is the AWS equivalent of Google Earth Engine
- Currently 27 satellite datasets on the Registry of Open Data on AWS
- USBuildingFootprints -> computer generated building footprints in all 50 US states, GeoJSON format, generated using semantic segmentation
- Checkout Microsoft's Planetary Computer project
- Several people have uploaded datasets to Quilt
- https://developers.google.com/earth-engine/
- Various imagery and climate datasets, including Landsat & Sentinel imagery
- Python API, but all compute happens on Google's servers (a minimal example is shown after this list)
- Google Earth Engine Community on Github
- awesome-google-earth-engine - Curated list of Google Earth Engine resources
- ee-tensorflow-notebooks - Repository to place example notebooks for Deep Learning applications with TensorFlow and Earth Engine.
- geemap -> a python package for interactive mapping with Google Earth Engine, ipyleaflet, and ipywidgets.
- eemont -> extends Google Earth Engine with pre-processing and processing tools for the most used satellite platforms.
- EEwPython -> A series of Jupyter (colab) notebook to learn Google Earth Engine with Python
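Below is a minimal, illustrative sketch of the Earth Engine Python API, assuming you have already authenticated with `earthengine authenticate`; the point of interest, date range and cloud threshold are arbitrary placeholders:

```python
import ee

ee.Initialize()  # assumes you have already run `earthengine authenticate`

# Build a median Sentinel-2 composite around an arbitrary point of interest
point = ee.Geometry.Point([-0.1278, 51.5074])  # placeholder location
collection = (
    ee.ImageCollection("COPERNICUS/S2")
    .filterBounds(point)
    .filterDate("2020-06-01", "2020-09-01")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
)
composite = collection.median()

# Compute NDVI server-side - nothing is downloaded until you request it
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")
print(ndvi.getInfo()["bands"])
```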
- UK Met Office -> https://www.metoffice.gov.uk/datapoint
- NASA (make request and emailed when ready) -> https://search.earthdata.nasa.gov
- NOAA (requires BigQuery) -> https://www.kaggle.com/noaa/goes16/home
- Time series weather data for several US cities -> https://www.kaggle.com/selfishgene/historical-hourly-weather-data
- BreizhCrops -> A Time Series Dataset for Crop Type Mapping
- Many on https://www.visualdata.io
- AU-AIR dataset -> a multi-modal UAV dataset for object detection.
- ERA -> A Dataset and Deep Learning Benchmark for Event Recognition in Aerial Videos.
- Aerial Maritime Drone Dataset
- Stanford Drone Dataset
- RetinaNet for pedestrian detection
- EmergencyNet -> identify fire and other emergencies from a drone
- OpenDroneMap -> generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images.
- Dataset of thermal and visible aerial images for multi-modal and multi-spectral image registration and fusion -> The dataset consists of 30 visible images and their metadata, 80 thermal images and their metadata, and a visible georeferenced orthoimage.
- BIRDSAI: A Dataset for Detection and Tracking in Aerial Thermal Infrared Videos -> TIR videos of humans and animals with several challenging scenarios like scale variations, background clutter due to thermal reflections, large camera rotations, and motion blur
- ERA: A Dataset and Deep Learning Benchmark for Event Recognition in Aerial Videos
- The Synthinel-1 dataset: a collection of high resolution synthetic overhead imagery for building segmentation
- RarePlanes -> incorporates both real and synthetically generated satellite imagery including aircraft.
- Checkout Microsoft AirSim, which is a simulator for drones, cars and more, built on Unreal Engine
- https://www.azavea.com/projects/raster-vision/
- An open source Python framework for building computer vision models on aerial, satellite, and other large imagery sets.
- Accessible through the Raster Foundry
- Example use cases on open data
- https://github.com/mapbox/robosat
- Semantic segmentation on aerial and satellite imagery. Extracts features such as: buildings, parking lots, roads, water, clouds
- robosat-jupyter-notebook -> walks through all of the steps in an excellent blog post on the Robosat feature extraction and machine learning pipeline.
- Note there is/was a fork of Robosat, originally named RoboSat.pink and subsequently neat-EO.pink, although this appears to be dead/archived
- https://github.com/trailbehind/DeepOSM
- Train a deep learning net with OpenStreetMap features and satellite imagery.
- https://github.com/nshaud/DeepNetsForEO
- Uses the SegNet architecture for deep learning on remote sensing images.
- https://github.com/developmentseed/skynet-data
- Data pipeline for machine learning with OpenStreetMap
This section explores the different techniques (DL, ML & classical) people are applying to common problems in satellite imagery analysis. Classification problems are the simplest to address with DL, object detection is harder, and cloud detection harder still (a niche interest). Note that almost all aerial imagery data on the internet is in RGB format, and techniques designed for this 3-band imagery may fail or need significant adaptation to work with multiband data (e.g. 13-band Sentinel 2).
Assign a label to an image, e.g. this is an image of a forest.
- Land classification using a simple sklearn cluster algorithm or deep learning (a minimal clustering sketch is shown after this list).
- Land use is related to classification, but we are trying to detect a scene, e.g. housing, forestry. I have tried CNN -> See my notebooks
- Land Use Classification using Convolutional Neural Network in Keras
- Sea-Land segmentation using DL
- Pixel level segmentation on Azure
- Deep Learning-Based Classification of Hyperspectral Data
- A U-net based on Tensorflow for object detection (or segmentation) of satellite images - DSTL dataset but python 2.7
- What’s growing there? Using eo-learn and fastai to identify crops from multi-spectral remote sensing data (Sentinel 2)
- FastAI Multi-label image classification
- Land use classification using Keras
- Detecting Informal Settlements from Satellite Imagery using fine-tuning of ResNet-50 classifier with repo
- Image classification of UC Merced using Keras or alternatively fastai
- Water Detection in High Resolution Satellite Images using the waterdetect python package -> The main idea is to combine water indexes (NDWI, MNDWI, etc.) with reflectance bands (NIR, SWIR, etc.) into an automated clustering process
- AutoEncoders for Land Cover Classification of Hyperspectral Images -> An autoencoder neural net is used to reduce 103 band data to 60 features (dimensionality reduction), keras
- Contrastive Sensor Fusion -> Code implementing Contrastive Sensor Fusion, an approach for unsupervised learning of multi-sensor representations targeted at remote sensing imagery.
- Codebase for land cover classification with U-Net
- Tree species classification from airborne LiDAR and hyperspectral data using 3D convolutional neural networks
- Multi-Label Classification of Satellite Photos of the Amazon Rainforest -> uses the Planet dataset & TF 2 & Keras
- UrbanLandUse -> This repository contains a comprehensive set of instructions for creating and applying ML models that characterize land use / land cover (LULC) in urban areas.
- hyperspectral-autoencoders -> Tools for training and using unsupervised autoencoders and supervised deep learning classifiers for hyperspectral data, built on tensorflow. Autoencoders are unsupervised neural networks that are useful for a range of applications such as unsupervised feature learning and dimensionality reduction.
- Land cover classification of Sundarbans satellite imagery using K-Nearest Neighbor(K-NNC), Support Vector Machine (SVM), and Gradient Boosting classification algorithms with Python
- hyperspectral_deeplearning_review -> Code of December 2019 paper "Deep Learning Classifiers for Hyperspectral Imaging: A Review"
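As a starting point for the "simple sklearn cluster algorithm" approach mentioned at the top of this list, here is a minimal unsupervised clustering sketch; the file paths and number of clusters are placeholders, and the resulting clusters still need to be interpreted against the imagery by eye:

```python
import numpy as np
import rasterio
from sklearn.cluster import KMeans

# Unsupervised land cover clustering of a multiband GeoTIFF (placeholder path)
with rasterio.open("scene.tif") as src:
    img = src.read()               # shape: (bands, height, width)
    profile = src.profile

bands, height, width = img.shape
pixels = img.reshape(bands, -1).T  # one row per pixel, one column per band

# Consider sampling pixels first for very large scenes
kmeans = KMeans(n_clusters=6, random_state=0).fit(pixels)
classes = kmeans.labels_.reshape(height, width).astype("uint8")

# Write the cluster map back out as a single band raster
profile.update(count=1, dtype="uint8")
with rasterio.open("clusters.tif", "w", **profile) as dst:
    dst.write(classes, 1)
```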
Whilst classification will assign a label to a whole image, semantic segmentation will assign a label to each pixel
- Instance segmentation with keras - links to satellite examples
- Semantic Segmentation on Aerial Images using fastai
- https://github.com/Paulymorphous/Road-Segmentation
- UNSOAT used fast.ai to train a Unet to perform semantic segmentation on satellite imagery to detect water - paper + notebook, accuracy 0.97, precision 0.91, recall 0.92.
- Identification of roads and highways using Sentinel-2 imagery (10m) super-resolved using the SENX4 model up to x4 the initial spatial resolution (2.5m)
- find-unauthorized-constructions-using-aerial-photography -> U-Net & Keras
- WildFireDetection -> Using U-Net Model to Detect Wildfire from Satellite Imagery, with streamlit UI
Monitor water levels, coast lines, size of urban areas, wildfire damage etc. Note that clouds change often too! A simple image-differencing sketch is given after this list.
- Using PCA (python 2, requires updating) -> https://appliedmachinelearning.blog/2017/11/25/unsupervised-changed-detection-in-multi-temporal-satellite-images-using-pca-k-means-python-code/
- Using CNN -> https://github.com/vbhavank/Unstructured-change-detection-using-CNN
- Siamese neural network to detect changes in aerial images
- https://www.spaceknow.com/
- LANDSAT Time Series Analysis for Multi-temporal Land Cover Classification using Random Forest
- Change Detection in 3D: Generating Digital Elevation Models from Dove Imagery
- Change Detection in Hyperspectral Images Using Recurrent 3D Fully Convolutional Networks
- PySAR - InSAR (Interferometric Synthetic Aperture Radar) timeseries analysis in python
- QGIS 2 plugin for applying change detection algorithms on high resolution satellite imagery
- Change-Detection-Review -> A review of change detection methods, including codes and open data sets for deep learning.
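For orientation, the simplest possible baseline is pixel-wise differencing of two acquisitions, which assumes the images are already co-registered and radiometrically comparable; a hedged sketch (placeholder file paths, Otsu threshold chosen for convenience) follows:

```python
import numpy as np
import rasterio
from skimage.filters import threshold_otsu

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

# Two co-registered single-band rasters from different dates (placeholder paths)
before = read_band("scene_2019.tif")
after = read_band("scene_2020.tif")

diff = np.abs(after - before)
mask = diff > threshold_otsu(diff)   # True where change is flagged

print(f"{mask.mean():.1%} of pixels flagged as changed")
```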
Image registration is the process of transforming different sets of data into one coordinate system. Typical use is overlapping images taken at different times or with different cameras.
- Wikipedia article on registration -> register for change detection or image stitching
- Traditional approach -> define control points, employ RANSAC algorithm
- Phase correlation is used to estimate the translation between two images with sub-pixel accuracy. Can be used for accurate registration of low resolution imagery onto high resolution imagery, or to register a sub-image on a full image -> Unlike many spatial-domain algorithms, the phase correlation method is resilient to noise, occlusions, and other defects. Applied to Landsat images here
- cnn-registration -> A image registration method using convolutional neural network features written in Python2, Tensorflow 1.5
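A minimal sketch of the phase correlation approach mentioned above, using scikit-image; it assumes the two inputs are same-shaped 2D arrays that differ only by a translation:

```python
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register(reference, moving):
    # Estimate the sub-pixel translation between the two images
    offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
    # Apply the estimated shift to bring `moving` into alignment with `reference`
    aligned = nd_shift(moving, shift=offset)
    return aligned, offset
```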
A good introduction to the challenge of performing object detection on aerial imagery is given in this paper. In summary, images are large and objects may comprise only a few pixels, easily confused with random features in the background (a simple image-chipping sketch for handling large images is given at the end of this list). An example task is detecting boats on the ocean, which should be simpler than land-based detection owing to the relatively blank background in images, but is still challenging.
- Intro articles here and here.
- DigitalGlobe article - they use a combination of classical techniques (masks, erodes) to reduce the search space (identifying water via NDWI, which requires SWIR) then apply a binary DL classifier on candidate regions of interest. They deploy the final algo as a task on their GBDX platform. They propose that in the future an R-CNN may be suitable for the whole process.
- Planet use the non-DL felzenszwalb algorithm to detect ships
- Segmentation of buildings on kaggle
- Identifying Buildings in Satellite Images with Machine Learning and Quilt -> NDVI & edge detection via gaussian blur as features, fed to TPOT for training with labels from OpenStreetMap, modelled as a two class problem, “Buildings” and “Nature”.
- Deep learning for satellite imagery via image segmentation
- Building Extraction with YOLT2 and SpaceNet Data
- Find sports fields using Mask R-CNN and overlay on open-street-map
- Detecting solar panels from satellite imagery
- Anomaly Detection on Mars using a GAN
- Tackling the Small Object Problem in Object Detection
- Satellite Imagery Multiscale Rapid Detection with Windowed Networks (SIMRDWN) -> combines some of the leading object detection algorithms into a unified framework designed to detect objects both large and small in overhead imagery
- 2020 Nature paper - An unexpectedly large count of trees in the West African Sahara and Sahel -> tree detection framework based on U-Net & tensorflow 2 with code here
- Truck Detection with Sentinel-2 during COVID-19 crisis -> moving objects in Sentinel-2 data causes a specific reflectance relationship in the RGB, which looks like a rainbow, and serves as a marker for trucks. Improve accuracy by only analysing roads.
- Counting-Trees-using-Satellite-Images -> create an inventory of incoming and outgoing trees for annual tree inspections, uses keras
- Several useful articles on awesome-tiny-object-detection
- DeepSolar is a deep learning framework that analyzes satellite imagery to identify the GPS locations and sizes of solar panels
- Challenges with SpaceNet 4 off-nadir satellite imagery: Look angle and target azimuth angle -> building prediction in images taken at nearly identical look angles — for example, 29 and 30 degrees — produced radically different performance scores.
- Building footprint detection with fastai on the challenging SpaceNet7 dataset
- YOLTv4 -> YOLTv4 is designed to detect objects in aerial or satellite imagery in arbitrarily large images that far exceed the ~600×600 pixel size typically ingested by deep learning object detection frameworks.
- Official repository for the "Identifying trees on satellite images" challenge from Omdena
- DeepForest is a python package for training and predicting individual tree crowns from airborne RGB imagery
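Since most detection frameworks expect inputs of roughly 512-1024 pixels, large scenes are usually sliced into overlapping chips and the detections mapped back afterwards; a minimal chipping sketch (chip size and overlap are arbitrary choices) is shown below:

```python
import numpy as np

def chip_image(img, chip_size=512, overlap=64):
    """Slice a (height, width[, channels]) array into overlapping chips."""
    chips, offsets = [], []
    step = chip_size - overlap
    for y in range(0, max(img.shape[0] - overlap, 1), step):
        for x in range(0, max(img.shape[1] - overlap, 1), step):
            chips.append(img[y:y + chip_size, x:x + chip_size])
            offsets.append((y, x))   # keep offsets to map detections back to the scene
    return chips, offsets

# Note: chips at the right/bottom edges may be smaller than chip_size -
# pad them if your model requires a fixed input size.
```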
- From this article on sentinelhub there are three popular classical algorithms that detect thresholds in multiple bands in order to identify clouds. In the same article they propose using semantic segmentation combined with a CNN for a cloud classifier (excellent review paper here), but state that this requires too many compute resources. A naive thresholding sketch is given after this list.
- This article compares a number of ML algorithms: random forests, stochastic gradient descent, support vector machines, and a Bayesian method.
- Segmentation of Clouds in Satellite Images Using Deep Learning -> a U-Net is employed to interpret and extract the information embedded in the satellite images in a multi-channel fashion, and finally output a pixel-wise mask indicating the existence of cloud.
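To illustrate the classical thresholding idea in its crudest form (real algorithms such as Fmask or s2cloudless use many more bands and tests), a naive brightness threshold might look like the sketch below; the threshold value and 0-1 reflectance scaling are assumptions:

```python
import numpy as np

def naive_cloud_mask(red, green, blue, threshold=0.25):
    # Inputs assumed to be reflectance arrays scaled to 0-1; clouds are
    # bright and fairly white in the visible bands
    brightness = (red + green + blue) / 3.0
    return brightness > threshold
```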
The goal is to predict economic activity from satellite imagery rather than conducting labour intensive ground surveys
- Using publicly available satellite imagery and deep learning to understand economic well-being in Africa, Nature Comms 22 May 2020 -> Used CNN on Landsat imagery (night & day) to predict asset wealth of African villages
- Combining Satellite Imagery and machine learning to predict poverty -> review article
- Measuring Human and Economic Activity from Satellite Imagery to Support City-Scale Decision-Making during COVID-19 Pandemic
- Predicting Food Security Outcomes Using CNNs for Satellite Tasking
- Crop yield Prediction with Deep Learning -> The necessary code for the paper Deep Gaussian Process for Crop Yield Prediction Based on Remote Sensing Data, AAAI 2017 (Best Student Paper Award in Computational Sustainability Track).
- https://github.com/taspinar/sidl/blob/master/notebooks/2_Detecting_road_and_roadtypes_in_sattelite_images.ipynb
- Measuring the Impacts of Poverty Alleviation Programs with Satellite Imagery and Deep Learning
- Traffic density estimation as a regression problem
- Crop Yield Prediction Using Deep Neural Networks and LSTM and Building a Crop Yield Prediction App in Senegal Using Satellite Imagery and Jupyter
- Advanced Deep Learning Techniques for Predicting Maize Crop Yield using Sentinel-2 Satellite Imagery
Super-resolution imaging is a class of techniques that enhance the resolution of an imaging system. Very hot topic of research.
- https://medium.com/the-downlinq/super-resolution-on-satellite-imagery-using-deep-learning-part-1-ec5c5cd3cd2 -> Nov 2016 blog post by CosmiQ Works with a nice introduction to the topic. Proposes and demonstrates a new architecture with perturbation layers with practical guidance on the methodology and code. Three part series
- Super Resolution for Satellite Imagery - srcnn repo
- TensorFlow implementation of "Accurate Image Super-Resolution Using Very Deep Convolutional Networks" adapted for working with geospatial data
- Random Forest Super-Resolution (RFSR repo) including sample data
- Super-Resolution (python) Utilities for managing large satellite images
- Enhancing Sentinel 2 images by combining Deep Image Prior and Decrappify. Repo for deep-image-prior and article on decrappify
Image fusion of low res multispectral with high res pan band.
- Several algorithms described in the ArcGIS docs, with the simplest being taking the mean of the pan and RGB pixel value.
- Does not require DL, classical algos suffice, see this notebook and this kaggle kernel
- https://github.com/mapbox/rio-pansharpen
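The "simple mean" method mentioned above reduces to a couple of lines of numpy, assuming the multispectral bands have already been resampled onto the pan grid and co-registered:

```python
import numpy as np

def simple_mean_pansharpen(ms, pan):
    """ms: (bands, height, width), pan: (height, width), on the same grid."""
    return (ms + pan[np.newaxis, :, :]) / 2.0
```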
Generative Adversarial Networks, or GANs, can be used to translate images, e.g. from SAR to RGB.
- How to Develop a Pix2Pix GAN for Image-to-Image Translation -> how to develop a Pix2Pix model for translating satellite photographs to Google map images. A good intro to GANS
- SAR to RGB Translation using CycleGAN -> uses a CycleGAN model in the ArcGIS API for Python
Measure surface contours.
- Wikipedia DEM article and phase correlation article
- Intro to depth from stereo
- Map terrain from stereo images to produce a digital elevation model (DEM) -> high resolution & paired images required, typically 0.3 m, e.g. Worldview or GeoEye.
- Process of creating a DEM here and here.
- ArcGIS can generate DEMs from stereo images
- https://github.com/MISS3D/s2p -> produces elevation models from images taken by high resolution optical satellites -> demo code on https://gfacciol.github.io/IS18/
- Automatic 3D Reconstruction from Multi-Date Satellite Images
- Semi-global matching with neural networks
- Predict the fate of glaciers
- monodepth - Unsupervised single image depth prediction with CNNs
- Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches
- Terrain and hydrological analysis based on LiDAR-derived digital elevation models (DEM) - Python package
- Phase correlation in scikit-image
- s2p -> a Python library and command line tool that implements a stereo pipeline which produces elevation models from images taken by high resolution optical satellites such as Pléiades, WorldView, QuickBird, Spot or Ikonos
- The Mapbox API provides images and elevation maps, article here
- Simple band math, e.g. `ndvi = np.true_divide((ir - r), (ir + r))`, but challenging due to the size of the imagery (a windowed-read sketch is given after this list)
- Example notebook local
- Landsat data in cloud optimised (COG) format analysed for NDVI with medium article here.
- Visualise water loss with Holoviews
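A windowed-read sketch of the NDVI calculation above, using rasterio block windows so the full image never needs to fit in memory; the file path and band indices are placeholders (e.g. Landsat 8 stores red in band 4 and NIR in band 5):

```python
import numpy as np
import rasterio

with rasterio.open("scene.tif") as src:
    profile = src.profile
    profile.update(count=1, dtype="float32")
    with rasterio.open("ndvi.tif", "w", **profile) as dst:
        # Iterate over the source's internal blocks to keep memory usage low
        for _, window in src.block_windows(1):
            red = src.read(4, window=window).astype("float32")
            nir = src.read(5, window=window).astype("float32")
            denom = nir + red
            # Avoid division by zero where both bands are empty
            ndvi = np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)
            dst.write(ndvi, 1, window=window)
```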
- Removing speckle noise from Sentinel-1 SAR using a CNN
- A dataset which is specifically made for deep learning on SAR and optical imagery is the SEN1-2 dataset, which contains corresponding patch pairs of Sentinel 1 (VV) and 2 (RGB) data. It is the largest manually curated dataset of S1 and S2 products, with corresponding labels for land use/land cover mapping, SAR-optical fusion, segmentation and classification tasks. Data: https://mediatum.ub.tum.de/1474000
- so2sat on Tensorflow datasets - So2Sat LCZ42 is a dataset consisting of co-registered synthetic aperture radar and multispectral optical image patches acquired by the Sentinel-1 and Sentinel-2 remote sensing satellites, and the corresponding local climate zones (LCZ) label. The dataset is distributed over 42 cities across different continents and cultural regions of the world.
- dinoSAR -> tools for InSAR processing on AWS, currently works with Sentinel-1 data to create Cloud-Optimized Geotiffs with accompanying STAC metadata.
- 4-ways-to-improve-class-imbalance discusses the pros and cons of several rebalancing techniques, applied to an aerial dataset. Reason to read: models can reach an accuracy ceiling where majority classes are easily predicted but minority classes poorly predicted. Overall model accuracy may not improve until steps are taken to account for class imbalance.
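One common rebalancing option is to weight the loss by inverse class frequency rather than resampling the data; a minimal sketch using sklearn (the label array is a dummy placeholder):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array([0, 0, 0, 0, 1, 0, 2, 0, 0, 1])   # dummy integer class ids
classes = np.unique(labels)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
class_weight = dict(zip(classes, weights))

# e.g. pass class_weight=class_weight to Keras model.fit()
print(class_weight)
```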
- GeoServer -> an open source server for sharing geospatial data
- Open Data Cube - serve up cubes of data https://www.opendatacube.org/
- https://terria.io/ for pretty catalogues
- Remote pixel
- Sentinel-hub eo-browser
- Large datasets may come in HDF5 format, can view with -> https://www.hdfgroup.org/downloads/hdfview/
- Climate data is often in netcdf format, which can be opened using xarray (a minimal example is shown after this list)
- The xarray docs list a number of ways that data can be stored and loaded.
- TileDB -> a 'Universal Data Engine' to store, analyze and share any data (beyond tables), with any API or tool (beyond SQL) at planet-scale (beyond clusters), open source and managed options. Recently hiring to work with xarray, dask, netCDF and cloud native storage
- BigVector database -> A fully-managed, highly-scalable, and cost-effective database for vectors. Vectorize structured data or orbital imagery and discover new insights
- Read about Serverless PostGIS on AWS Aurora
- Hub -> The fastest way to store, access & manage datasets with version-control for PyTorch/TensorFlow. Works locally or on any cloud. Read Faster Machine Learning Using Hub by Activeloop: A code walkthrough of using the hub package for satellite imagery
- A Comparison of Spatial Functions: PostGIS, Athena, PrestoDB, BigQuery vs RedShift
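A minimal xarray sketch for the netCDF case mentioned above; the file name, variable name and time range are placeholders - inspect the printed dataset to see what your file actually contains:

```python
import xarray as xr

ds = xr.open_dataset("climate_sample.nc")   # lazy open - data is read on access
print(ds)                                   # dimensions, coordinates, variables

# Select a variable, slice in time, then reduce
subset = ds["t2m"].sel(time=slice("2020-01-01", "2020-01-31"))
monthly_mean = subset.mean(dim="time")
```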
- https://www.cogeo.org/
- TLDR: A Cloud Optimized GeoTIFF (COG) is a regular GeoTIFF file, aimed at being hosted on an HTTP file server (or cloud object storage like S3), with an internal organization that enables more efficient workflows in the cloud. In particular they support HTTP range requests, enabling downloading of specific tiles rather than the full file (a minimal windowed-read sketch is given after this list). COGs work normally in GIS software such as QGIS.
- Intro presentation from Saheel Ahmed
- cog-best-practices
- rio-cogeo -> Cloud Optimized GeoTIFF (COG) creation and validation plugin for Rasterio.
- aiocogeo -> Asynchronous cogeotiff reader (python asyncio)
- Landsat data in cloud optimised (COG) format analysed for NDVI with medium article Cloud Native Geoprocessing of Earth Observation Satellite Data with Pangeo.
- Working with COGS and STAC in python using geemap
- Load, Experiment, and Download Cloud Optimized Geotiffs (COG) using Python with Google Colab -> short read which covers finding COGS, opening with Rasterio and doing some basic manipulations, all in a Colab Notebook.
- Exploring USGS Terrain Data in COG format using hvPlot -> local COG from public AWS bucket, open with rioxarray, visualise with hvplot. See the Jupyter notebook
- aws-lambda-docker-rasterio -> AWS Lambda Container Image with Python Rasterio for querying Cloud Optimised GeoTiffs. See this presentation
- cogbeam -> a python based Apache Beam pipeline, optimized for Google Cloud Dataflow, which aims to expedite the conversion of traditional GeoTIFFs into COGs
- Convert GOES-R data to Cloud Optimized Geotiffs -> an AWS Lambda for converting GOES-16 & GOES-17 L2 ABI Data to Cloud Optimized Geotiff using GDAL.
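The practical payoff of the range-request support described at the top of this list is that rasterio (via GDAL) can read just a window of a COG straight over HTTP; the URL below is a placeholder:

```python
import rasterio
from rasterio.windows import Window

url = "https://example.com/some_cog.tif"   # placeholder - substitute any public COG

with rasterio.open(url) as src:
    print(src.profile)                      # metadata comes from the file header
    window = Window(col_off=0, row_off=0, width=512, height=512)
    chip = src.read(1, window=window)       # only the bytes for this window are fetched
    print(chip.shape)
```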
The STAC specification provides a common metadata specification, API, and catalog format to describe geospatial assets, so they can be more easily indexed and discovered. The aim is that the catalogue is crawlable so it can be indexed by a search engine and make imagery discoverable, without requiring yet another API interface. A good place to start is to view the Planet Disaster Data catalogue, which has the catalogue source on Github and uses the stac-browser (a minimal pystac sketch is given after this list)
- Spec at https://github.com/radiantearth/stac-spec
- SpatioTemporal Asset Catalog API specification -> an API to make geospatial assets openly searchable and crawlable
- stacindex -> STAC Catalogs, Collections, APIs, Software and Tools
- Talk at https://docs.google.com/presentation/d/1O6W0lMeXyUtPLl-k30WPJIyH1ecqrcWk29Np3bi6rl0/edit#slide=id.p
- Chat https://gitter.im/SpatioTemporal-Asset-Catalog/Lobby
- Several useful repos on https://github.com/sat-utils
- Intake-STAC -> Intake-STAC provides an opinionated way for users to load Assets from STAC catalogs into the scientific Python ecosystem. It uses the intake-xarray plugin and supports several file formats including GeoTIFF, netCDF, GRIB, and OpenDAP.
- sat-utils/sat-search -> Sat-search is a Python 3 library and a command line tool for discovering and downloading publicly available satellite imagery using STAC compliant API
- franklin -> A STAC/OGC API Features Web Service focused on ease-of-use for end-users.
- stacframes -> A Python library for working with STAC Catalogs via Pandas DataFrames
- sat-api-pg -> A Postgres backed STAC API
- stactools -> Command line utility and Python library for STAC
- pystac -> Python library for working with any SpatioTemporal Asset Catalog (STAC)
- STAC Examples for Nightlights data -> minimal example STAC implementation for the Light Every Night dataset of all VIIRS DNB and DMSP-OLS nighttime satellite data
- stac-fields -> A minimal STAC library that contains a list of STAC fields with some metadata (title, unit, prefix) and helper functions
- stackstac -> Turn a STAC catalog into a dask-based xarray
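A minimal pystac sketch for crawling a static catalog such as the Planet Disaster Data catalogue mentioned above; the URL is a placeholder for any catalog.json:

```python
from pystac import Catalog

catalog = Catalog.from_file("https://example.com/catalog.json")  # placeholder URL

# Walk every item in the catalog and list its assets
for item in catalog.get_all_items():
    print(item.id, item.datetime)
    for key, asset in item.assets.items():
        print("  ", key, asset.href)
```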
What are companies doing?
- Overall trend to using cloud (i.e. AWS, Google or Azure) storage buckets for hosting imagery
- A serverless pipeline appears to be where companies are headed for routine compute tasks and even storage, whilst providing a Jupyter notebook approach for custom analysis. Checkout process Satellite data using AWS Lambda functions
- Traditional data formats aren't designed for processing, so new standards such as COGs are emerging
- Google provide training on how to use Apache Spark on Google Cloud Dataproc to distribute a computationally intensive (satellite) image processing task onto a cluster of machines -> https://google.qwiklabs.com/focuses/5834?parent=catalog
- Read about Planet on Google and also how Airbus use Google as the backend for their OneAtlas data portal
- This article discusses some of the available platforms
- Pangeo -> There is no single software package called “pangeo”; rather, the Pangeo project serves as a coordination point between scientists, software, and computing infrastructure. Includes open source resources for parallel processing using Dask and Xarray. Pangeo recently announced their 2.0 goals: pivoting away from directly operating cloud-based JupyterHubs, and towards education and research
- Airbus Sandbox -> will provide access to imagery
- Descartes Labs -> access to EO imagery from a variety of providers via python API
- DigitalGlobe have a cloud hosted Jupyter notebook platform called GBDX. Cloud hosting means they can guarantee the infrastructure supports their algorithms, and they appear to be close/closer to deploying DL.
- Planet have a Jupyter notebook platform which can be deployed locally.
- jupyteo.com -> hosted Jupyter environment with many features for working with EO data
- eurodatacube.com -> data & platform for EO analytics in Jupyter env, paid
- Unfolded Studio -> next generation geospatial analytics and visualization platform building on open source geospatial technologies including kepler.gl, deck.gl and H3
- up42 is a developer platform and marketplace, offering all the building blocks for powerful, scalable geospatial products
Generally a GPU is required for DL, and this section lists a couple of free Jupyter environments with GPU available. There is a good overview of online Jupyter development environments on the fast.ai site. I personally use Colab with data hosted on Google Drive
- Colaboratory notebooks with GPU as a backend, free for 12 hours at a time. Note that the GPU may be shared with other users, so if you aren't getting good performance try reloading.
- Also a pro tier for $10 a month -> https://colab.research.google.com/signup
- Tensorflow and pytorch can be installed
- Free to use
- GPU Kernels - may run for 1 hour
- Tensorflow, pytorch & fast.ai available
- Advantage that many datasets are already available
- Free tier available
- https://docs.paperspace.com/gradient/instances/free-instances
Once you have a trained model, how do you expose it to the internet and other services? Usually through a REST API. This section lists a number of training and hosting options. For an overview of this topic checkout Practical-Deep-Learning-on-the-Cloud
A conceptually simple and scalable approach to serving up deep learning model inference code is to wrap it in a REST API, implemented in python (typically using flask or FastAPI), and deploy it to a lambda function (a minimal FastAPI sketch is given below).
- Basic API: https://blog.keras.io/building-a-simple-keras-deep-learning-rest-api.html with code here
- Advanced API with request queuing: https://www.pyimagesearch.com/2018/01/29/scalable-keras-deep-learning-rest-api/
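A minimal FastAPI sketch of the wrap-the-model-in-a-REST-API approach; the model here is a stand-in returning a dummy statistic, and python-multipart must be installed for file uploads:

```python
import io

import numpy as np
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()

def fake_model(arr: np.ndarray) -> dict:
    # Placeholder for a real model - load trained weights once at startup
    return {"mean_pixel": float(arr.mean())}

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    image = Image.open(io.BytesIO(await file.read()))
    return fake_model(np.array(image))

# Run locally with: uvicorn main:app --reload
```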
TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. Multiple models, or indeed multiple versions of the same model, can be served simultaneously. TensorFlow Serving comes with a scheduler that groups individual inference requests into batches for joint execution on a GPU
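Once a model is being served, clients POST JSON to the standard TensorFlow Serving REST endpoint; a sketch of a client call, assuming a model named `mymodel` is served locally on port 8501:

```python
import json

import numpy as np
import requests

chip = np.random.rand(1, 224, 224, 3).tolist()   # dummy image batch

response = requests.post(
    "http://localhost:8501/v1/models/mymodel:predict",
    data=json.dumps({"instances": chip}),
)
print(response.json())
```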
- Quickread: TerrAvion Uses AWS to Help Farmers Improve Crop Yields Through High-Resolution Aerial Images
- Sagemaker is a hosted Jupyter environment with easy deployment of models. Read bring-your-own-deep-learning-framework-to-amazon-sagemaker-with-model-server-for-apache-mxnet. I have personally found the Sagemaker UI to be very buggy, and have switched to using deep learning AMIs. These are just EC2 instances with deep learning frameworks preinstalled, and provide the same Jupyter environment as Sagemaker. They do require more setup from the user but in return allow access to the underlying hardware, which makes debugging issues much more straightforward. There is a good guide to setting up your AMI instance on the Keras blog
- Rekognition custom labels is a 'code free' platform that includes tools for annotating data and performing training and inferencing. Read Training models using Satellite (Sentinel-2) imagery on Amazon Rekognition Custom Labels and see the repo
- Lambda functions are stateless functions which can be run at scale for low cost, read cutting-costs-with-aws-lambda-for-highly-scalable-image-processing. Limited run time and storage. For state management combine with AWS Step functions. GeoLambda provides public Docker images and AWS Lambda Layers containing common geospatial native libraries. GeoLambda contains the libraries for GDAL, Proj, GEOS, GeoTIFF, HDF4/5, SZIP, NetCDF, OpenJPEG, WEBP, ZSTD, and others. Alternative example dockerfile showing install of GDAL.
- Batch is suitable for longer running tasks, deploy as docker containers, typically hosting a long running python script, or even parameterize and run jupyter notebooks on batch using Papermill.
- https://github.com/developmentseed/chip-n-scale-queue-arranger
- an orchestration pipeline for running machine learning inference at scale
- Supports fast.ai models
- ArcGIS -> mapping and analytics software, with both local and cloud hosted options. Checkout Geospatial deep learning with arcgis.learn. It appears ArcGIS are using fastai for their deep learning backend. ArcGIS Jupyter Notebooks in ArcGIS Enterprise are built to run big data analysis, deep learning models, and dynamic visualization tools.
A note on licensing: The two general types of licenses for open source are copyleft and permissive. Copyleft requires that subsequent derived software products also carry the license forward, e.g. the GNU Public License (GNU GPLv3). For permissive, options to modify and use the code as one please are more open, e.g. MIT & Apache 2. Checkout choosealicense.com/
- QGIS -> Create, edit, visualise, analyse and publish geospatial information. Python scripting and plugins. Open source alternative to ArcGIS.
- Orfeo toolbox - remote sensing toolbox with python API (just a wrapper to the C code). Performs activities such as pansharpening, ortho-rectification, image registration, image segmentation & classification. Not much documentation.
- QUICK TERRAIN READER - view DEMS, Windows
- dl-satellite-docker -> docker files for geospatial analysis, including tensorflow, pytorch, gdal, xgboost...
- AIDE V2 - Tools for detecting wildlife in aerial images using active learning
- Land Cover Mapping web app from Microsoft
- Solaris -> An open source ML pipeline for overhead imagery by CosmiQ Works, similar to Rastervision but with some unique very cool features
- openSAR -> Synthetic Aperture Radar (SAR) Tools and Documents from Earth Big Data LLC (http://earthbigdata.com/)
- terrascope viewer for browsing Sentinel imagery on a map
- qhub -> QHub enables teams to build and maintain a cost effective and scalable compute/data science platform in the cloud.
- imagej -> a very versatile image viewer and processing program
- Geo Data Viewer extension for VSCode which enables opening and viewing various geo data formats with nice visualisations
- Datasette is a tool for exploring and publishing data as an interactive website and accompanying API, with SQLite backend. Various plugins extend its functionality, for example to allow displaying geospatial info, render images (useful for thumbnails), and add user authentication.
- Photoprism is a privately hosted app for browsing, organizing, and sharing your photo collection, with support for tiffs
- dbeaver is a free universal database tool and SQL client with geospatial features
- Grafana can be used to make interactive dashboards, checkout this example showing Point data. Note there is an AWS managed service for Grafana
- So important this pair gets their own section. GDAL is THE command line tool for reading and writing raster and vector geospatial data formats. If you are using python you will probably want to use Rasterio which provides a pythonic wrapper for GDAL
- GDAL and on twitter
- GDAL is a dependency of Rasterio and can be difficult to build and install. I recommend using conda, brew (on OSX) or docker in these situations
- GDAL docker quickstart: `docker pull osgeo/gdal`, then `docker run --rm -v $(pwd):/data/ osgeo/gdal gdalinfo /data/cog.tiff`
- Even Rouault maintains GDAL, please consider sponsoring him
- Rasterio -> reads and writes GeoTIFF and other raster formats and provides a Python API based on Numpy N-dimensional arrays and GeoJSON. There are a variety of plugins that extend Rasterio functionality.
- rio-cogeo -> Cloud Optimized GeoTIFF (COG) creation and validation plugin for Rasterio.
- rioxarray -> geospatial xarray extension powered by rasterio
- aws-lambda-docker-rasterio -> AWS Lambda Container Image with Python Rasterio for querying Cloud Optimised GeoTiffs. See this presentation
- godal -> golang wrapper for GDAL
- Write rasterio to xarray
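Tying the rasterio and xarray items above together, a minimal rioxarray sketch for opening, reprojecting and re-writing a GeoTIFF; the file names and target CRS are placeholders:

```python
import rioxarray

# Open a GeoTIFF as an xarray DataArray with CRS and transform attached
da = rioxarray.open_rasterio("scene.tif", masked=True)
print(da.rio.crs, da.rio.resolution())

# Reproject and write back out
reprojected = da.rio.reproject("EPSG:4326")
reprojected.rio.to_raster("scene_wgs84.tif")
```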
- Dask works with your favorite PyData libraries to provide performance at scale for the tools you love -> checkout Read and manipulate tiled GeoTIFF datasets and accelerating-science-dask. Coiled is a managed Dask service.
- xarray -> N-D labeled arrays and datasets. Read Handling multi-temporal satellite images with Xarray. Checkout xarray_leaflet for tiled map plotting
- xarray-spatial -> Fast, Accurate Python library for Raster Operations. Implements algorithms using Numba and Dask, free of GDAL
- Geowombat -> geo-utilities applied to air- and space-borne imagery, uses Rasterio, Xarray and Dask for I/O and distributed computing with named coordinates
- NumpyTiles -> a specification for providing multiband full-bit depth raster data in the browser
- Zarr -> Zarr is a format for the storage of chunked, compressed, N-dimensional arrays. Zarr depends on NumPy
- gcsfs for Google Cloud Storage file-system -> Pythonic file-system interface for Google Cloud Storage
- satpy - a python library for reading and manipulating meteorological remote sensing data and writing it to various image and data file formats
- geemap: A Python package for interactive mapping with Google Earth Engine, ipyleaflet, and ipywidgets. See the Landsat timelapse example
- WaterDetect -> an end-to-end algorithm to generate open water cover mask, specially conceived for L2A Sentinel 2 imagery. It can also be used for Landsat 8 images and for other multispectral clustering/segmentation tasks.
- DeepHyperX -> A Python/pytorch tool to perform deep learning experiments on various hyperspectral datasets.
- landsat_ingestor -> Scripts and other artifacts for landsat data ingestion into Amazon public hosting
- PyShp -> The Python Shapefile Library (PyShp) reads and writes ESRI Shapefiles in pure Python
- s2p -> a Python library and command line tool that implements a stereo pipeline which produces elevation models from images taken by high resolution optical satellites such as Pléiades, WorldView, QuickBird, Spot or Ikonos
- TorchSat is an open-source deep learning framework for satellite imagery analysis based on PyTorch.
- torchvision-enhance -> Enhance PyTorch vision for semantic segmentation, multi-channel images and TIF file,...
- felicette -> Satellite imagery for dummies. Generate JPEG earth imagery from coordinates/location name with publicly available satellite data.
- EarthPy -> A set of helper functions to make working with spatial data in open source tools easier. Read Exploratory Data Analysis (EDA) on Satellite Imagery Using EarthPy
- detectree -> Tree detection from aerial imagery
- pylandstats -> compute landscape metrics
- ipyearth -> An IPython Widget for Earth Maps
- arosics -> Perform automatic subpixel co-registration of two satellite image datasets based on an image matching approach
- pygeometa -> provides a lightweight and Pythonic approach for users to easily create geospatial metadata in standards-based formats using simple configuration files
- pesto -> PESTO is designed to ease the process of packaging a Python algorithm as a processing web service into a docker image. It contains shell tools to generate all the boiler plate to build an OpenAPI processing web service compliant with the Geoprocessing-API. By Airbus Defence And Space
- folium -> a python wrapper to the excellent leaflet.js which makes it easy to visualize data that’s been manipulated in Python on an interactive leaflet map. Also checkout the streamlit-folium component for adding folium maps to your streamlit apps
- GEOS -> Google Earth Overlay Server (GEOS) is a python-based server for creating Google Earth overlays of tiled maps. You can also display maps in the web browser, measure distances and print maps as high-quality PDFs.
- GeoDjango intends to be a world-class geographic Web framework. Its goal is to make it as easy as possible to build GIS Web applications and harness the power of spatially enabled data. Some features of GDAL are supported.
- tifffile -> Read and write TIFF files
- xtiff -> A small Python 3 library for writing multi-channel TIFF stacks
- dask-geopandas -> Parallel GeoPandas with Dask
- geotiff -> A noGDAL tool for reading and writing geotiff files
- rasterstats -> summarize geospatial raster datasets based on vector geometries
- turfpy -> a Python library for performing geospatial data analysis which reimplements turf.js
- GatorSense Hyperspectral Image Analysis Toolkit -> This repo contains algorithms for Anomaly Detectors, Classifiers, Dimensionality Reduction, Endmember Extraction, Signature Detectors, Spectral Indices
- hvplot -> A high-level plotting API for the PyData ecosystem built on HoloViews. Allows overlaying data on map tiles, see Exploring USGS Terrain Data in COG format using hvPlot
- Pyviz examples include several interesting geospatial visualisations
- napari -> napari is a fast, interactive, multi-dimensional image viewer for Python. It’s designed for browsing, annotating, and analyzing large multi-dimensional images. By integrating closely with the Python ecosystem, napari can be easily coupled to leading machine learning and image analysis tools. Example viewing Landsat-8 imagery. Note that to view a 3GB COG I had to install the napari-tifffile-reader plugin.
- pixel-adjust -> Interactively select and adjust specific pixels or regions within a single-band raster. Built with rasterio, matplotlib, and panel.
- Plotly Dash can be used for making interactive dashboards
If you are performing object detection you will need to annotate images with bounding boxes. Check that your annotation tool of choice supports large image (likely geotiff) files, as not all will. Note that GeoJSON is widely used by remote sensing researchers but this annotation format is not commonly supported in general computer vision frameworks, and in practice you may have to convert the annotation format to use the data with your chosen framework. There are both closed and open source tools for creating and converting annotation formats (a minimal GeoJSON-to-bounding-box sketch is given at the end of this list).
- A long list of tools is here
- Labelme Image Annotation for Geotiffs -> uses Labelme
- Label Maker -> downloads OpenStreetMap QA Tile information and satellite imagery tiles and saves them as an `.npz` file for use in machine learning training
- CVAT is worth investigating, and has an open issue to support large TIFF files. This article on Roboflow gives a good intro to CVAT.
- Deep Block is a general purpose AI platform that includes a tool for COCOJSON export for aerial imagery. Checkout this video
- AWS supports image annotation via the Rekognition Custom Labels console
- Roboflow can be used to convert between annotation formats
- Other annotation tools include supervise.ly (web UI), rectlabel (OSX desktop app) and VoTT
- Label Studio is a multi-type data labeling and annotation tool with standardized output format
- Deeplabel is a cross-platform tool for annotating images with labelled bounding boxes. Deeplabel also supports running inference using state-of-the-art object detection models like Faster-RCNN and YOLOv4. With support out-of-the-box for CUDA, you can quickly label an entire dataset using an existing model.
- Alturos.ImageAnnotation is a collaborative tool for labeling image data on S3 for yolo
- rectlabel is a desktop app for MacOS to label images for bounding box object detection and segmentation
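As an example of the GeoJSON-to-framework conversion mentioned in the introduction to this section, here is a hedged sketch that turns polygon annotations (assumed to be in the same CRS as the raster) into pixel-space bounding boxes; the file names are placeholders:

```python
import json

import rasterio
from rasterio.transform import rowcol

# Get the raster's affine transform to convert map coordinates to pixels
with rasterio.open("scene.tif") as src:
    transform = src.transform

with open("labels.geojson") as f:
    features = json.load(f)["features"]

boxes = []
for feature in features:
    xs, ys = zip(*feature["geometry"]["coordinates"][0])   # polygon exterior ring
    rows, cols = rowcol(transform, xs, ys)
    boxes.append((min(cols), min(rows), max(cols), max(rows)))  # x_min, y_min, x_max, y_max
```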
- Adam Van Etten is doing interesting things in object detection and segmentation
- Andrew Cutts cohosts the Scene From Above podcast and has many interesting repos
- Ankit Kariryaa published a recent nature paper on tree detection
- Chris Holmes is doing great things at Planet
- Christoph Rieke maintains a very popular imagery repo and has published his thesis on segmentation
- Even Rouault maintains several of the most critical tools in this domain such as GDAL, please consider sponsoring him
- Jake Shermeyer many interesting repos
- Mort Canty is an expert in change detection
- Nicholas Murray is an Australia-based scientist with a focus on delivering the science necessary to inform large scale environmental management and conservation
- Qiusheng Wu is an Assistant Professor in the Department of Geography at the University of Tennessee
- Robin Wilson is a former academic who is very active in the satellite imagery space
For a full list of companies, on and off Github, checkout awesome-geospatial-companies. The following lists companies with interesting Github profiles.
- Airbus Defence And Space
- Azavea -> lots of interesting repos around STAC
- Development Seed
- Descartes Labs
- Digital Globe
- Mapbox -> thanks for Rasterio!
- Planet Labs -> thanks for COGS!
- Manning: Monitoring Changes in Surface Water Using Satellite Image Data
- Automating GIS processes includes a lesson on automating raster data processing
- Image Analysis, Classification and Change Detection in Remote Sensing With Algorithms for Python, Fourth Edition, By Morton John Canty -> code here
- Pangeo discourse lists multiple jobs, global
Processing on satellite allows less data to be downlinked. E.g. a super-resolution image might take 4-8 images to generate, then only a single image is downlinked.
- Lockheed Martin and USC to Launch Jetson-Based Nanosatellite for Scientific Research Into Orbit - Aug 2020 - One app that will run on the GPU-accelerated satellite is SuperRes, an AI-based application developed by Lockheed Martin, that can automatically enhance the quality of an image.
- Intel to place movidius in orbit to filter images of clouds at source - Oct 2020 - Getting rid of these images before they’re even transmitted means that the satellite can actually realize a bandwidth savings of up to 30%
- Whilst not involving neural nets the PyCubed project gets a mention here as it is putting python on space hardware such as the V-R3x
My background is in optical physics, and I hold a PhD from Cambridge on the topic of localised surface Plasmons. Since academia I have held a variety of roles, including doing research at Sharp Labs Europe, developing optical systems at Surrey Satellites (SSTL), and working at an IOT startup. It was whilst at SSTL that I started this repository as a personal resource. Over time I have steadily gravitated towards data analytics and software engineering with python, and I now work as a senior data scientist at Satellite Vu. Please feel free to connect with me on Twitter & LinkedIn, and please do let me know if this repository is useful to your work.