Toward mapping flood impacts in urban areas using multi-sensor satellite data fusion
Abstract
Floods are increasingly common in urban areas, where more than half the global population resides, and rapid urbanization has increased the exposure of people and infrastructure to flood impacts. Advances in the spatial and temporal resolution of remote sensing satellites, deep learning techniques capable of extracting detailed spatial features, and cloud computing offer an opportunity to tackle the challenge of urban flood monitoring. In this research, we are developing novel sensor fusion techniques that take advantage of the expanding availability of sensors with different spatial and temporal resolutions to monitor urban flooding. To do so, we combine public optical (MODIS, Landsat, Sentinel-2), radar (Sentinel-1), and passive microwave (AMSR, SMAP) imagery with commercial PlanetScope data, with the eventual goal of generating 30 m per-pixel probabilities of inundation in urban areas globally every 3 days. We are developing two models: (i) a convolutional network that ingests multi-source satellite imagery at its native resolutions, and (ii) a generative adversarial network (GAN) that downscales and fuses multi-source satellite imagery to predict per-pixel inundation probability maps. Additional model inputs include flood conditioning factors such as elevation and precipitation. Training and validation data are labeled annotations of urban flood inundation derived from PlanetScope imagery of events at ten study sites around the world that span a range of flood types, urban densities, and biomes. Here we will present results from initial model experiments comparing two approaches to segmenting water from multiple satellites: adding sensors at their native spatial resolutions (ranging from 10 m to 25 km) versus first downscaling all inputs to a common 30 m resolution before segmenting flood maps.
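The abstract does not include implementation details, so the following is only a minimal, hypothetical sketch (in PyTorch) of the two fusion strategies being compared: resampling all sensors to a common 30 m grid before segmentation versus encoding each sensor at its native resolution and fusing feature maps. The channel counts, tile sizes, and network depths are illustrative assumptions, not the authors' actual architectures.

```python
# Hedged sketch of two multi-sensor fusion strategies for per-pixel inundation
# probabilities. Sensor channel counts and tile sizes below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CommonGridFusion(nn.Module):
    """Strategy A: resample every sensor to a common grid, then segment."""

    def __init__(self, in_channels, out_size=256):
        super().__init__()
        self.out_size = out_size
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # per-pixel inundation logit
        )

    def forward(self, sensors):
        # sensors: list of (B, C_i, H_i, W_i) tensors at native resolutions
        resampled = [
            F.interpolate(x, size=(self.out_size, self.out_size),
                          mode="bilinear", align_corners=False)
            for x in sensors
        ]
        fused = torch.cat(resampled, dim=1)  # stack bands on the common grid
        return torch.sigmoid(self.net(fused))


class NativeResolutionFusion(nn.Module):
    """Strategy B: encode each sensor at its native resolution, fuse features."""

    def __init__(self, channels_per_sensor, out_size=256):
        super().__init__()
        self.out_size = out_size
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, 16, 3, padding=1), nn.ReLU())
            for c in channels_per_sensor
        )
        self.head = nn.Conv2d(16 * len(channels_per_sensor), 1, 1)

    def forward(self, sensors):
        feats = []
        for enc, x in zip(self.encoders, sensors):
            f = enc(x)  # convolutional features on the sensor's native grid
            feats.append(F.interpolate(f, size=(self.out_size, self.out_size),
                                       mode="bilinear", align_corners=False))
        return torch.sigmoid(self.head(torch.cat(feats, dim=1)))


if __name__ == "__main__":
    # Hypothetical tiles: Sentinel-2 (10 m), Sentinel-1 (10 m), a coarse sensor
    s2 = torch.randn(1, 4, 768, 768)
    s1 = torch.randn(1, 2, 768, 768)
    coarse = torch.randn(1, 2, 16, 16)
    sensors = [s2, s1, coarse]
    print(CommonGridFusion(in_channels=8)(sensors).shape)        # (1, 1, 256, 256)
    print(NativeResolutionFusion([4, 2, 2])(sensors).shape)      # (1, 1, 256, 256)
```

Both toy models output a probability map on a single target grid; the experimental question in the abstract is whether resampling happens before the network sees the data (Strategy A) or after per-sensor feature extraction (Strategy B).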
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2021
- Bibcode: 2021AGUFM.B15I1537F