A Convolutional Neural Network Approach to Segmenting Smallholder Agriculture
Abstract
From 2015 to 2016, the proportion of undernourished people in sub-Saharan Africa increased, reversing a decades-long trend and stressing an already insufficient food supply. In the context of persistent threats to food security - including drought, climate change, and population growth - it is imperative to locate where smallholders are farming in order to understand how much cropland there is, what is being farmed, and whether such farming is productive. Yet the resolution of publicly available satellite imagery is not sufficient to map individual smallholder fields, which are typically under 2 hectares. Furthermore, variable field size, variable field shape, and image noise make delineating smallholder fields with remote sensing a difficult problem. Recent work by Debats et al. (2016) demonstrated that a random forest classifier applied to 8 Worldview-2 scenes was highly effective in discriminating rainfed subsistence, rainfed commercial, and center pivot agriculture (mean AUC values for each class > 0.90). These results also showed that multi-temporal inputs were more important than spectral bands beyond RGB and NIR. Limitations remain, however: the best random forest model returned prediction accuracies substantially below human-level perception for 2 complex scenes, with lower AUC values (0.82 and 0.83). More generally, random forest requires useful features (image transformations) to first be extracted manually, rather than learned in an optimized fashion. In contrast, convolutional neural network (CNN) approaches are engineered to learn features and return segmented categories. Their superiority for delineating objects in digital photography is well documented in the computer vision literature, but these methods are only just beginning to be applied to fine-scale land cover mapping. We compare the segmented results of Mask R-CNN (an object segmentation algorithm using CNNs), Debats et al.'s (2016) method plus additional segmentation procedures, and an ensemble of both models to map smallholder agriculture in Southern Africa. We test on Worldview-2 and Planet imagery, a cost-effective, daily image source. Ultimately, we show which method and image source is superior for distinguishing smallholder fields. Map products generated from these methods will allow for synoptic analyses of field size, number, and farming activity through time.
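The per-class AUC values quoted above come from one-vs-rest ROC analysis of each agriculture class. As a self-contained illustration of that metric only - using toy labels and scores, not the study's pixels or classifiers - AUC can be computed directly from its rank-statistic definition:

```python
def binary_auc(labels, scores):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive example scores higher than a randomly
    chosen negative one (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(p > n for p in pos for n in neg)
    ties = sum(p == n for p in pos for n in neg)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy one-vs-rest check for a single class (e.g. "center pivot" vs. rest):
labels = [0, 0, 1, 1]           # 1 = pixel belongs to the class
scores = [0.1, 0.4, 0.35, 0.8]  # classifier's score for that class
print(binary_auc(labels, scores))  # 0.75
```

An AUC of 1.0 means the classifier ranks every in-class pixel above every out-of-class pixel; 0.5 is chance, which is why values of 0.82-0.83 on the complex scenes mark a real but imperfect signal.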
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2018
- Bibcode: 2018AGUFM.B31I2597A
- Keywords:
- 1632 Land cover change, GLOBAL CHANGE
- 1640 Remote sensing, GLOBAL CHANGE
- 1855 Remote sensing, HYDROLOGY
- 1942 Machine learning, INFORMATICS