Optimization of the Scale Invariant Feature Transform (SIFT) for UAV Photogrammetry Snow Depth Mapping
Abstract
Unpiloted aerial vehicle (UAV) photogrammetry provides a low-cost method of observing snow depth and snow-covered area at high spatial and temporal resolution. Elevation models are derived from overlapping UAV imagery through a workflow that combines image alignment, camera pose estimation, and triangulation. This photogrammetric workflow is referred to as structure from motion, as 3D structure is estimated by scanning over a scene with overlapping 2D images. The SIFT algorithm is an important initial step in the structure from motion workflow: to estimate camera pose and align overlapping images, key points must be identified that are common amongst images of varying perspective and scale. Although key points can be identified manually, this is impractical for larger datasets, and several automated algorithmic approaches have therefore been proposed, including the Histogram of Oriented Gradients (HOG), Speeded-Up Robust Features (SURF), and, most widely used, the Scale Invariant Feature Transform (SIFT). The parameters of the SIFT algorithm were initially refined against an image library composed mostly of man-made objects. Here, we optimize the SIFT parameters to achieve increased detection and matching of key points in images collected over snow-covered terrain.
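The tunable SIFT parameters referred to above govern the construction of a difference-of-Gaussians (DoG) scale space and the contrast threshold applied to candidate extrema. The sketch below is a minimal, pure-NumPy illustration of that detection stage, not the authors' implementation; the parameter names (`sigma`, `n_scales`, `contrast_threshold`) and defaults mirror commonly cited SIFT values and are assumptions for illustration.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with a truncated kernel (pure NumPy)."""
    radius = max(1, int(3.0 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # Convolve rows, then columns (Gaussian blur is separable).
    blurred = np.apply_along_axis(np.convolve, 0, img, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 1, blurred, kernel, mode="same")

def dog_keypoints(img, sigma=1.6, n_scales=5, contrast_threshold=0.04):
    """Detect scale-space extrema in a difference-of-Gaussians stack.

    Returns (row, col, scale) triples where a pixel is the maximum or
    minimum of its 3x3x3 neighborhood and exceeds the contrast threshold.
    These are the parameters one would tune for snow imagery: a lower
    contrast_threshold admits weaker features on low-texture surfaces.
    """
    k = np.sqrt(2.0)  # scale multiplier between adjacent blur levels
    blurs = [gaussian_blur(img, sigma * k**i) for i in range(n_scales)]
    dog = np.stack([blurs[i + 1] - blurs[i] for i in range(n_scales - 1)])
    keypoints = []
    for level in range(1, dog.shape[0] - 1):          # interior scales only
        for i in range(1, dog.shape[1] - 1):          # interior rows
            for j in range(1, dog.shape[2] - 1):      # interior columns
                v = dog[level, i, j]
                if abs(v) < contrast_threshold:
                    continue  # reject low-contrast candidates
                patch = dog[level - 1:level + 2, i - 1:i + 2, j - 1:j + 2]
                if v == patch.max() or v == patch.min():
                    keypoints.append((i, j, sigma * k**level))
    return keypoints

# Synthetic test scene: a single Gaussian blob on a flat background.
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-((xx - 32.0)**2 + (yy - 32.0)**2) / (2.0 * 3.0**2))
kps = dog_keypoints(blob)
```

In a production structure-from-motion pipeline this stage would be followed by sub-pixel refinement, edge-response rejection, orientation assignment, and descriptor matching; the sketch isolates only the detection parameters that this work tunes.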
- Publication: AGU Fall Meeting Abstracts
- Pub Date: December 2021
- Bibcode: 2021AGUFM.C45A0987H