Pseudo-real-time retinal layer segmentation for high-resolution adaptive optics optical coherence tomography
Abstract
We present a pseudo-real-time retinal layer segmentation method for high-resolution sensorless adaptive optics optical coherence tomography (SAO-OCT). The method is based on Dijkstra's algorithm: it combines pixel intensity with the vertical image gradient to find a minimum-cost path in a geometric graph formulation within a limited search region. Six retinal layer boundaries are segmented iteratively, in order of prominence, so the segmentation time is strongly correlated with the number of layers segmented. Our program permits en face images to be extracted during data acquisition to guide depth-specific focus control and depth-dependent aberration correction for high-resolution SAO-OCT systems. The average processing times of the entire pipeline for segmenting six layers in a retinal B-scan of 496×400 pixels and 240×400 pixels are around 25.60 ms and 13.76 ms, respectively. When the number of layers segmented is reduced to two, a 240×400 pixel image requires 8.26 ms.
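The graph-based approach in the abstract can be illustrated with a minimal sketch: treat each pixel as a graph node, assign low cost where the vertical gradient indicates a dark-to-bright layer transition, and run Dijkstra's algorithm left to right across the B-scan, restricting vertical moves between adjacent columns to keep the search region small. The cost function and ±1 pixel step limit below are illustrative assumptions, not the paper's exact implementation.

```python
import heapq

def segment_boundary(image):
    """Trace one layer boundary across a 2-D intensity image (rows x cols)
    as a minimum-cost left-to-right path. Cost is low where the vertical
    gradient (dark-to-bright transition) is strong; this is a simplified
    stand-in for the paper's intensity + gradient cost."""
    rows, cols = len(image), len(image[0])

    def cost(r, c):
        # Downward gradient at (r, c); clamp so cost stays in [0, 2].
        grad = image[min(r + 1, rows - 1)][c] - image[r][c]
        return 1.0 - max(min(grad, 1.0), -1.0)

    # Dijkstra from a virtual source connected to every row of column 0.
    pq = [(cost(r, 0), (r, 0)) for r in range(rows)]
    heapq.heapify(pq)
    dist = {node: d for d, node in pq}
    prev = {}
    visited = set()
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) in visited:
            continue
        visited.add((r, c))
        if c == cols - 1:
            # Reached the right edge: backtrack to recover the boundary.
            path, node = [], (r, c)
            while node is not None:
                path.append(node)
                node = prev.get(node)
            return [row for row, _ in reversed(path)]
        # Advance one column, allowing a vertical step of at most 1 pixel
        # (a limited search region keeps the graph sparse and fast).
        for dr in (-1, 0, 1):
            nr = r + dr
            if 0 <= nr < rows:
                nd = d + cost(nr, c + 1)
                if nd < dist.get((nr, c + 1), float("inf")):
                    dist[(nr, c + 1)] = nd
                    prev[(nr, c + 1)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, c + 1)))
    return []

# Synthetic B-scan: a bright band starting at row 3 on a dark background,
# so the strongest dark-to-bright transition sits at row 2.
img = [[0.0] * 5 for _ in range(6)]
for c in range(5):
    img[3][c] = img[4][c] = 1.0
print(segment_boundary(img))  # -> [2, 2, 2, 2, 2]
```

Iterating this search once per boundary, masking out already-found layers and shrinking the search region around each, mirrors the abstract's layer-by-layer segmentation and explains why the runtime scales with the number of layers.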
- Publication:
- arXiv e-prints
- Pub Date:
- April 2020
- DOI:
- 10.48550/arXiv.2004.05264
- arXiv:
- arXiv:2004.05264
- Bibcode:
- 2020arXiv200405264J
- Keywords:
- Electrical Engineering and Systems Science - Image and Video Processing