Accurate retinal layer segmentation, especially of the peripapillary retinal nerve fiber layer (RNFL), is critical for the diagnosis of ophthalmic diseases. However, due to the complex morphology of the peripapillary region, most existing methods focus on segmenting the macular region and cannot be directly applied to peripapillary retinal optical coherence tomography (OCT) images. In this paper, we propose a novel graph convolutional network (GCN)-assisted segmentation framework based on a U-shape neural network for peripapillary retinal layer segmentation in OCT images. We argue that the strictly stratified structure of the retinal layers, together with the centered optic disc, makes the peripapillary region an ideal target for a GCN. Specifically, a graph reasoning block is inserted between the encoder and decoder of the U-shape neural network to conduct spatial reasoning. In this way, the peripapillary retina in OCT images is segmented into nine layers, including the RNFL. The proposed method was trained and tested on our collected dataset of peripapillary retinal OCT images. Experimental results showed that our segmentation method outperformed other state-of-the-art methods. In particular, compared with ReLayNet, the average and RNFL Dice coefficients were improved by 1.2% and 2.6%, respectively.
KEYWORDS: Optical coherence tomography, Image segmentation, Retina, Eye, Image fusion, Visualization, Convolution, Ophthalmology, Network architectures
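The abstract describes a graph reasoning block placed between the encoder and decoder of a U-shape network. As a rough illustration of that idea (not the authors' exact design), the sketch below projects pixel features into a small set of graph nodes, applies one GCN-style propagation step, and projects the result back onto the feature grid with a residual connection. All projections and weights here are random placeholders standing in for learned parameters.

```python
import numpy as np

def graph_reasoning_block(features, n_nodes=8, rng=None):
    """Hypothetical sketch of a graph reasoning block: project encoder
    features (C, H, W) to n_nodes graph nodes, run one GCN layer
    ReLU(A @ X @ W), and re-project with a residual add. Illustrative
    only; the paper's actual projections and adjacency are learned."""
    rng = np.random.default_rng(0) if rng is None else rng
    c, h, w = features.shape
    x = features.reshape(c, h * w)                   # (C, HW) pixel features
    # Placeholder for a learned pixel-to-node projection.
    proj = rng.standard_normal((n_nodes, h * w)) / np.sqrt(h * w)
    nodes = proj @ x.T                               # (N, C) node features
    # One GCN layer over a fully connected, row-normalized node graph.
    adj = np.ones((n_nodes, n_nodes)) / n_nodes      # normalized adjacency A
    weight = rng.standard_normal((c, c)) / np.sqrt(c)  # placeholder W
    nodes = np.maximum(adj @ nodes @ weight, 0.0)    # ReLU(A @ X @ W)
    # Re-project node features onto the pixel grid and add residually.
    out = x + (proj.T @ nodes).T                     # (C, HW)
    return out.reshape(c, h, w)
```

In a real network the projection, adjacency, and weight matrices would be trainable tensors inside the U-shape model, and the block would sit at the encoder-decoder bottleneck so the graph reasoning operates on the most abstract feature maps.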
The choroid is an important structure of the eye, and the choroid thickness distribution estimated from optical coherence tomography (OCT) images plays a vital role in the analysis of many retinal diseases. This paper proposes a novel group-wise attention fusion network (referred to as GAF-Net) to segment the choroid layer, which works effectively for both normal and pathological myopia retinas. Currently, most networks process all feature maps in the same layer uniformly, which leads to unsatisfactory choroid segmentation results. To address this, GAF-Net introduces a group-wise channel module (GCM) and a group-wise spatial module (GSM) to fuse group-wise information. The GCM uses channel information to guide the fusion of group-wise context information, while the GSM uses spatial information to guide the fusion of group-wise context information. Furthermore, we adopt a joint loss to address data imbalance and the uneven choroid target area. Experimental evaluations on a dataset composed of 1650 clinically obtained B-scans show that the proposed GAF-Net can achieve a Dice similarity coefficient of 95.21±0.73%.
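The group-wise channel idea above can be sketched minimally: split the feature channels into groups, compute a squeeze-style statistic per channel within each group, and use a sigmoid gate to reweight that group's channels before fusing. This is a parameter-free stand-in for the GCM; the actual module presumably uses learned attention weights, and the group count here is an arbitrary choice for illustration.

```python
import numpy as np

def group_wise_channel_fusion(features, n_groups=4):
    """Hedged sketch of group-wise channel attention (GCM-like):
    split (C, H, W) features into n_groups channel groups, gate each
    group's channels with a global-average squeeze followed by a
    sigmoid, then concatenate the reweighted groups back together."""
    c, h, w = features.shape
    assert c % n_groups == 0, "channel count must divide evenly into groups"
    groups = features.reshape(n_groups, c // n_groups, h, w)
    # Squeeze: per-channel global average pooled within each group.
    stats = groups.mean(axis=(2, 3), keepdims=True)   # (G, C/G, 1, 1)
    gates = 1.0 / (1.0 + np.exp(-stats))              # sigmoid gate in (0, 1)
    fused = groups * gates                            # channel-wise reweighting
    return fused.reshape(c, h, w)
```

A learned version would replace the parameter-free gate with a small fully connected layer per group, and a GSM-style counterpart would compute the gate over spatial positions rather than channels.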