UTHealth - Fundus and Synthetic OCT-A Dataset (UT-FSOCTA)
Creators
- 1. University of Texas Health Science Center at Houston
- 2. University of Wisconsin-Madison
Description
Introduction
Vessel segmentation in fundus images is essential in the diagnosis and prognosis of retinal diseases and the identification of image-based biomarkers. However, creating a vessel segmentation map can be a tedious and time-consuming process requiring careful delineation of the vasculature, which is especially hard for microcapillary plexi in fundus images. Optical coherence tomography angiography (OCT-A) is a relatively novel modality that visualizes blood flow and microcapillary plexi not clearly observed in fundus photography. Unfortunately, current commercial OCT-A cameras have various limitations due to their complex optics, making them more expensive and less portable, with a reduced field of view (FOV) compared to fundus cameras. Moreover, the vast majority of population health data collection efforts do not include OCT-A data.
We believe that strategies able to map fundus images to en-face OCT-A can create precise vascular vessel segmentation with less effort.
In this dataset, called UTHealth - Fundus and Synthetic OCT-A Dataset (UT-FSOCTA), we include fundus images and en-face OCT-A images for 112 subjects. The two modalities have been manually aligned to allow for training of medical imaging machine learning pipelines. This dataset is accompanied by a manuscript that describes an approach to generate fundus vessel segmentations using OCT-A for training (Coronado et al., 2022). We refer to this approach as "Synthetic OCT-A".
Fundus Imaging
We include 45-degree macula-centered fundus images that cover both the macula and the optic disc. All images were acquired using an OptoVue iVue fundus camera without pupil dilation.
The full images are available in the `fov45/fundus` directory. In addition, we extracted the FOVs corresponding to the en-face OCT-A images, stored in `cropped/fundus/disc` and `cropped/fundus/macula`.
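As a sketch, the directory layout above can be navigated programmatically. The helper below builds the fundus directories listed above; the dataset root argument is an assumption, not part of the release.

```python
import os

# Fundus directories as described above (the dataset root is supplied by the user).
FUNDUS_DIRS = {
    "full": os.path.join("fov45", "fundus"),
    "disc": os.path.join("cropped", "fundus", "disc"),
    "macula": os.path.join("cropped", "fundus", "macula"),
}

def fundus_dir(root, region="full"):
    """Return the fundus image directory for a given region of the dataset."""
    return os.path.join(root, FUNDUS_DIRS[region])
```

The same pattern applies to the OCT-A, synthetic OCT-A, and segmentation directories described in the sections below.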
En-face OCT-A
We include the en-face OCT-A images of the superficial capillary plexus. All images were acquired using an OptoVue Avanti OCT camera with OCT-A reconstruction software (AngioVue). Low quality images with errors in the retina layer segmentations were not included.
En-face OCT-A images are located in `cropped/octa/disc` and `cropped/octa/macula`. In addition, we include a denoised version of these images in which only vessels are retained; denoising was performed automatically using the ROSE algorithm (Ma et al., 2021). These images can be found in `cropped/GT_OCT_net/noThresh` and `cropped/GT_OCT_net/Thresh`: the former contains the probability maps produced by the ROSE algorithm, the latter their binarized versions.
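The relationship between the `noThresh` probability maps and the `Thresh` binary maps can be sketched as a thresholding step. The 0.5 cutoff below is an assumption for illustration, not necessarily the scheme used to produce the released files.

```python
import numpy as np

def binarize_vessel_map(prob_map, cutoff=0.5):
    """Binarize a vessel probability map (values in [0, 1]) into a 0/1 mask.

    The cutoff value is illustrative; the released Thresh maps may have been
    produced with a different binarization scheme.
    """
    prob_map = np.asarray(prob_map, dtype=float)
    return (prob_map >= cutoff).astype(np.uint8)

# Example on a small synthetic probability map
probs = np.array([[0.1, 0.7], [0.5, 0.2]])
mask = binarize_vessel_map(probs)  # vessels where probability >= 0.5
```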
Synthetic OCT-A
We train a custom conditional generative adversarial network (cGAN) to map a fundus image to an en-face OCT-A image. Our model consists of a generator that synthesizes en-face OCT-A images from the corresponding areas of fundus photographs and a discriminator that judges the resemblance of the synthesized images to real en-face OCT-A samples. This allows us to avoid the use of manual vessel segmentation maps altogether.
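For reference, such a cGAN is typically trained with the standard conditional adversarial objective below, where $x$ is the fundus input and $y$ the real en-face OCT-A image. This is a general sketch of the formulation, not necessarily the exact loss used in the paper (see Coronado et al., 2023, for the actual training details).

```latex
\mathcal{L}_{\mathrm{cGAN}}(G, D) =
  \mathbb{E}_{x,y}\left[\log D(x, y)\right] +
  \mathbb{E}_{x}\left[\log\left(1 - D\big(x, G(x)\big)\right)\right]
```

The generator $G$ minimizes this objective while the discriminator $D$ maximizes it.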
The full images are available in the `fov45/synthetic_octa` directory. In addition, we extracted the FOVs corresponding to the en-face OCT-A images, stored in `cropped/synthetic_octa/disc` and `cropped/synthetic_octa/macula`. We also applied the same ROSE denoising algorithm (Ma et al., 2021) used for the original en-face OCT-A images; the results are available in `cropped/denoised_synthetic_octa/noThresh` and `cropped/denoised_synthetic_octa/Thresh`: the former contains the probability maps of the ROSE algorithm, the latter their binarized versions.
Other Fundus Vessel Segmentations Included
In this dataset, we have also included the output of two recent vessel segmentation algorithms trained on external datasets with manual vessel segmentations: SA-UNet (Guo et al., 2021) and IterNet (Li et al., 2020).
- SA-UNet. The full images are available in the `fov45/SA_Unet` directory. The FOVs corresponding to the en-face OCT-A images are stored in `cropped/SA_Unet/disc` and `cropped/SA_Unet/macula`.
- IterNet. The full images are available in the `fov45/Iternet` directory. The FOVs corresponding to the en-face OCT-A images are stored in `cropped/Iternet/disc` and `cropped/Iternet/macula`.
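These segmentation outputs can be compared against the binarized OCT-A vessel maps with a standard overlap measure such as the Dice coefficient; a minimal sketch:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b, eps=1e-8):
    """Dice overlap between two binary vessel masks of the same shape."""
    mask_a = np.asarray(mask_a).astype(bool)
    mask_b = np.asarray(mask_b).astype(bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum() + eps)

# Example on two small synthetic masks: 1 overlapping pixel, 3 foreground pixels total
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
# dice_coefficient(a, b) is 2*1 / (2+1), roughly 0.667
```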
Train/Validation/Test Replication
To replicate or compare your model against the results of our paper, use the data split reported below.
- Training subject IDs: 1–25
- Validation subject IDs: 26–30
- Testing subject IDs: 31–112
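Expressed in code, with subject IDs as integers matching the ranges above:

```python
def split_for_subject(subject_id):
    """Map a UT-FSOCTA subject ID (1-112) to the split used in the paper."""
    if 1 <= subject_id <= 25:
        return "train"
    if 26 <= subject_id <= 30:
        return "validation"
    if 31 <= subject_id <= 112:
        return "test"
    raise ValueError(f"Unknown subject ID: {subject_id}")
```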
Data Acquisition
This dataset was acquired at the Texas Medical Center - Memorial Hermann Hospital in accordance with the guidelines of the Helsinki Declaration and was approved by the UTHealth IRB under protocol HSC-MS-19-0352.
User Agreement
The UT-FSOCTA dataset is free to use for non-commercial scientific research only. In the case of any publication, the following paper must be cited:
Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, Giancardo L. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks. Sci Rep 2023;13:15325. https://doi.org/10.1038/s41598-023-42062-9.
Funding
This work is supported by the Translational Research Institute for Space Health through NASA Cooperative Agreement NNX16AO69A.
Research Team and Acknowledgements
Here are the people behind this data acquisition effort:
Ivan Coronado, Samiksha Pachade, Rania Abdelkhaleq, Juntao Yan, Sergio Salazar-Marioni, Amanda Jagolino, Mozhdeh Bahrainian, Roomasa Channa, Sunil Sheth, Luca Giancardo
We would also like to acknowledge the following for their support: the Institute for Stroke and Cerebrovascular Diseases at UTHealth, the VAMPIRE team at the University of Dundee, UK, and the Memorial Hermann Hospital System.
References
Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, Giancardo L. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks. Sci Rep 2023;13:15325. https://doi.org/10.1038/s41598-023-42062-9.
C. Guo, M. Szemenyei, Y. Yi, W. Wang, B. Chen, and C. Fan, "SA-UNet: Spatial Attention U-Net for Retinal Vessel Segmentation," in 2020 25th International Conference on Pattern Recognition (ICPR), Jan. 2021, pp. 1236–1242. doi: 10.1109/ICPR48806.2021.9413346.
L. Li, M. Verma, Y. Nakashima, H. Nagahara, and R. Kawasaki, "IterNet: Retinal Image Segmentation Utilizing Structural Redundancy in Vessel Networks," 2020 IEEE Winter Conf. Appl. Comput. Vis. WACV, 2020, doi: 10.1109/WACV45572.2020.9093621.
Y. Ma et al., "ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model," IEEE Trans. Med. Imaging, vol. 40, no. 3, pp. 928–939, Mar. 2021, doi: 10.1109/TMI.2020.3042802.
Files
(1.2 GB)
Name | Size | MD5
---|---|---
README.md | 6.9 kB | f8976bc39f8ce5d926ad74f373356ba2
| 1.2 GB | c0f4987a8c5676d01e14a6a012fae01e
Additional details
Related works
- Is cited by
- Journal article: 10.1038/s41598-023-42062-9 (DOI)