
Retinal Image Synthesis for Glaucoma Assessment using DCGAN and VAE Models

RiuNet: Institutional repository of the Polytechnic University of Valencia


dc.contributor.author Díaz-Pinto, Andrés Yesid es_ES
dc.contributor.author Colomer, Adrián es_ES
dc.contributor.author Naranjo Ornedo, Valeriana es_ES
dc.contributor.author Morales, Sandra es_ES
dc.contributor.author Xu, Yanwu es_ES
dc.contributor.author Frangi, Alejandro F. es_ES
dc.date.accessioned 2019-07-24T10:23:00Z
dc.date.available 2019-07-24T10:23:00Z
dc.date.issued 2019-07-24T10:23:00Z
dc.identifier.isbn 978-3-030-03492-4
dc.identifier.uri http://hdl.handle.net/10251/124078
dc.description.abstract The performance of a glaucoma assessment system is highly affected by the number of labelled images used during the training stage. However, labelled images are often scarce or costly to obtain. In this paper, we address the problem of synthesising retinal fundus images by training a Variational Autoencoder and an adversarial model on 2357 retinal images. The novelty of this approach lies in synthesising retinal images without relying on a prior vessel segmentation from a separate method, which makes the system completely independent. The resulting models are image synthesizers capable of generating any number of cropped retinal images from a simple normal distribution. Furthermore, more images were used for training than in any other work in the literature. Synthetic images were qualitatively evaluated by 10 clinical experts, and their consistency was estimated by measuring the proportion of pixels corresponding to the anatomical structures around the optic disc. Moreover, we calculated the mean-squared error between the average 2D histograms of synthetic and real images, obtaining a small difference of 3e-4. Further analysis of the latent space and the cup size of the images was performed by measuring the Cup/Disc ratio of synthetic images using a state-of-the-art method. The results of this analysis, together with the qualitative and quantitative evaluation, demonstrate that the synthesised images are anatomically consistent and that the system is a promising step towards a model capable of generating labelled images. es_ES
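The abstract's quantitative check, the mean-squared error between the average 2D histograms of real and synthetic image sets, can be sketched as follows. The paper does not publish its evaluation code, so the function names, bin count, and the choice of channel pair (red and green) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mean_2d_histogram(images, bins=32):
    """Average 2D (red-green) intensity histogram over a set of RGB images.

    Each per-image histogram is normalised to sum to 1 before averaging,
    so sets of different sizes remain comparable.
    """
    hists = []
    for img in images:
        h, _, _ = np.histogram2d(
            img[..., 0].ravel(),  # red channel (assumed choice)
            img[..., 1].ravel(),  # green channel (assumed choice)
            bins=bins,
            range=[[0, 256], [0, 256]],
        )
        hists.append(h / h.sum())
    return np.mean(hists, axis=0)

def histogram_mse(real_images, synthetic_images, bins=32):
    """Mean-squared error between the average 2D histograms of two sets."""
    h_real = mean_2d_histogram(real_images, bins)
    h_synth = mean_2d_histogram(synthetic_images, bins)
    return float(np.mean((h_real - h_synth) ** 2))

# Toy demo on random 64x64 RGB arrays standing in for fundus crops.
rng = np.random.default_rng(0)
real = rng.integers(0, 256, size=(5, 64, 64, 3))
fake = rng.integers(0, 256, size=(5, 64, 64, 3))
print(histogram_mse(real, fake))
```

A low value of this metric (the paper reports on the order of 3e-4) indicates that the synthetic set reproduces the joint colour statistics of the real set; it says nothing about spatial structure, which is why the paper complements it with expert grading and Cup/Disc analysis.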
dc.description.sponsorship We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. This work was supported by the Project GALAHAD [H2020-ICT-2016-2017, 732613] es_ES
dc.format.extent 9 es_ES
dc.language English es_ES
dc.publisher Springer es_ES
dc.relation.ispartof Intelligent Data Engineering and Automated Learning – IDEAL 2018 es_ES
dc.relation.ispartofseries Lecture Notes in Computer Science;11314
dc.rights All rights reserved es_ES
dc.subject Medical imaging es_ES
dc.subject Retinal Image Synthesis es_ES
dc.subject Fundus Images es_ES
dc.subject DCGAN es_ES
dc.subject VAE es_ES
dc.subject.classification SIGNAL THEORY AND COMMUNICATIONS es_ES
dc.title Retinal Image Synthesis for Glaucoma Assessment using DCGAN and VAE Models es_ES
dc.type Book chapter es_ES
dc.type Conference paper es_ES
dc.identifier.doi 10.1007/978-3-030-03493-1_24
dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/732613/EU es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Departamento de Comunicaciones - Departament de Comunicacions es_ES
dc.description.bibliographicCitation Díaz-Pinto, AY.; Colomer, A.; Naranjo Ornedo, V.; Morales, S.; Xu, Y.; Frangi, AF. (2019). Retinal Image Synthesis for Glaucoma Assessment using DCGAN and VAE Models. In Intelligent Data Engineering and Automated Learning – IDEAL 2018. Springer. 224-232. https://doi.org/10.1007/978-3-030-03493-1_24 es_ES
dc.description.accrualMethod S es_ES
dc.relation.conferencename International Conference on Intelligent Data Engineering and Automated Learning (IDEAL) es_ES
dc.relation.conferencedate November 21-23, 2018 es_ES
dc.relation.conferenceplace Madrid, Spain es_ES
dc.relation.publisherversion http://dx.doi.org/10.1007/978-3-030-03493-1_24 es_ES
dc.description.upvformatpinicio 224 es_ES
dc.description.upvformatpfin 232 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.relation.pasarela S\369986 es_ES
dc.contributor.funder European Commission es_ES
dc.relation.references Chen, X., Xu, Y., Yan, S., Wong, D.W.K., Wong, T.Y., Liu, J.: Automatic feature learning for glaucoma detection based on deep learning. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 669–677. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_80 es_ES
dc.relation.references Fiorini, S., Biasi, M.D., Ballerini, L., Trucco, E., Ruggeri, A.: Automatic generation of synthetic retinal fundus images. In: Smart Tools and Apps for Graphics - Eurographics Italian Chapter Conference. The Eurographics Association (2014) es_ES
dc.relation.references Bonaldi, L., Menti, E., Ballerini, L., Ruggeri, A., Trucco, E.: Automatic generation of synthetic retinal fundus images: vascular network. Proc. Comput. Sci. 90(Suppl. C), 54–60 (2016) es_ES
dc.relation.references Costa, P., et al.: End-to-end adversarial retinal image synthesis. IEEE Trans. Med. Imaging 37(3), 781–791 (2018) es_ES
dc.relation.references Costa, P., et al.: Towards adversarial retinal image synthesis. arXiv:1701.08974 (2017) es_ES
dc.relation.references Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. arXiv:1312.6114 (2013) es_ES
dc.relation.references Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, November 2015 es_ES
dc.relation.references Köhler, T., Budai, A., Kraus, M.F., Odstrčilik, J., Michelson, G., Hornegger., J.: Automatic no-reference quality assessment for retinal fundus images using vessel segmentation. In: Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, pp. 95–100 (2013) es_ES
dc.relation.references Sivaswamy, J., Krishnadas, S., Joshi, G.D., Jain, M., Ujjwal, A.S.T.: Drishti-GS: retinal image dataset for optic nerve head (ONH) segmentation. In: 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), pp. 53–56 (2014) es_ES
dc.relation.references Zhang, Z., et al.: ORIGA-light: an online retinal fundus image database for glaucoma analysis and research. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, pp. 3065–3068, August 2010 es_ES
dc.relation.references Medina-Mesa, E., et al.: Estimating the amount of hemoglobin in the neuroretinal rim using color images and OCT. Curr. Eye Res. 41(6), 798–805 (2015) es_ES
dc.relation.references sjchoi86: sjchoi86-HRF Database (2017). https://github.com/sjchoi86/retina_dataset/tree/master/dataset. Accessed 02 July 2017 es_ES
dc.relation.references Chollet, F., et al.: Keras (2015). https://github.com/fchollet/keras. Accessed 21 May 2017 es_ES
dc.relation.references Theis, L., van den Oord, A., Bethge, M.: A note on the evaluation of generative models. In: International Conference on Learning Representations, April 2016 es_ES
dc.relation.references Morales, S., Naranjo, V., Navea, A., Alcañiz, M.: Computer-aided diagnosis software for hypertensive risk determination through fundus image processing. IEEE J. Biomed. Health Inform. 18(6), 1757–1763 (2014) es_ES
dc.relation.references White, T.: Sampling generative networks. arXiv:1609.04468 (2016) es_ES
dc.relation.references Fu, H., Cheng, J., Xu, Y., Wong, D.W.K., Liu, J., Cao, X.: Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans. Med. Imaging (2018) es_ES

