Abstract:
[EN] Recent work shows that Generative Adversarial Networks (GANs) can be successfully applied to image synthesis and semi-supervised learning, where, given a small labelled database and a large unlabelled database, the goal is to train a powerful classifier. In this paper, we trained a retinal image synthesizer and a semi-supervised learning method for automatic glaucoma assessment using an adversarial model on a small glaucoma-labelled database and a large unlabelled one. Various studies have shown that glaucoma can be monitored by analyzing the optic disc and its surroundings; for that reason, the images used in this work were automatically cropped around the optic disc. The novelty of this work is to propose a new retinal image synthesizer and a semi-supervised learning method for glaucoma assessment based on Deep Convolutional Generative Adversarial Networks (DCGANs). In addition, to the best of the authors' knowledge, this system is trained on an unprecedented number of publicly available images (86,926 images). Hence, the system is not only able to generate images synthetically but also to provide labels automatically. Synthetic images were qualitatively evaluated using t-SNE plots of features associated with the images, and their anatomical consistency was estimated by measuring the proportion of pixels corresponding to the anatomical structures around the optic disc. The resulting image synthesizer is able to generate realistic (cropped) retinal images and, subsequently, the glaucoma classifier is able to classify them into glaucomatous and normal with high accuracy (AUC = 0.9017). The retinal image synthesizer and glaucoma classifier obtained could then be used to generate an unlimited number of cropped retinal images with glaucoma labels.
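The abstract describes a DCGAN whose discriminator doubles as a semi-supervised glaucoma classifier. As a rough illustration only (not the paper's implementation), the sketch below, written in PyTorch with illustrative layer sizes and an assumed 64x64 crop resolution, shows the core idea: a DCGAN-style discriminator with K + 1 outputs (normal, glaucoma, and "fake"), so that labelled, unlabelled, and generated images can all contribute to training.

```python
# Minimal sketch (assumptions: PyTorch, 3x64x64 optic-disc crops, illustrative channel
# widths). A DCGAN-style discriminator reused as a semi-supervised classifier:
# its final layer has K + 1 logits (here: normal, glaucoma, fake).
import torch
import torch.nn as nn

class SemiSupervisedDiscriminator(nn.Module):
    def __init__(self, num_real_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),    # 3x64x64  -> 64x32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 64x32x32 -> 128x16x16
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), # 128x16x16 -> 256x8x8
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # K real classes plus one extra "fake" class for the adversarial objective
        self.classifier = nn.Linear(256 * 8 * 8, num_real_classes + 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # logits over [normal, glaucoma, fake]

if __name__ == "__main__":
    d = SemiSupervisedDiscriminator()
    logits = d(torch.randn(4, 3, 64, 64))
    print(logits.shape)  # torch.Size([4, 3])
```

In such a setup, labelled crops would be trained with cross-entropy over the real classes, while unlabelled and generated crops would only supervise the real-versus-fake decision; the generator details are omitted here.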
Acknowledgments:
This work was supported by the Project GALAHAD [H2020-ICT-2016-2017, 732613]. In particular, the work of Andres Diaz-Pinto has been supported by the Generalitat Valenciana under the Santiago Grisolía scholarship [GRISOLIA/2015/027]. The work of Adrián Colomer has been supported by the Spanish Government under an FPI Grant [BES-2014-067889].