Show simple item record
dc.contributor.author | Díaz-Pinto, Andrés Yesid | es_ES |
dc.contributor.author | Morales, Sandra | es_ES |
dc.contributor.author | Naranjo Ornedo, Valeriana | es_ES |
dc.contributor.author | Köhler, Thomas | es_ES |
dc.contributor.author | Mossi García, José Manuel | es_ES |
dc.contributor.author | Navea, Amparo | es_ES |
dc.date.accessioned | 2020-04-17T12:50:02Z | |
dc.date.available | 2020-04-17T12:50:02Z | |
dc.date.issued | 2019-03-20 | es_ES |
dc.identifier.issn | 1475-925X | es_ES |
dc.identifier.uri | http://hdl.handle.net/10251/140902 | |
dc.description.abstract | [EN] Background: Most current algorithms for automatic glaucoma assessment using fundus images rely on handcrafted features based on segmentation, which are affected by the performance of the chosen segmentation method and the extracted features. Among other characteristics, Convolutional Neural Networks (CNNs) are known for their ability to learn highly discriminative features from raw pixel intensities. Methods: In this paper, we employed five different ImageNet-trained models (VGG16, VGG19, InceptionV3, ResNet50 and Xception) for automatic glaucoma assessment using fundus images. Results from an extensive validation using cross-validation and cross-testing strategies were compared with previous works in the literature. Results: Using five public databases (1707 images), an average AUC of 0.9605 with a 95% confidence interval of 95.92% - 97.07%, an average specificity of 0.8580 and an average sensitivity of 0.9346 were obtained after using the Xception architecture, significantly improving the performance of other state-of-the-art works. Moreover, a new clinical database, ACRIMA, has been made publicly available, containing 705 labelled images. It is composed of 396 glaucomatous images and 309 normal images, making it the largest public database for glaucoma diagnosis. The high specificity and sensitivity obtained from the proposed approach are supported by an extensive validation using not only the cross-validation strategy but also the cross-testing validation on, to the best of the authors' knowledge, all publicly available glaucoma-labelled databases. Conclusions: These results suggest that using ImageNet-trained models is a robust alternative for automatic glaucoma screening systems. All images, CNN weights and software used to fine-tune and test the five CNNs are publicly available, which could be used as a testbed for further comparisons. | es_ES |
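The abstract describes fine-tuning ImageNet-pretrained CNNs for binary (glaucoma vs. normal) classification of fundus images. The following minimal sketch, written in Python with Keras (the toolkit cited in the record's references), illustrates how one of the five architectures (Xception) could be set up for such fine-tuning. The classifier head, dropout rate, optimizer, input size and the train_ds/val_ds dataset names are illustrative assumptions, not the paper's exact configuration.

import tensorflow as tf
from tensorflow.keras.applications import Xception
from tensorflow.keras import layers, models

# Load an ImageNet-pretrained Xception backbone without its classification head.
base = Xception(weights="imagenet", include_top=False,
                input_shape=(299, 299, 3), pooling="avg")
base.trainable = True  # fine-tune the pretrained weights on fundus images

# Attach a small binary head: glaucomatous vs. normal.
model = models.Sequential([
    base,
    layers.Dropout(0.5),                       # assumed regularisation
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),  # assumed settings
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(name="auc")],  # AUC is the main metric reported in the paper
)

# train_ds and val_ds are hypothetical tf.data pipelines of labelled fundus images:
# model.fit(train_ds, validation_data=val_ds, epochs=50)

The same pattern applies to the other four architectures (VGG16, VGG19, InceptionV3, ResNet50) by swapping the backbone class and its expected input size.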
dc.description.sponsorship | This work was supported by the Ministerio de Economia y Competitividad of Spain, Project ACRIMA [TIN2013-46751-R] and the Project GALAHAD [H2020-ICT-2016-2017, 732613]. In particular, the work of Andres Diaz-Pinto has been supported by the Generalitat Valenciana under the scholarship Santiago Grisolia [GRISOLIA/2015/027]. (Corresponding author: Andres Diaz-Pinto). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research. | es_ES |
dc.language | English | es_ES |
dc.publisher | Springer (Biomed Central Ltd.) | es_ES |
dc.relation.ispartof | BioMedical Engineering OnLine | es_ES |
dc.rights | Attribution (by) | es_ES |
dc.subject | Glaucoma | es_ES |
dc.subject | ACRIMA database | es_ES |
dc.subject | Fundus Images | es_ES |
dc.subject | CNN | es_ES |
dc.subject | Fine-Tuning | es_ES |
dc.subject.classification | SIGNAL THEORY AND COMMUNICATIONS | es_ES |
dc.title | CNNs for Automatic Glaucoma Assessment using Fundus Images: An Extensive Validation | es_ES |
dc.type | Article | es_ES |
dc.identifier.doi | 10.1186/s12938-019-0649-y | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/H2020/732613/EU/Glaucoma – Advanced, LAbel-free High resolution Automated OCT Diagnostics/ | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/GVA//GRISOLIA%2F2015%2F027/ | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/MINECO//TIN2013-46751-R/ES/ANALISIS DE IMAGEN DE FONDO DE OJO PARA CRIBADO AUTOMATICO DE ENFERMEDADES OFTALMOLOGICAS/ | es_ES |
dc.rights.accessRights | Open access | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Departamento de Comunicaciones - Departament de Comunicacions | es_ES |
dc.description.bibliographicCitation | Díaz-Pinto, AY.; Morales, S.; Naranjo Ornedo, V.; Köhler, T.; Mossi García, JM.; Navea, A. (2019). CNNs for Automatic Glaucoma Assessment using Fundus Images: An Extensive Validation. BioMedical Engineering OnLine. 18(29):1-19. https://doi.org/10.1186/s12938-019-0649-y | es_ES |
dc.description.accrualMethod | S | es_ES |
dc.relation.publisherversion | https://doi.org/10.1186/s12938-019-0649-y | es_ES |
dc.description.upvformatpinicio | 1 | es_ES |
dc.description.upvformatpfin | 19 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.volume | 18 | es_ES |
dc.description.issue | 29 | es_ES |
dc.relation.pasarela | S\377542 | es_ES |
dc.contributor.funder | Generalitat Valenciana | es_ES |
dc.contributor.funder | Ministerio de Economía y Empresa | es_ES |
dc.description.references | World Health Organization. Bulletin of the World Health Organization, Vol 82(11). 2004. http://www.who.int/bulletin/volumes/82/11/en/infocus.pdf?ua=1 . Accessed 5 May 2016. | es_ES |
dc.description.references | Bourne RRA. Worldwide glaucoma through the looking glass. Br J Ophthalmol. 2006;90:253–4. https://doi.org/10.1136/bjo.2005.083527 . | es_ES |
dc.description.references | Bock R, Meier J, Nyúl LG, Hornegger J, Michelson G. Glaucoma risk index: automated glaucoma detection from color fundus images. Med Image Anal. 2010;14:471–81. https://doi.org/10.1016/j.media.2009.12.006 . | es_ES |
dc.description.references | Sivaswamy J, Krishnadas SR, Joshi GD, Jain M, Syed Tabish AU. Drishti-GS: retinal image dataset for optic nerve head (ONH) segmentation. In: 2014 IEEE 11th international symposium on biomedical imaging (ISBI). 2014, p. 53–6. https://doi.org/10.1109/ISBI.2014.6867807 . | es_ES |
dc.description.references | Morales S, Naranjo V, Angulo J, Alcañiz M. Automatic detection of optic disc based on PCA and mathematical morphology. IEEE Trans Med Imag. 2013;32:786–96. https://doi.org/10.1109/TMI.2013.2238244 . | es_ES |
dc.description.references | Wong DWK, Liu J, Lim JH, Jia X, Yin F, Li H, Wong TY. Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI. In: 30th Annual International IEEE EMBS Conference, vol. 30. 2008, p. 2266–9. https://doi.org/10.1109/IEMBS.2008.4649648 . | es_ES |
dc.description.references | Joshi GD, Sivaswamy J, Krishnadas SR. Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Trans Med Imag. 2011;30:1192–205. https://doi.org/10.1109/TMI.2011.2106509 . | es_ES |
dc.description.references | Yin F, Liu J, Wong DWK, Tan NM, Cheung C, Baskaran M, Aung T, Wong TY. Automated segmentation of optic disc and optic cup in fundus images for glaucoma diagnosis. In: 2012 25th IEEE international symposium on computer-based medical systems (CBMS). 2012, p. 1–6. https://doi.org/10.1109/CBMS.2012.6266344 . | es_ES |
dc.description.references | Cheng J, Liu J, Xu Y, Yin F, Wong DWK, Tan N-M, Tao D, Cheng C-Y, Aung T, Wong TY. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. In: IEEE transactions on medical imaging, vol. 32. 2013, p. 1019–32. https://doi.org/10.1109/TMI.2013.2247770 . | es_ES |
dc.description.references | Diaz-Pinto A, Morales S, Naranjo V, Alcocer P, Lanzagorta A. Glaucoma diagnosis by means of optic cup feature analysis in color fundus images. In: 24th European signal processing conference (EUSIPCO), vol. 24. 2016, p. 2055–9. https://doi.org/10.1109/EUSIPCO.2016.7760610 . | es_ES |
dc.description.references | Liu J, Zhang Z, Wong DWK, Xu Y, Yin F, Cheng J, Tan NM, Kwoh CK, Xu D, Tham YC, Aung T, Wong TY. Automatic glaucoma diagnosis through medical imaging informatics. J Am Med Inf Assoc. 2013;1:1021–7. https://doi.org/10.1136/amiajnl-2012-001336 . | es_ES |
dc.description.references | LeCun Y. Generalization and network design strategies. Technical Report CRG-TR-89-4, University of Toronto. New York: Elsevier; 1989. | es_ES |
dc.description.references | Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L. ImageNet large scale visual recognition challenge. Int J Comput Vis. 2015;115(3):211–52. https://doi.org/10.1007/s11263-015-0816-y . | es_ES |
dc.description.references | Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929–58. | es_ES |
dc.description.references | Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. 2014. arXiv preprint arXiv:1409.1556 . | es_ES |
dc.description.references | Carneiro G, Nascimento J, Bradley AP. In: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Unregistered multiview mammogram analysis with pre-trained deep learning models. 2015, p. 652–60. Cham: Springer. https://doi.org/10.1007/978-3-319-24574-4_78 . | es_ES |
dc.description.references | Chen H, Ni D, Qin J, Li S, Yang X, Wang T, Heng PA. Standard plane localization in fetal ultrasound via domain transferred deep neural networks. IEEE J Biomed Health Inf. 2015;19(5):1627–36. https://doi.org/10.1109/JBHI.2015.2425041 . | es_ES |
dc.description.references | Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang J. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imag. 2016;35(5):1299–312. https://doi.org/10.1109/TMI.2016.2535302 . | es_ES |
dc.description.references | Yaniv B, Idit D, Lior W, Hayit G. Deep learning with non-medical training used for chest pathology identification. In: Proceedings of SPIE 9414, medical imaging 2015: computer-aided diagnosis. 2015, p. 94140–7. https://doi.org/10.1117/12.2083124 . | es_ES |
dc.description.references | Razavian AS, Azizpour H, Sullivan J, Carlsson S. CNN features off-the-shelf: an astounding baseline for recognition. 2014. arXiv preprint arXiv:1403.6382 . | es_ES |
dc.description.references | Chen X, Xu Y, Wong DWK, Wong TY, Liu J. Glaucoma detection based on deep convolutional neural network. In: 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC). 2015, p. 715–8. https://doi.org/10.1109/EMBC.2015.7318462 . | es_ES |
dc.description.references | Alghamdi HS, Tang HL, Waheeb SA, Peto T. Automatic optic disc abnormality detection in fundus images: a deep learning approach. In: OMIA3 (MICCAI 2016). 2016, p. 17–24. https://doi.org/10.17077/omia.1042 . | es_ES |
dc.description.references | Abbas Q. Glaucoma-deep: detection of glaucoma eye disease on retinal fundus images using deep learning. Int J Adv Comput Sci Appl. 2017;8(6):41–5. https://doi.org/10.14569/IJACSA.2017.080606 . | es_ES |
dc.description.references | Orlando JI, Prokofyeva E, del Fresno M, Blaschko MB. Convolutional neural network transfer for automated glaucoma identification. In: SPIE proceedings. 2017, p. 10160–10. https://doi.org/10.1117/12.2255740 . | es_ES |
dc.description.references | Budai A, Bock R, Maier A, Hornegger J, Michelson G. Robust vessel segmentation in fundus images. Int J Biomed Imag. 2013. https://doi.org/10.1155/2013/154860 . | es_ES |
dc.description.references | Fumero F, Alayon S, Sanchez JL, Sigut J, Gonzalez-Hernandez M. RIM-ONE: an open retinal image database for optic nerve evaluation. In: 2011 24th international symposium on computer-based medical systems (CBMS). 2011, p. 1–6. https://doi.org/10.1109/CBMS.2011.5999143 . | es_ES |
dc.description.references | sjchoi86: sjchoi86-HRF Database. GitHub. 2017. Accessed 2 Feb 2017. | es_ES |
dc.description.references | Chollet F, et al. Keras. GitHub. 2015. Accessed 21 Feb 2017. | es_ES |
dc.description.references | Xu P, Wan C, Cheng J, Niu D, Liu J. Optic disc detection via deep learning in fundus images. Fetal, infant and ophthalmic medical image analysis. Cham: Springer; 2017. p. 134–41. | es_ES |
dc.description.references | Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A. Going deeper with convolutions. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR). 2015, p. 1–9. https://doi.org/10.1109/CVPR.2015.7298594 . | es_ES |
dc.description.references | Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: The IEEE conference on computer vision and pattern recognition (CVPR). 2016, p. 2818–26. | es_ES |
dc.description.references | He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: The IEEE conference on computer vision and pattern recognition (CVPR). 2016. | es_ES |
dc.description.references | Chollet F. Xception: deep learning with depthwise separable convolutions. 2016. arXiv preprint arXiv:1610.02357 . | es_ES |
dc.description.references | Hastie T, Tibshirani R, Friedman J. The elements of statistical learning. Cham: Springer; 2009. https://doi.org/10.1007/978-0-387-84858-7 . | es_ES |
dc.description.references | Mason SJ, Graham NE. Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: statistical significance and interpretation. Q J R Meteorol Soc. 2002. https://doi.org/10.1256/003590002320603584 . | es_ES |
dc.description.references | Mann HB, Whitney DR. On a test of whether one of two random variables is stochastically larger than the other. Ann Math Statist. 1947;18(1):50–60. https://doi.org/10.1214/aoms/1177730491 . | es_ES |
dc.description.references | Chakravarty A, Sivaswamy J. Glaucoma classification with a fusion of segmentation and image-based features. In: 2016 IEEE 13th international symposium on biomedical imaging (ISBI). 2016, p. 689–92. https://doi.org/10.1109/ISBI.2016.7493360 . | es_ES |