
F-Measure as the error function to train neural networks

RiuNet: Institutional Repository of the Universitat Politècnica de València


dc.contributor.author Pastor Pellicer, Joan es_ES
dc.contributor.author Zamora Martínez, Francisco Julián es_ES
dc.contributor.author España Boquera, Salvador es_ES
dc.contributor.author Castro-Bleda, Maria Jose es_ES
dc.date.accessioned 2014-09-24T18:27:40Z
dc.date.issued 2013
dc.identifier.isbn 978-3-642-38678-7
dc.identifier.issn 0302-9743
dc.identifier.uri http://hdl.handle.net/10251/40169
dc.description.abstract Imbalanced datasets pose serious problems in machine learning. For many tasks characterized by imbalanced data, the F-Measure seems more appropriate than the Mean Square Error or other error measures. This paper studies the use of the F-Measure as the training criterion for neural networks by integrating it into the Error-Backpropagation algorithm. This novel training criterion has been validated empirically on a real task for which the F-Measure is typically used to evaluate quality. The task consists of cleaning and enhancing ancient document images, which is performed, in this work, by means of neural filters. es_ES
dc.description.sponsorship This work has been partially supported by MICINN project HITITA (TIN2010-18958) and by the FPI-MICINN (BES-2011-046167) scholarship from Ministerio de Ciencia e Innovación, Gobierno de España.
dc.language English es_ES
dc.publisher Springer Verlag (Germany) es_ES
dc.relation.ispartof Advances in Computational Intelligence es_ES
dc.relation.ispartofseries Lecture Notes in Computer Science;
dc.rights All rights reserved es_ES
dc.subject Neural Networks es_ES
dc.subject Error-Backpropagation algorithm es_ES
dc.subject F-Measure es_ES
dc.subject Imbalanced datasets es_ES
dc.subject.classification LENGUAJES Y SISTEMAS INFORMATICOS (Computer Languages and Systems) es_ES
dc.title F-Measure as the error function to train neural networks es_ES
dc.type Book chapter es_ES
dc.embargo.lift 10000-01-01
dc.embargo.terms forever es_ES
dc.identifier.doi 10.1007/978-3-642-38679-4_37
dc.relation.projectID info:eu-repo/grantAgreement/MICINN//TIN2010-18958/ES/HITITA: HERRAMIENTA INTERACTIVA PARA LA TRANSCRIPCION DE IMAGENES DE TEXTOS ANTIGUOS/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/MICINN//BES-2011-046167/ES/BES-2011-046167/
dc.rights.accessRights Closed es_ES
dc.contributor.affiliation Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació es_ES
dc.description.bibliographicCitation Pastor Pellicer, J.; Zamora Martínez, FJ.; España Boquera, S.; Castro-Bleda, MJ. (2013). F-Measure as the error function to train neural networks. En Advances in Computational Intelligence. Springer Verlag (Germany). 376-384. https://doi.org/10.1007/978-3-642-38679-4_37 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion http://link.springer.com/chapter/10.1007/978-3-642-38679-4_37 es_ES
dc.description.upvformatpinicio 376 es_ES
dc.description.upvformatpfin 384 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.relation.senia 255332
dc.contributor.funder Ministerio de Ciencia e Innovación
dc.description.references Dembczyński, K., Waegeman, W., Cheng, W., Hüllermeier, E.: An exact algorithm for f-measure maximization. Advances in Neural Information Processing Systems 24, 223–230 (2011) es_ES
dc.description.references Al-Haddad, L., Morris, C.W., Boddy, L.: Training radial basis function neural networks: effects of training set size and imbalanced training sets. J. of Microbiological Methods 43(1), 33–44 (2000) es_ES
dc.description.references Bilmes, J., Asanovic, K., Chin, C.W., Demmel, J.: Using PHiPAC to speed error back-propagation learning. In: Proc. of ICASSP, vol. 5, pp. 4153–4156 (1997) es_ES
dc.description.references Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd edn. Wiley (2001) es_ES
dc.description.references Gatos, B., Ntirogiannis, K., Pratikakis, I.: ICDAR 2009 document image binarization contest (DIBCO 2009). In: Proc. of ICDAR, pp. 1375–1382 (2009) es_ES
dc.description.references Gatos, B., Ntirogiannis, K., Pratikakis, I.: DIBCO 2009: document image binarization contest. Int. J. on Document Analysis and Recognition 14(1), 35–44 (2011) es_ES
dc.description.references Hidalgo, J.L., España, S., Castro, M.J., Pérez, J.A.: Enhancement and cleaning of handwritten data by using neural networks. In: Marques, J.S., Pérez de la Blanca, N., Pina, P. (eds.) IbPRIA 2005. LNCS, vol. 3522, pp. 376–383. Springer, Heidelberg (2005) es_ES
dc.description.references Jansche, M.: Maximum expected f-measure training of logistic regression models. In: Proc. of HLT & EMNLP, pp. 692–699 (2005) es_ES
dc.description.references Musicant, D.R., Kumar, V., Ozgur, A.: Optimizing f-measure with support vector machines. In: Proc. of Int. Florida AI Research Society Conference, pp. 356–360 (2003) es_ES
dc.description.references Ntirogiannis, K., Gatos, B., Pratikakis, I.: A Performance Evaluation Methodology for Historical Document Image Binarization (2012) es_ES
dc.description.references Pratikakis, I., Gatos, B., Ntirogiannis, K.: ICFHR 2012 Competition on Handwritten Document Image Binarization (H-DIBCO 2012) (2012) es_ES
dc.description.references Pratikakis, I., Gatos, B., Ntirogiannis, K.: H-DIBCO 2010-handwritten document image binarization competition. In: Proc. of ICFHR, pp. 727–732 (2010) es_ES
dc.description.references van Rijsbergen, C.J.: A theoretical basis for the use of co-occurrence data in information retrieval. J. of Documentation 33(2), 106–119 (1977) es_ES
dc.description.references Wolf, C.: Document Ink Bleed-Through Removal with Two Hidden Markov Random Fields and a Single Observation Field. IEEE PAMI 32(3), 431–447 (2010) es_ES
dc.description.references Zhou, Z.H., Liu, X.Y.: Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Trans. on Knowledge and Data Engineering 18(1), 63–77 (2006) es_ES
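
The abstract describes integrating the F-Measure into the Error-Backpropagation algorithm as a training criterion, but the record itself gives no formulation. The following is a minimal sketch, assuming the common approach of replacing the hard true/false positive counts with soft counts over the network's continuous outputs so that the F-Measure becomes differentiable; the NumPy implementation and the names soft_f_measure_loss and soft_f_measure_grad are illustrative and not taken from the paper.

import numpy as np

def soft_f_measure_loss(y_pred, y_true, eps=1e-8):
    # Soft F1 loss: 1 - F1, with counts accumulated from continuous outputs.
    # y_pred: network outputs in [0, 1] (e.g. sigmoid activations); y_true: binary targets {0, 1}.
    tp = np.sum(y_pred * y_true)          # soft true positives
    fp = np.sum(y_pred * (1.0 - y_true))  # soft false positives
    fn = np.sum((1.0 - y_pred) * y_true)  # soft false negatives
    return 1.0 - 2.0 * tp / (2.0 * tp + fp + fn + eps)

def soft_f_measure_grad(y_pred, y_true, eps=1e-8):
    # Gradient of the loss with respect to each output, as needed by backpropagation.
    # With F1 = 2*tp / denom and denom = 2*tp + fp + fn, the derivative of denom with
    # respect to any single output is 1, so d(1 - F1)/d y_pred_i = 2*(tp - y_true_i*denom) / denom**2.
    tp = np.sum(y_pred * y_true)
    fp = np.sum(y_pred * (1.0 - y_true))
    fn = np.sum((1.0 - y_pred) * y_true)
    denom = 2.0 * tp + fp + fn + eps
    return 2.0 * (tp - y_true * denom) / denom ** 2

Because the F-Measure is defined over a set of predictions rather than per sample, this loss and its gradient couple all outputs in a batch, which is what distinguishes it from a pointwise criterion such as the Mean Square Error mentioned in the abstract.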

