
On the effect of calibration in classifier combination

RiuNet: Institutional Repository of the Universidad Politécnica de Valencia

Bella Sanjuán, A.; Ferri Ramírez, C.; Hernández-Orallo, J.; Ramírez Quintana, M. J. (2012). On the effect of calibration in classifier combination. Applied Intelligence. 38(4):566-585. https://doi.org/10.1007/s10489-012-0388-2

Please use this identifier to cite or link to this item: http://hdl.handle.net/10251/38005

Item metadata

Title: On the effect of calibration in classifier combination
Author: Bella Sanjuán, Antonio; Ferri Ramírez, César; Hernández-Orallo, José; Ramírez Quintana, María José
UPV Entity: Universitat Politècnica de València. Departamento de Sistemas Informáticos y Computación - Departament de Sistemes Informàtics i Computació
Issue date:
Abstract:
A general approach to classifier combination considers each model as a probabilistic classifier which outputs a class membership posterior probability. In this general scenario, it is not only the quality and diversity of ...
Keywords: Classifier combination, Classifier calibration, Classifier diversity, Probability estimation, Calibration measures, Separability measures
Rights: Closed access
Source: Applied Intelligence (ISSN: 0924-669X)
DOI: 10.1007/s10489-012-0388-2
Publisher: Springer Verlag (Germany)
Publisher's version: http://dx.doi.org/10.1007/s10489-012-0388-2
Project code:
info:eu-repo/grantAgreement/MEC//CSD2007-00022/ES/Agreement Technologies/
info:eu-repo/grantAgreement/COST//IC0801/EU/Agreement Technologies/
info:eu-repo/grantAgreement/Generalitat Valenciana//PROMETEO08%2F2008%2F051/ES/Advances on Agreement Technologies for Computational Entities (atforce)/
info:eu-repo/grantAgreement/MICINN//TIN2010-21062-C02-02/ES/SWEETLOGICS-UPV/
Acknowledgements:
We thank the anonymous reviewers for their comments, which have helped to improve this paper significantly. This work was supported by the MEC/MINECO projects CONSOLIDER-INGENIO CSD2007-00022, COST action IC0801 and TIN ...
Type: Article
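The abstract describes a setting in which each base model is treated as a probabilistic classifier whose class membership posterior probabilities can be calibrated before the models are combined. The following is a minimal illustrative sketch of that setting, not the paper's own method: the choice of base models, the synthetic data, sigmoid (Platt) calibration and the Brier score as the evaluation measure are all assumptions made for this example.

# Illustrative sketch only (not the paper's method): calibrate two
# probabilistic classifiers and combine them by averaging their
# calibrated posterior probability estimates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.calibration import CalibratedClassifierCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import brier_score_loss

# Synthetic binary classification data (an assumption for the example).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Calibrate each base model with Platt scaling fitted by internal cross-validation.
base_models = [DecisionTreeClassifier(max_depth=5, random_state=0), GaussianNB()]
calibrated = [CalibratedClassifierCV(m, method="sigmoid", cv=5).fit(X_train, y_train)
              for m in base_models]

# Combine by averaging the calibrated class-membership probabilities.
combined = np.mean([m.predict_proba(X_test)[:, 1] for m in calibrated], axis=0)

# The Brier score (Brier 1950, cited below) measures probability estimation quality.
print("Brier score of the combined classifier:", brier_score_loss(y_test, combined))

Replacing method="sigmoid" with method="isotonic" switches to isotonic-regression calibration, in the spirit of the PAV-based approaches cited below (Ayer et al. 1955; Robertson et al. 1988; Zadrozny and Elkan 2002).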

References

Amemiya T (1973) Regression analysis when the dependent variable is truncated normal. Econometrica 41(6):997–1016

Ayer M, Brunk H, Ewing G, Reid W, Silverman E (1955) An empirical distribution function for sampling with incomplete information. Ann Math Stat 5:641–647

Bella A, Ferri C, Hernandez-Orallo J, Ramirez-Quintana M (2009) Calibration of machine learning models. In: Handbook of research on machine learning applications. IGI Global, Hershey, pp 128–146

Bella A, Ferri C, Hernández-Orallo J, Ramírez-Quintana M (2009) Similarity-binning averaging: a generalisation of binning calibration. In: Intelligent data engineering and automated learning—IDEAL 2009. Lecture notes in computer science, vol 5788. Springer, Berlin/Heidelberg, pp 341–349

Bennett PN (2006) Building reliable metaclassifiers for text learning. PhD thesis, Carnegie Mellon University

Bennett PN, Dumais ST, Horvitz E (2005) The combination of text classifiers using reliability indicators. Inf Retr 8(1):67–98

Blake C, Merz C (1998) UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html

Breiman L (1996) Bagging predictors. Mach Learn 24:123–140

Brier G (1950) Verification of forecasts expressed in terms of probabilities. Mon Weather Rev 78:1–3

Brümmer N (2010) Measuring, refining and calibrating speaker and language information extracted from speech. PhD thesis, University of Stellenbosch

Canuto A, Santos A, Vargas R (2011) Ensembles of ARTMAP-based neural networks: an experimental study. Appl Intell 35:1–17

Caruana R, Munson A, Mizil AN (2006) Getting the most out of ensemble selection. In: ICDM ’06: proceedings of the sixth international conference on data mining. IEEE Computer Society, Washington, pp 828–833

Caruana R, Niculescu-Mizil A (2004) Data mining in metric space: an empirical analysis of supervised learning performance criteria. In: Proceedings of the tenth ACM SIGKDD international conference on knowledge discovery and data mining, KDD ’04. ACM Press, New York, pp 69–78

Cohen I, Goldszmidt M (2004) Properties and benefits of calibrated classifiers. In: Proceedings of the 8th European conference on principles and practice of knowledge discovery in databases, PKDD ’04. Springer, Berlin, pp 125–136

Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30

Dietterich TG (2000) Ensemble methods in machine learning. In: Proceedings of the first international workshop on multiple classifier systems, MCS ’00. Springer, London, pp 1–15

Dietterich TG (2000) An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Mach Learn 40:139–157

Fahim M, Fatima I, Lee S, Lee Y (2012) EEM: evolutionary ensembles model for activity recognition in smart homes. Appl Intell, 1–11. doi: 10.1007/s10489-012-0359-7

Ferri C, Flach P, Hernández-Orallo J (2004) Delegating classifiers. In: Proceedings of the twenty-first international conference on machine learning, ICML ’04. ACM Press, New York, pp 37–45

Ferri C, Hernández-Orallo J, Modroiu R (2009) An experimental comparison of performance measures for classification. Pattern Recognit Lett 30:27–38

Ferri C, Hernández-Orallo J, Salido M (2003) Volume under the ROC surface for multi-class problems. Exact computation and evaluation of approximations. In: Proceedings of 14th European conference on machine learning, pp 108–120

Flach P, Blockeel H, Ferri C, Hernández-Orallo J, Struyf J (2003) Decision support for data mining: an introduction to ROC analysis and its applications. In: Data mining and decision support: integration and collaboration. Kluwer Academic, Boston, pp 81–90

Freund Y, Schapire RE (1996) Experiments with a new boosting algorithm. In: International conference on machine learning, pp 148–156

Gama J, Brazdil P (2000) Cascade generalization. Mach Learn 41:315–343

Garczarek U (2002) Classification rules in standardized partition spaces. PhD thesis, Universitat Dortmund

Gebel M (2009) Multivariate calibration of classifier scores into the probability space. PhD thesis, University of Dortmund

Hand DJ, Till RJ (2001) A simple generalisation of the area under the ROC curve for multiple class classification problems. Mach Learn 45:171–186

Hoeting JA, Madigan D, Raftery AE, Volinsky CT (1999) Bayesian model averaging: a tutorial. Stat Sci 14(4):382–417

Khor K, Ting C, Phon-Amnuaisuk S (2012) A cascaded classifier approach for improving detection rates on rare attack categories in network intrusion detection. Appl Intell 36:320–329

Kuncheva LI (2002) A theoretical study on six classifier fusion strategies. IEEE Trans Pattern Anal Mach Intell 24:281–286

Kuncheva LI (2004) Combining pattern classifiers: methods and algorithms. Wiley-Interscience, New York

Kuncheva LI (2005) Diversity in multiple classifier systems. Inf Fusion 6(1):3–4

Kuncheva LI, Whitaker CJ (2003) Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Mach Learn 51:181–207

Lee H, Kim E, Pedrycz W (2012) A new selective neural network ensemble with negative correlation. Appl Intell, 1–11. doi: 10.1007/s10489-012-0342-3

Maudes J, Rodríguez J, García-Osorio C, Pardo C (2011) Random projections for linear SVM ensembles. Appl Intell 34:347–359

Murphy AH (1972) Scalar and vector partitions of the probability score: part II. n-State situation. J Appl Meteorol 11:1182–1192

Platt JC (1999) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In: Advances in large margin classifiers. MIT Press, Boston, pp 61–74

Raftery AE, Gneiting T, Balabdaoui F, Polakowski M (2005) Using Bayesian model averaging to calibrate forecast ensembles. Mon Weather Rev 133

Rifkin R, Klautau A (2004) In defense of one-vs-all classification. J Mach Learn Res 5:101–141

Robertson T, Wright FT, Dykstra RL (1988) Order restricted statistical inference. Wiley, New York

Souza L, Pozo A, Rosa J, Neto A (2010) Applying correlation to enhance boosting technique using genetic programming as base learner. Appl Intell 33:291–301

Tulyakov S, Jaeger S, Govindaraju V, Doermann D (2008) Review of classifier combination methods. In: Marinai HFS (ed) Studies in computational intelligence: machine learning in document analysis and recognition. Springer, Berlin, pp 361–386

Verma B, Hassan S (2011) Hybrid ensemble approach for classification. Appl Intell 34:258–278

Wang C, Hunter A (2010) A low variance error boosting algorithm. Appl Intell 33:357–369

Witten IH, Frank E (2002) Data mining: practical machine learning tools and techniques with java implementations. SIGMOD Rec 31:76–77

Wolpert DH (1992) Stacked generalization. Neural Netw 5:241–259

Zadrozny B, Elkan C (2002) Transforming classifier scores into accurate multiclass probability estimates. In: Proceedings of the eighth ACM SIGKDD international conference on knowledge discovery and data mining, KDD ’02. ACM Press, New York, pp 694–699
