
Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study

RiuNet: Institutional Repository of the Universidad Politécnica de Valencia


dc.contributor.author Friginal López, Jesús es_ES
dc.contributor.author Martínez, Miquel es_ES
dc.contributor.author De Andrés, David es_ES
dc.contributor.author Ruiz, Juan-Carlos es_ES
dc.date.accessioned 2017-05-08T11:42:38Z
dc.date.available 2017-05-08T11:42:38Z
dc.date.issued 2016-01
dc.identifier.issn 0164-1212
dc.identifier.uri http://hdl.handle.net/10251/80735
dc.description This is the author’s version of a work that was accepted for publication in The Journal of Systems and Software. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study. Journal of Systems and Software, 111, 2016. DOI 10.1016/j.jss.2015.08.052. es_ES
dc.description.abstract Benchmarks enable the comparison of computer-based systems according to a variable set of criteria, such as dependability, security, performance, cost and/or power consumption. It is not so much its difficulty as its limited mathematical accuracy that keeps the multi-criteria analysis of results a subjective process, rarely addressed in an explicit way in existing benchmarks. It is thus not surprising that industrial benchmarks rely only on a reduced set of easy-to-understand measures, especially when considering complex systems. This keeps the process of result interpretation straightforward, unambiguous and accurate, but at the same time it limits the richness and depth of the analysis. As a result, academia prefers to characterize complex systems with a wider set of measures. Marrying the requirements of industry and academia in a single proposal remains a challenge today. This paper addresses this challenge by reducing the uncertainty of the analysis process using quality (score-based) models. At measure definition time, these models make explicit (i) the requirements imposed on each type of measure, which may vary from one context of use to another, and (ii) the type and intensity of the relations between the considered measures. At measure analysis time, they provide a consistent, straightforward and unambiguous method to interpret the resulting measures. The methodology and its practical use are illustrated through three case studies from the dependability benchmarking domain, a domain where several criteria, including both performance and dependability, are typically considered when analyzing benchmark results. Although the proposed approach is applied here to dependability benchmarks, the general formulation of the solution makes it useful for any type of benchmark (see the illustrative score-aggregation sketch after the metadata record below). © 2015 Elsevier Inc. All rights reserved. es_ES
dc.description.sponsorship This work is partially supported by the Spanish project ARENES (TIN2012-38308-C02-01), the ANR French project AMORES (ANR-11-INSE-010), the Intel Doctoral Student Honour Programme 2012, and the "Programa de Ayudas de Investigación y Desarrollo" (PAID) from the Universitat Politècnica de València. en_EN
dc.language English es_ES
dc.publisher Elsevier es_ES
dc.relation.ispartof Journal of Systems and Software es_ES
dc.rights All rights reserved es_ES
dc.subject Multiple-Criteria Decision Making (MCDM) es_ES
dc.subject Dependability benchmarking es_ES
dc.subject Quality models es_ES
dc.subject.classification COMPUTER ARCHITECTURE AND TECHNOLOGY es_ES
dc.title Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study es_ES
dc.type Article es_ES
dc.identifier.doi 10.1016/j.jss.2015.08.052
dc.relation.projectID info:eu-repo/grantAgreement/MINECO//TIN2012-38308-C02-01/ES/ADAPTIVE AND RESILIENT NETWORKED EMBEDDED SYSTEMS/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/ANR//ANR-11-INSE-0009/FR/Architecture for MObiquitous REsilient Systems/AMORES/ es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Escola Tècnica Superior d'Enginyeria Informàtica es_ES
dc.description.bibliographicCitation Friginal López, J.; Martínez, M.; De Andrés, D.; Ruiz, J. (2016). Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study. Journal of Systems and Software. 111:105-118. https://doi.org/10.1016/j.jss.2015.08.052 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion http://dx.doi.org/10.1016/j.jss.2015.08.052 es_ES
dc.description.upvformatpinicio 105 es_ES
dc.description.upvformatpfin 118 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 111 es_ES
dc.relation.senia 293887 es_ES
dc.contributor.funder Agence Nationale de la Recherche, France es_ES
dc.contributor.funder Ministerio de Economía y Competitividad es_ES
dc.contributor.funder Universitat Politècnica de València es_ES
dc.contributor.funder Intel Corporation es_ES
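
The abstract above describes the paper's approach only at a high level: quality (score-based) models that encode context-dependent requirements on each measure and the relations between measures, and then turn heterogeneous benchmark measures into comparable scores. The sketch below is a minimal, hypothetical illustration of that general idea of score-based multi-criteria aggregation; it is not the paper's model, and every name, bound and weight in it (MeasureSpec, quality, the example measures for system_a and system_b) is an assumption introduced here for illustration.

```python
from dataclasses import dataclass


@dataclass
class MeasureSpec:
    """Requirement imposed on one measure for a given context of use (hypothetical encoding).

    worst/best bound the range of interest, weight expresses the relative
    importance of the measure, and higher_is_better states the direction of
    improvement (e.g. availability vs. recovery time)."""
    worst: float
    best: float
    weight: float
    higher_is_better: bool = True


def score(value: float, spec: MeasureSpec) -> float:
    """Map a raw measure onto a [0, 1] score against its requirement."""
    lo, hi = sorted((spec.worst, spec.best))
    clamped = min(max(value, lo), hi)
    s = (clamped - lo) / (hi - lo) if hi > lo else 1.0
    return s if spec.higher_is_better else 1.0 - s


def quality(measures: dict, model: dict) -> float:
    """Aggregate per-measure scores into a single weighted quality score."""
    total_weight = sum(spec.weight for spec in model.values())
    return sum(spec.weight * score(measures[name], spec)
               for name, spec in model.items()) / total_weight


# Example: comparing two benchmark targets under one (made-up) context of use.
model = {
    "throughput_mbps": MeasureSpec(worst=0.0, best=100.0, weight=0.3),
    "availability_pct": MeasureSpec(worst=90.0, best=100.0, weight=0.5),
    "recovery_time_s": MeasureSpec(worst=60.0, best=1.0, weight=0.2,
                                   higher_is_better=False),
}
system_a = {"throughput_mbps": 80.0, "availability_pct": 99.0, "recovery_time_s": 5.0}
system_b = {"throughput_mbps": 95.0, "availability_pct": 93.0, "recovery_time_s": 30.0}

print(quality(system_a, model))  # ~0.88: better overall despite lower throughput
print(quality(system_b, model))  # ~0.54: penalized on availability and recovery
```

In this toy example the weights and bounds play the role the abstract assigns to the quality model: changing them to reflect a different context of use changes which system scores higher, while the interpretation of the final score stays consistent and unambiguous.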

