Malinverno, L.; Barros, V.; Ghisoni, F.; Visonà, G.; Kern, R.; Nickel, P.J.; Ventura, B.E.; Simic, I.; Stryeck, S.; Manni, F.; Ferri Ramírez, C.; Jean-Quartier, C.; Genga, L.; Schweikert, G.; Lovric, M. (2023). A historical perspective of biomedical explainable AI research. Patterns, 4(9). https://doi.org/10.1016/j.patter.2023.100830
Please use this identifier to cite or link to this item: http://hdl.handle.net/10251/212440
Title:
A historical perspective of biomedical explainable AI research
Author:
Malinverno, Luca
Barros, Vesna
Ghisoni, Francesco
Visonà, Giovanni
Kern, Roman
Nickel, Philip J.
Ventura, Barbara Elvira
Simic, Ilija
Stryeck, Sarah
Manni, Francesca
Ferri Ramírez, César
Jean-Quartier, Claire
Genga, Laura
Schweikert, Gabriele
Lovric, Mario
UPV Unit:
Universitat Politècnica de València. Escola Tècnica Superior d'Enginyeria Informàtica
Issued date:
2023
Abstract:
The black-box nature of most artificial intelligence (AI) models encourages the development of explainability methods to engender trust in the AI decision-making process. Such methods can be broadly categorized into two main types: post hoc explanations and inherently interpretable algorithms. We aimed to analyze the possible associations between COVID-19 and the push of explainable AI (XAI) to the forefront of biomedical research. We automatically extracted biomedical XAI studies related to concepts of causality or explainability from the PubMed database and manually labeled 1,603 papers with respect to XAI categories. To compare the trends pre- and post-COVID-19, we fit a change point detection model and evaluated significant changes in publication rates. We show that the advent of COVID-19 at the beginning of 2020 could be the driving factor behind an increased focus on XAI, playing a crucial role in accelerating an already evolving trend. Finally, we present a discussion of the future societal use and impact of XAI technologies and of potential future directions for those who pursue fostering clinical trust with interpretable machine learning models.
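As a rough illustration of the change point analysis the abstract describes, the sketch below detects a shift in the level of a publication-count time series. It is a minimal example, not the paper's actual model: the `ruptures` library, the PELT algorithm with an L2 cost, the penalty value, and the synthetic monthly counts (with a hypothetical jump at month 60, i.e., early 2020) are all assumptions introduced here.

```python
# Minimal sketch of change point detection on publication counts.
# Assumption: the paper's exact model, data, and parameters are not
# reproduced here; this only demonstrates the general technique.
import numpy as np
import ruptures as rpt  # pip install ruptures

# Hypothetical monthly counts: slow growth plus a step up around 2020-01.
rng = np.random.default_rng(0)
months = np.arange(96)                 # 2015-01 .. 2022-12
trend = 2 + 0.15 * months              # pre-existing upward trend
step = np.where(months >= 60, 10, 0)   # extra publications after the jump
counts = rng.poisson(trend + step).astype(float)

# PELT with an L2 cost locates indices where the mean level shifts.
algo = rpt.Pelt(model="l2", min_size=6).fit(counts)
breakpoints = algo.predict(pen=50)     # penalty controls sensitivity
print(breakpoints)                     # e.g. [60, 96]; the last entry is the series end
```

In practice the penalty would be tuned, and a detected breakpoint near month 60 would correspond to early 2020 under this synthetic setup.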
Subjects:
Artificial intelligence (AI), Black-box, Explainability, Trust, Decision-making process, Post hoc explanations, Inherently interpretable algorithms, COVID-19
Copyrights:
Attribution (CC BY)
Source:
Patterns (eISSN: 2666-3899)
DOI:
10.1016/j.patter.2023.100830
Publisher:
Cell Press
Publisher version:
https://doi.org/10.1016/j.patter.2023.100830
Project ID:
info:eu-repo/grantAgreement/EC/H2020/813533/EU/Machine Learning Frontiers in Precision Medicine/
Thanks:
We are extremely grateful to Prof. Chris Holmes for his critical reading and valuable comments. We acknowledge the funding received from the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska-Curie Grant agreement no. 813533-MSCA-ITN-2018. I.S. was funded by the "DDAI" COMET Module within the COMET - Competence Centers for Excellent Technologies Programme, funded by the Austrian Federal Ministry for Transport, Innovation and Technology (BMVIT), the Austrian Federal Ministry for Digital and Economic Affairs (BMDW), the Austrian Research Promotion Agency (FFG), the Province of Styria (SFG), and partners from industry and academia. The COMET Programme is managed by FFG. Finally, we acknowledge the Big Data Value Association (BDVA), Brussels, Belgium.
Type:
Article