
Exploring explainable AI: category theory insights into machine learning algorithms

RiuNet: Institutional Repository of the Universitat Politècnica de València


dc.contributor.author Fabregat-Hernandez, Ares es_ES
dc.contributor.author Palanca Cámara, Javier es_ES
dc.contributor.author Botti, V. es_ES
dc.date.accessioned 2024-10-03T18:26:29Z
dc.date.available 2024-10-03T18:26:29Z
dc.date.issued 2023-12-01 es_ES
dc.identifier.uri http://hdl.handle.net/10251/209274
dc.description.abstract [EN] Explainable artificial intelligence (XAI) is a growing field that aims to increase the transparency and interpretability of machine learning (ML) models. The aim of this work is to use the categorical properties of learning algorithms in conjunction with the categorical perspective of the information in the datasets to give a framework for explainability. In order to achieve this, we are going to define the enriched categories, with decorated morphisms, Learn, Para and MNet of learners, parameterized functions, and neural networks over metric spaces respectively. The main idea is to encode information from the dataset via categorical methods, see how it propagates, and lastly, interpret the results thanks again to categorical (metric) information. This means that we can attach numerical (computable) information via enrichment to the structural information of the category. With this, we can translate theoretical information into parameters that are easily understandable. We will make use of different categories of enrichment to keep track of different kinds of information. That is, to see how differences in attributes of the data are modified by the algorithm to result in differences in the output to achieve better separation. In that way, the categorical framework gives us an algorithm to interpret what the learning algorithm is doing. Furthermore, since it is designed with generality in mind, it should be applicable in various different contexts. There are three main properties of category theory that help with the interpretability of ML models: formality, the existence of universal properties, and compositionality. The last property offers a way to combine smaller, simpler models that are easily understood to build larger ones. This is achieved by formally representing the structure of ML algorithms and information contained in the model. Finally, universal properties are a cornerstone of category theory.
They help us characterize an object, not by its attributes, but by how it interacts with other objects. Thus, we can formally characterize an algorithm by how it interacts with the data. The main advantage of the framework is that it can unify under the same language different techniques used in XAI. Thus, using the same language and concepts we can describe a myriad of techniques and properties of ML algorithms, streamlining their explanation and making them easier to generalize and extrapolate. es_ES
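The compositional, metric-enriched idea in the abstract can be illustrated with a minimal sketch: if each component function carries a Lipschitz constant (the "decoration" tracking how input differences bound output differences), then under composition these constants multiply, giving a computable bound that propagates through the model. The `LipschitzMap` class below is hypothetical, written only to illustrate the enrichment idea; it is not code from the paper.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class LipschitzMap:
    """A function decorated with a Lipschitz constant (the metric enrichment)."""
    fn: Callable[[float], float]
    lip: float  # bound: |fn(x) - fn(y)| <= lip * |x - y|

    def __call__(self, x: float) -> float:
        return self.fn(x)

    def compose(self, other: "LipschitzMap") -> "LipschitzMap":
        # (self . other): Lipschitz constants multiply under composition,
        # so a bound on how input differences reach the output is computable
        # from the bounds of the parts.
        return LipschitzMap(
            fn=lambda x: self.fn(other.fn(x)),
            lip=self.lip * other.lip,
        )


# A linear scaling (3-Lipschitz) followed by ReLU (1-Lipschitz):
scale = LipschitzMap(fn=lambda x: 3.0 * x, lip=3.0)
relu = LipschitzMap(fn=lambda x: max(x, 0.0), lip=1.0)
net = relu.compose(scale)  # ReLU(3x): 3-Lipschitz overall
```

Here the decoration answers an interpretability question directly: a change of d in the input moves the output of `net` by at most `net.lip * d`, without inspecting the composite's internals.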
dc.description.sponsorship This work is partially supported by the TAILOR project, a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215, and by the GUARDIA project, a project funded by Generalitat Valenciana GVA-CEICE Project PROMETEO/2018/002. es_ES
dc.language Inglés es_ES
dc.publisher IOP Publishing es_ES
dc.relation.ispartof Machine Learning: Science and Technology es_ES
dc.rights Reconocimiento (by) es_ES
dc.subject Explainability es_ES
dc.subject Category theory es_ES
dc.subject Lipschitz functions es_ES
dc.subject Yoneda embedding es_ES
dc.subject Compositionality es_ES
dc.subject.classification LENGUAJES Y SISTEMAS INFORMATICOS es_ES
dc.title Exploring explainable AI: category theory insights into machine learning algorithms es_ES
dc.type Artículo es_ES
dc.identifier.doi 10.1088/2632-2153/ad1534 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/952215/EU/Integrating Reasoning, Learning and Optimization/ es_ES
dc.relation.projectID info:eu-repo/grantAgreement/GVA//PROMETEO%2F2018%2F002//TECNOLOGIES PER ORGANITZACIONS HUMANES EMOCIONALS/ es_ES
dc.rights.accessRights Abierto es_ES
dc.contributor.affiliation Universitat Politècnica de València. Escuela Politécnica Superior de Gandia - Escola Politècnica Superior de Gandia es_ES
dc.contributor.affiliation Universitat Politècnica de València. Escola Tècnica Superior d'Enginyeria Informàtica es_ES
dc.description.bibliographicCitation Fabregat-Hernandez, A.; Palanca Cámara, J.; Botti, V. (2023). Exploring explainable AI: category theory insights into machine learning algorithms. Machine Learning: Science and Technology. 4(4). https://doi.org/10.1088/2632-2153/ad1534 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion https://doi.org/10.1088/2632-2153/ad1534 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 4 es_ES
dc.description.issue 4 es_ES
dc.identifier.eissn 2632-2153 es_ES
dc.relation.pasarela S\508640 es_ES
dc.contributor.funder European Commission es_ES
dc.contributor.funder Generalitat Valenciana es_ES
upv.costeAPC 3012.9 es_ES

