Show simple item record
dc.contributor.author | Izquierdo-Doménech, Juan | es_ES |
dc.contributor.author | Linares-Pellicer, Jordi | es_ES |
dc.contributor.author | Orta-López, Jorge | es_ES |
dc.date.accessioned | 2024-04-11T06:28:32Z | |
dc.date.available | 2024-04-11T06:28:32Z | |
dc.date.issued | 2023-04 | es_ES |
dc.identifier.issn | 1380-7501 | es_ES |
dc.identifier.uri | http://hdl.handle.net/10251/203289 | |
dc.description.abstract | [EN] With its various available frameworks and possible devices, augmented reality is a proven useful tool in various industrial processes such as maintenance, repairing, training, reconfiguration, and even monitoring tasks of production lines in large factories. Despite its advantages, augmented reality still does not usually give meaning to the elements it complements, staying in a physical or geometric layer of its environment and without providing information that may be of great interest to industrial operators in carrying out their work. An expert's remote human assistance is becoming an exciting complement in these environments, but this is expensive or even impossible in many cases. This paper shows how a machine learning semantic layer can complement augmented reality solutions in the industry by providing an intelligent layer, sometimes even beyond some expert's skills. This layer, using state-of-the-art models, can provide visual validation and new inputs, natural language interaction, and automatic anomaly detection. All this new level of semantic context can be integrated into almost any current augmented reality system, improving the operator's job with additional contextual information, new multimodal interaction and validation, increasing their work comfort, operational times, and security. | es_ES |
dc.description.sponsorship | Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. | es_ES |
dc.language | English | es_ES |
dc.publisher | Springer-Verlag | es_ES |
dc.relation.ispartof | Multimedia Tools and Applications | es_ES |
dc.rights | Attribution (by) | es_ES |
dc.subject | Augmented reality | es_ES |
dc.subject | Semantics | es_ES |
dc.subject | Deep learning | es_ES |
dc.subject | Industry | es_ES |
dc.subject | CNN | es_ES |
dc.subject | Transformers | es_ES |
dc.subject | Multimodal interaction | es_ES |
dc.subject.classification | LENGUAJES Y SISTEMAS INFORMATICOS | es_ES |
dc.title | Towards achieving a high degree of situational awareness and multimodal interaction with AR and semantic AI in industrial applications | es_ES |
dc.type | Article | es_ES |
dc.identifier.doi | 10.1007/s11042-022-13803-1 | es_ES |
dc.rights.accessRights | Open access | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Escuela Politécnica Superior de Alcoy - Escola Politècnica Superior d'Alcoi | es_ES |
dc.description.bibliographicCitation | Izquierdo-Doménech, J.; Linares-Pellicer, J.; Orta-López, J. (2023). Towards achieving a high degree of situational awareness and multimodal interaction with AR and semantic AI in industrial applications. Multimedia Tools and Applications. 82(10):15875-15901. https://doi.org/10.1007/s11042-022-13803-1 | es_ES |
dc.description.accrualMethod | S | es_ES |
dc.relation.publisherversion | https://doi.org/10.1007/s11042-022-13803-1 | es_ES |
dc.description.upvformatpinicio | 15875 | es_ES |
dc.description.upvformatpfin | 15901 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.volume | 82 | es_ES |
dc.description.issue | 10 | es_ES |
dc.relation.pasarela | S\471935 | es_ES |
dc.contributor.funder | Universitat Politècnica de València | es_ES |