Show simple item record
dc.contributor.author | Palacios-Morocho, Maritza Elizabeth | es_ES |
dc.contributor.author | Inca, Saúl | es_ES |
dc.contributor.author | Monserrat del Río, Jose Francisco | es_ES |
dc.date.accessioned | 2024-06-26T18:11:47Z | |
dc.date.available | 2024-06-26T18:11:47Z | |
dc.date.issued | 2023-10 | es_ES |
dc.identifier.issn | 0018-9545 | es_ES |
dc.identifier.uri | http://hdl.handle.net/10251/205512 | |
dc.description.abstract | [EN] Autonomous navigation is a well-studied field in robotics requiring high standards of efficiency and reliability. Many studies focus on applying AI techniques to obtain a high-quality map, achieve precise localization, or improve the trajectory to be followed by the agent. Since traditional planning methods need a high-quality map to obtain optimal trajectories, this paper addresses the problem of multipath map-less planning and proposes a novel multipath planning algorithm, Double Deep Reinforcement Learning - Enhanced Genetic (DDRL-EG), for mobile robots in an unknown environment. It combines Double Deep Reinforcement Learning (DDRL) with Heuristic Knowledge (HK), Experience Replay (ER), a Genetic Algorithm (GA), and Dynamic Programming (DP), allowing the agent to reach its target successfully without maps. In addition, it optimizes the training time and the chosen path in terms of time and distance to the target. A hybrid method is also used in which Semi-Uniform Distributed Exploration (SUDE) determines the probability that an action is decided based on directed knowledge, hybrid knowledge, or autonomous knowledge. The performance of DDRL-EG is compared with two other algorithms in two different environments. The results show that DDRL-EG is a more robust and powerful algorithm, since with less training it can provide much smoother and shorter trajectories to the target. | es_ES |
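A minimal sketch of the semi-uniform exploration idea named in the abstract, not the authors' implementation: the abstract states that SUDE sets the probability of choosing an action from directed, hybrid, or autonomous knowledge, so the sketch below draws a preferred knowledge source with some probability and splits the remainder uniformly. The probability value, the source labels, and the choice of "directed" as the preferred source are illustrative assumptions.

```python
# Hedged sketch of a semi-uniform choice over knowledge sources (assumptions noted above).
import random

def sude_select_source(p_best: float = 0.6) -> str:
    """Pick a knowledge source: the preferred one with probability p_best,
    otherwise uniformly among the remaining sources."""
    sources = ["directed", "hybrid", "autonomous"]
    preferred = "directed"  # assumed preference; the paper defines its own schedule
    if random.random() < p_best:
        return preferred
    return random.choice([s for s in sources if s != preferred])

# Usage: tally choices over many draws to see the semi-uniform split.
if __name__ == "__main__":
    counts = {"directed": 0, "hybrid": 0, "autonomous": 0}
    for _ in range(10_000):
        counts[sude_select_source()] += 1
    print(counts)  # roughly 60% directed, ~20% each for hybrid and autonomous
```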
dc.description.sponsorship | The work of Elizabeth Palacios was supported by the Research and Development Grants Program (PAID-01-19) of the Universitat Politècnica de València. | es_ES |
dc.language | English | es_ES |
dc.publisher | Institute of Electrical and Electronics Engineers | es_ES |
dc.relation.ispartof | IEEE Transactions on Vehicular Technology | es_ES |
dc.rights | All rights reserved | es_ES |
dc.subject | Reinforcement learning | es_ES |
dc.subject | Dynamic programming | es_ES |
dc.subject | Prioritized experience | es_ES |
dc.subject | Heuristic knowledge | es_ES |
dc.subject | Genetic algorithm | es_ES |
dc.subject.classification | SIGNAL THEORY AND COMMUNICATIONS | es_ES |
dc.title | Multipath Planning Acceleration Method With Double Deep R-Learning Based on a Genetic Algorithm | es_ES |
dc.type | Article | es_ES |
dc.identifier.doi | 10.1109/TVT.2023.3277981 | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/UPV//PAID-01-19-18//5G-SMART 5G for Smart Manufacturing/ | es_ES |
dc.rights.accessRights | Open access | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Escuela Técnica Superior de Ingenieros de Telecomunicación - Escola Tècnica Superior d'Enginyers de Telecomunicació | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Instituto Universitario de Telecomunicación y Aplicaciones Multimedia - Institut Universitari de Telecomunicacions i Aplicacions Multimèdia | es_ES |
dc.description.bibliographicCitation | Palacios-Morocho, ME.; Inca, S.; Monserrat Del Río, JF. (2023). Multipath Planning Acceleration Method With Double Deep R-Learning Based on a Genetic Algorithm. IEEE Transactions on Vehicular Technology. 72(10):12681-12696. https://doi.org/10.1109/TVT.2023.3277981 | es_ES |
dc.description.accrualMethod | S | es_ES |
dc.relation.publisherversion | https://doi.org/10.1109/TVT.2023.3277981 | es_ES |
dc.description.upvformatpinicio | 12681 | es_ES |
dc.description.upvformatpfin | 12696 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.volume | 72 | es_ES |
dc.description.issue | 10 | es_ES |
dc.relation.pasarela | S\494423 | es_ES |
dc.contributor.funder | UNIVERSIDAD POLITECNICA DE VALENCIA | es_ES |
upv.costeAPC | 1981.3 | es_ES |