
Deep Reinforcement Learning for Flow Control Exploits Different Physics for Increasing Reynolds Number Regimes

RiuNet: Repositorio Institucional de la Universidad Politécnica de Valencia




dc.contributor.author Varela-Martínez, Pau es_ES
dc.contributor.author Suárez Morales, Pol es_ES
dc.contributor.author Alcántara-Ávila, Francisco es_ES
dc.contributor.author Miró, Arnau es_ES
dc.contributor.author Rabault, Jean es_ES
dc.contributor.author Font, Bernat es_ES
dc.contributor.author García-Cuevas González, Luis Miguel es_ES
dc.contributor.author Lehmkuhl, Oriol es_ES
dc.contributor.author Vinuesa, Ricardo es_ES
dc.date.accessioned 2023-06-06T18:02:05Z
dc.date.available 2023-06-06T18:02:05Z
dc.date.issued 2022-12 es_ES
dc.identifier.uri http://hdl.handle.net/10251/193929
dc.description.abstract [EN] The increase in emissions associated with aviation requires deeper research into novel sensing and flow-control strategies to obtain improved aerodynamic performances. In this context, data-driven methods are suitable for exploring new approaches to control the flow and develop more efficient strategies. Deep artificial neural networks (ANNs) used together with reinforcement learning, i.e., deep reinforcement learning (DRL), are receiving increasing attention due to their capabilities for controlling complex problems in multiple areas. In particular, these techniques have recently been used to solve problems related to flow control. In this work, an ANN trained through a DRL agent, coupled with the numerical solver Alya, is used to perform active flow control. The Tensorforce library was used to apply DRL to the simulated flow. Two-dimensional simulations of the flow around a cylinder were conducted, and an active control based on two jets located on the walls of the cylinder was considered. By gathering information from the flow surrounding the cylinder, the ANN agent is able to learn, through proximal policy optimization (PPO), effective control strategies for the jets, leading to a significant drag reduction. Furthermore, the agent needs to account for the coupled effects of the friction- and pressure-drag components, as well as the interaction between the two boundary layers on both sides of the cylinder and the wake. In the present work, a Reynolds number range beyond those previously considered was studied and compared with results obtained using classical flow-control methods. Control strategies of a significantly different nature were identified by the DRL as the Reynolds number Re increased. On the one hand, for Re ≤ 1000, the classical control strategy based on an opposition control relative to the wake oscillation was obtained. On the other hand, for Re = 2000, the new strategy consisted of an energization of the boundary layers and the separation area, which modulated the flow separation and reduced the drag, in a fashion similar to that of the drag crisis, through a high-frequency actuation. A cross-application of agents was performed for a flow at Re = 2000, obtaining similar drag reductions with the agents trained at Re = 1000 and Re = 2000. The fact that two different strategies yielded the same performance raises the question of whether this Reynolds number regime (Re = 2000) belongs to a transition towards a flow of a different nature, which would only admit a high-frequency actuation strategy to obtain the drag reduction. At the same time, this finding allows for the application of ANNs trained at lower but comparable-in-nature Reynolds numbers, saving computational resources. es_ES
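The agent–solver loop described in the abstract can be sketched as follows. This is an illustrative outline only, not code from the paper: the probe count, the scalar-to-jets mapping, and the reward weights are assumptions, and in the actual setup the PPO agent (via Tensorforce) and the Alya CFD solver would supply the action and the flow state.

```python
# Illustrative sketch of the DRL active-flow-control loop: pressure probes
# around the cylinder form the state, the agent sets the mass-flow rate of
# the two jets on the cylinder walls, and the reward favours drag reduction.
# All names and numerical values below are assumptions for illustration.

N_PROBES = 24  # assumed number of pressure probes feeding the agent

def jet_actions(q):
    """Map one scalar action to the two jets: equal and opposite
    mass-flow rates, so no net mass is injected into the flow."""
    return q, -q

def reward(drag_coeff, lift_coeff, lift_penalty=0.2):
    """Reward favouring drag reduction while penalising large lift
    oscillations (a common choice in DRL flow control)."""
    return -drag_coeff - lift_penalty * abs(lift_coeff)

# One mock interaction step; in the real setup the CFD solver advances
# the flow between actions and returns new probe pressures and forces.
state = [0.0] * N_PROBES            # placeholder probe readings
q_top, q_bottom = jet_actions(0.01)  # placeholder agent output
r = reward(drag_coeff=1.1, lift_coeff=-0.3)
```

The zero-net-mass-flux constraint in `jet_actions` mirrors the paper's setup of two opposed jets, so the drag reduction cannot come from simply blowing momentum into the wake.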
dc.description.sponsorship The authors acknowledge the contribution of Maxence Deferrez to this work. R.V. acknowledges funding from the ERC through grant no. 2021-CoG-101043998, DEEPCONTROL es_ES
dc.language English es_ES
dc.publisher MDPI AG es_ES
dc.relation.ispartof Actuators es_ES
dc.rights Attribution (by) es_ES
dc.subject Numerical simulation es_ES
dc.subject Wake dynamics es_ES
dc.subject Flow control es_ES
dc.subject Machine learning es_ES
dc.subject Deep reinforcement learning es_ES
dc.subject.classification AEROSPACE ENGINEERING es_ES
dc.title Deep Reinforcement Learning for Flow Control Exploits Different Physics for Increasing Reynolds Number Regimes es_ES
dc.type Article es_ES
dc.identifier.doi 10.3390/act11120359 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/ERC//2021-CoG-101043998/ es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Escuela Técnica Superior de Ingeniería del Diseño - Escola Tècnica Superior d'Enginyeria del Disseny es_ES
dc.description.bibliographicCitation Varela-Martínez, P.; Suárez Morales, P.; Alcántara-Ávila, F.; Miró, A.; Rabault, J.; Font, B.; García-Cuevas González, LM.... (2022). Deep Reinforcement Learning for Flow Control Exploits Different Physics for Increasing Reynolds Number Regimes. Actuators. 11(12):1-24. https://doi.org/10.3390/act11120359 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion https://doi.org/10.3390/act11120359 es_ES
dc.description.upvformatpinicio 1 es_ES
dc.description.upvformatpfin 24 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 11 es_ES
dc.description.issue 12 es_ES
dc.identifier.eissn 2076-0825 es_ES
dc.relation.pasarela S\478794 es_ES
dc.contributor.funder European Research Council es_ES
dc.subject.ods 07.- Ensure access to affordable, reliable, sustainable and modern energy for all es_ES

