Show simple item record
dc.contributor.author | Reaño González, Carlos | es_ES |
dc.contributor.author | Silla Jiménez, Federico | es_ES |
dc.date.accessioned | 2021-01-14T04:32:14Z | |
dc.date.available | 2021-01-14T04:32:14Z | |
dc.date.issued | 2019-05 | es_ES |
dc.identifier.issn | 0743-7315 | es_ES |
dc.identifier.uri | http://hdl.handle.net/10251/158937 | |
dc.description.abstract | [EN] Although GPUs are being widely adopted in order to noticeably reduce the execution time of many applications, their use presents several side effects, such as an increased acquisition cost of the cluster nodes or an increased overall energy consumption. To address these concerns, GPU virtualization frameworks could be used. These frameworks allow accelerated applications to transparently use GPUs located in cluster nodes other than the one executing the program. Furthermore, these frameworks aim to offer the same API as the NVIDIA CUDA Runtime API does, although different frameworks provide different degrees of support. In general, and because of the complexity of implementing an efficient mechanism, none of the existing frameworks provides support for memory copies between remote GPUs located in different nodes. In this paper we introduce an efficient mechanism devised to support this kind of memory copy among GPUs located in different cluster nodes. Several options are explored and analyzed, such as the use of the GPUDirect RDMA mechanism. We focus our discussion on the rCUDA remote GPU virtualization framework. Results show that it is possible to implement this kind of memory copy so efficiently that performance is even improved with respect to the original performance attained by CUDA when GPUs located in the same cluster node are leveraged. | es_ES |
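The abstract describes inter-node peer-to-peer copies being offered behind the unchanged CUDA Runtime API. A minimal sketch of what that looks like from the application's side, assuming an rCUDA-style deployment where devices 0 and 1 may actually reside in different cluster nodes (the device numbering and payload size here are illustrative, not from the paper):

```cuda
// Hypothetical sketch: an inter-node P2P copy as seen by the application.
// Under a remote GPU virtualization framework such as rCUDA, each "device"
// may live in a different cluster node; the framework intercepts the CUDA
// Runtime API calls, so the application code is unchanged.
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    size_t size = 1 << 20;              // 1 MiB payload (arbitrary)
    void *src = NULL, *dst = NULL;

    cudaSetDevice(0);                   // with rCUDA, possibly a GPU in node A
    cudaMalloc(&src, size);

    cudaSetDevice(1);                   // with rCUDA, possibly a GPU in node B
    cudaMalloc(&dst, size);

    // Standard CUDA peer copy; the virtualization framework must implement
    // the remote transfer (e.g., via GPUDirect RDMA) behind this same call.
    cudaError_t err = cudaMemcpyPeer(dst, 1, src, 0, size);
    printf("copy: %s\n", cudaGetErrorString(err));

    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```

The point of the paper's mechanism is that this call, unmodified, can move data between GPUs in different nodes; native CUDA only supports it between GPUs in the same node.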
dc.description.sponsorship | This work was funded by the Generalitat Valenciana under Grant PROMETEO/2017/077. Authors are also grateful for the generous support provided by Mellanox Technologies Inc. | es_ES |
dc.language | English | es_ES |
dc.publisher | Elsevier | es_ES |
dc.relation.ispartof | Journal of Parallel and Distributed Computing | es_ES |
dc.rights | Attribution-NonCommercial-NoDerivatives (by-nc-nd) | es_ES |
dc.subject | CUDA | es_ES |
dc.subject | Virtualization | es_ES |
dc.subject | GPUDirect RDMA | es_ES |
dc.subject.classification | COMPUTER ARCHITECTURE AND TECHNOLOGY | es_ES |
dc.title | On the support of inter-node P2P GPU memory copies in rCUDA | es_ES |
dc.type | Article | es_ES |
dc.identifier.doi | 10.1016/j.jpdc.2018.12.011 | es_ES |
dc.relation.projectID | info:eu-repo/grantAgreement/GVA//PROMETEO%2F2017%2F077/ | es_ES |
dc.rights.accessRights | Open access | es_ES |
dc.contributor.affiliation | Universitat Politècnica de València. Departamento de Informática de Sistemas y Computadores - Departament d'Informàtica de Sistemes i Computadors | es_ES |
dc.description.bibliographicCitation | Reaño González, C.; Silla Jiménez, F. (2019). On the support of inter-node P2P GPU memory copies in rCUDA. Journal of Parallel and Distributed Computing. 127:28-43. https://doi.org/10.1016/j.jpdc.2018.12.011 | es_ES |
dc.description.accrualMethod | S | es_ES |
dc.relation.publisherversion | https://doi.org/10.1016/j.jpdc.2018.12.011 | es_ES |
dc.description.upvformatpinicio | 28 | es_ES |
dc.description.upvformatpfin | 43 | es_ES |
dc.type.version | info:eu-repo/semantics/publishedVersion | es_ES |
dc.description.volume | 127 | es_ES |
dc.relation.pasarela | S\412523 | es_ES |
dc.contributor.funder | Generalitat Valenciana | es_ES |