
Intra-node Memory Safe GPU Co-Scheduling

RiuNet: Repositorio Institucional de la Universidad Politécnica de Valencia


dc.contributor.author Reaño González, Carlos es_ES
dc.contributor.author Silla Jiménez, Federico es_ES
dc.contributor.author Nikolopoulos, Dimitrios S. es_ES
dc.contributor.author Varghese, Blesson es_ES
dc.date.accessioned 2019-07-07T20:01:40Z
dc.date.available 2019-07-07T20:01:40Z
dc.date.issued 2018 es_ES
dc.identifier.issn 1045-9219 es_ES
dc.identifier.uri http://hdl.handle.net/10251/123276
dc.description.abstract [EN] GPUs in High-Performance Computing systems remain under-utilised due to the unavailability of schedulers that can safely schedule multiple applications to share the same GPU. The research reported in this paper aims to improve GPU utilisation by proposing a framework, which we refer to as schedGPU, to facilitate intra-node GPU co-scheduling such that a GPU can be safely shared among multiple applications by taking memory constraints into account. Two approaches, a client-server approach and a shared memory approach, are explored; the shared memory approach proves more suitable owing to its lower overheads. Four policies are proposed in schedGPU to handle applications waiting to access the GPU, two of which account for priorities. The feasibility of schedGPU is validated on three real-world applications, and a clear performance gain is observed. For single applications, a gain of over 10 times, as measured by GPU utilisation and GPU memory utilisation, is obtained. For workloads comprising multiple applications, a speed-up of up to 5x in the total execution time is noted. Moreover, the average GPU utilisation and average GPU memory utilisation are increased by 5 and 12 times, respectively. es_ES
dc.description.sponsorship This work was funded by Generalitat Valenciana under grant PROMETEO/2017/77. es_ES
dc.language English es_ES
dc.publisher Institute of Electrical and Electronics Engineers es_ES
dc.relation.ispartof IEEE Transactions on Parallel and Distributed Systems es_ES
dc.rights All rights reserved es_ES
dc.subject GPU co-scheduling es_ES
dc.subject Access synchronisation es_ES
dc.subject Memory safe es_ES
dc.subject Accelerator es_ES
dc.subject Under-utilisation es_ES
dc.subject SchedGPU es_ES
dc.subject.classification ARQUITECTURA Y TECNOLOGIA DE COMPUTADORES es_ES
dc.title Intra-node Memory Safe GPU Co-Scheduling es_ES
dc.type Article es_ES
dc.identifier.doi 10.1109/TPDS.2017.2784428 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/GVA//PROMETEO%2F2017%2F077/ es_ES
dc.rights.accessRights Open access es_ES
dc.contributor.affiliation Universitat Politècnica de València. Departamento de Informática de Sistemas y Computadores - Departament d'Informàtica de Sistemes i Computadors es_ES
dc.description.bibliographicCitation Reaño González, C.; Silla Jiménez, F.; Nikolopoulos, DS.; Varghese, B. (2018). Intra-node Memory Safe GPU Co-Scheduling. IEEE Transactions on Parallel and Distributed Systems. 29(5):1089-1102. https://doi.org/10.1109/TPDS.2017.2784428 es_ES
dc.description.accrualMethod S es_ES
dc.relation.publisherversion http://doi.org/10.1109/TPDS.2017.2784428 es_ES
dc.description.upvformatpinicio 1089 es_ES
dc.description.upvformatpfin 1102 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 29 es_ES
dc.description.issue 5 es_ES
dc.relation.pasarela S\349037 es_ES
dc.contributor.funder Generalitat Valenciana es_ES
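
Illustrative note: the abstract above describes schedGPU's core idea of admitting a GPU application only when its memory demand fits within the device's free memory, with co-running applications on the same node coordinating through shared state. The Python sketch below is a minimal illustration of that admission check, not the authors' implementation; GPU_TOTAL_MEM, request_gpu_memory and release_gpu_memory are hypothetical names, and the 8 GiB device size is assumed for the example.

# Minimal sketch (not the authors' schedGPU implementation) of a memory-safe
# admission check for intra-node GPU co-scheduling. A shared counter of
# reserved GPU memory stands in for the shared-memory state the abstract
# mentions; a process blocks until its request fits within the device.
import multiprocessing as mp
import time

GPU_TOTAL_MEM = 8 * 1024**3            # assumed 8 GiB device, for illustration only
_reserved = mp.Value('q', 0)           # bytes currently reserved by co-scheduled processes

def request_gpu_memory(nbytes, poll_interval=0.01):
    """Block until nbytes can be reserved on the GPU, then reserve it."""
    while True:
        with _reserved.get_lock():
            if _reserved.value + nbytes <= GPU_TOTAL_MEM:
                _reserved.value += nbytes
                return
        time.sleep(poll_interval)      # simplistic waiting policy; schedGPU proposes four

def release_gpu_memory(nbytes):
    """Return a previous reservation to the shared pool."""
    with _reserved.get_lock():
        _reserved.value -= nbytes

if __name__ == "__main__":
    request_gpu_memory(2 * 1024**3)    # reserve 2 GiB before launching GPU work
    # ... GPU kernels would run here ...
    release_gpu_memory(2 * 1024**3)

In a real framework the reservation state would have to be shared across independent processes on the node (for example via POSIX shared memory, which the abstract identifies as the lower-overhead approach), and priority-aware policies would decide the order in which waiting applications are admitted.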

