Low precision matrix multiplication for efficient deep learning in NVIDIA Carmel processors

Handle

https://riunet.upv.es/handle/10251/189610

Bibliographic citation

San Juan-Sebastian, P.; Rodríguez-Sánchez, R.; Igual, FD.; Alonso-Jordá, P.; Quintana-Ortí, ES. (2021). Low precision matrix multiplication for efficient deep learning in NVIDIA Carmel processors. The Journal of Supercomputing. 77(10):11257-11269. https://doi.org/10.1007/s11227-021-03636-4

Abstract

We introduce a high-performance, multi-threaded realization of the gemm kernel for the ARMv8.2 architecture that operates with 16-bit (half precision) floating point operands. Our code is especially designed for efficient machine learning inference (and, to a certain extent, also training) with deep neural networks. The results on the NVIDIA Carmel multicore processor, which implements the ARMv8.2 architecture, show considerable performance gains for the gemm kernel, close to the theoretical peak acceleration that could be expected when moving from 32-bit to 16-bit arithmetic/data. For the type of convolution operator arising in convolutional neural networks, the speed-ups are more modest, though still relevant.
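As an illustration only (this is not the authors' ARMv8.2 code, which relies on half-precision NEON arithmetic), the blocked loop structure underlying a high-performance gemm realization of C += A * B can be sketched in plain Python as follows; the function name `gemm_blocked` and the block size are assumptions for the sketch:

```python
def gemm_blocked(A, B, C, block=4):
    """Blocked matrix multiplication C += A * B for square n x n
    matrices stored as lists of lists. Blocking improves cache reuse,
    which is the organizing principle of high-performance gemm kernels."""
    n = len(A)
    for i0 in range(0, n, block):          # loop over row blocks of C
        for k0 in range(0, n, block):      # loop over the inner dimension
            for j0 in range(0, n, block):  # loop over column blocks of C
                # Micro-tile update: all operands fit in fast memory.
                for i in range(i0, min(i0 + block, n)):
                    for k in range(k0, min(k0 + block, n)):
                        aik = A[i][k]
                        for j in range(j0, min(j0 + block, n)):
                            C[i][j] += aik * B[k][j]
    return C
```

In the actual kernel, the innermost micro-tile update would be carried out with 16-bit SIMD instructions, which is where the near-2x speed-up over 32-bit arithmetic comes from.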

Keywords

Deep learning, Matrix multiplication, High performance, NVIDIA Carmel system-on-chip (SoC)

ISSN

0920-8542

Source

The Journal of Supercomputing

DOI

10.1007/s11227-021-03636-4