Accelerating Reduction and Scan Using Tensor Core Units
Publication Year: 2018
  Abdul Dakkak, Cheng Li, Jinjun Xiong, Wen-mei Hwu
  CoRR, abs/1811.09736.

Driven by deep learning, there has been a surge of specialized processors for matrix multiplication, referred to as Tensor Core Units (TCUs). These TCUs perform matrix multiplication on small matrices (usually 4x4 or 16x16) to accelerate the convolutional and recurrent neural networks in deep learning workloads. In this paper we leverage NVIDIA's TCUs to express both reduction and scan in terms of matrix multiplication and show the benefits in program simplicity, efficiency, and performance. Our algorithm exercises the NVIDIA TCUs, which would otherwise be idle, achieves 89%-98% of peak memory-copy bandwidth, and is orders of magnitude faster (up to 100x for reduction and 3x for scan) than state-of-the-art methods for small segment sizes, which are common in machine learning and scientific applications. Our algorithm achieves this while decreasing power consumption by up to 22% for reduction and 16% for scan.
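The linear-algebra identities behind the paper's approach can be sketched without TCU hardware: a reduction of a vector is a multiplication by a row of ones, and an inclusive scan (prefix sum) is a multiplication by a lower-triangular matrix of ones. The sketch below illustrates only these identities in plain Python; it is not the authors' TCU implementation, and the helper `matmul` is a hypothetical stand-in for the hardware matrix-multiply unit.

```python
def matmul(A, B):
    # Naive (p x q) * (q x r) matrix multiply; stands in for the
    # hardware matrix-multiply primitive a TCU would provide.
    p, q, r = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(q)) for j in range(r)]
            for i in range(p)]

x = [3, 1, 4, 1, 5]
n = len(x)
col = [[v] for v in x]  # x as an n x 1 column matrix

# Reduction: a 1 x n row of ones times x yields the sum of x.
ones_row = [[1] * n]
total = matmul(ones_row, col)[0][0]

# Scan: a lower-triangular matrix of ones times x yields the
# inclusive prefix sums of x.
L = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
prefix = [row[0] for row in matmul(L, col)]

print(total)   # 14
print(prefix)  # [3, 4, 8, 9, 14]
```

In the paper's setting these multiplications are mapped onto the TCU's small fixed-size matrix tiles, which is what makes short segments efficient; the sketch above deliberately ignores that tiling.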
@article{dakkak2018accelerating,
  title={Accelerating Reduction and Scan Using Tensor Core Units},
  author={Abdul Dakkak and Cheng Li and Isaac Gelado and Jinjun Xiong and Wen-mei W. Hwu},
  journal={CoRR},
  volume={abs/1811.09736},
  year={2018}
}