A high-performance matrix–matrix multiplication methodology for CPU and GPU architectures

Iosif Mporas, Vasilios Kelefouras, Angeliki Kritikakou, Vasilios Kolonias

Research output: Contribution to journal › Article › peer-review


Abstract

Current compilers cannot generate code that competes with hand-tuned code in efficiency, even for a kernel as simple as matrix–matrix multiplication (MMM). A key step in program optimization is the estimation of optimal values for scheduling parameters such as tile sizes and the number of levels of tiling. Selecting the scheduling parameter values is a difficult and time-consuming task, since the parameter values depend on one another; for this reason they are usually found by search methods and empirical techniques. To overcome this problem, the scheduling sub-problems must be optimized together, as one problem, and not separately. In this paper, an MMM methodology is presented in which the optimum scheduling parameters are found by theoretically reducing the search space, while the major scheduling sub-problems are addressed together, as one problem, according to the hardware architecture parameters and the input size; for different hardware architecture parameters and/or input sizes, a different implementation is produced. This is achieved by fully exploiting the software characteristics (e.g., data reuse) and the hardware architecture parameters (e.g., data cache sizes and associativities), giving high-quality solutions and a smaller search space. The methodology applies to a wide range of CPU and GPU architectures.
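To illustrate the kind of cache-aware scheduling decision the abstract refers to, the sketch below shows a tiled MMM kernel in C whose tile size is chosen so that three tiles of doubles fit in the L1 data cache. This is a minimal illustration of loop tiling under assumed parameters (matrix order N, an L1 size of 32 KB, and the helper pick_tile are all hypothetical), not the paper's actual methodology, which also accounts for cache associativity, further levels of tiling, and GPU architectures.

```c
/*
 * Minimal loop-tiling sketch for C = A * B.
 * N, L1_BYTES and pick_tile() are illustrative assumptions, not values
 * or procedures prescribed by the paper.
 */
#include <stdio.h>
#include <stdlib.h>

#define N 512          /* assumed square matrix order                      */
#define L1_BYTES 32768 /* assumed L1 data cache size; a real tool queries it */

/* Pick a tile T so that three T x T blocks of doubles fit in the L1 cache. */
static int pick_tile(void)
{
    int t = 8;
    while (3 * (t * 2) * (t * 2) * (int)sizeof(double) <= L1_BYTES)
        t *= 2;
    return t;
}

/* Tiled matrix-matrix multiplication: C += A * B, all n x n, row-major. */
static void mmm_tiled(const double *A, const double *B, double *C, int n, int T)
{
    for (int ii = 0; ii < n; ii += T)
        for (int kk = 0; kk < n; kk += T)
            for (int jj = 0; jj < n; jj += T)
                /* Multiply the (ii,kk) block of A by the (kk,jj) block of B. */
                for (int i = ii; i < ii + T && i < n; i++)
                    for (int k = kk; k < kk + T && k < n; k++) {
                        double a = A[i * n + k];
                        for (int j = jj; j < jj + T && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}

int main(void)
{
    double *A = malloc(N * N * sizeof *A);
    double *B = malloc(N * N * sizeof *B);
    double *C = calloc(N * N, sizeof *C);
    for (int i = 0; i < N * N; i++) { A[i] = 1.0; B[i] = 2.0; }

    int T = pick_tile();
    mmm_tiled(A, B, C, N, T);

    /* Each entry of C should equal 2.0 * N for these inputs. */
    printf("tile = %d, C[0] = %f (expected %f)\n", T, C[0], 2.0 * N);
    free(A); free(B); free(C);
    return 0;
}
```

The point of the sketch is that the tile size is derived from a hardware parameter (here only the L1 size) rather than searched for empirically; the paper's methodology extends this idea to the full set of interdependent scheduling sub-problems.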
Original language: English
Pages (from-to): 804-844
Number of pages: 41
Journal: Journal of Supercomputing
Volume: 72
Issue number: 3
Early online date: 22 Jan 2016
DOIs
Publication status: Published - 1 Mar 2016
