TY - JOUR
T1 - A high-performance matrix–matrix multiplication methodology for CPU and GPU architectures
AU - Kelefouras, Vasilios
AU - Kritikakou, Angeliki
AU - Mporas, Iosif
AU - Kolonias, Vasilios
N1 - This is the Accepted Manuscript version of the following article: V. Kelefouras, A. Kritikakou, I. Mporas, V. Kolonias, “A high-performance matrix–matrix multiplication methodology for CPU and GPU architectures”, The Journal of Supercomputing, Vol. 72 (3): 804-844, January 2016.
The final published version is available at: https://link.springer.com/article/10.1007%2Fs11227-015-1613-7
© Springer Science+Business Media New York 2016
PY - 2016/3/1
Y1 - 2016/3/1
AB - Current compilers cannot generate code that competes with hand-tuned code in efficiency, even for a simple kernel like matrix–matrix multiplication (MMM). A key step in program optimization is the estimation of optimal values for parameters such as tile sizes and the number of levels of tiling. Selecting the scheduling parameter values is a difficult and time-consuming task, since the parameter values depend on each other; this is why they are usually found by search methods and empirical techniques. To overcome this problem, the scheduling sub-problems must be optimized together, as one problem and not separately. In this paper, an MMM methodology is presented in which the optimum scheduling parameters are found by theoretically decreasing the search space, while the major scheduling sub-problems are addressed together, as one problem and not separately, according to the hardware architecture parameters and the input size; for different hardware architecture parameters and/or input sizes, a different implementation is produced. This is achieved by fully exploiting the software characteristics (e.g., data reuse) and the hardware architecture parameters (e.g., data cache sizes and associativities), giving high-quality solutions and a smaller search space. The methodology applies to a wide range of CPU and GPU architectures.
U2 - 10.1007/s11227-015-1613-7
DO - 10.1007/s11227-015-1613-7
M3 - Article
SN - 0920-8542
VL - 72
SP - 804
EP - 844
JO - The Journal of Supercomputing
JF - The Journal of Supercomputing
IS - 3
ER -
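
For readers unfamiliar with the tiling terminology used in the abstract, the following is a minimal, generic sketch of one level of loop tiling (blocking) applied to C = C + A*B in C. It is purely illustrative and is not the code produced by the paper's methodology; the tile size TB = 64 and the loop order are arbitrary assumptions here, whereas the paper derives such scheduling parameters from the cache sizes, associativities, and input size.

/*
 * Illustrative sketch only: one level of loop tiling (blocking) for
 * C = C + A*B on square n x n row-major matrices, with a single
 * assumed tile size TB. The paper under citation chooses such
 * parameters analytically from the memory hierarchy; TB = 64 is an
 * arbitrary placeholder, not a value taken from the paper.
 */
#include <stddef.h>

#define TB 64  /* assumed tile size; not derived here */

static inline size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

void mmm_tiled(size_t n, const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < n; ii += TB)
        for (size_t kk = 0; kk < n; kk += TB)
            for (size_t jj = 0; jj < n; jj += TB)
                /* multiply one TB x TB tile of A by one tile of B,
                   accumulating into the corresponding tile of C */
                for (size_t i = ii; i < min_sz(ii + TB, n); i++)
                    for (size_t k = kk; k < min_sz(kk + TB, n); k++) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < min_sz(jj + TB, n); j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}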