spECK: Accelerating GPU sparse matrix-matrix multiplication through lightweight analysis

Mathias Parger, Martin Winter, Daniel Mlakar, Markus Steinberger

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Sparse general matrix-matrix multiplication on GPUs is challenging due to the varying sparsity patterns of sparse matrices. Existing solutions achieve good performance for certain types of matrices, but fail to accelerate all kinds of matrices in the same manner. Our approach combines multiple strategies with dynamic parameter selection to choose and tune the best fitting algorithm for each row of the matrix. This choice is supported by a lightweight, multi-level matrix analysis, which carefully balances analysis cost against expected performance gains. Our evaluation on thousands of matrices with various characteristics shows that we outperform all currently available solutions for 79% of all matrices with >15k products, and achieve the second best performance for another 15%. For these matrices, our solution is on average 83% faster than the second best approach and up to 25× faster than other state-of-the-art GPU implementations. Using our approach, applications can expect consistently high performance independent of the matrices they work on.
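The per-row strategy selection described above can be illustrated with a small sketch. This is not spECK's actual implementation; it only shows the general idea of a lightweight analysis pass over CSR inputs: count the intermediate products each row of A·B generates, then bin rows into strategies by that count. All function names and thresholds below are hypothetical.

```python
# Illustrative sketch (assumed, not spECK's code): a lightweight analysis
# pass counts intermediate products per row of A*B and bins each row into
# a hypothetical strategy class.

def row_products(a_indptr, a_indices, b_indptr):
    """For each row i of A (CSR), count the intermediate products of A*B:
    the sum of nnz in the rows of B selected by A's column indices."""
    counts = []
    for i in range(len(a_indptr) - 1):
        total = 0
        for k in range(a_indptr[i], a_indptr[i + 1]):
            col = a_indices[k]
            total += b_indptr[col + 1] - b_indptr[col]
        counts.append(total)
    return counts

def pick_strategy(count):
    """Hypothetical binning: thresholds and strategy names are illustrative,
    standing in for per-row algorithm and parameter selection."""
    if count <= 32:
        return "sort"          # tiny rows: sort/merge-based accumulation
    elif count <= 1024:
        return "shared_hash"   # fits in on-chip shared memory
    else:
        return "global_hash"   # long rows spill to global memory
```

For example, with a 2×3 matrix A (CSR: `indptr=[0,2,3]`, `indices=[0,2,1]`) and a B whose rows have 1, 2, and 3 nonzeros (`indptr=[0,1,3,6]`), `row_products` yields `[4, 2]`, so both rows would take a small-row strategy.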

Original language: English
Title of host publication: PPoPP 2020 - Proceedings of the 2020 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming
Publisher: Association for Computing Machinery
Number of pages: 14
ISBN (Electronic): 9781450368186
ISBN (Print): 9781450368186
Publication status: Published - 19 Feb 2020
Event: PPoPP ’20: 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming - San Diego, United States
Duration: 22 Feb 2020 – 26 Feb 2020


Conference: PPoPP ’20
Country/Territory: United States
City: San Diego


Keywords

  • Analysis
  • GPU
  • Sparse Matrix
  • SpGEMM

ASJC Scopus subject areas

  • Software
