A Performance Model for GPU Architectures: Analysis and Design of Fundamental Algorithms
Date
2018-05
Abstract
Over the past decade, "many-core" architectures have become a crucial resource for solving computationally challenging problems. These systems rely on hundreds or thousands of simple compute cores to achieve high computational throughput. However, their divergence from traditional CPUs makes existing algorithms and models inapplicable. Thus, developers rely on a set of heuristics and hardware-dependent rules of thumb to develop algorithms for many-core systems, such as GPUs. This dissertation attempts to remedy this by presenting the Synchronous Parallel Throughput (SPT) model, a general performance model that aims to capture the factors that most impact algorithm performance on many-core architectures. The model focuses on two factors that often create performance bottlenecks: memory latency and synchronization overhead. We instantiate the SPT model on three separate modern GPU platforms using a series of microbenchmarks that measure hardware parameters such as memory access latency and peak bandwidth. We further show how multiplicity affects performance by hiding latencies and increasing overall throughput. We consider three fundamental problems as case studies in this dissertation: general matrix-matrix multiplication, searching, and sorting. Using our SPT model, we analyze a state-of-the-art library implementation of a matrix multiplication algorithm and show that our model can generate runtime estimates with an average error of 5% across our GPU platforms. We then consider the problem of batched predecessor search in the context of two levels of the GPU memory hierarchy. In slow, global memory, we demonstrate the accuracy of the SPT model, while in fast, shared memory, we determine that the memory access patterns create a performance bottleneck that degrades performance. We develop a new searching algorithm that improves the access pattern and increases performance by up to 293% on our GPUs. Finally, we look at comparison-based sorting on GPUs by analyzing two state-of-the-art algorithms and, using our SPT model, determine that they each suffer from bottlenecks that stifle performance. With these bottlenecks in mind, we develop GPU-MMS, our GPU-efficient multiway mergesort algorithm, and demonstrate that it outperforms highly optimized library implementations of existing algorithms by an average of 21% when sorting random inputs and up to 67% on worst-case input permutations. These case studies demonstrate both the accuracy and applicability of the SPT model for analyzing and developing GPU-efficient algorithms.
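
The abstract mentions microbenchmarks that measure hardware parameters such as memory access latency. As an illustration of how such a measurement is typically made, the following is a minimal pointer-chasing sketch in CUDA. It is not the dissertation's actual benchmark; the array size, stride, and clock64-based timing are assumptions chosen only for the example.

// Illustrative sketch: pointer-chasing latency microbenchmark (assumed parameters).
#include <cstdio>
#include <cuda_runtime.h>

// A single thread chases indices through global memory. Each load depends on
// the previous result, so requests cannot overlap and the elapsed cycle count
// approximates per-access latency.
__global__ void chase(const unsigned int *next, unsigned int steps,
                      unsigned long long *cycles, unsigned int *sink) {
    unsigned int idx = 0;
    unsigned long long start = clock64();
    for (unsigned int i = 0; i < steps; ++i)
        idx = next[idx];                 // dependent load
    unsigned long long stop = clock64();
    *cycles = stop - start;
    *sink = idx;                         // prevent the loop from being optimized away
}

int main() {
    const unsigned int n = 1u << 22;     // assumed working set, larger than typical L2
    const unsigned int stride = 32;      // assumed stride to reduce caching effects
    unsigned int *h = (unsigned int *)malloc(n * sizeof(unsigned int));
    for (unsigned int i = 0; i < n; ++i) h[i] = (i + stride) % n;

    unsigned int *d_next, *d_sink;
    unsigned long long *d_cycles;
    cudaMalloc(&d_next, n * sizeof(unsigned int));
    cudaMalloc(&d_sink, sizeof(unsigned int));
    cudaMalloc(&d_cycles, sizeof(unsigned long long));
    cudaMemcpy(d_next, h, n * sizeof(unsigned int), cudaMemcpyHostToDevice);

    const unsigned int steps = 1u << 20;
    chase<<<1, 1>>>(d_next, steps, d_cycles, d_sink);
    cudaDeviceSynchronize();

    unsigned long long cycles;
    cudaMemcpy(&cycles, d_cycles, sizeof(cycles), cudaMemcpyDeviceToHost);
    printf("approx. global-memory latency: %.1f cycles per access\n",
           (double)cycles / steps);

    cudaFree(d_next); cudaFree(d_sink); cudaFree(d_cycles);
    free(h);
    return 0;
}

Because every load in the loop depends on the previous index, the hardware cannot hide the latency by overlapping requests, so dividing the measured cycles by the number of steps gives a rough per-access latency figure of the kind such microbenchmarks report.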
Keywords
parallel, algorithms, many-core, GPU, sorting, model
Rights
All UHM dissertations and theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission from the copyright owner.