Seminars


Optimizing the Linear Fascicle Evaluation Algorithm for Multi-Core and Many-Core Systems

Series: M.Tech (Research) Thesis Defense

Speaker: Mr. Karan Aggarwal, M.Tech (Research) student, Dept. of CSA

Date/Time: Mar 05 14:30:00

Location: CSA Seminar Hall (Room No. 254, First Floor)

Faculty Advisor: Prof. Uday Kumar Reddy B

Abstract:
Sparse matrix-vector multiplication (SpMV) operations are commonly used in various scientific
and engineering applications. The performance of SpMV often depends on exploiting regularity
patterns in the matrix. Various new representations and optimization techniques have been
proposed to minimize the memory-bandwidth bottleneck arising from irregular memory access
patterns. Among recent representation techniques, tensor decomposition is a popular choice
for very large but sparse matrices. After sparse-tensor decomposition, the new representation
involves indirect accesses, making it challenging to optimize for multi-core architectures and
even more so for massively parallel architectures such as GPUs.
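To make the indirect-access pattern concrete, here is a minimal sketch of SpMV over the standard CSR format (an illustrative representation; the thesis targets an STD-based layout, which has several such indirection levels). The gather through `col_idx` is the irregular access that defeats caching and coalescing:

```python
import numpy as np

def spmv_csr(vals, col_idx, row_ptr, x):
    """SpMV y = A @ x for a matrix stored in CSR form.

    The gather x[col_idx[j]] is the indirect (irregular) access that
    makes SpMV memory-bandwidth bound on CPUs and GPUs alike.
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[j] * x[col_idx[j]]  # indirect access via col_idx
    return y

# 3x3 example: A = [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
vals = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(vals, col_idx, row_ptr, x))  # [3. 3. 9.]
```

Because `col_idx` can point anywhere in `x`, consecutive iterations touch scattered memory locations, which is exactly the effect the data-restructuring techniques below aim to mitigate.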

Computational neuroscience algorithms often involve sparse datasets while still performing
compute-intensive operations. The Linear Fascicle Evaluation (LiFE) application is a popular
neuroscience algorithm used for pruning brain connectivity graphs. The datasets it employs
use the Sparse Tucker Decomposition (STD), a widely used tensor decomposition method. Using
this decomposition leads to multiple irregular array references, making it very difficult to
optimize for multi-core CPUs and GPUs. Recent implementations of the LiFE algorithm show that its SpMV
operations are the key bottleneck for performance and scaling. In this work, we first propose
target-independent techniques: (1) data restructuring to minimize the effects of irregular
accesses, and (2) simple compiler optimizations. Then we apply target-specific optimizations
to exploit the resources provided by the architecture. The CPU-specific optimizations we
incorporate are loop tiling, loop parallelization, and BLAS calls, which exploit data reuse,
coarse-grained parallelism, and fine-grained parallelism, respectively. We extend the PolyMage
domain-specific language, embedded in Python, to automate the CPU-based optimizations developed for
this algorithm. Next, we propose various GPU-specific optimizations to optimally map threads at the
granularity of warps, thread blocks, and the grid, and methods to split the computation among thread blocks
to obtain fine-grained parallelism and data reuse. Our highly optimized and parallelized CPU
implementation obtains a reduction in execution time from 225 min to 8.2 min over the original sequential
approach, running on a 16-core Intel Xeon Silver (Skylake-based) system. Our optimized GPU implementation
achieves a speedup of 5.2x over a reference optimized GPU code version on NVIDIA's GeForce RTX 2080 Ti GPU,
and a speedup of 9.7x over our highly optimized and parallelized CPU implementation.
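The loop-tiling idea behind the CPU optimizations can be sketched on a simple dense matrix-vector product (a toy stand-in, not the thesis's STD-based LiFE kernel). Tiling the column loop keeps a small slice of the input vector hot in cache so it is reused across many rows:

```python
import numpy as np

def matvec_tiled(A, x, tile=64):
    """Dense mat-vec y = A @ x with the column loop tiled.

    Each tile-sized slice x[j0:j1] stays in cache and is reused for
    every row, illustrating the data-reuse benefit of loop tiling;
    the inner slice product is the kind of work handed to BLAS for
    fine-grained parallelism.
    """
    n, m = A.shape
    y = np.zeros(n)
    for j0 in range(0, m, tile):      # tiled column loop
        j1 = min(j0 + tile, m)
        for i in range(n):            # reuses x[j0:j1] across all rows
            y[i] += A[i, j0:j1] @ x[j0:j1]
    return y
```

Coarse-grained parallelism would then come from parallelizing the row loop (e.g., OpenMP in a C implementation); only the tiling structure is shown here.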