Exploring DL Compiler Optimizations with TVM


Student Name: Shriraj K. Vaidya
Defense Date:
Location: Nichols Hall, Room 246 (Executive Conference Room)
Chair: Prasad Kulkarni
Dongjie Wang
Zijun Yao

Abstract:

Deep Learning (DL) compilers, also called Machine Learning (ML) compilers, take a computational graph representation of an ML model as input and apply graph-level and operator-level optimizations to generate optimized machine code for different supported hardware architectures. DL compilers apply several graph-level optimizations, including operator fusion, constant folding, and data layout transformations, to convert the input computational graph into a functionally equivalent and optimized variant. DL compilers also perform kernel scheduling, the task of finding the most efficient implementation for each operator in the computational graph. While many research efforts have focused on exploring different kernel scheduling techniques and algorithms, the benefits of individual graph-level optimizations are not as well studied. In this work, we employ the TVM compiler to perform a comprehensive study of the impact of different graph-level optimizations on the performance of DL models on CPUs and GPUs. We find that TVM's graph optimizations can improve model performance by up to 41.73% on CPUs and 41.6% on GPUs, and by 16.75% on CPUs and 21.89% on GPUs on average, across our custom benchmark suite.
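
For illustration, the kind of measurement described in the abstract can be set up with TVM's Python API roughly as sketched below. The model file name, input shape, and target are placeholder assumptions, and the exact set of passes enabled at each optimization level depends on the TVM version; this is a sketch, not the thesis's experimental harness.

    # Minimal sketch: compile a model with TVM's graph-level optimizations enabled.
    # Assumes the "tvm" and "onnx" packages and a model file "model.onnx";
    # names and shapes are illustrative only.
    import onnx
    import tvm
    from tvm import relay

    onnx_model = onnx.load("model.onnx")
    mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

    target = "llvm"   # CPU target; use "cuda" to target a GPU
    # opt_level=3 enables most graph-level passes (operator fusion, constant
    # folding, layout transformations). Individual passes can be switched off
    # via disabled_pass to measure their isolated contribution.
    with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldConstant"]):
        lib = relay.build(mod, target=target, params=params)

Comparing the runtime of the module built with all passes enabled against builds with selected passes disabled is one way to attribute performance gains to individual graph-level optimizations.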

Degree: MS Thesis Defense (CS)
Degree Type: MS Thesis Defense
Degree Field: Computer Science