KU EECS Research Used in Nvidia's Latest AI Accelerator Release
Nvidia revealed on its official developer blog that it uses FireSim-NVDLA, a KU EECS research outcome, to evaluate software releases for its open-source deep neural network hardware accelerator, the Nvidia Deep Learning Accelerator (NVDLA).

[Figure: Schematic of Nvidia AI Accelerator]

FireSim-NVDLA is an outcome of research by KU EECS PhD student Farzad Farshchi, his advisor, Associate Professor Heechul Yun, and collaborator Qijing Huang of the University of California, Berkeley. FireSim-NVDLA integrates the NVDLA accelerator with a RISC-V based multicore CPU and runs on Amazon EC2 cloud FPGA (F1) instances. The research is described in the paper “Integrating NVIDIA Deep Learning Accelerator (NVDLA) with RISC-V SoC on FireSim,” published at the Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (EMC^2) in February 2019.

“We believe that FireSim-NVDLA opens up new opportunities for industry practitioners and academic researchers to explore the possibility of using deep learning hardware accelerators in emerging applications such as smart IoT devices and intelligent robots. Nvidia’s use of our research is an indication of the quality of our work,” says Yun.