Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
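
As one small illustration of the install-time attack surface mentioned above, the sketch below (a minimal example assuming a locally downloaded package; it is not part of the dissertation's tooling) flags npm lifecycle scripts such as preinstall, install, and postinstall declared in a package.json, since these commands run automatically during installation:

    # flag_install_scripts.py - minimal sketch: list npm lifecycle scripts that run at install time
    import json
    import sys

    # Lifecycle hooks that npm executes automatically during "npm install"
    INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

    def flag_install_scripts(package_json_path):
        with open(package_json_path, "r", encoding="utf-8") as f:
            manifest = json.load(f)
        scripts = manifest.get("scripts", {})
        return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "package.json"
        hooks = flag_install_scripts(path)
        if hooks:
            print("Install-time scripts found (review before installing):")
            for name, cmd in hooks.items():
                print(f"  {name}: {cmd}")
        else:
            print("No install-time scripts declared.")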


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
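
As a rough illustration of the spectral-template idea described above (a toy sketch with arbitrary parameters, not the dissertation's optimization procedure), the snippet below averages the PSD of a phase-attached pulse, a fixed radar phase code with a random communications phase added per pulse, and compares it against a Gaussian spectral template:

    # psd_template_sketch.py - toy illustration: average PSD of a phase-attached pulse vs. a template
    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                      # samples per pulse
    num_pulses = 200             # pulses to average over
    nfft = 1024

    # Radar component: a fixed polyphase code (here an arbitrary quadratic phase ramp)
    radar_phase = np.pi * np.arange(N) ** 2 / N

    # Average PSD over pulses, each with a different random communications phase sequence
    psd = np.zeros(nfft)
    for _ in range(num_pulses):
        comm_phase = rng.choice([0, np.pi / 2, np.pi, 3 * np.pi / 2], size=N)  # QPSK-like phases
        pulse = np.exp(1j * (radar_phase + comm_phase))
        spectrum = np.fft.fftshift(np.fft.fft(pulse, nfft))
        psd += np.abs(spectrum) ** 2
    psd /= psd.max()

    # Gaussian spectral template (a stand-in for the desired PSD shape)
    f = np.linspace(-0.5, 0.5, nfft)
    template = np.exp(-(f / 0.15) ** 2)
    template /= template.max()

    mismatch_db = 10 * np.log10(np.mean((psd - template) ** 2) + 1e-12)
    print(f"Mean-square template mismatch: {mismatch_db:.1f} dB")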


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effects on the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with relatively low impact from RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher baud rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of this sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
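
As a loose illustration of the Gaussian Process Regression idea mentioned above (a toy sketch on synthetic data, not the proposed estimator), the snippet below fits scikit-learn's GaussianProcessRegressor to noisy samples of a smooth channel-gain profile over a normalized Doppler axis and interpolates it on a finer grid:

    # gpr_channel_sketch.py - toy sketch: GP regression to smooth and interpolate noisy channel gains
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    # Synthetic "true" channel gain as a smooth function of normalized Doppler
    doppler = np.linspace(-0.5, 0.5, 40).reshape(-1, 1)
    true_gain = np.exp(-(doppler.ravel() / 0.2) ** 2) * np.cos(6 * doppler.ravel())
    noisy_gain = true_gain + 0.05 * rng.standard_normal(true_gain.shape)

    # GP with a smoothness (RBF) kernel plus a noise term
    kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(doppler, noisy_gain)

    # Interpolate on a finer Doppler grid and report the posterior uncertainty
    fine_grid = np.linspace(-0.5, 0.5, 400).reshape(-1, 1)
    estimate, std = gpr.predict(fine_grid, return_std=True)
    print(f"Max posterior std over the grid: {std.max():.3f}")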


Mohammad Ful Hossain Seikh

AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino


Abstract

This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
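
As a minimal sketch of the kind of computation such a toolkit automates (an illustration on synthetic data with NumPy; it does not reproduce AAFIYA's actual API), the snippet below converts a reflection coefficient S11 to input impedance and return loss:

    # s11_to_impedance.py - convert S11 to input impedance and return loss (synthetic data)
    import numpy as np

    Z0 = 50.0                                     # reference impedance in ohms
    freq = np.linspace(50e6, 1500e6, 5)           # a few frequency points (Hz)

    # Synthetic complex S11 values standing in for measured or simulated data
    s11 = np.array([0.6 - 0.2j, 0.3 + 0.1j, 0.1 + 0.05j, 0.2 - 0.1j, 0.4 + 0.3j])

    z_in = Z0 * (1 + s11) / (1 - s11)             # input impedance from the reflection coefficient
    return_loss_db = -20 * np.log10(np.abs(s11))  # return loss in dB

    for f, z, rl in zip(freq, z_in, return_loss_db):
        print(f"{f/1e6:7.1f} MHz  Zin = {z.real:6.1f} {z.imag:+6.1f}j ohm  RL = {rl:5.1f} dB")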

Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11- and S21-related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.

AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle.  The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while deep learning approaches such as the LSTM-based models offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
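
To illustrate the threshold adjustment idea mentioned above (a generic sketch on synthetic scores, not the project's code or dataset), the snippet below selects a per-label decision threshold that maximizes validation F1, which typically helps rare labels compared with a fixed 0.5 cutoff:

    # per_label_thresholds.py - choose per-label decision thresholds that maximize validation F1
    import numpy as np
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    n_samples, n_labels = 1000, 6

    # Synthetic, imbalanced multi-label ground truth and model scores
    y_true = (rng.random((n_samples, n_labels)) < [0.3, 0.1, 0.05, 0.2, 0.02, 0.15]).astype(int)
    y_score = np.clip(y_true * 0.6 + rng.random((n_samples, n_labels)) * 0.5, 0, 1)

    thresholds = []
    for j in range(n_labels):
        candidates = np.linspace(0.05, 0.95, 19)
        f1s = [f1_score(y_true[:, j], (y_score[:, j] >= t).astype(int), zero_division=0)
               for t in candidates]
        thresholds.append(candidates[int(np.argmax(f1s))])

    print("Per-label thresholds:", [round(t, 2) for t in thresholds])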


Past Defense Notices


Samyak Jain

Monkeypox Detection Using Computer Vision

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson (Co-Chair)
Hongyang Sun


Abstract

As the world recovers from the damage caused by the spread of COVID-19, the monkeypox virus poses a new threat of becoming a global pandemic. Although the monkeypox virus is not as deadly or contagious as COVID-19, many countries report new patient cases every day, so another pandemic would not be surprising if proper precautions are not taken. Recently, deep learning has shown great potential in image-based diagnostics, such as cancer detection, tumor cell identification, and COVID-19 patient detection. Since monkeypox causes visible lesions on human skin, images of affected skin can be captured and used in a similar way for disease diagnosis. This project presents a deep learning approach for detecting monkeypox disease from skin lesion images. Several pre-trained deep learning models, such as ResNet50 and MobileNet, are deployed on the dataset to classify monkeypox and other diseases.
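
As a brief sketch of this kind of transfer-learning setup (assumed details for illustration only: two output classes and ImageNet weights; this is not the project's exact configuration), a pre-trained ResNet50 can be adapted by replacing its final classification layer:

    # resnet50_finetune_sketch.py - adapt a pre-trained ResNet50 for a 2-class skin-lesion task
    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 2  # e.g., monkeypox vs. other (assumed for illustration)

    model = models.resnet50(weights="IMAGENET1K_V1")        # ImageNet pre-trained backbone
    for param in model.parameters():
        param.requires_grad = False                          # freeze backbone features
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable classification head

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

    # One illustrative training step on a dummy batch of 224x224 RGB images
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, num_classes, (8,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"Dummy-batch loss: {loss.item():.3f}")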


Grace Young

Quantum Algorithms & the Hidden Subgroup Problem

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Matthew Moore, Chair
Perry Alexander
Esam El-Araby
Cuncong Zhong
KC Kong

Abstract

In the last century, we have seen incredible growth in the field of quantum computing. Quantum computing offers us the opportunity to find efficient solutions to certain computational problems which are intractable on classical computers. One class of problems that seems to benefit from quantum computing is the Hidden Subgroup Problem (HSP). In the following proposal, we will examine the basics of quantum computing as well as the current research surrounding the HSP. We will also discuss the importance of the HSP and its relation to other popular problems such as the Integer Factoring, Discrete Logarithm, Shortest Vector, and Subset Sum problems.
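
For readers unfamiliar with the problem, the standard textbook formulation of the HSP (included here for context; it is not specific to this proposal) is:

    Given a finite group G, a finite set X, and a function f : G -> X such that
    f(g1) = f(g2) if and only if g1*H = g2*H for some unknown subgroup H of G
    (that is, f is constant on left cosets of H and distinct on different cosets),
    find a generating set for H.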

The proposed research aims to develop a quantum algorithmic polynomial-time reduction for special cases of the HSP where the parameterizing group is the Dihedral group. This problem is known as the Dihedral HSP (DHSP). The usual approach to the HSP relies on harmonic analysis in the domain of the problem, and the best-known algorithm using this approach is sub-exponential, but still super-polynomial. The algorithm we have designed instead focuses on the structure encoded in the codomain, using that structure to direct a “walk” down the subgroup lattice that terminates at the hidden subgroup.

 


Victor Alberto Lopez Nikolskiy

Maximum Power Point Tracking For Solar Harvesting Using Industry Implementation Of Perturb And Observe with Integrated Circuits

When & Where:


Eaton Hall, Room 2001B

Committee Members:

James Stiles, Chair
Christopher Allen
Patrick McCormick


Abstract

This project is not a new idea or an innovative method; it consists of the implementation of techniques already used in the consumer industry.

The purpose of this project is to implement a compact, low-weight Maximum Power Point Tracking (MPPT) solar harvesting device intended for a small fixed-wing unmanned aircraft. For the aircraft selected, up to 25% of the electrical load could be offset by the MPPT device and the installed solar cells.

The MPPT device was designed around the Texas Instruments SM72445 Integrated Circuit and its technical documentation. The prototype was evaluated using a Photovoltaic Profile Emulator Power Supply and a LiPo battery.
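
The perturb-and-observe (P&O) strategy named in the title can be sketched conceptually as follows (a generic textbook version on an arbitrary toy photovoltaic model; it does not represent the SM72445's internal logic): the operating voltage is repeatedly perturbed, and the perturbation direction is kept whenever the extracted power increases.

    # perturb_and_observe_sketch.py - textbook P&O MPPT loop on a toy photovoltaic model
    def pv_current(v, i_sc=2.0, v_oc=21.0):
        """Very rough photovoltaic I-V model: current collapses near the open-circuit voltage."""
        return max(i_sc * (1.0 - (v / v_oc) ** 12), 0.0)

    def perturb_and_observe(v_start=10.0, step=0.1, iterations=200):
        v = v_start
        direction = +1.0
        prev_power = v * pv_current(v)
        for _ in range(iterations):
            v += direction * step                  # perturb the operating voltage
            power = v * pv_current(v)              # observe the resulting power
            if power < prev_power:                 # power dropped: reverse the perturbation direction
                direction = -direction
            prev_power = power
        return v, prev_power

    v_mpp, p_mpp = perturb_and_observe()
    print(f"Converged near V = {v_mpp:.2f} V, P = {p_mpp:.2f} W")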

The device performed MPPT for one of the tested current-voltage (IV) profiles, reaching the Maximum Power Point (MPP), but it did not maintain the MPP. Under an additional external DC load or different IV profiles, the emulator operated in prohibited operating conditions. The probable cause of this failed behavior is instability in the emulator’s output. The inputs to the controller and the response behaviors of the H-bridge circuit were as expected and as designed.


Koyel Pramanick

Detection of measures devised by the compiler to improve security of the generated code

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

The main aim of the thesis is to identify provisions employed by the compiler to ensure the security of any arbitrary binary. These provisions are security techniques applied automatically by the compiler during the system build process. Compilers provide a number of security checks that can be applied statically or at compile time to protect the software from attacks that target code vulnerabilities. Most compilers use warnings to indicate potential code bugs, and run-time security checks that add instrumentation code to the binary to detect problems during execution. Our first work is to develop a language-agnostic and compiler-agnostic experimental framework which determines the presence of targeted compiler-based run-time security checks in any binary. Our next work explores whether unresolved compiler-generated warnings can be detected in the binary when the source code is not available.
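
As a small illustration of the kind of binary-level evidence such analysis can look for (a simplified sketch using well-known glibc symbol names; it is not the thesis framework), the presence of __stack_chk_fail or the fortified *_chk function variants in a Linux ELF binary indicates that stack-protector or FORTIFY_SOURCE checks were compiled in:

    # check_hardening_symbols.py - look for well-known hardening symbols in a Linux ELF binary
    import subprocess
    import sys

    def binary_symbols(path):
        # "nm -D" lists dynamic symbols; fall back to readelf if nm reports nothing
        out = subprocess.run(["nm", "-D", path], capture_output=True, text=True).stdout
        if not out:
            out = subprocess.run(["readelf", "-s", path], capture_output=True, text=True).stdout
        return out

    def detect_hardening(path):
        syms = binary_symbols(path)
        return {
            "stack protector": "__stack_chk_fail" in syms,
            "FORTIFY_SOURCE": any(s in syms for s in ("__memcpy_chk", "__printf_chk", "__strcpy_chk")),
        }

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else "/bin/ls"
        for check, present in detect_hardening(target).items():
            print(f"{check:16s}: {'present' if present else 'not found'}")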


Ben Liu

Computational Microbiome Analysis: Method Development, Integration and Clinical Applications

When & Where:


Eaton Hall, Room 1

Committee Members:

Cuncong Zhong, Chair
Esam El-Araby
Bo Luo
Zijun Yao
Mizuki Azuma

Abstract

Metagenomics is the study of microbial genomes from one common environment. In most cases, metagenomic data refer to the whole-genome shotgun sequencing data of the microbiota, which are fragmented DNA sequences from all regions of the microbial genomes. Because the data are generated without laboratory culture, they provide a less biased insight into, and uniquely enriched information about, the microbial community. Currently, many researchers are interested in metagenomic data, and a wealth of software exists for various purposes at different stages of analysis. Most researchers build their own analysis pipelines based on their expertise, and pipelines built for the same purpose by two researchers might be disparate, which can affect the conclusions of an experiment.

My research involves: (1) We first developed an assembly graph-based ncRNA search tool, named DRAGoM, to improve search quality in metagenomic data. (2) We proposed an automatic metagenomic data analysis pipeline generation system to extract, organize, and exploit the enormous amount of knowledge available in the literature. The system consists of two work procedures: construction and generation. In the construction procedure, the system takes a corpus of raw textual data and updates the constructed pipeline network, whereas in the generation stage, the system recommends an analysis pipeline based on the user input. (3) We performed a meta-analysis on the taxonomic and functional features of the gut microbiome from non-small cell lung cancer patients treated with immunotherapy, to establish a model to predict whether a patient will benefit from immunotherapy. We systematically studied the taxonomic characteristics of the dataset and used both random forest and multilayer perceptron neural network models to predict the patients with progression-free survival above 6 months versus those below 3 months.
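
The model comparison in point (3) follows a standard supervised-learning pattern; a generic sketch is shown below (synthetic features standing in for microbiome abundance profiles; this is not the study's data or exact setup):

    # rf_vs_mlp_sketch.py - generic comparison of random forest and MLP classifiers
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((120, 50))                       # synthetic abundance-like features
    y = (X[:, :5].sum(axis=1) > 2.5).astype(int)    # synthetic "responder" label

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)

    for name, model in [("random forest", rf), ("MLP", mlp)]:
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name:14s} mean ROC-AUC: {auc.mean():.2f}")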


Matthew Showers

Software-based Runtime Protection of Secret Assets in Untrusted Hardware under Zero Trust

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Tamzidul Hoque, Chair
Alex Bardas
Drew Davidson


Abstract

The complexity of the design and fabrication process of electronic devices is advancing with their ability to provide wide-ranging functionalities including data processing, sensing, communication, artificial intelligence, and security. Due to these complexities in the design and manufacturing process and the associated time and cost, system developers often prefer to procure off-the-shelf components directly from the market instead of developing custom Integrated Circuits (ICs) from scratch. Procurement of Commercial-Off-The-Shelf (COTS) components reduces system development time and cost significantly, enables easy integration of new technologies, and facilitates smaller production runs. Moreover, since various companies use the same COTS IC, such components are generally available in the market for a long period and are easy to replace.

Although utilizing COTS parts can provide many benefits, it also introduces serious security concerns. None of the entities in the COTS IC supply chain are trusted from a consumer's perspective, leading to a "Zero Trust" supply chain threat model. Any of these entities could introduce hidden malicious circuits or hardware Trojans within the component that could help an attacker in the field extract secret information (e.g., cryptographic keys) or cause a functional failure. Existing solutions to counter hardware Trojans are inapplicable in a zero trust scenario as they assume either the design house or the foundry to be trusted. Moreover, many solutions require access to the design for analysis or modification to enable the countermeasure.

In this work, we have proposed a software-oriented countermeasure to ensure the confidentiality of secret assets against hardware Trojan attacks in untrusted COTS microprocessors. The proposed solution does not require any supply chain entity to be trusted and does not require analysis or modification of the IC design.  

To protect secret assets in an untrusted microprocessor, the proposed method leverages the concept of residue number coding to transform the software functions operating on the asset to be homomorphic. We have presented a detailed security analysis to evaluate the confidentiality of a secret asset under Trojan attacks using the secret key of the Advanced Encryption Standard (AES) program as a case study. Finally, to help streamline the application of this protection scheme, we have developed a plugin for the LLVM compiler toolchain that integrates the solution without requiring extensive source code alterations.
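
The residue-number idea can be illustrated with a small, self-contained example (illustrative moduli and values only; the dissertation's actual encoding and parameters are not reproduced here): a value is split into residues modulo pairwise-coprime moduli, addition and multiplication are performed independently on each residue, and the result is recovered with the Chinese Remainder Theorem.

    # rns_sketch.py - residue number system: componentwise arithmetic plus CRT reconstruction
    from math import prod

    MODULI = (13, 17, 19)   # pairwise-coprime moduli (illustrative choice)

    def to_rns(x):
        return tuple(x % m for m in MODULI)

    def rns_add(a, b):
        return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

    def rns_mul(a, b):
        return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

    def from_rns(residues):
        # Chinese Remainder Theorem reconstruction
        M = prod(MODULI)
        total = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            total += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse of Mi mod m
        return total % M

    a, b = 123, 45
    assert from_rns(rns_add(to_rns(a), to_rns(b))) == (a + b) % prod(MODULI)
    assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % prod(MODULI)
    print("Residue-wise add/mul reconstruct correctly via the CRT.")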


Madhuvanthi Mohan Vijayamala

Camouflaged Object Detection in Images using a Search-Identification based framework

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson (Co-Chair)
Zijun Yao


Abstract

While identifying an object in an image is almost an instantaneous task for the human visual perception system, it takes more effort and time to process and identify a camouflaged object - an entity that flawlessly blends with the background in the image. This explains why it is much more challenging to enable a machine learning model to do the same, in comparison to generic object detection or salient object detection.

This project implements a framework called Search Identification Network, which simulates the search-and-identification pattern that predators adopt when hunting prey and applies it to detect camouflaged objects. The efficiency of this framework in detecting polyps in medical image datasets is also measured.


Lumumba Harnett

Mismatched Processing for Radar Interference Cancellation

When & Where:


Nichols Hall, Room 129

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Erik Perrins
James Stiles
Richard Hale

Abstract

Matched processing is a fundamental filtering operation within radar signal processing used to estimate scattering in the radar scene based on the transmit signal. Although matched processing maximizes the signal-to-noise ratio (SNR), the filtering operation is ineffective when interference is captured in the receive measurement. Adaptive interference mitigation combined with matched processing has proven able to mitigate interference and estimate the radar scene. However, a known caveat of matched processing is the resulting sidelobes that may mask other scatterers. The sidelobes can be efficiently addressed by windowing, but this approach also comes with limited suppression capability, loss in resolution, and loss in SNR. Recently emerged mismatched processing has been shown to optimally reduce sidelobes while maintaining nominal resolution and signal estimation performance. Throughout this work, re-iterative minimum mean-square error (RMMSE) adaptive and least-squares (LS) optimal mismatched processing are proposed for enhanced signal estimation in unison with adaptive interference mitigation for various radar applications, including random pulse repetition interval (PRI) staggered pulse-Doppler radar, airborne ground moving target indication, and radar & communication spectrum sharing. Mismatched processing and adaptive interference cancellation can each be computationally complex for practical implementation. Sub-optimal RMMSE and LS approaches are also introduced to address computational limitations. The efficacy of these algorithms is presented using various high-fidelity Monte Carlo simulations and open-air experimental datasets.
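
As a compact illustration of the least-squares flavor of mismatched filtering (a generic textbook construction on a random-phase waveform; it does not reproduce the dissertation's RMMSE or interference-cancellation processing), the filter below is solved to best approximate an impulse response, trading a small SNR loss for lower range sidelobes than the matched filter:

    # ls_mismatched_filter_sketch.py - least-squares mismatched filter vs. matched filter sidelobes
    import numpy as np

    rng = np.random.default_rng(2)
    N = 64                                        # waveform length (chips)
    M = 3 * N                                     # mismatched filter length (longer than the waveform)
    s = np.exp(1j * 2 * np.pi * rng.random(N))    # random-phase, constant-modulus waveform

    # Convolution matrix A so that the filter output is y = A @ h
    A = np.zeros((N + M - 1, M), dtype=complex)
    for j in range(M):
        A[j:j + N, j] = s

    d = np.zeros(N + M - 1, dtype=complex)
    d[(N + M - 1) // 2] = 1.0                     # desired response: impulse at the centered mainlobe

    h_ls, *_ = np.linalg.lstsq(A, d, rcond=None)  # least-squares mismatched filter
    h_mf = np.conj(s[::-1])                       # matched filter for comparison

    def peak_sidelobe_db(h):
        y = np.abs(np.convolve(s, h))
        y /= y.max()
        y[np.argmax(y)] = 0.0                     # drop the mainlobe peak sample
        return 20 * np.log10(y.max())

    print(f"Matched filter peak sidelobe:       {peak_sidelobe_db(h_mf):6.1f} dB")
    print(f"LS mismatched filter peak sidelobe: {peak_sidelobe_db(h_ls):6.1f} dB")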


Naveed Mahmud

Towards Complete Emulation of Quantum Algorithms using High-Performance Reconfigurable Computing

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Prasad Kulkarni
Heechul Yun
Tyrone Duncan

Abstract

Quantum computing is a promising technology that can potentially demonstrate supremacy over classical computing in solving specific problems. At present, two critical challenges for quantum computing are quantum state decoherence and the low scalability of current quantum devices. Decoherence places constraints on the realistic applicability of quantum algorithms, as real-life applications usually require complex equivalent quantum circuits to be realized. For example, encoding classical data on quantum computers for solving I/O- and data-intensive applications generally requires quantum circuits that violate decoherence constraints. In addition, current quantum devices are small-scale, having low quantum bit (qubit) counts and often producing inaccurate or noisy measurements, which also impacts the realistic applicability of real-world quantum algorithms. Consequently, benchmarking of existing quantum algorithms and investigation of new applications are heavily dependent on classical simulations that use costly, resource-intensive computing platforms. Hardware-based emulation has been alternatively proposed as a more cost-effective and power-efficient approach. This work proposes a hardware-based emulation methodology for quantum algorithms using cost-effective Field-Programmable Gate Array (FPGA) technology. The proposed methodology consists of three components that are required for complete emulation of quantum algorithms: the first component models classical-to-quantum (C2Q) data encoding, the second emulates the behavior of quantum algorithms, and the third models the process of measuring the quantum state and extracting classical information, i.e., quantum-to-classical (Q2C) data decoding. The proposed emulation methodology is used to investigate and optimize methods for C2Q/Q2C data encoding/decoding, as well as several important quantum algorithms such as the Quantum Fourier Transform (QFT), Quantum Haar Transform (QHT), and Quantum Grover’s Search (QGS). This work delivers contributions in terms of reducing the complexity of quantum circuits, extending and optimizing quantum algorithms, and developing new quantum applications. For higher emulation performance and scalability of the framework, hardware design techniques and hardware architectural optimizations are investigated and proposed. The emulation architectures are designed and implemented on a high-performance reconfigurable computer (HPRC), and the proposed quantum circuits are implemented on a state-of-the-art quantum processor. Experimental results show that the proposed hardware architectures enable emulation of quantum algorithms with higher scalability, higher accuracy, and higher throughput, compared to existing hardware-based emulators. As a case study, quantum image processing using multi-spectral images is considered for the experimental evaluations.
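
As a tiny software analogue of the C2Q encoding and QFT emulation mentioned above (a NumPy state-vector sketch for intuition; it says nothing about the FPGA architectures themselves), classical data can be amplitude-encoded into a normalized state vector, and the QFT of that state corresponds to an orthonormal discrete Fourier transform of its amplitudes:

    # c2q_qft_sketch.py - amplitude-encode classical data and apply a QFT on the state vector
    import numpy as np

    data = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])   # 8 values -> a 3-qubit state

    # Classical-to-quantum (C2Q) amplitude encoding: normalize so the amplitudes have unit norm
    state = data / np.linalg.norm(data)

    # QFT on the state vector: with the e^{+2*pi*i*jk/N} convention this is the orthonormal inverse DFT
    qft_state = np.fft.ifft(state, norm="ortho")

    # Quantum-to-classical (Q2C) readout: measurement probabilities are squared magnitudes
    probabilities = np.abs(qft_state) ** 2
    print("State norm after QFT:", round(float(np.linalg.norm(qft_state)), 6))
    print("Measurement probabilities:", np.round(probabilities, 3))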


Eric Seals

Memory Bandwidth Dynamic Regulation and Throttling

When & Where:


Learned Hall, Room 3150

Committee Members:

Heechul Yun, Chair
Alex Bardas
Drew Davidson


Abstract

Multi-core, integrated CPU-GPU embedded systems provide new capabilities for sophisticated real-time systems with size, weight, and power limitations; however, interference between shared resources remains a challenge in providing necessary performance guarantees. The shared main memory is a notable system bottleneck - causing throughput slowdowns and timing unpredictability.

In this paper, we propose a full-system mechanism that can provide memory bandwidth regulation across both the CPU and GPU complexes. This system monitors memory controller accesses directly through hardware statistics counters, performs memory regulation at the software level for real-time CPU tasks, and incorporates a feedback-based throttling mechanism for non-critical GPU kernels using hardware within the NVIDIA Tegra X1 memory controller subsystem. The system is built as a loadable Linux kernel module that extends the MemGuard tool. We show that this system can make CPU task execution more predictable against co-running, memory-intensive interference on either the CPU or GPU.
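
The budget-based regulation idea behind MemGuard-style throttling can be sketched abstractly as follows (a toy simulation of the policy only, with arbitrary numbers; it is not the kernel-module implementation): each regulation period a core receives a memory-access budget, its accesses are counted against that budget, and it is throttled for the remainder of the period once the budget is exhausted.

    # bandwidth_budget_sketch.py - toy simulation of per-period memory-access budget regulation
    import random

    random.seed(0)

    PERIOD_US = 1000          # regulation period length (microseconds)
    BUDGET = 400              # allowed memory accesses per core per period
    NUM_PERIODS = 5

    def run_core(demand_per_period):
        """Simulate one core: returns (serviced, throttled) access counts for each period."""
        history = []
        for _ in range(NUM_PERIODS):
            demand = demand_per_period + random.randint(-50, 50)   # accesses the core wants to issue
            serviced = min(demand, BUDGET)        # counter reaches the budget -> core is throttled
            throttled = demand - serviced         # remaining accesses wait for the next period
            history.append((serviced, throttled))
        return history

    for core, demand in [("real-time core", 300), ("best-effort core", 700)]:
        history = run_core(demand)
        total_throttled = sum(t for _, t in history)
        print(f"{core:16s} throttled accesses over {NUM_PERIODS} periods: {total_throttled}")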