Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date, so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed and non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby solving the long-standing problem of authenticating noisy data on a blockchain. Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5× faster query processing and significantly reduced storage requirements compared to large-scale tracking approaches.
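The DHBF itself is the dissertation's contribution, but its underlying mechanics (probabilistic membership testing that supports deletion without a rebuild) can be sketched with a standard counting Bloom filter. This is a minimal illustration; the class name, table size, and hash scheme below are invented here, not taken from the thesis:

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: supports insert, query, and delete without
    rebuilding the structure (a simplified stand-in for one DHBF level)."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k          # counter-table size, number of hashes
        self.counts = [0] * m

    def _indexes(self, item):
        # Derive k indexes from one SHA-256 digest via double hashing.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1   # force odd step
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def insert(self, item):            # enrollment
        for idx in self._indexes(item):
            self.counts[idx] += 1

    def query(self, item):             # membership test (may false-positive)
        return all(self.counts[idx] > 0 for idx in self._indexes(item))

    def delete(self, item):            # decrement counters, no rebuild needed
        if self.query(item):           # avoid driving counters negative
            for idx in self._indexes(item):
                self.counts[idx] -= 1
```

In a hierarchy of such filters, a query would be narrowed level by level; deletion only decrements counters, which is what lets dynamic updates avoid the reconstruction cost of static Bloom-based systems.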


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential of existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent polarization-sensitive optical frequency-domain reflectometer (P-OFDR) system.

Electrostriction in an optical fiber arises from the interaction between the forward-propagating optical signal and acoustic standing waves resonating radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrate a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. Because the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
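For reference, a single-parameter Maxwellian envelope of the kind described can be written as follows. This parameterization is a plausible reconstruction for illustration, not quoted from the thesis; the shape parameter f_0 would be the quantity set by the mode field radius:

```latex
% Maxwellian-shaped envelope of the electrostriction-induced loss spectrum,
% with the single shape parameter f_0 determined by the mode field radius.
A(f) \propto f^{2} \exp\!\left(-\frac{f^{2}}{2 f_{0}^{2}}\right)
```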

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous multi-target range and velocity detection is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
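The de-chirping principle can be illustrated numerically: after mixing with an identically chirped local oscillator, each target appears as a beat tone whose frequency is proportional to its round-trip delay. A minimal sketch, with function name and numbers chosen for illustration rather than taken from the thesis's configuration:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range from the de-chirped beat frequency of a sawtooth FMCW chirp:
    a round-trip delay tau = 2R/c produces f_beat = (B/T) * tau."""
    slope = chirp_bandwidth_hz / chirp_duration_s   # chirp rate, Hz/s
    tau = f_beat_hz / slope                         # round-trip delay, s
    return C * tau / 2.0                            # one-way range, m

# Illustrative numbers: a 1 GHz chirp over 100 us; a beat tone near
# 667 kHz then corresponds to a target roughly 10 m away.
r = fmcw_range(f_beat_hz=666_666.0,
               chirp_bandwidth_hz=1e9,
               chirp_duration_s=100e-6)
```

A moving target additionally shifts the beat tone by its Doppler frequency; using a triangular (up/down) chirp, range and velocity can be separated from the two beat frequencies, which is the basis of the simultaneous multi-target detection described above.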

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices

KAIGE YAN

Power and Performance Co-optimization for Emerging Mobile Platforms

When & Where:


250 Nichols Hall

Committee Members:

Xin Fu, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Mobile devices have emerged as the most popular computing platform since 2011. Unlike traditional PCs, mobile devices are more power-constrained and performance-sensitive due to their size. To reduce power consumption and improve performance, we focus on the Last Level Cache (LLC), which is a power-hungry structure and critical to performance in mobile platforms. In this project, we first integrate the McPAT power model into the Gem5 simulator. We also introduce emerging memory technologies, such as Spin-Transfer Torque RAM (STT-RAM) and embedded DRAM (eDRAM), into the cache design and compare their power and performance effectiveness with the conventional SRAM-based cache. Additionally, we identify that frequent execution switching between kernel and user code is the major reason for the high LLC miss rate in mobile applications, because blocks belonging to kernel and user space interfere severely with each other. We further propose static and dynamic way-partitioning schemes to separate the cache blocks of kernel and user space. The experimental results show promising power reduction and performance improvement with our proposed techniques.
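The static way-partitioning idea can be sketched with a toy set-associative cache in which kernel and user blocks are confined to disjoint ways, so they can no longer evict each other. All names and parameters here are illustrative, not the Gem5 implementation from the thesis:

```python
class WayPartitionedCache:
    """Toy set-associative LLC where kernel blocks may only occupy the
    first kernel_ways ways of each set and user blocks the remainder,
    eliminating kernel/user interference (static partitioning sketch)."""

    def __init__(self, sets=64, ways=8, kernel_ways=2):
        self.sets, self.ways, self.kernel_ways = sets, ways, kernel_ways
        # Each entry is (tag, is_kernel) or None if the way is empty.
        self.lines = [[None] * ways for _ in range(sets)]

    def _way_range(self, is_kernel):
        if is_kernel:
            return range(0, self.kernel_ways)
        return range(self.kernel_ways, self.ways)

    def access(self, addr, is_kernel):
        """Return True on hit; on a miss, fill a way from the partition
        (first empty way, else evict the partition's first way)."""
        s, tag = addr % self.sets, addr // self.sets
        ways = list(self._way_range(is_kernel))
        for w in ways:
            if self.lines[s][w] == (tag, is_kernel):
                return True
        for w in ways:                      # miss: fill an empty way
            if self.lines[s][w] is None:
                self.lines[s][w] = (tag, is_kernel)
                return False
        self.lines[s][ways[0]] = (tag, is_kernel)   # crude eviction
        return False
```

A dynamic scheme would additionally resize the kernel/user partitions at runtime based on observed miss rates, which is the direction the abstract's "dynamic way partition" refers to.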


MICHAEL JANTZ

Exploring Dynamic Compilation and Cross-Layer Object Management Policies for Managed Language Applications

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Xin Fu
Andy Gill
Bo Luo
Karen Nordheden

Abstract

Recent years have witnessed the widespread adoption of managed programming languages that are designed to execute on virtual machines. Virtual machine architectures provide several powerful software engineering advantages over statically compiled binaries, such as portable program representations, additional safety guarantees, and automatic memory and thread management, which have largely driven their success. To support and facilitate the use of these features, virtual machines implement a number of services that adaptively manage and optimize application behavior during execution. Such runtime services often require tradeoffs between efficiency and effectiveness, and different policies can have major implications on the system's performance and energy requirements.

In this work, we extensively explore policies for the two runtime services that are most important for achieving performance and energy efficiency: dynamic (or Just-In-Time (JIT)) compilation and memory management. First, we examine the properties of single-tier and multi-tier JIT compilation policies in order to find strategies that realize the best program performance for existing and future machines. We perform hundreds of experiments with different compiler aggressiveness and optimization levels to evaluate the performance impact of varying if and when methods are compiled. Next, we investigate the issue of how to optimize program regions to maximize performance in JIT compilation environments. For this study, we conduct a thorough analysis of the behavior of optimization phases in our dynamic compiler, and construct a custom experimental framework to determine the performance limits of phase selection during dynamic compilation. Lastly, we explore innovative memory management strategies to improve energy efficiency in the memory subsystem. We propose and develop a novel cross-layer approach to memory management that integrates information and analysis in the VM with fine-grained management of memory resources in the operating system. Using custom as well as standard benchmark workloads, we perform detailed evaluation that demonstrates the energy-saving potential of our approach.


JINGWEIJIA TAN

Modeling and Improving the GPGPU Reliability in the Presence of Soft Errors

When & Where:


250 Nichols Hall

Committee Members:

Xin Fu, Chair
Prasad Kulkarni
Heechul Yun


Abstract

General-purpose graphics processing units (GPGPUs) have emerged as a highly attractive platform for high-performance computing (HPC) applications due to their strong computing power. Unlike graphics processing applications, HPC applications have rigorous requirements on execution correctness, which is generally ignored in traditional GPU design. Soft errors, which are failures caused by high-energy neutron or alpha particle strikes in integrated circuits, have become a major reliability concern due to shrinking feature sizes and growing integration density. In this project, we first build a framework, GPGPU-SODA, to model the soft-error vulnerability of the GPGPU microarchitecture using a publicly available simulator. Based on this framework, we identify the streaming processors as the reliability hot-spots in GPGPUs. We further observe that the streaming processors are not fully utilized during branch divergence and pipeline stalls caused by long-latency operations. We then propose a technique, RISE, to recycle the streaming processors' idle time for soft-error detection in GPGPUs. Experimental results show that RISE obtains good fault coverage with negligible performance degradation.


KARTHIK PODUVAL

HGS Schedulers for Digital Audio Workstation like Applications

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Victor Frost
Jim Miller


Abstract

Digital Audio Workstation (DAW) applications are real-time applications with special timing constraints. HGS is a real-time scheduling framework that allows developers to implement custom schedulers based on any scheduling algorithm through direct interaction between client threads and their schedulers. Such scheduling could extend well beyond the common priority model that currently exists and could represent arbitrary application semantics that can be understood and acted upon by the associated scheduler; we term this "need-based scheduling". In this thesis, we first study some DAW implementations and then create several different HGS schedulers aimed at helping DAW applications meet their timing needs.


NEIZA TORRICO PANDO

High Precision Ultrasound Range Measurement System

When & Where:


2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Swapan Chakrabarti
Ron Hui


Abstract

Real-time, precise range measurement between objects is useful for a variety of applications. The slow propagation of acoustic signals in air (about 330 m/s) makes ultrasound frequencies an ideal choice for measuring an accurate time of flight, which can then be used to calculate the range between two objects. The objective of this project is to achieve a precise range measurement, within 10 cm uncertainty and at a 30 ms update rate, for distances up to 10 m between unmanned aerial vehicles (UAVs) flying in formation. Both transmitter and receiver are synchronized with a 1 pulse-per-second signal from a GPS receiver. The time of flight is calculated using the cross-correlation of the transmitted and received waves. To support multiple users, the 40 kHz signal is phase-modulated with Gold or Kasami codes.
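A minimal sketch of the correlation-based time-of-flight estimate, assuming GPS-disciplined clocks so that the propagation is one-way. The pseudo-random ±1 code below is a generic stand-in for the Gold/Kasami sequences, and the sample rate and code length are invented for illustration:

```python
import random

random.seed(1)
FS = 400_000.0                   # sample rate, Hz (illustrative)
CHIP_SAMPLES = 10                # one 40 kHz carrier period at 400 kHz
C_SOUND = 343.0                  # nominal speed of sound in air, m/s

# Generic +/-1 pseudo-random code standing in for a Gold/Kasami sequence.
code = [random.choice([-1.0, 1.0]) for _ in range(63)]
tx = [c for c in code for _ in range(CHIP_SAMPLES)]   # hold each chip

def cross_correlate_delay(ref, rx):
    """Lag (in samples) maximizing the cross-correlation of rx with ref."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(ref) + 1):
        val = sum(ref[i] * rx[lag + i] for i in range(len(ref)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Simulate a noiseless echo arriving 123 samples after transmission.
rx = [0.0] * 123 + tx + [0.0] * 100

lag = cross_correlate_delay(tx, rx)
tof = lag / FS                   # estimated time of flight, s
range_m = C_SOUND * tof          # one-way range, given synchronized clocks
```

The sharp autocorrelation peak of a pseudo-random code is what makes the lag estimate unambiguous; distinct Gold or Kasami codes additionally have low cross-correlation, which is what allows multiple users to share the channel.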


CAMERON LEWIS

3D Imaging of Ice Sheets

When & Where:


317 Nichols Hall

Committee Members:

Prasad Gogineni, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales
Rick Hale

Abstract

Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves affects both the mass balance of the ice sheet and the global climate system. This melting and refreezing influences the development of Antarctic Bottom Water, which helps drive the oceanic thermohaline circulation, a critical component of the global climate system. Basal melt rates can be estimated through traditional glaciological techniques relying on conservation of mass. However, this requires accurate knowledge of the ice movement, surface accumulation and ablation, and firn compression. Boreholes can provide direct measurements of melt rates, but only yield point estimates and are difficult and expensive to perform. Satellite altimetry measurements have been heavily relied upon for the past few decades. Thickness and melt rate estimates require the same conservation-of-mass a priori knowledge, with the additional assumption that the ice shelf is in hydrostatic equilibrium. Even with newly available, ground-truthed density and geoid estimates, satellite-derived ice shelf thickness and melt rate estimates suffer from relatively coarse spatial resolution and interpolation-induced error. Non-destructive radio echo sounding (RES) measurements from long-range airborne platforms provide the best solution for fine spatial and temporal resolution over long survey traverses, and only require a priori knowledge of firn density and surface accumulation. Previously, RES-derived basal melt rate experiments have been limited to ground-based measurements with poor coverage and spatial resolution. To improve upon this, an airborne multi-channel wideband radar has been developed for the purpose of imaging shallow ice and ice shelves. A moving platform and cross-track antenna array allow for fine-resolution 3-D imaging of basal topography. An initial experiment will use a ground-based system to image shallow ice and generate 3-D imagery as a proof of concept. This will then be applied to ice shelf data collected by an airborne system.


TRUC ANH NGUYEN

Transfer Control for Resilient End-to-End Transport

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Gary Minden


Abstract

Residing between the network layer and the application layer, the transport layer exchanges application data using the services provided by the network. Given the unreliable nature of the underlying network, reliable data transfer has become one of the key requirements for transport-layer protocols such as TCP. Studying the various mechanisms developed for TCP to increase the correctness of data transmission while fully utilizing the network's bandwidth provides a strong background for our study and development of our own resilient end-to-end transport protocol. Given this motivation, in this thesis we study different TCP error-control and congestion-control techniques by simulating them under different network scenarios using ns-3. For error control, we narrow our research to acknowledgement methods such as cumulative ACK (the traditional TCP way of ACKing), SACK, NAK, and SNACK. The congestion control analysis covers several TCP variants, including Tahoe, Reno, NewReno, Vegas, Westwood, Westwood+, and TCP SACK.
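As a reference point for the congestion-control comparison, the idealized per-RTT window evolution of TCP Reno (one of the variants studied) can be sketched as follows. This is a textbook-level simplification, not the ns-3 models used in the thesis:

```python
def reno_cwnd_trace(rounds, ssthresh=8, loss_rounds=()):
    """Idealized per-RTT evolution of TCP Reno's congestion window:
    slow start doubles cwnd until ssthresh, congestion avoidance adds
    one segment per RTT, and a triple-duplicate-ACK loss halves cwnd
    (fast recovery, simplified to whole-window granularity)."""
    cwnd, trace = 1, []
    for r in range(rounds):
        trace.append(cwnd)
        if r in loss_rounds:              # loss detected this round
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh               # Reno: halve, do not reset to 1
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start
        else:
            cwnd += 1                     # congestion avoidance (AIMD)
    return trace
```

Tahoe differs by resetting cwnd to 1 on every loss, while Vegas and Westwood replace the loss-triggered halving with delay- and bandwidth-estimate-based adjustments; plotting traces like these is one way to visualize the behavioral differences the simulations measure.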


CENK SAHIN

On Fundamental Performance Limits of Delay-Sensitive Wireless Communications

When & Where:


246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Victor Frost
Lingjia Liu
Zsolt Talata

Abstract

Mobile traffic is expected to grow at an annual compound rate of 66% over the next 3 years, and among the data types that account for this growth, mobile video has the highest growth rate. Since most video applications are delay-sensitive, delay-sensitive traffic will be the dominant traffic over future wireless communications. Consequently, future mobile wireless systems will face the dual challenge of supporting large traffic volume while providing reliable service for various kinds of delay-sensitive applications (e.g., real-time video, online gaming, and voice-over-IP (VoIP)). Past work on delay-sensitive communications has generally overlooked physical-layer considerations such as the modulation and coding scheme (MCS), probability of decoding error, and coding delay by employing oversimplified models of the physical layer. With the proposed research we aim to bridge information theory, communication theory, and queueing theory by jointly considering the delay-violation probability and the probability of decoding error to identify the fundamental trade-offs among wireless system parameters such as channel fading speed, average received signal-to-noise ratio (SNR), MCS, and user-perceived quality of service. We will model the underlying wireless channel by a finite-state Markov chain, use channel dispersion to track the probability of decoding error and the coding delay for a given MCS, and focus on the asymptotic decay rate of buffer occupancy for queueing delay analysis. The proposed work will be used to obtain fundamental bounds on the performance of queued systems over wireless communication channels.
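The asymptotic decay-rate view of queueing delay mentioned above is commonly formalized via the effective-capacity framework; a standard statement (written here in the usual literature notation, which may differ from the candidate's) is:

```latex
% Effective capacity of a service process S(t) for QoS exponent \theta,
% and the resulting large-deviations approximation of delay violation.
E_C(\theta) = -\lim_{t\to\infty} \frac{1}{\theta t}
              \ln \mathbb{E}\!\left[e^{-\theta S(t)}\right],
\qquad
\Pr\{D > d_{\max}\} \approx e^{-\theta\, E_C(\theta)\, d_{\max}}
```

Here a larger QoS exponent θ corresponds to a stricter delay constraint, which is the knob connecting the queueing analysis to physical-layer parameters such as fading speed, SNR, and MCS.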


GHAITH SHABSIGH

LPI Performance of an Ad-Hoc Covert System Exploiting Wideband Wireless Mobile Networks

When & Where:


246 Nichols Hall

Committee Members:

Victor Frost, Chair
Chris Allen
Lingjia Liu
Erik Perrins
Tyrone Duncan

Abstract

The high functionality and flexibility of modern wideband wireless networks such as LTE and WiMAX have made them the preferred technology for providing mobile Internet connectivity. The high performance of these systems comes from adopting several innovative techniques such as Orthogonal Frequency Division Multiplexing (OFDM), Adaptive Modulation and Coding (AMC), and Hybrid Automatic Repeat Request (HARQ). However, this flexibility also opens the door to network exploitation by other ad-hoc networks, such as device-to-device technology, or by covert systems. In this work, we provide the theoretical foundation for a new ad-hoc wireless covert system that hides its transmission in the RF spectrum of an OFDM-based wideband network (the target network), such as LTE. The first part of this effort focuses on designing the covert waveform to achieve a low probability of detection (LPD). Next, we compare the performance of several available detection methods at detecting the covert transmission, and propose a detection algorithm that represents a worst-case scenario for the covert system. Finally, we optimize the performance of the covert system in terms of its throughput, transmission power, and interference on/from the target network.


MOHAMMED ALENAZI

Network Resilience Improvement and Evaluation Using Link Additions

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Lingjia Liu
Bo Luo
Tyrone Duncan

Abstract

Computer networks are prone to targeted attacks and natural disasters that can disrupt their normal operation and services. Adding links to form a full mesh yields the most resilient network, but incurs an infeasibly high cost. In this research, we investigate improving the resilience of real-world networks by adding a cost-efficient set of links. Finding the optimal set of links to add via exhaustive search is impractical given the size of communication network graphs. Using a greedy algorithm, a feasible solution is obtained by adding a set of links that improves network connectivity, as measured by a graph robustness metric such as algebraic connectivity or total path diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study the networks' resilience. The flow robustness results of the attacks show that the improved networks are more resilient than the non-improved networks.
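The greedy improvement loop can be sketched as follows, using the simplest possible robustness measure (the fraction of connected node pairs) as a stand-in for the metrics named above. This is a minimal illustration; the thesis's actual metrics and attack models are richer:

```python
from itertools import combinations

def flow_robustness(nodes, edges):
    """Fraction of node pairs with a connecting path: a simple proxy
    for the flow robustness metric described in the abstract."""
    parent = {v: v for v in nodes}          # union-find over components
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    connected = sum(1 for u, v in combinations(nodes, 2)
                    if find(u) == find(v))
    total = len(nodes) * (len(nodes) - 1) // 2
    return connected / total

def greedy_add_links(nodes, edges, budget):
    """Repeatedly add the non-edge that most improves flow robustness."""
    edges = list(edges)
    for _ in range(budget):
        existing = {frozenset(e) for e in edges}
        candidates = [e for e in combinations(nodes, 2)
                      if frozenset(e) not in existing]
        best = max(candidates,
                   key=lambda e: flow_robustness(nodes, edges + [e]))
        edges.append(best)
    return edges
```

Each greedy step evaluates every candidate link once, trading optimality for tractability; evaluating the improved topology would then consist of removing nodes in order of a centrality measure and re-computing the robustness after each removal.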