Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby resolving the longstanding problem of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5 times faster query processing and significantly reduced storage requirements compared to large-scale tracking.
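The hierarchical, dynamically updatable encoding described above can be loosely illustrated with counting Bloom filters, which support deletion without rebuilding the structure. The sketch below is a toy two-level hierarchy for intuition only; it is not the DHBF construction from the dissertation, and all sizes, hash counts, and group counts are illustrative assumptions.

```python
import hashlib

class CountingBloomFilter:
    """Bloom filter with counters instead of bits, so items can be deleted
    without rebuilding the whole structure."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indexes(self, item):
        # Derive num_hashes independent positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def insert(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def query(self, item):
        # Probabilistic membership: false positives possible, no false negatives.
        return all(self.counters[idx] > 0 for idx in self._indexes(item))

    def delete(self, item):
        if self.query(item):
            for idx in self._indexes(item):
                self.counters[idx] -= 1

class TwoLevelBloom:
    """Toy two-level hierarchy: a root filter answers fast negatives,
    and a deterministic grouping narrows the check to one child filter."""

    def __init__(self, num_groups=8):
        self.root = CountingBloomFilter()
        self.groups = [CountingBloomFilter() for _ in range(num_groups)]

    def _group(self, item):
        digest = hashlib.md5(item.encode()).hexdigest()
        return int(digest, 16) % len(self.groups)

    def enroll(self, item):
        self.root.insert(item)
        self.groups[self._group(item)].insert(item)

    def query(self, item):
        return self.root.query(item) and self.groups[self._group(item)].query(item)

    def delete(self, item):
        self.root.delete(item)
        self.groups[self._group(item)].delete(item)
```

With 1024 counters and 4 hashes per level, the false-positive probability for a lightly loaded filter is negligible, which is why such structures can report effectively perfect enrollment and query accuracy at scale.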


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is induced by the interaction between the forward-propagating optical signal and acoustic standing waves resonating radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrate a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. Because the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
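The de-chirping step described above reduces each target return to a beat tone whose frequency encodes both range and Doppler. Assuming a triangular (up/down) chirp, which is one common FMCW design and not necessarily the exact waveform used in this work, the two contributions can be separated as in this minimal sketch; sign conventions vary with geometry and mixer arrangement.

```python
C = 3e8  # speed of light, m/s

def fmcw_range_velocity(f_up, f_down, slope, wavelength):
    """Recover range and radial velocity from up- and down-chirp beat frequencies.

    After de-chirping, each beat frequency is a range term (2*R*slope/c)
    shifted by a Doppler term (+/- 2*v/wavelength).  Averaging the two
    segments cancels Doppler; differencing cancels the range term.
    """
    f_range = (f_up + f_down) / 2    # Doppler contributions cancel
    f_doppler = (f_down - f_up) / 2  # range contributions cancel
    rng = C * f_range / (2 * slope)
    vel = f_doppler * wavelength / 2
    return rng, vel
```

For example, with a 1 THz/s chirp slope and a 1550 nm carrier, a target at 150 m produces a ~1 MHz range beat, so the receiver bandwidth needed is far below the chirp bandwidth itself, as the abstract notes.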

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 us measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices


Michael Bechtel

Shared Resource Denial-of-Service Attacks on Multicore Platforms

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Heechul Yun, Chair
Mohammad Alian
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri

Abstract

With the increased adoption of complex machine learning algorithms across many different fields, powerful computing platforms have become necessary to meet their computational needs. Multicore platforms are a popular choice as they provide greater computing capabilities and can still meet different size, weight, and power (SWaP) constraints. However, contention for shared hardware resources between multiple cores remains a significant challenge that can lead to interference and unpredictable timing behaviors. Furthermore, this contention can be intentionally induced by malicious actors with the specific goals of delaying safety-critical tasks and jeopardizing system safety. This is done by performing Denial-of-Service (DoS) attacks that target shared resources such that the other cores in a system are unable to access them. When done properly, these shared resource DoS attacks can significantly impact performance and threaten system stability. For example, DoS attacks can cause >300X slowdown on the popular Raspberry Pi 3 embedded platform.

Motivated by the inherent risks posed by these DoS attacks, this dissertation presents investigations and evaluations of shared resource contention on multicore platforms, and the impacts it can have on the performance of real-time tasks. We propose various DoS attacks that each target different shared resources in the memory hierarchy with the goal of causing as much slowdown as possible. We show that each attack can inflict significant temporal slowdowns to victim tasks on target platforms by exploiting different hardware and software mechanisms. We then develop and analyze techniques for providing shared resource isolation and temporal performance guarantees for safety-critical tasks running on multicore platforms. In particular, we find that bandwidth throttling mechanisms are effective solutions against most DoS attacks and can protect the performance of real-time victim tasks.


Sarah Johnson

Formal Analysis of TPM Key Certification Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Michael Branicky
Emily Witt


Abstract

Development and deployment of trusted systems often require definitive identification of devices. A remote entity should have confidence that a device is as it claims to be. An ideal method for fulfilling this need is through the use of secure device identifiers. A secure device identifier (DevID) is defined as an identifier that is cryptographically bound to a device. A DevID must not be transferable from one device to another, as that would allow distinct devices to be identified as the same. Since the Trusted Platform Module (TPM) is a secure Root of Trust for Storage, it provides the necessary protections for storing these identifiers. Consequently, the Trusted Computing Group (TCG) recommends the use of TPM keys for DevIDs. The TCG's specification TPM 2.0 Keys for Device Identity and Attestation describes several methods for remotely proving a key to be resident in a specific device's TPM. These methods are carefully constructed protocols which are intended to be performed by a trusted Certificate Authority (CA) in communication with a certificate-requesting device. DevID certificates produced by an OEM's CA at device manufacturing time may be used to provide definitive evidence to a remote entity that a key belongs to a specific device. In contrast, DevID certificates produced by an Owner/Administrator's CA require a chain of certificates in order to verify a chain of trust back to an OEM-provided root certificate. This distinction is due to the differences in the respective protocols prescribed by the TCG's specification. We aim to abstractly model these protocols and formally verify that their resulting assurances on TPM-residency do in fact hold. We choose this goal since the TCG themselves do not provide any proofs or clear justifications for how the protocols might provide these assurances.
The resulting TPM-command library and execution relation modeled in Coq may easily be expanded upon to become useful in verifying a wide range of properties regarding DevIDs and TPMs.


Andrew Cousino

Recording Remote Attestations on the Blockchain

When & Where:


Nichols Hall, Gemini Room

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson


Abstract

Remote attestation is a process of establishing trust between various systems on a network. Until now, attestations had to be done on the fly as caching attestations had not yet been solved. With the blockchain providing a monotonic record, this work attempts to enable attestations to be cached. This paves the way for more complex attestation protocols to fit the wide variety of needs of users. We also developed specifications for these records to be cached on the blockchain.


Ragib Shakil Rafi

Nonlinearity Assisted Mie Scattering from Nanoparticles

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Alessandro Salandrino, Chair
Shima Fardad
Morteza Hashemi
Rongqing Hui
Judy Wu

Abstract

Scattering by nanoparticles is an exciting branch of physics for controlling and manipulating light. More specifically, there have been fascinating developments regarding light scattering by sub-wavelength particles, including high-index dielectric and metal particles, for their applications in optical resonance phenomena, detecting the fluorescence of molecules, enhancing Raman scattering, transferring energy to higher-order modes, sensing, and photodetector technologies. The field has recently gained more attention due to near-field effects at the nanoscale and the new insights and applications achieved through space- and time-varying parametric modulation and the inclusion of nonlinear effects. When the particle size is comparable to or slightly larger than the incident wavelength, Mie solutions to Maxwell's equations describe these electromagnetic scattering problems. The addition and excitation of nonlinear effects in these high-index sub-wavelength dielectric and plasmonic particles might improve the existing performance of the system or provide additional features directed toward unique applications. In this thesis, we study Mie scattering from dielectric and plasmonic particles in the presence of nonlinear effects. For dielectrics, we present a numerical study of the linear and nonlinear diffraction and focusing properties of dielectric metasurfaces consisting of silicon microcylinder arrays resting on a silicon substrate. Upon diffraction, such structures lead to the formation of near-field intensity profiles reminiscent of photonic nanojets and propagate similarly. Our results indicate that the Kerr nonlinear effect enhances light concentration throughout the generated photonic jet, with an increase in intensity of about 20% compared to the linear regime for the power levels considered in this work. The transverse beamwidth remains subwavelength in all cases, and the nonlinear effect further reduces its full width.
In the future, we want to optimize the performance through parametric modification of the system and continue our study with plasmonic structures in time–varying scenarios. We hope that with appropriate parametric modulation, intermodal energy transfer is possible in such structures. We want to explore the nonlinear excitation to transfer energy in higher-order modes by exploiting different wave-mixing interactions in time-modulated scatterers.


Anna Fritz

Negotiating Remote Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Fengjun Li
Emily Witt

Abstract

During remote attestation, a relying party prompts a target to perform some stateful measurement which can be appraised to determine trust in the target's system. In this current framework, requested measurement operations must be provisioned by a knowledgeable system user who may fail to consider situational demands which potentially impact the desired measurement. To solve this problem, we introduce negotiation: a framework that allows the target and relying party to mutually determine an attestation protocol that satisfies both the target's need to protect sensitive information and the relying party's desire for a comprehensive measurement. We designed and verified this negotiation procedure such that for all negotiations, we can provably produce an executable protocol that satisfies the target's privacy standards. With the remainder of this work, we aim to realize and instantiate protocol orderings ensuring negotiation produces a protocol sufficient for the relying party. All progress is towards our ultimate goal of producing a working, fully verified negotiation scheme which will be integrated into our current attestation framework for flexible, end-to-end attestations.


Paul Gomes

A framework for embedding hybrid term proximity score with standard TF-IDF to improve the performance of recipe retrieval system

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hongyang Sun


Abstract

Information retrieval systems play an important role in the modern era in retrieving relevant information from large collections of data, such as documents, webpages, and other multimedia content. Having an information retrieval system in any domain allows users to collect relevant information. Unfortunately, navigating a modern-day recipe website presents the audience with numerous recipes in a colorful user interface but with very little capability to search and narrow down content based on specific interests. The goal of the project is to develop a search engine for recipes using standard TF-IDF weighting and to improve the performance of the standard IR system by implementing term proximity. The term proximity calculation in this project uses a hybrid approach, a combination of span-based and pair-based approaches. The project architecture includes a crawler, a database, an API, a service responsible for TF-IDF weighting and term proximity calculation, and a web application to present the search results.
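The hybrid scoring idea, standard TF-IDF plus a pair-based proximity bonus, can be sketched as follows. The mixing weight, the reciprocal-gap bonus, and the tokenized inputs are illustrative assumptions, not the project's actual parameters or pipeline.

```python
import math
from collections import Counter

def tfidf_scores(query_terms, docs):
    """Standard TF-IDF: tf(t, d) * log(N / df(t)), summed over query terms.
    Each doc is a list of tokens."""
    N = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency counts each doc once per term
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append(sum(tf[t] * math.log(N / df[t])
                          for t in query_terms if df[t]))
    return scores

def pair_proximity(query_terms, doc):
    """Pair-based proximity: reward each pair of distinct query terms by the
    reciprocal of the smallest positional gap between their occurrences."""
    positions = {t: [i for i, w in enumerate(doc) if w == t] for t in query_terms}
    present = [t for t in query_terms if positions[t]]
    bonus = 0.0
    for i in range(len(present)):
        for j in range(i + 1, len(present)):
            min_gap = min(abs(a - b)
                          for a in positions[present[i]]
                          for b in positions[present[j]])
            bonus += 1.0 / min_gap
    return bonus

def hybrid_score(query_terms, docs, alpha=0.5):
    """Blend TF-IDF with the proximity bonus; alpha is an illustrative weight."""
    return [s + alpha * pair_proximity(query_terms, d)
            for s, d in zip(tfidf_scores(query_terms, docs), docs)]
```

Documents where the query terms appear adjacent (e.g., "chicken curry") thus outrank documents where the same terms are scattered, even when their raw TF-IDF scores tie.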


Anjali Pare

Exploring Errors in Binary-Level CFG Recovery

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Fengjun Li
Bo Luo


Abstract

The control-flow graph (CFG) is a graphical representation of the program and holds information that is critical to the correct application of many other program analysis, performance optimization, and software security algorithms and techniques. While CFG generation is an ordinary task for source-level tools, like the compiler, the loss of high-level program information makes accurate CFG recovery a challenging issue for binary-level software reverse engineering (SRE) tools. Earlier research has shown that while advanced SRE tools can precisely reconstruct most of the CFG for the programs, important gaps and inaccuracies remain that may hamper critical tasks, from vulnerability and malicious code detection to adequately securing software binaries.

In this paper, we study three reverse engineering tools (angr, radare2, and Ghidra) and perform an in-depth analysis of the control-flow graphs generated by these tools. We develop a unique methodology using manual analysis and automated scripting to understand and categorize the CFG errors over a large benchmark set. Of the several interesting observations revealed by this work, one that is particularly unexpected is that most errors in the reconstructed CFGs appear not to be intrinsic limitations of the binary-level algorithms, as currently believed, and may simply be eliminated by more robust implementations. We expect our work to lead to more accurate CFG reconstruction in SRE tools and improved precision for other algorithms that employ CFGs.


Kailani Jones

Security Operation Centers: Analyzing COVID-19's Work-from-Home Influence on Endpoint Management and Developing a Sociotechnical Metrics Framework

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

Security Operations Centers (SOCs) are central components of modern enterprise networks. Organizations in industry, government, and academia deploy SOCs to manage their networks, defend against cyber threats, and maintain regulatory compliance. For reporting, SOC leadership typically use metrics such as “number of security incidents”, “mean time to remediation/ticket closure”, and “risk analysis” to name a few. However, these commonly leveraged metrics may not necessarily reflect the effectiveness of a SOC and its supporting tools.

To better understand these environments, we employ ethnographic approaches (e.g., participant observation) and embed a graduate student (a.k.a. field worker) in a real-world SOC. As the field worker worked in person alongside SOC employees and recorded observations on technological tools, employees, and culture, COVID-19's work-from-home (WFH) phenomenon occurred. In response, this dissertation traces and analyzes the SOC's effort to adapt and reprioritize. By intersecting historical analysis (starting in the 1970s) with ethnographic field notes (352 field notes analyzed across 1,000+ hours in a SOC over 34 months), complemented by quantitative interviews (covering 7 other SOCs), we find additional causal forces that, for decades, have pushed SOC network management toward endpoints.

Although endpoint management is not a novel concept for SOCs, COVID-19's WFH phenomenon highlighted the need for flexible, supportive, and customizable metrics. As such, we develop a sociotechnical metrics framework with these qualities in mind and limit the scope to a core SOC function: alert handling. With a similar ethnographic approach (participant observation paired with semi-structured interviews covering 15 SOC employees across 10 SOCs), we develop the framework's foundation by analyzing and capturing the alert handling process (a.k.a. alert triage). This process demonstrates the significance of not only technical expertise (e.g., data exfiltration, command and control, etc.) but also social characteristics (e.g., collaboration, communication, etc.). In fact, we point out the underlying presence and importance of expert judgment during alert triage, particularly during conclusion development.

In addition to the aforementioned qualities, our alert handling sociotechnical metrics framework aims to capture current gaps in the alert triage process that, if improved, could help SOC employees' effectiveness. With the focus on this process and the uncovered limitations SOCs usually face today during alert handling, we validate not only the flexibility of our framework but also its accuracy in a real-world SOC.


Gordon Ariho

MULTIPASS SAR PROCESSING FOR ICE SHEET VERTICAL VELOCITY AND TOMOGRAPHY MEASUREMENTS

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

James Stiles, Chair
John Paden (Co-Chair)
Christopher Allen
Shannon Blunt
Emily Arnold

Abstract

Vertical velocity is the rate at which ice moves vertically within an ice sheet, usually measured in meters per year. This movement can occur due to various factors, including accumulation, ice deformation, basal sliding, and subglacial melting. The measurement of vertical velocities within the ice sheet can assist in determining the age of the ice and assessing the rheology of the ice, thereby mitigating uncertainties due to analytical approximations of ice flow models.

We apply differential interferometric synthetic aperture radar (DInSAR) techniques to data from the Multichannel Coherent Radar Depth Sounder (MCoRDS) to measure the vertical displacement of englacial layers within an ice sheet. DInSAR’s accuracy is usually on the order of a small fraction of the wavelength (e.g., millimeter to centimeter precision is typical) in monitoring displacement along the radar line of sight (LOS). Ground-based Autonomous phase-sensitive Radio-Echo Sounder (ApRES) units have demonstrated the ability to precisely measure the relative vertical velocity by taking multiple measurements from the same location on the ice. Airborne systems can make a similar measurement but can suffer from spatial baseline errors since it is generally impossible to fly over the same stretch of ice on each pass with enough precision to ignore the spatial baseline. In this work, we compensate for spatial baseline errors using precise trajectory information and estimates of the cross-track layer slope using direction of arrival estimation. The current DInSAR algorithm is applied to airborne radar depth sounder data to produce results for flights near Summit camp and the EGIG (Expéditions Glaciologiques Internationales au Groenland) line in Greenland using the CReSIS toolbox. The current approach estimates the baseline error in multiple steps. Each step has dependencies on all the values to be estimated. To overcome this drawback, we have implemented a maximum likelihood estimator that jointly estimates the vertical velocity, the cross-track internal layer slope, and the unknown baseline error due to GPS and INS (Inertial Navigation System) errors. We incorporate the Lliboutry parametric model for vertical velocity into the maximum likelihood estimator framework.

To improve the direction of arrival estimation, we explore the use of focusing matrices against other wideband direction of arrival methods, such as wideband MLE, wideband MUSIC, and wideband MVDR, by comparing the mean squared error of the DOA estimates.


Dalton Brucker-Hahn

Mishaps in Microservices: Improving Microservice Architecture Security Through Novel Service Mesh Capabilities

When & Where:


Nichols Hall, Room 129, Ron Evans Apollo Auditorium

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Huazhen Fang

Abstract

Shifting trends in modern software engineering and cloud computing have pushed system designs to leverage containerization and develop their systems into microservice architectures. While microservice architectures emphasize scalability and ease of development, the issue of microservice explosion has emerged, stressing hosting environments and generating new challenges within this domain. Service meshes, the latest in a series of developments, are being adopted to meet these needs. Service meshes provide separation of concerns between microservice development and the operational concerns of microservice deployments, such as service discovery and networking. However, despite the benefits provided by service meshes, the security demands of this domain are unmet by the current state-of-the-art offerings.

Through a series of experimental trials in a service mesh testbed, we demonstrate a need for improved security mechanisms in the state-of-the-art offerings of service meshes. After deriving a series of domain-conscious recommendations to improve the longevity and flexibility of service meshes, we design and implement our proof-of-concept service mesh system ServiceWatch. By leveraging a novel verification-in-the-loop scheme, we provide the capability for service meshes to provide holistic monitoring and management of the microservice deployments they host. Further, through frequent, automated rotations of security artifacts (keys, certificates, and tokens), we allow the service mesh to automatically isolate and remove microservices that violate the defined network policies of the service mesh, requiring no system administrator intervention. Extending this proof-of-concept environment, we design and implement a prototype workflow called CloudCover. CloudCover incorporates our verification-in-the-loop scheme and leverages existing tools, allowing easy adoption of these novel security mechanisms into modern systems. Under a realistic and relevant threat model, we show how our design choices and improvements are both necessary and beneficial to real-world deployments. By examining network packet captures, we provide a theoretical analysis of the scalability of these solutions in real-world networks. We further extend these trials experimentally using an independently managed and operated cloud environment to demonstrate the practical scalability of our proposed designs to large-scale software systems. Our results indicate that the overhead introduced by ServiceWatch and CloudCover is acceptable for real-world deployments. Additionally, the security capabilities provided effectively mitigate threats present within these environments.