Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without a structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby solving the long-standing problem of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5× faster query processing and significantly reduced storage requirements compared to large-scale tracking.
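The enrollment, query, insertion, and deletion operations described above can be sketched with a counting variant of a two-level Bloom filter hierarchy. This is an illustrative toy, not the DHBF implementation; the sizes, hash choices, and grouping rule are assumptions.

```python
import hashlib

class CountingBloomFilter:
    """Bloom filter with counters instead of bits, so deletion is possible."""
    def __init__(self, size=1024, num_hashes=4):
        self.size, self.num_hashes = size, num_hashes
        self.counters = [0] * size

    def _indices(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def insert(self, item):
        for idx in self._indices(item):
            self.counters[idx] += 1

    def query(self, item):
        return all(self.counters[idx] > 0 for idx in self._indices(item))

    def delete(self, item):
        if self.query(item):
            for idx in self._indices(item):
                self.counters[idx] -= 1

class HierarchicalBloomFilter:
    """Two-level hierarchy: a coarse root filter answers most negative
    queries early; per-group child filters refine positive answers."""
    def __init__(self, num_groups=8):
        self.root = CountingBloomFilter()
        self.children = [CountingBloomFilter() for _ in range(num_groups)]

    def _group(self, item):
        digest = hashlib.md5(str(item).encode()).hexdigest()
        return int(digest, 16) % len(self.children)

    def insert(self, item):    # enrollment / insertion
        self.root.insert(item)
        self.children[self._group(item)].insert(item)

    def query(self, item):
        return self.root.query(item) and self.children[self._group(item)].query(item)

    def delete(self, item):    # dynamic update without structural rebuild
        self.root.delete(item)
        self.children[self._group(item)].delete(item)
```

Because the counters can be decremented, templates can be removed without rebuilding the structure, which is the property the abstract highlights over static Bloom-based systems.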


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques to extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on intensity-modulated electrostriction response, (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency modulated continuous wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagated optical signal and the acoustic standing waves in the radial direction, resonating between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through the measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
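The single-parameter Maxwellian envelope lends itself to a simple fitting sketch: a Maxwellian-shaped curve f²·exp(−f²/2a²) peaks at f = √2·a, so the scale parameter (tied in this work to the mode field radius) can be read off the measured envelope's peak location. The code below is an illustrative model, not the actual measurement pipeline.

```python
import numpy as np

def maxwellian(f, a):
    """Maxwellian-shaped envelope with a single scale parameter a."""
    return (f**2 / a**3) * np.exp(-f**2 / (2 * a**2))

def fit_scale(freqs, envelope):
    """The envelope peaks at sqrt(2)*a, so a follows from the peak location."""
    return freqs[np.argmax(envelope)] / np.sqrt(2)
```

Identifying the fiber type then reduces to matching the fitted scale against the scales expected for known mode field radii.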

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
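The de-chirping idea above can be illustrated numerically: mixing the echo with the transmitted chirp maps each target range R to a beat frequency f_b = 2RB/(cT), which an FFT then resolves at a sample rate far below the chirp bandwidth. All parameters below are assumptions for illustration, not the experimental values.

```python
import numpy as np

C = 3e8      # speed of light, m/s
B = 1e9      # chirp bandwidth, Hz (assumed)
T = 10e-6    # chirp duration, s (assumed)
FS = 50e6    # receiver sample rate, Hz -- far below the chirp bandwidth

def beat_signal(ranges_m, n=500):
    """Simulated de-chirped receiver output: one beat tone per target."""
    t = np.arange(n) / FS
    sig = np.zeros(n, dtype=complex)
    for r in ranges_m:
        fb = 2 * r * B / (C * T)           # beat frequency for range r
        sig += np.exp(2j * np.pi * fb * t)
    return sig

def estimate_ranges(sig, k):
    """Pick the k strongest beat tones and map them back to range."""
    spec = np.abs(np.fft.fft(sig))
    freqs = np.fft.fftfreq(len(sig), 1 / FS)
    peaks = freqs[np.argsort(spec)[-k:]]
    return sorted(C * T * abs(f) / (2 * B) for f in peaks)
```

With these assumed numbers, two targets well inside the unambiguous range produce two clean beat tones even though the 50 MHz sample rate is twenty times smaller than the 1 GHz chirp bandwidth.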

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.
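As a rough consistency check, OFDR spatial resolution scales as c/(2·n_g·B): with the 26 GHz sweep quoted above and an assumed group index of 1.468 for standard single-mode fiber, the ideal value is about 4 mm, consistent with the approximately 5 mm reported once window broadening is accounted for.

```python
C = 3e8      # speed of light in vacuum, m/s
N_G = 1.468  # assumed group index of standard single-mode fiber
B = 26e9     # sweep bandwidth from the abstract, Hz

# Two-way propagation inside the fiber sets the resolution limit.
delta_z = C / (2 * N_G * B)
print(f"ideal resolution: {delta_z * 1e3:.2f} mm")
```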


Past Defense Notices


Jarrett Zeliff

An Analysis of Bluetooth Mesh Security Features in the Context of Secure Communications

When & Where:


Eaton Hall, Room 1

Committee Members:

Alexandru Bardas, Chair
Drew Davidson
Fengjun Li


Abstract

Communication methods that help support at-risk populations have developed significantly over the last 10 years. We view at-risk populations as groups of people present in environments where the use of infrastructure or electricity, including telecommunications, is censored and/or dangerous. The security features that accompany these communication mechanisms are essential to protect the confidentiality of their user base and the integrity and availability of the communication network.

In this work, we look at the feasibility of using Bluetooth Mesh as a communication network and analyze the security features that are inherent to the protocol. Through this analysis we determine the strengths and weaknesses of Bluetooth Mesh security features when used as a messaging medium for at-risk populations and propose improvements to current shortcomings. Our analysis covers the Bluetooth Mesh networking security fundamentals as described by the Bluetooth SIG: Encryption and Authentication, Separation of Concerns, Area Isolation, Key Refresh, Message Obfuscation, Replay Attack Protection, Trashcan Attack Protection, and Secure Device Provisioning. We look at how each security feature is implemented and determine whether these implementations are sufficient to protect users from various attack vectors. For example, we examined the Blue Mirror attack, a reflection attack during the provisioning process which leads to the compromise of network keys, while also assessing the under-researched key refresh mechanism. We propose a mechanism to address Blue-Mirror-oriented attacks with the goal of creating a more secure provisioning process. To analyze the key refresh mechanism, we built our own full-fledged Bluetooth Mesh network and implemented a key refresh procedure. Through this we assess the throughput, range, and impacts of a key refresh in both lab and field environments, demonstrating the suitability of our solution as a secure communication method.
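The key refresh procedure under study follows the Mesh specification's three-phase pattern, which can be sketched as a small per-node state machine (a simplification for illustration, not the thesis implementation):

```python
class MeshNode:
    """Simplified Bluetooth Mesh key refresh state machine for one node."""
    def __init__(self, old_key):
        self.keys = [old_key]   # keys accepted when receiving
        self.tx_key = old_key   # key used when transmitting
        self.phase = 0          # normal operation

    def distribute(self, new_key):
        """Phase 1: new key distributed; receive with both, transmit with old."""
        self.keys = [self.tx_key, new_key]
        self.phase = 1

    def switch_tx(self):
        """Phase 2: start transmitting with the new key."""
        self.tx_key = self.keys[1]
        self.phase = 2

    def revoke_old(self):
        """Phase 3: revoke the old key and return to normal operation."""
        self.keys = [self.tx_key]
        self.phase = 0
```

Because nodes accept both keys during phases 1 and 2, the network keeps operating while the refresh propagates; only after revocation are excluded (e.g., trashcanned) devices locked out.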


Daniel Johnson

Probability-Aware Selective Protection for Sparse Iterative Solvers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Hongyang Sun, Chair
Perry Alexander
Zijun Yao


Abstract

With the increasing scale of high-performance computing (HPC) systems, transient bit-flip errors are now more likely than ever, posing a threat to long-running scientific applications. A substantial portion of these applications involve the simulation of partial differential equations (PDEs) modeling physical processes over discretized spatial and temporal domains, with some requiring the solving of sparse linear systems. While these applications are often paired with system-level application-agnostic resilience techniques such as checkpointing and replication, the utilization of these techniques imposes significant overhead. In this work, we present a probability-aware framework that produces low-overhead selective protection schemes for the widely used Preconditioned Conjugate Gradient (PCG) method, whose performance can heavily degrade due to error propagation through the sparse matrix-vector multiplication (SpMV) operation. Through the use of a straightforward mathematical model and an optimized machine learning model, our selective protection schemes incorporate error probability to protect only certain crucial operations. An experimental evaluation using 15 matrices from the SuiteSparse Matrix Collection demonstrates that our protection schemes effectively reduce resilience overheads, often outperforming or matching both baseline and established protection schemes across all error probabilities.
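The probability-aware idea can be illustrated with a toy expected-cost rule (an assumption-laden sketch, not the authors' mathematical or machine learning models): protect an operation only when the expected cost of recovering from an unprotected silent error exceeds the cost of protecting it.

```python
def should_protect(p_error, t_spmv, t_check, recover_iters):
    """Toy decision rule with illustrative names and model:
    p_error       -- per-iteration probability of a silent bit-flip
    t_spmv        -- time of one unprotected SpMV
    t_check       -- overhead of protecting (verifying) that SpMV
    recover_iters -- extra PCG iterations needed after a silent error
    """
    expected_loss = p_error * recover_iters * t_spmv
    return expected_loss > t_check
```

At high error probability the expected loss dominates and protection pays off; at low probability the same check is pure overhead and is skipped, which is the intuition behind a selective scheme outperforming always-on protection.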


Javaria Ahmad

Discovering Privacy Compliance Issues in IoT Apps and Alexa Skills Using AI and Presenting a Mechanism for Enforcing Privacy Compliance

When & Where:


LEEP2, Room 2425

Committee Members:

Bo Luo, Chair
Alex Bardas
Tamzidul Hoque
Fengjun Li
Michael Zhuo Wang

Abstract

The growth of IoT and voice assistant (VA) apps poses increasing concerns about sensitive data leaks. While privacy policies are required to describe how these apps use private user data (i.e., their data practices), problems such as missing, inaccurate, and inconsistent policies have been repeatedly reported. Therefore, it is important to assess the actual data practices of apps and identify potential gaps between the actual and declared data usage. We find that app stores fall short in regulating compliance between app practices and their declarations, so we use AI to discover compliance issues in these apps to assist regulators and developers. For VA apps, we also develop an AI-based mechanism to enforce compliance. In this work, we conduct a measurement study using our framework called IoTPrivComp, which applies automated analysis of IoT apps’ code and privacy policies to identify compliance gaps. We collect 1,489 IoT apps with English privacy policies from the Play Store. IoTPrivComp detects 532 apps with sensitive external data flows, among which 408 (76.7%) apps have undisclosed data leaks. Moreover, 63.4% of the data flows that involve health and wellness data are inconsistent with the practices disclosed in the apps’ privacy policies. Next, we focus on the compliance issues in skills. VAs, such as Amazon Alexa, are integrated with numerous devices in homes and cars to process user requests using apps called skills. With their growing popularity, VAs also pose serious privacy concerns. Sensitive user data captured by VAs may be transmitted to third-party skills without users’ consent or knowledge of how their data is processed. Privacy policies are the standard medium for informing users of the data practices performed by skills.
However, privacy policy compliance verification of such skills is challenging, since the source code is controlled by the skill developers, who can make arbitrary changes to a skill's behavior without being audited; hence, conventional defense mechanisms using static/dynamic code analysis can be easily evaded. We present Eunomia, the first real-time privacy compliance firewall for Alexa skills. As the skills interact with the users, Eunomia monitors their actions by hijacking and examining the communications from the skills to the users, and validates them against the published privacy policies, which are parsed using a BERT-based policy analysis module. When non-compliant skill behaviors are detected, Eunomia stops the interaction and warns the user. We evaluate Eunomia with 55,898 skills on the Amazon skills store to demonstrate its effectiveness and to provide a privacy compliance landscape of Alexa skills.


Xiangyu Chen

Toward Efficient Deep Learning for Computer Vision Applications

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Fengjun Li
Hongguo Xu

Abstract

Deep learning leads the performance in many areas of computer vision. However, after a decade of research, it tends to require larger datasets and more complex models, leading to heightened resource consumption across all fronts. Regrettably, meeting these requirements proves challenging in many real-life scenarios. First, both data collection and labeling processes entail substantial labor and time investments. This challenge becomes especially pronounced in domains such as medicine, where identifying rare diseases demands meticulous data curation. Second, the large size of state-of-the-art models, such as ViT, Stable Diffusion, and ConvNeXt, hinders their deployment on resource-constrained platforms like mobile devices. Research indicates pervasive redundancies within current neural network structures, exacerbating the issue. Finally, even with ample datasets and optimized models, the time required for training and inference remains prohibitive in certain contexts. Consequently, there is a burgeoning interest among researchers in exploring avenues for efficient artificial intelligence.

This study endeavors to delve into various facets of efficiency within computer vision, including data efficiency, model efficiency, and training and inference efficiency. Data efficiency is improved by increasing the information carried by given image inputs and reducing the redundancy of the RGB image format. To achieve this, we propose integrating both spatial and frequency representations to fine-tune the classifier. Additionally, we propose explicitly increasing the input information density in the frequency domain by deleting unimportant frequency channels. For model efficiency, we scrutinize the redundancies present in widely used vision transformers. Our investigation reveals that trivial attention in their attention modules, due to its sheer volume, drowns out useful non-trivial attention. We propose mitigating the impact of accumulated trivial attention weights. To increase training efficiency, we propose SuperLoRA, a generalization of the LoRA adapter, to fine-tune pretrained models in few iterations and with extremely few parameters. Finally, a model simplification pipeline is proposed to further reduce inference time on mobile devices. By addressing these challenges, we aim to advance the practicality and performance of computer vision systems in real-world applications.
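The frequency-domain pruning idea above, deleting unimportant frequency channels to raise input information density, can be sketched as an energy-based channel ranking (illustrative only; the actual selection criterion in this work may differ):

```python
import numpy as np

def select_frequency_channels(channel_mags, keep):
    """channel_mags: (N, C) magnitudes of C frequency channels over N
    image patches. Return indices of the `keep` highest-energy channels;
    the remaining channels would be dropped from the input."""
    energy = np.abs(channel_mags).mean(axis=0)
    return np.argsort(energy)[::-1][:keep]
```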


Krushi Patel

Image Classification & Segmentation based on Enhanced CNN and Transformer Networks

When & Where:


Zoom Defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Xinmai Yang

Abstract

Convolutional Neural Networks (CNNs) have significantly enhanced performance across various computer vision tasks such as image recognition and segmentation, owing to their robust representation capabilities. To further boost CNN performance, a self-attention module is integrated after each network layer. Transformer-based models, which leverage a multi-head self-attention module as their core component, have recently demonstrated outstanding performance. However, several challenges persist, including the limitation to class-specific channels in CNNs, the constrained receptive field in local transformers, and the incorporation of redundant features and the absence of multi-scale features in U-Net type segmentation architectures.

In our study, we propose new strategies to tackle these challenges. (1) We propose a novel channel-based self-attention module that shifts focus toward the discriminative and significant channels; the module can be embedded at the end of any backbone network for image classification. (2) To mitigate noise introduced by shallow encoder layers in U-Net architectures, we substitute skip connections with an Adaptive Global Context Module (AGCM). Additionally, we introduce the Semantic Feature Enhancement Module (SFEM) to enhance multi-scale features in polyp segmentation. (3) We introduce a Multi-scaled Overlapped Attention (MOA) mechanism within local transformer-based networks for image classification, facilitating the establishment of long-range dependencies and initiating communication between neighborhood windows. (4) We propose a pioneering Fuzzy Attention Module designed to prioritize challenging pixels, thereby augmenting polyp segmentation performance. (5) We develop a novel dense attention gate module that aggregates features from all preceding layers to compute attention scores, refining global features in polyp segmentation tasks. Moreover, we design a new multi-layer horizontally extended decoder architecture to enhance local feature refinement in polyp segmentation.
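As a rough illustration of strategy (1), a channel-based self-attention module computes affinities between channel descriptors rather than between spatial positions, so discriminative channels can reinforce one another. The numpy sketch below is illustrative, not the proposed module.

```python
import numpy as np

def channel_self_attention(feat):
    """feat: (C, H, W) feature map -> channel-reweighted (C, H, W)."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                   # one descriptor per channel
    scores = x @ x.T / np.sqrt(h * w)            # channel-by-channel affinity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over channels
    return (attn @ x).reshape(c, h, w) + feat    # residual connection
```

Because the attention matrix is C x C rather than (HW) x (HW), the cost grows with channel count instead of spatial resolution, which is why such a module can sit cheaply at the end of a backbone.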


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions into separate and specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capabilities modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers through a volume-of-interest. Conversely, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar systems that seek to illuminate all space on transmit, while forming separate but simultaneous, directive beams on receive.

Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation. In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error minimization techniques. First, a gradient-based optimization of Space-Frequency Template Error (SFTE) is implemented on a high-fidelity model of a wideband array’s far-field emission. Second, a more efficient optimization is considered, based on SFTE for narrowband arrays. Finally, optimization via alternating projections is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars using pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived, and experimentally validated, that suppresses both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. Several modifications to the demonstrated algorithms are proposed to refine implementation, enhance performance, and reflect real-world application to the degree that numerical simulation allows.
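The alternating-projections approach mentioned above can be sketched generically: iterate between projecting the far-field pattern onto the desired template magnitude and projecting the weights back onto the constant-modulus constraint imposed by transmit amplifiers. The code below is a narrowband toy with assumed dimensions, not the thesis optimizer.

```python
import numpy as np

def alternating_projections(steering, desired_mag, iters=100, seed=0):
    """steering: (K, M) steering matrix for K angles and M elements.
    desired_mag: (K,) desired |pattern|. Returns unit-modulus weights."""
    rng = np.random.default_rng(seed)
    m = steering.shape[1]
    w = np.exp(2j * np.pi * rng.random(m))            # constant-modulus start
    for _ in range(iters):
        p = steering @ w                              # current far-field pattern
        p = desired_mag * np.exp(1j * np.angle(p))    # impose template magnitude
        w, *_ = np.linalg.lstsq(steering, p, rcond=None)  # back to weight space
        w = np.exp(1j * np.angle(w))                  # impose constant modulus
    return w
```

Each iteration is a least-squares solve plus two projections, which is what makes this family of methods attractive for rapidly reconfigurable transmit patterns.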


Anna Fritz

A Formally Verified Infrastructure for Negotiating Remote Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Fengjun Li
Emily Witt

Abstract

Semantic remote attestation is the process of gathering and appraising evidence to establish trust in a remote system. Remote attestation occurs at the request of an appraiser or relying party and proceeds with a target system executing an attestation protocol that invokes attestation services in a specific order to generate and bundle evidence. An appraiser may then evaluate the generated evidence to establish trust in the target's state.  In this current framework, requested measurement operations must be provisioned by a knowledgeable system user who may fail to consider situational demands which potentially impact the desired measurement operation. To solve this problem, we introduce Attestation Protocol Negotiation or the process of establishing a mutually agreed upon protocol that satisfies the relying party's desire for comprehensive information and the target's desire for constrained disclosure.

This research explores the formal modeling and verification of negotiation, introducing refinement and selection procedures that enable communicating peers to achieve their goals. First, we explore the formalization of refinement, the process by which a target generates executable protocols. Here we focus on a definition of system specifications through manifests, protocol sufficiency and soundness, policy representation, and the negotiation structure. By using our formal models to represent and verify negotiation's properties, we can statically determine that a provably sound, sufficient, and executable protocol is produced. Next, we present a formalized model for protocol selection, introducing and proving a preorder over Copland remote attestation protocols to facilitate selection of the most adversary-constrained protocol. With this modeling, we prove that selected protocols increase the difficulty of an active adversary. By addressing the target's capability to generate provably executable protocols and the ability to order these protocols, this methodology has the potential to revolutionize the attestation protocol provisioning process.


Arjun Dhage Ramachandra

Implementing Object Detection for Real-World Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Cuncong Zhong


Abstract

The advent of deep learning has enabled the development of powerful AI models that are used in fields such as medicine, surveillance monitoring, optimizing manufacturing processes, allowing robots to navigate their environment, chatbots, and much more. These applications are only made possible because of the enormous research in the fields of neural networks and deep learning. In this paper, I’ll discuss a branch of neural networks called Convolutional Neural Networks (CNNs) and how they are used for object detection tasks, detecting and classifying objects in an image. I’ll also discuss a popular object detection framework called Single Shot MultiBox Detector (SSD) and implement it in my web application project, which allows users to detect objects in images and search for images based on the presence of objects. The main aim of the project was to allow easy access to perform detections with a few clicks.


Kaidong Li

Accurate and Robust Object Detection and Classification Based on Deep Neural Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Haiyang Chao

Abstract

Recent years have seen tremendous developments in the field of computer vision and its extensive applications. The fundamental task, image classification, benefiting from the extraordinary ability of deep convolutional neural networks (CNNs) to extract deep semantic information from input data, has become the backbone for many other computer vision tasks, like object detection and segmentation. A modern detector usually performs bounding-box regression and class prediction with a pre-trained classification model as the backbone. This architecture is proven to produce good results; however, improvements can be made upon closer inspection. A detector takes a pre-trained CNN from the classification task and selects the final bounding boxes from multiple proposed regional candidates by a process called non-maximum suppression (NMS), which picks the best candidates by ranking their classification confidence scores. Localization quality is absent from the entire process. Another issue is that classification uses one-hot encoding to label the ground truth, resulting in an equal penalty for misclassifications between any two classes without considering the inherent relations between the classes. Finally, the realms of 2D image classification and 3D point cloud classification represent distinct avenues of research, each relying on significantly different architectures. Given the unique characteristics of these data types, it is not feasible to employ models interchangeably between them.

My research aims to address these issues. (1) We propose the first location-aware detection framework for single-shot detectors, which can be integrated into any single-shot detector. It boosts detection performance by calibrating the ranking process in NMS with localization scores. (2) To more effectively back-propagate gradients, we design a super-class guided architecture that consists of a superclass branch (SCB) and a finer class branch (FCB). To further increase effectiveness, the features from SCB, carrying high-level information, are fed to FCB to guide finer class predictions. (3) Recent works have shown that 3D point cloud models are extremely vulnerable to adversarial attacks, which poses a serious threat to many critical applications like autonomous driving and robotic control. To bridge the domain gap between 3D and 2D classification and to increase the robustness of models on 3D point clouds, we propose a family of robust structured declarative classifiers for point cloud classification. We experimented with various 3D-to-2D mapping algorithms, bridging the gap between 2D and 3D classification. Furthermore, we empirically validate that the internal constrained optimization mechanism effectively defends against adversarial attacks through implicit gradients.
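Contribution (1) hinges on re-ranking NMS candidates with localization quality; a minimal sketch (illustrative scoring, not the proposed framework's exact formulation) ranks candidates by the product of classification and localization scores before suppression:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def location_aware_nms(boxes, cls_scores, loc_scores, iou_thr=0.5):
    """Rank by classification * localization score, then greedily suppress."""
    order = sorted(range(len(boxes)),
                   key=lambda i: cls_scores[i] * loc_scores[i],
                   reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

With this ranking, a well-localized box can win over an overlapping box that has a higher classification confidence but a poorer predicted localization, which is the calibration effect described above.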


Andrew Mertz

Multiple Input Single Output (MISO) Receive Processing Techniques for Linear Frequency Modulated Continuous Wave Frequency Diverse Array (LFMCW-FDA) Transmit Structures

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Patrick McCormick, Chair
Chris Allen
Shannon Blunt
James Stiles

Abstract

This thesis focuses on the multiple processing techniques that can be applied to a single receive element co-located with a Frequency Diverse Array (FDA) transmission structure that illuminates a large volume to estimate the scattering characteristics of objects within the illuminated space in the range, Doppler, and spatial dimensions. FDA transmissions consist of a number of evenly spaced transmitting elements, all of which radiate a linear frequency modulated (LFM) waveform. The elements are configured as a Uniform Linear Array (ULA), and the waveform of each element is separated by a frequency spacing across the elements, where the time duration of the chirp is inversely proportional to an integer multiple of the frequency spacing between elements. The complex transmission structure created by this arrangement of multiple transmitting elements can be received and processed by a single receive element. Furthermore, multiple receive processing techniques, each with their own advantages and disadvantages, can be applied to the data received from the single receive element to estimate the range, velocity, and spatial direction of targets in the illuminated volume relative to the co-located transmit array and receive element. Three different receive processing techniques that can be applied to FDA transmissions are explored. Two of these techniques are novel to this thesis: the spatial matched filter processing technique for FDA transmission structures, and stretch processing using virtual array processing for FDA transmissions. Additionally, this thesis introduces a new type of FDA transmission structure referred to as "slow-time" FDA.
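The transmit structure described above can be sketched per element: element m radiates the common linear chirp offset in carrier frequency by m·Δf, with chirp duration T = 1/(q·Δf) for an integer q. All parameter values below are assumptions for illustration, not the thesis configuration.

```python
import numpy as np

def fda_element_signal(m, t, fc=10e9, df=100e3, bandwidth=5e6, q=1):
    """Unit-amplitude model of element m's emission: a shared linear
    chirp offset in carrier by m*df, with duration T = 1/(q*df)."""
    T = 1 / (q * df)
    k = bandwidth / T            # chirp rate, Hz/s
    tau = t % T                  # time within the current chirp repetition
    return np.exp(2j * np.pi * ((fc + m * df) * t + 0.5 * k * tau**2))
```

Summing such signals over m yields the composite space-frequency structure that a single co-located receive element would observe and de-chirp.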