Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the long-standing difficulty of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold, along with substantial system-level improvements, including over 10^5 times faster query processing and significantly reduced storage requirements compared to large-scale tracking.
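The hierarchical idea behind these structures can be illustrated with a toy two-level filter. This is only a sketch of the general HBF concept, not the DHBF/PHBF designs from the dissertation: the filter sizes, bucketing by Python's built-in hash, and the rebuild-one-leaf deletion strategy are all assumptions made for the example.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class HierarchicalBloomFilter:
    """Two-level hierarchy: a root filter screens queries cheaply, and
    small per-bucket leaf filters localize the match. Deletion rebuilds
    only the one affected leaf, never the whole structure."""
    def __init__(self, buckets=8):
        self.root = BloomFilter()
        self.keys = [set() for _ in range(buckets)]   # raw keys kept per bucket
        self.leaves = [BloomFilter(m=1 << 12) for _ in range(buckets)]

    def _bucket(self, item):
        return hash(item) % len(self.keys)

    def enroll(self, item):
        self.root.add(item)
        b = self._bucket(item)
        self.keys[b].add(item)
        self.leaves[b].add(item)

    def query(self, item):
        if item not in self.root:                     # fast negative path
            return False
        return item in self.leaves[self._bucket(item)]

    def delete(self, item):
        b = self._bucket(item)
        self.keys[b].discard(item)
        fresh = BloomFilter(m=1 << 12)                # rebuild only this leaf
        for x in self.keys[b]:
            fresh.add(x)
        self.leaves[b] = fresh

hbf = HierarchicalBloomFilter()
for name in ["alice", "bob", "carol"]:
    hbf.enroll(name)
```

Enrolled identities are accepted, non-enrolled ones rejected, and deleting one identity does not disturb the others.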


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques to extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on intensity-modulated electrostriction response, (2) coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform design, including experimental validation of frequency modulation continuous wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves resonating radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through the measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
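The anti-symmetry trick can be sketched numerically. This toy model uses a synthetic Maxwellian envelope with an assumed shape parameter and models the conjugate subtraction simply as subtracting the frequency-mirrored trace; the units and noise level are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.linspace(-2.0, 2.0, 801)           # offset frequency, arbitrary units

sigma = 0.6                               # set by the mode field radius (assumed value)
envelope = (f**2 / sigma**3) * np.exp(-f**2 / (2 * sigma**2))   # Maxwellian shape
signal = np.sign(f) * envelope            # anti-symmetric loss spectrum
measured = signal + 0.05 * rng.normal(size=f.size)              # measurement noise

# Exploit the anti-symmetry: the mirrored trace carries the signal with
# opposite sign, so subtracting it doubles the signal while the incoherent
# noise only grows by sqrt(2) -- a net SNR gain of about sqrt(2).
recovered = (measured - measured[::-1]) / 2
```

The residual noise on `recovered` is measurably smaller than on the raw trace, which is the gain the abstract exploits.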

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
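The de-chirping principle can be reproduced in a few lines of baseband simulation. All parameters here are illustrative, not the thesis hardware values; note how the sample rate `fs` is far below the chirp span `B`, exactly the bandwidth saving described above.

```python
import numpy as np

# Illustrative parameters only -- not the thesis hardware values.
B, T, fs, c = 1e9, 1e-4, 20e6, 3e8      # chirp bandwidth, duration, ADC rate, light speed
R = 120.0                                # true target range (m)

t = np.arange(0, T, 1 / fs)
slope = B / T
tau = 2 * R / c                                      # round-trip delay
tx = np.exp(1j * np.pi * slope * t**2)               # linearly chirped signal / LO
rx = np.exp(1j * np.pi * slope * (t - tau)**2)       # delayed echo

# De-chirping: mixing the echo with the chirped LO leaves a constant beat
# tone, so the receiver only needs bandwidth ~fs, far below the chirp span B.
beat = rx * np.conj(tx)
spec = np.abs(np.fft.fft(beat))
f_b = np.fft.fftfreq(t.size, 1 / fs)[np.argmax(spec)]
R_est = abs(f_b) * c * T / (2 * B)                   # beat frequency -> range
```

Locating the beat tone recovers the target range; a moving target would additionally shift the tone by its Doppler frequency.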

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices

Naveed Mahmud

Towards Complete Emulation of Quantum Algorithms using High-Performance Reconfigurable Computing

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Prasad Kulkarni
Heechul Yun
Tyrone Duncan

Abstract

Quantum computing is a promising technology that can potentially demonstrate supremacy over classical computing in solving specific problems. At present, two critical challenges for quantum computing are quantum state decoherence and the low scalability of current quantum devices. Decoherence places constraints on the realistic applicability of quantum algorithms, as real-life applications usually require complex equivalent quantum circuits to be realized. For example, encoding classical data on quantum computers for solving I/O- and data-intensive applications generally requires quantum circuits that violate decoherence constraints. In addition, current quantum devices are small-scale, having low quantum bit (qubit) counts and often producing inaccurate or noisy measurements, which also impacts the realistic applicability of real-world quantum algorithms. Consequently, benchmarking of existing quantum algorithms and investigation of new applications are heavily dependent on classical simulations that use costly, resource-intensive computing platforms. Hardware-based emulation has been proposed as a more cost-effective and power-efficient alternative. This work proposes a hardware-based emulation methodology for quantum algorithms using cost-effective Field-Programmable Gate Array (FPGA) technology. The proposed methodology consists of three components that are required for complete emulation of quantum algorithms: the first component models classical-to-quantum (C2Q) data encoding, the second emulates the behavior of quantum algorithms, and the third models the process of measuring the quantum state and extracting classical information, i.e., quantum-to-classical (Q2C) data decoding. The proposed emulation methodology is used to investigate and optimize methods for C2Q/Q2C data encoding/decoding, as well as several important quantum algorithms such as the Quantum Fourier Transform (QFT), Quantum Haar Transform (QHT), and Quantum Grover's Search (QGS).
This work delivers contributions in terms of reducing the complexity of quantum circuits, extending and optimizing quantum algorithms, and developing new quantum applications. For higher emulation performance and scalability of the framework, hardware design techniques and architectural optimizations are investigated and proposed. The emulation architectures are designed and implemented on a high-performance reconfigurable computer (HPRC), and the proposed quantum circuits are implemented on a state-of-the-art quantum processor. Experimental results show that the proposed hardware architectures enable emulation of quantum algorithms with higher scalability, higher accuracy, and higher throughput compared to existing hardware-based emulators. As a case study, quantum image processing using multi-spectral images is considered for the experimental evaluations.
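The C2Q/algorithm/Q2C pipeline can be mimicked in software with a tiny state-vector emulation. This is a generic illustration of the three stages, not the FPGA architecture itself; the toy data and the dense-matrix QFT are assumptions made for clarity (a hardware emulator would use a factored butterfly structure instead).

```python
import numpy as np

# C2Q encoding: 2^n classical values become the amplitudes of an n-qubit state.
data = np.array([1.0, 3.0, 2.0, 0.0, 4.0, 1.0, 0.0, 1.0])   # toy input, n = 3
state = data / np.linalg.norm(data)                          # unit-norm state vector

# Emulated algorithm: apply the QFT as its dense N x N unitary matrix.
N = state.size
w = np.exp(2j * np.pi / N)
QFT = np.array([[w**(r * k) for k in range(N)] for r in range(N)]) / np.sqrt(N)
out = QFT @ state

# Q2C decoding: on real hardware, measurement samples basis states with
# probability |amplitude|^2; in emulation the distribution is read directly.
probs = np.abs(out) ** 2
```

Because the QFT is unitary, the outcome probabilities still sum to one, which is the invariant any emulator of the Q2C stage must preserve.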


Cecelia Horan

Open-Source Intelligence Investigations: Development and Application of Efficient Tools

When & Where:


2001B Eaton Hall

Committee Members:

Hossein Saiedian, Chair
Drew Davidson
Fengjun Li


Abstract

Open-source intelligence is a branch of cybercrime investigation that focuses on information collection and aggregation. Through this aggregation, investigators and analysts can analyze the data for connections relevant to the investigation. Many tools assist with information collection and aggregation; however, these often require enterprise licensing. An alternative to enterprise-licensed tools is using open-source tools to collect information, often by scraping websites. These tools provide useful information, but they produce a large number of disjointed reports. The framework we developed automates information collection, aggregates these reports, and generates a single graphical report. By using a graphical report, the time required for analysis is also reduced. The framework can be applied to different kinds of investigations. We performed a case study of the framework's performance on missing person case information, which showed a significant improvement in the time required for information collection and report analysis.
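The aggregation step can be sketched as merging per-tool reports into one entity graph. The entity names, relations, and report shapes below are hypothetical, invented for the example; they stand in for whatever the individual scrapers emit.

```python
from collections import defaultdict

# Hypothetical tool outputs: each scraper emits (entity, relation, entity) facts.
report_a = [("jdoe", "email", "jdoe@example.com"), ("jdoe", "username_on", "siteA")]
report_b = [("jdoe@example.com", "registered_at", "siteB"), ("jdoe", "alias", "johnd")]

def aggregate(*reports):
    """Merge disjoint per-tool reports into one graph keyed by entity."""
    graph = defaultdict(set)
    for report in reports:
        for src, rel, dst in report:
            graph[src].add((rel, dst))
    return graph

graph = aggregate(report_a, report_b)
# Every collected fact about "jdoe" is now reachable from a single node,
# ready to be rendered as one graphical report.
```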


Ishrak Haye

Invernet: An Adversarial Attack Framework to Infer Downstream Context Distribution Through Word Embedding Inversion

When & Where:


Nichols Hall, Room 246

Committee Members:

Bo Luo, Chair
Zijun Yao, Co-Chair
Alex Bardas
Fengjun Li

Abstract

Word embedding has become a popular form of data representation that is used to train deep neural networks in many natural language processing tasks, such as machine translation, question answer generation, named entity recognition, and next word/sentence prediction. With embedding, each word is represented as a dense vector that captures its semantic relationship with other words and can better empower machine learning models to achieve state-of-the-art performance.

However, due to the memory- and time-intensive nature of learning such word embeddings, transfer learning has emerged as a common practice to warm-start the training process. An efficient approach is to initialize with pretrained word vectors and then fine-tune those on smaller, domain-specific downstream datasets. This study aims to determine whether we can infer the contextual distribution (i.e., how words co-occur in a sentence, driven by syntactic regularities) of the downstream datasets, given access to the embeddings from both the pre-training and fine-tuning processes.

In this work, we propose a focused sampling method along with a novel model inversion architecture, "Invernet", to invert word embeddings into the word-to-word context information of the fine-tuned dataset. We consider the popular Word2Vec models, including CBOW and SkipGram, and GloVe-based algorithms with various unsupervised settings. We conduct an extensive experimental study on two real-world news datasets, Antonio Gulli's News Dataset from the Hugging Face repository and a New York Times dataset, from both quantitative and qualitative perspectives. Results show that "Invernet" achieves an average F1 score of 0.75 and an average AUC score of 0.85 in an attack scenario.

A concerning pattern from our experiments reveals that embedding models that are generally considered superior in different tasks tend to be more vulnerable to model inversion. Our results suggest that a significant amount of context distribution information from the downstream dataset can potentially leak if an attacker gains access to the pretrained and fine-tuned word embeddings. As a result, attacks using "Invernet" can jeopardize the privacy of the users whose data might have been used to fine-tune the word embedding model.
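The leakage intuition can be illustrated with a toy signal that is much simpler than Invernet: words that co-occur in the fine-tuning corpus tend to drift toward each other, so the similarity gained between pre-trained and fine-tuned vectors hints at downstream co-occurrence. The embeddings, vocabulary, and drift model below are entirely synthetic assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy embeddings: pretrained vectors, plus fine-tuned copies in
# which two words that co-occur in the downstream corpus drift toward each other.
pre = {w: rng.normal(size=16) for w in ["market", "stocks", "goal", "coach"]}
fine = {w: v.copy() for w, v in pre.items()}
fine["market"] = 0.7 * pre["market"] + 0.3 * pre["stocks"]   # simulated drift
fine["stocks"] = 0.7 * pre["stocks"] + 0.3 * pre["market"]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cooccurrence_score(w1, w2):
    """Similarity gained during fine-tuning, used as a co-occurrence signal."""
    return cos(fine[w1], fine[w2]) - cos(pre[w1], pre[w2])
```

The pair that drifted together scores higher than a pair untouched by fine-tuning, which is the kind of signal a real inversion attack amplifies with a learned model.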


Sohaib Kiani

Designing Secure and Robust Machine Learning Models

When & Where:


Nichols Hall, Room 250, Gemini Room

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Cuncong Zhong
Xuemin Tu

Abstract

With growing computational power and the enormous amounts of data available from many sectors, applications with machine learning (ML) components are widely adopted in our everyday lives. One major drawback of ML models is that it is hard to guarantee consistent performance in a changing environment, since ML models are not traditional software that can be tested end-to-end. ML models are vulnerable to distributional shifts and cyber-attacks. Various cyber-attacks against deep neural networks (DNN) have been proposed in the literature, such as poisoning, evasion, backdoor, and model inversion attacks. In evasion attacks against a DNN, the attacker generates adversarial instances that are visually indistinguishable from benign samples and sends them to the target DNN to trigger misclassifications.

In our work, we proposed a novel multi-view adversarial image detector, named 'Argos', based on a novel observation: there exist two "souls" in an adversarial instance, i.e., the visually unchanged content, which corresponds to the true label, and the added invisible perturbation, which corresponds to the misclassified label. Such inconsistencies can be further amplified through an autoregressive generative approach that generates images with seed pixels selected from the original image, a selected label, and pixel distributions learned from the training data. The generated images (i.e., the "views") will deviate significantly from the original one if the label is adversarial, demonstrating the inconsistencies that 'Argos' expects to detect. To this end, 'Argos' first amplifies the discrepancies between the visual content of an image and its attack-induced misclassified label using a set of regeneration mechanisms, and then identifies an image as adversarial if the reproduced views deviate beyond a preset degree. Our experimental results show that 'Argos' significantly outperforms two representative adversarial detectors in both detection accuracy and robustness against six well-known adversarial attacks.


Timothy Barclay

Proof-Producing Synthesis of CakeML from Coq

When & Where:


Nichols Hall, Room 246

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Matthew Moore
Eileen Nutting

Abstract

Coq's extraction plugin is used to produce code in a general-purpose programming language from a specification written in the Calculus of Inductive Constructions (CIC). Currently, this mechanism is trusted, since there is no formal connection between the synthesized code and the CIC terms it originated from. This stems from a lack of formal specifications for the target languages: OCaml, Haskell, and Scheme. We intend to use the formally specified CakeML language as an extraction target and to generate a theorem in Coq that relates the generated CakeML abstract syntax to the CIC terms it is generated from. This work expands on the techniques used in the HOL4 translator from Higher Order Logic to CakeML. The HOL4 translator also allows for the generation of stateful code from the state and exception monad. We expand on their techniques by extracting terms with dependent types and generating stateful code for other kinds of monads, such as the reader monad, depending on what kind of computation the monad is intended to represent.


Grant Jurgensen

A Verified Architecture for Trustworthy Remote Attestation

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Perry Alexander, Chair
Drew Davidson
Matthew Moore


Abstract

Remote attestation is a process where one digital system gathers and provides evidence of its state and identity to an external system. For this process to be successful, the external system must find the evidence convincingly trustworthy within that context. Remote attestation is difficult to make trustworthy due to the external system’s limited access to the attestation target. In contrast to local attestation, the appraising system is unable to directly observe and oversee the attestation target. In this work, we present a system architecture design and prototype implementation that we claim enables trustworthy remote attestation. Furthermore, we formally model the system within a temporal logic embedded in the Coq theorem prover and present key theorems that strengthen this trust argument.


Kaidong Li

Accurate and Robust Object Detection and Classification Based on Deep Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Haiyang Chao

Abstract

Recent years have seen tremendous developments in the field of computer vision and its extensive applications. The fundamental task, image classification, benefiting from deep convolutional neural networks' (CNNs') extraordinary ability to extract deep semantic information from input data, has become the backbone for many other computer vision tasks, like object detection and segmentation. A modern detector usually performs bounding-box regression and class prediction with a pre-trained classification model as the backbone. This architecture is proven to produce good results; however, closer inspection reveals room for improvement. A detector takes a CNN pre-trained on the classification task and selects the final bounding boxes from multiple proposed regional candidates by a process called non-maximum suppression (NMS), which picks the best candidates by ranking their classification confidence scores. Localization evaluation is absent from the entire process. Another issue is that classification uses one-hot encoding to label the ground truth, resulting in an equal penalty for misclassifications between any two classes without considering the inherent relations between the classes.

My research aims to address these issues. (1) We proposed the first location-aware detection framework, which can be integrated into any single-shot detector. It boosts detection performance by calibrating the ranking process in NMS with localization scores. (2) To back-propagate gradients more effectively, we designed a super-class guided architecture that consists of a super-class branch (SCB) and a finer class branch (FCB). To further increase effectiveness, the features from the SCB, which carry high-level information, are fed to the FCB to guide finer class predictions. (3) Recent works have shown that 3D point cloud models are extremely vulnerable to adversarial attacks, which poses a serious threat to many critical applications like autonomous driving and robotic control. To increase the robustness of models on 3D point clouds, we propose a family of robust structured declarative classifiers for point cloud classification, whose internal constrained optimization mechanism can effectively defend against adversarial attacks through implicit gradients.
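The calibration idea in (1) can be sketched in a few lines: rank NMS candidates by the product of classification and localization scores rather than classification confidence alone. This is a generic illustration with made-up boxes and scores, not the dissertation's detector or its exact scoring function.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def location_aware_nms(boxes, cls_scores, loc_scores, iou_thr=0.5):
    """NMS ranked by classification * localization score instead of
    classification confidence alone."""
    order = sorted(range(len(boxes)),
                   key=lambda i: cls_scores[i] * loc_scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11]]          # two heavily overlapping candidates
keep = location_aware_nms(boxes, cls_scores=[0.9, 0.8], loc_scores=[0.5, 0.9])
# Confidence-only ranking would keep box 0; the localization-aware ranking
# instead keeps the better-localized box 1 and suppresses box 0.
```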


Christian Daniel

Dynamic Metasurface Grouping for IRS Optimization in Massive MIMO Communications

When & Where:


246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Taejoon Kim, Co-Chair
Morteza Hashemi


Abstract

Intelligent Reflecting Surfaces (IRSs) grant the ability to control what was once considered the uncontrollable part of wireless communications: the channel. These smart signal mirrors show promise to significantly improve the effective signal-to-noise ratio (SNR) of cell users when the line-of-sight (LOS) channel between the base station (BS) and user is blocked. IRSs use implementable optimized phase shifts that beamform a reflected signal around channel blockages, and because they are passive devices, they have the benefit of low cost and low power consumption. Previous works have concluded that IRSs need several hundred elements to outperform relays. Unfortunately, the overhead and complexity costs of optimizing these devices limit their scope to single-input single-output (SISO) systems. With multiple-input multiple-output (MIMO) and Massive MIMO becoming crucial components of modern 5G and beyond networks, a way to mitigate these overhead costs and integrate IRS technology with promising MIMO techniques is paramount for these devices to have a place within modern cell technologies. This thesis proposes an IRS element grouping scheme that greatly reduces the number of unique IRS phases that must be calculated and sent to the IRS controller via the limited-rate feedback channel, and allows the ideal number of groups to be obtained at the BS before data transmission. Three methods are proposed to design the phase shifts and element partitioning within our scheme to improve the effective SNR in an IRS-aided system. In our simulations, the best-performing method is one that dynamically partitions the IRS elements into non-uniform groups based on information gathered from the reflected channel and then optimizes their phase shifts. This method successfully handles the overhead trade-off problem and shows significant achievable-rate improvement over previous works.
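The grouping trade-off can be sketched on a scalar toy channel: per-element phase optimization gives the best gain but requires N feedback values, while one shared phase per group needs only G values at a modest gain loss. The channel model, sizes, and uniform partition below are assumptions for illustration, not the thesis's dynamic non-uniform scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
N, G = 64, 8                                        # IRS elements, groups (assumed sizes)
h = rng.normal() + 1j * rng.normal()                # direct BS-user channel
v = rng.normal(size=N) + 1j * rng.normal(size=N)    # cascaded BS-IRS-user gains

def effective_gain(phases):
    """Magnitude of the combined direct + reflected channel."""
    return abs(h + np.sum(v * np.exp(1j * phases)))

# Per-element optimum: rotate every cascaded term onto the direct path.
full = np.angle(h) - np.angle(v)

# Grouped scheme: all elements of a group share one phase, chosen to align
# the group's summed gain with the direct path -- only G values cross the
# limited-rate feedback channel instead of N.
grouped = np.empty(N)
for g in np.array_split(np.arange(N), G):
    grouped[g] = np.angle(h) - np.angle(v[g].sum())
```

The grouped configuration recovers most of the per-element gain while still clearly beating an unoptimized surface.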


Theresa Moore

Array Manifold Calibration for Multichannel SAR Sounders

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

James Stiles, Chair
Shannon Blunt
Carl Leuschen
John Paden
Leigh Stearns

Abstract

Multichannel synthetic aperture radar (SAR) ice sounders rely on parametric angle estimators in tomography to resolve elevation angle beyond the Rayleigh resolution limit of their cross-track arrays. The potential super-resolution capability of these techniques is predicated on perfect knowledge of the array's response to directional sources, referred to as the array manifold. Array manifold calibration improves angle estimator performance by reducing the mismatch between the model of the array's transfer function and truth; its study straddles the fields of both signal processing and antenna theory, yet the associated literature reveals dichotomous methodologies that perpetuate fragmented interpretations of the manifold calibration problem. This dissertation addresses calibration for SAR ice sounders that three-dimensionally image ice sheet and glacier beds with tomographic techniques. The approach is rooted first in array signal processing but seeks a more unifying perspective of the manifold calibration problem by leveraging commercial computational electromagnetics software to understand error mechanisms and algorithm performance with a deterministic model of an electromagnetic manifold. The research outlined here proposes the creation of large snapshot databases that aid in identifying calibration targets in SAR pixels with known arrival angles. The signal processing methodology taxonomizes manifold calibration into parametric and nonparametric forms and advances both in the context of SAR sounders. A parametric estimator of nonlinear manifold parameters that are common across disjoint sets is derived. The algorithm framework capitalizes on a snapshot database to aggregate many angularly diverse observations in estimating unknown model parameters. The technique, which handles multitarget calibration, is desirable in the SAR sounder problem but requires a parametric model of the angle-dependent manifold.
Nonparametric calibration techniques characterize the array response over the field of view but require many observations of single sources over dense calibration grids. A subspace clustering technique is proposed to identify snapshots with a single dominant source, thereby enabling a principal components-based characterization of the sounder manifold. The measured manifold leads to significant performance improvements over the traditional array response model in tomography. These results indicate that manifold calibration will reduce uncertainty in sounder-derived maps of the subsurface, leading to more accurate estimates of total fresh ice volume.
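The single-dominant-source screening step can be illustrated with a simple eigenvalue-dominance test on the snapshot sample covariance; the actual subspace clustering is more involved, and the array geometry, arrival angles, and noise level below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)
M, K = 8, 200                          # array channels, snapshots per pixel (assumed)

def steering(theta):
    """Half-wavelength ULA response toward angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

def dominance(snapshots):
    """Fraction of energy on the principal eigenvector of the sample
    covariance; close to 1 when a single source dominates the pixel."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eig = np.linalg.eigvalsh(R)        # ascending order
    return eig[-1] / eig.sum()

def source(theta):
    return steering(theta)[:, None] * (rng.normal(size=K) + 1j * rng.normal(size=K))

noise = 0.1 * (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K)))
one_src = source(0.2) + noise          # candidate calibration pixel
two_src = one_src + source(-0.5)       # cluttered pixel with a second arrival
```

Pixels whose dominance score is near one are kept as calibration snapshots; cluttered pixels with multiple arrivals score markedly lower and are rejected.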


Shravan Kaundinya

Investigative Development of a UWB Radar for UAS-borne Applications

When & Where:


Nichols Hall, Room 317

Committee Members:

Carl Leuschen, Chair
Christopher Allen
Fernando Rodriguez-Morales
Emily Arnold

Abstract

Over the last few years, one of the primary focuses in engineering development has been system packaging and miniaturization. This is apparent in various areas such as the rise of Internet of Things (IoT), CubeSats, and Unmanned Aerial Systems (UAS). The simultaneous miniaturization in multiple industries has enabled advancements in remote sensing instrument development. Sensors such as radars, lidars, and cameras are used on UAS to characterize various aspects of the Earth System like ice, soil, and vegetation, thereby improving our understanding. In this work, an Ultra-wideband (UWB) radar system design for the Vapor 55 UAS rotorcraft is investigated. A compact, lightweight 2 – 18 GHz Frequency Modulated Continuous Wave (FMCW) radar with two channels on transmit and receive is designed to characterize extended targets like soil and snow. This thesis reports initial proof-of-concept field measurements performed with soil as the target to identify backscatter signatures that are indicative of moisture content. The thesis also describes the exploratory design, development, and laboratory test results of the miniaturized radar electronics and compact antenna front-end.