Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency-division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves resonating radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
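As an illustration of the claimed envelope shape, here is a minimal numpy sketch of a Maxwellian-form envelope, f(ν) ∝ ν² exp(−ν²/2a²), where the single shape parameter a stands in for the quantity determined by the mode field radius; the value used below is invented for illustration, not a measured one.

```python
import numpy as np

def maxwellian_envelope(freq, a):
    """Maxwellian-shaped envelope; 'a' is the single shape parameter
    standing in for the mode-field-radius-determined quantity."""
    x = np.asarray(freq, dtype=float)
    return (x ** 2 / a ** 3) * np.exp(-x ** 2 / (2.0 * a ** 2))

a = 0.4                          # illustrative shape parameter, GHz
f = np.linspace(0.0, 2.0, 401)   # frequency grid, GHz
env = maxwellian_envelope(f, a)

# A Maxwellian of this form peaks at sqrt(2) * a
f_peak = f[np.argmax(env)]
print(f_peak)
```

Because the whole envelope is fixed by the one parameter a, fitting a measured loss spectrum to this shape yields the mode field radius directly.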

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous multi-target range and velocity detection is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity of the chirp and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.
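The quoted spatial resolution can be sanity-checked against the usual OFDR two-point resolution relation Δz = c/(2·n·B); a one-line computation, assuming a group index near 1.47 for silica fiber (the abstract does not state the index it uses):

```python
# Theoretical OFDR two-point resolution: dz = c / (2 * n_g * B)
c = 299_792_458.0      # speed of light, m/s
n_g = 1.47             # assumed group index of silica fiber
B = 26e9               # chirp bandwidth, Hz (from the abstract)

dz_mm = c / (2 * n_g * B) * 1e3
print(f"{dz_mm:.2f} mm")   # theoretical limit; windowing broadens this toward ~5 mm
```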


Past Defense Notices

Hara Madhav Talasila

Radiometric Calibration of Radar Depth Sounder Data Products

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Christopher Allen
James Stiles
Jilu Li
Leigh Stearns

Abstract

Although the Center for Remote Sensing of Ice Sheets (CReSIS) performs several radar calibration steps to produce Operation IceBridge (OIB) radar depth sounder data products, these datasets are not radiometrically calibrated and the swath array processing uses ideal (rather than measured [calibrated]) steering vectors. Any errors in the steering vectors, which describe the response of the radar as a function of arrival angle, will lead to errors in positioning and backscatter that subsequently affect estimates of basal conditions, ice thickness, and radar attenuation. Scientific applications that estimate physical characteristics of surface and subsurface targets from the backscatter are limited with the current data because it is not absolutely calibrated. Moreover, changes in instrument hardware and processing methods for OIB over the last decade affect the quality of inter-seasonal comparisons. Recent methods which interpret basal conditions and calculate radar attenuation using CReSIS OIB 2D radar depth sounder echograms are forced to use relative scattering power, rather than absolute methods.

As an active target calibration is not possible for past field seasons, a method that uses natural targets will be developed. Unsaturated natural target returns from smooth sea-ice leads or lakes are imaged in many datasets and have known scattering responses. The proposed method forms a system of linear equations with the recorded scattering signatures from these known targets, scattering signatures from crossing flight paths, and the radiometric correction terms. A least squares solution to optimize the radiometric correction terms is calculated, which minimizes the error function representing the mismatch in expected and measured scattering. The new correction terms will be used to correct the remaining mission data. The radar depth sounder data from all OIB campaigns can be reprocessed to produce absolutely calibrated echograms for the Arctic and Antarctic. A software simulator will be developed to study calibration errors and verify the calibration software. The software for processing natural targets and crossovers will be made available in CReSIS’s open-source polar radar software toolbox. The OIB data will be reprocessed with new calibration terms, providing to the data user community a complete set of radiometrically calibrated radar echograms for the CReSIS OIB radar depth sounder for the first time.
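The calibration approach described above amounts, in its simplest form, to an overdetermined linear system in the per-campaign correction terms. A toy numpy illustration follows; the observation matrix and mismatch values are invented, and the actual system in the thesis is far larger and built from real scattering signatures.

```python
import numpy as np

# Each row is one observation relating measured power (dB) to
# per-campaign radiometric offsets (dB).
# A[i, j] = involvement of campaign j in observation i.
A = np.array([[1., 0., 0.],    # known natural target, campaign 0
              [0., 1., 0.],    # known natural target, campaign 1
              [0., 0., 1.],    # known natural target, campaign 2
              [1., -1., 0.],   # crossover: campaign 0 vs campaign 1
              [0., 1., -1.]])  # crossover: campaign 1 vs campaign 2
# Mismatch between expected and measured scattering (dB), illustrative
b = np.array([2.0, -1.0, 0.5, 3.1, -1.4])

# Least-squares radiometric correction terms, one per campaign
corr, *_ = np.linalg.lstsq(A, b, rcond=None)
print(corr.round(2))
```

The solution minimizes the error function representing the expected/measured scattering mismatch, exactly as in the abstract; the real problem simply has many more rows.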


Daniel Herr

Information Theoretic Waveform Design with Application to Physically Realizable Adaptive-on-Transmit Radar

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Committee Members:

James Stiles, Chair
Christopher Allen
Carl Leuschen
Chris Depcik

Abstract

The fundamental task of a radar system is to utilize the electromagnetic spectrum to sense a scattering environment and generate some estimate from this measurement. This task can be posed as a Bayesian estimation problem of random parameters (the scattering environment) through an imperfect sensor (the radar system). From this viewpoint, metrics such as error covariance and estimator precision (or information) can be leveraged to evaluate and improve the performance of radar systems. Here, physically realizable radar waveforms are designed to maximize the Fisher information (FI) (specifically, a derivative of FI known as marginal Fisher information (MFI)) extracted from a scattering environment thereby minimizing the expected error covariance about an estimation parameter space. This information theoretic framework, along with the high-degree of design flexibility afforded by fully digital transmitter and receiver architectures, creates a high-dimensionality design space for optimizing radar performance.

First, the problem of joint-domain range-Doppler estimation utilizing a pulse-agile radar is posed from an estimation theoretic framework, and the minimum mean square error (MMSE) estimator is shown to suppress the range-sidelobe modulation (RSM) induced by pulse agility which may improve the signal-to-interference-plus-noise ratio (SINR) in signal-limited scenarios. A computationally efficient implementation of the range-Doppler MMSE estimator is developed as a series of range-profile estimation problems, under specific modeling and statistical assumptions. Next, a transformation of the estimation parameterization is introduced which ameliorates the high noise-gain typically associated with traditional MMSE estimation by sacrificing the super-resolution achieved by the MMSE estimator. Then, coordinate descent and gradient descent optimization methods are developed for designing MFI optimal waveforms with respect to either the original or transformed estimation space. These MFI optimal waveforms are extended to provide pulse-agility, which produces high-dimensionality radar emissions amenable to non-traditional receive processing techniques (such as MMSE estimation). Finally, informationally optimal waveform design and optimal estimation are extended into a cognitive radar concept capable of adaptive and dynamic sensing. The efficacy of the MFI waveform design and MMSE estimation is demonstrated via open-air hardware experimentation, where their performance is compared against traditional techniques.
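As background for the MMSE estimation mentioned above, here is a small numpy sketch of the standard linear MMSE estimator x̂ = P Aᴴ (A P Aᴴ + R)⁻¹ y under a generic measurement model y = A x + n. The dimensions, covariances, and waveform matrix are invented stand-ins, not the thesis's actual radar model.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 32, 48   # scatterers and receive samples (invented sizes)
# Invented complex "waveform" matrix mapping scatterers to measurements
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
P = np.eye(N)           # prior scatterer covariance (assumed white)
R = 0.1 * np.eye(M)     # noise covariance

# Simulated scene and noisy measurement
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
n = np.sqrt(0.05) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = A @ x + n

# Linear MMSE estimator: x_hat = P A^H (A P A^H + R)^{-1} y
W = P @ A.conj().T @ np.linalg.inv(A @ P @ A.conj().T + R)
x_hat = W @ y

mmse_err = float(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
print(round(mmse_err, 3))
```

The noise-covariance term in the inverse is what regularizes the estimate and, in the pulse-agile setting of the thesis, suppresses the range-sidelobe modulation that plain matched filtering leaves behind.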


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions to distinct and specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capabilities modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers through a volume-of-interest. Conversely, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar methodologies that seek to illuminate all space on transmit, while forming separate but simultaneous, directive beams on receive.

Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation.  In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally-continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error minimization techniques. First, a gradient-based optimization of the Space-Frequency Template Error (SFTE) is applied to a high-fidelity model for a wideband array’s far-field emission. Second, a more efficient optimization is considered, based on the SFTE for narrowband arrays. Finally, a suboptimal solution, based on alternating projections, is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars employing pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived that manages to suppress both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. The proposed waveforms and filters are implemented in hardware to demonstrate performance, validate robustness, and reflect real-world application to the degree possible with laboratory experimentation.
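The auto- and cross-correlation levels that such pulse-compression designs suppress can be quantified directly. Below is a small numpy sketch measuring the peak sidelobe level of one waveform and the peak cross-correlation between two constant-modulus random-phase waveforms, which serve only as illustrative stand-ins for the optimized quasi-orthogonal waveforms in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256

# Two constant-modulus random-phase waveforms (illustrative stand-ins)
s1 = np.exp(1j * 2 * np.pi * rng.random(N))
s2 = np.exp(1j * 2 * np.pi * rng.random(N))

def db(x):
    return 20 * np.log10(np.abs(x))

# Normalized correlations; mainlobe of the autocorrelation sits at lag 0,
# i.e. index N-1 of the "full" output
auto = np.correlate(s1, s1, mode="full") / N
cross = np.correlate(s1, s2, mode="full") / N

psl = db(np.delete(auto, N - 1)).max()   # peak autocorrelation sidelobe
pcl = db(cross).max()                    # peak cross-correlation level
print(f"PSL {psl:.1f} dB, peak cross-corr {pcl:.1f} dB")
```

Joint filter/waveform optimization pushes both of these numbers well below what the random-phase baseline shown here achieves, which is what expands the usable dynamic range.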


Anjana Lamsal

Self-homodyne Coherent Lidar System for Range and Velocity Detection

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Alessandro Salandrino
James Stiles


Abstract

Lidar systems are gaining popularity due to their benefits, including high resolution, accuracy, and scalability. An FMCW lidar based on a self-homodyne coherent detection technique is used for range and velocity measurement with a phase-diverse coherent receiver. The system employs a self-homodyne detection technique, in which the local oscillator (LO) signal is derived directly from the same laser source as the transmitted signal and carries the same linear chirp, thereby minimizing phase noise. A coherent receiver is employed to obtain the in-phase and quadrature components of the photocurrent and to perform de-chirping. Since the LO has the same chirp as the transmitted signal, the mixing process in the photodiodes effectively cancels the chirp, or frequency modulation, from the received signal. The spectrum of the de-chirped complex waveform is used to determine the range and velocity of the target. This lidar system simplifies signal processing by using the photodetectors for de-chirping. Additionally, after de-chirping, the resulting signal has a much narrower bandwidth than the original chirp signal, so signal processing can be performed at lower frequencies.
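For a single stationary target, the de-chirped output described above is a complex tone whose frequency maps linearly to range. A minimal numpy sketch of the range side only, with invented system parameters (the thesis system and its actual parameters differ):

```python
import numpy as np

c = 3e8          # speed of light, m/s
B = 1e9          # chirp bandwidth, Hz (invented)
T = 100e-6       # chirp duration, s
fs = 50e6        # sample rate after de-chirping (much lower than B)
R_true = 45.0    # target range, m

slope = B / T                 # chirp rate, Hz/s
tau = 2 * R_true / c          # round-trip delay
f_beat = slope * tau          # de-chirped beat frequency

# Ideal de-chirped complex tone, as produced by the I/Q coherent receiver
t = np.arange(int(fs * T)) / fs
beat = np.exp(1j * 2 * np.pi * f_beat * t)

# Range from the peak of the de-chirped spectrum
spec = np.fft.fft(beat)
freqs = np.fft.fftfreq(beat.size, 1 / fs)
f_est = abs(freqs[np.argmax(np.abs(spec))])
R_est = f_est * c / (2 * slope)
print(R_est)
```

Note the beat tone sits at a few MHz even though the chirp spans 1 GHz, which is exactly why the receiver bandwidth after de-chirping can be so modest; a moving target adds a Doppler offset that shifts this tone, separating range and velocity.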


Michael Neises

VERIAL: Verification-Enabled Runtime Integrity Attestation of Linux

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Drew Davidson
Cuncong Zhong
Matthew Moore
Michael Murray

Abstract

Runtime attestation is a way to gain confidence in the current state of a remote target. Layered attestation is a way of extending that confidence from one component to another. Introspective solutions for layered attestation require strict isolation. seL4 is uniquely well-suited to offer kernel properties sufficient to achieve such isolation. I design, implement, and evaluate introspective measurements and the layered runtime attestation of a Linux kernel hosted by seL4. VERIAL can detect Diamorphine-style rootkits with a performance cost comparable to previous work.

Ibikunle Oluwanisola

Towards Generalizable Deep Learning Algorithms for Echogram Layer Tracking

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Shannon Blunt, Chair
Carl Leuschen
James Stiles
Christopher Depcik

Abstract

The accelerated melting of ice sheets in Greenland and Antarctica, driven by climate warming, is significantly contributing to global sea level rise. To better understand this phenomenon, airborne radars have been deployed to create echogram images that map snow accumulation patterns in these regions. Utilizing advanced radar systems developed by the Center for Remote Sensing of Ice Sheets (CReSIS), around 1.5 petabytes of climate data have been collected. However, extracting ice-related information, such as accumulation rates, remains limited due to the largely manual and time-consuming process of tracking internal layers in radar echograms. This highlights the need for automated solutions.

Machine learning and deep learning algorithms are well-suited for this task, given their near-human performance on optical images. The overlap between classical radar signal processing and machine learning techniques suggests that combining concepts from both fields could lead to optimized solutions.

In this work, we developed custom deep learning algorithms for automatic layer tracking (both supervised and self-supervised) to address the challenge of limited annotated data and achieve accurate tracking of radiostratigraphic layers in echograms. We introduce an iterative multi-class classification algorithm, termed “Row Block,” which sequentially tracks internal layers from the top to the bottom of an echogram based on the surface location. This approach was used in an active learning framework to expand the labeled dataset. We also developed deep learning segmentation algorithms by framing the echogram layer tracking problem as a binary segmentation task, followed by post-processing to generate vector-layer annotations using a connected-component 1-D layer-contour extractor.
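As a toy illustration of turning a binary segmentation mask into vector layer picks, the sketch below takes the first active row per echogram column within a given row window. The thesis's connected-component 1-D layer-contour extractor is more sophisticated; this per-column stand-in only conveys the idea.

```python
import numpy as np

# Toy binary segmentation mask: rows = depth bins, cols = along-track traces
mask = np.zeros((8, 5), dtype=bool)
mask[2, :] = True          # a continuous internal layer at row 2
mask[5, [0, 1, 2]] = True  # a second, partial layer at row 5

def columns_to_layer(mask, row_window):
    """Pick one layer row index per column within a row window
    (a crude stand-in for connected-component contour grouping)."""
    lo, hi = row_window
    sub = mask[lo:hi, :]
    rows = np.full(mask.shape[1], -1)   # -1 marks 'no pick' in that column
    for col in range(mask.shape[1]):
        hits = np.flatnonzero(sub[:, col])
        if hits.size:
            rows[col] = lo + hits[0]
    return rows

layer1 = columns_to_layer(mask, (0, 4))
layer2 = columns_to_layer(mask, (4, 8))
print(layer1.tolist(), layer2.tolist())
```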

Additionally, we aimed to provide the deep learning and scientific communities with a large, fully annotated dataset. This was achieved by synchronizing radar data with outputs from a regional climate model, creating what are currently the two largest machine-learning-ready Snow Radar datasets available, with 10,000 and 50,000 echograms, respectively.


Durga Venkata Suraj Tedla

AI DIETICIAN

When & Where:


Zoom Defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Jennifer Lohoefener


Abstract

The artificially intelligent Dietician Web application is an innovative piece of technology that makes use of artificial intelligence to offer individualized nutritional guidance and assistance. This web application uses advanced machine learning algorithms and natural language processing to provide users with individualized nutritional advice and assistance in meal planning. Users who are interested in improving their eating habits can benefit from this bot. The system collects relevant data about users' dietary choices, as well as information about calories, and provides insights into body mass index (BMI) and basal metabolic rate (BMR) through interactive conversations, resulting in tailored recommendations. To enhance its capacity for prediction, a number of classification methods, including naive Bayes, neural networks, random forests, and support vector machines, were utilized and evaluated. Following an exhaustive analysis, the random forest model, which proved to be the most effective, was selected for incorporation into the artificial intelligence Dietician Web application. The purpose of this study is to emphasize the significance of the artificial intelligence Dietician Web application as a versatile and intelligent instrument that encourages the adoption of healthy eating habits and empowers users to make intelligent decisions regarding their dietary requirements.
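The BMI and BMR figures such a system reports follow standard formulas. A sketch using BMI = weight/height² and the Mifflin-St Jeor BMR equation, which is one common choice; the thesis does not specify which BMR variant it implements.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmr_mifflin_st_jeor(weight_kg, height_cm, age_yr, male):
    """Basal metabolic rate (kcal/day), Mifflin-St Jeor equation."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
    return base + 5 if male else base - 161

# Example: 70 kg, 1.75 m, 30-year-old male
print(round(bmi(70, 1.75), 1))
print(bmr_mifflin_st_jeor(70, 175, 30, True))
```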


Mohammed Atif Siddiqui

Understanding Soccer Through Data Science

When & Where:


Learned Hall, Room 2133

Committee Members:

Zijun Yao, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

Data science is revolutionizing the world of sports by uncovering hidden patterns and providing profound insights that enhance performance, strategy, and decision-making. This project, "Understanding Soccer Through Data Science," exemplifies the transformative power of data analytics in sports. By leveraging Graph Neural Networks (GNNs), this project delves deep into the intricate passing dynamics within soccer teams. 

A key innovation of this project is the development of a novel metric called PassNetScore, which aims to contextualize and provide meaningful insights into passing networks, a popular application of graph network theory in soccer. Utilizing StatsBomb Event Data, which captures every event during a soccer match, including passes, shots, fouls, and substitutions, this project constructs detailed passing network graphs. Each player is represented as a node and each pass as an edge, creating a comprehensive representation of team interactions on the pitch. The project harnesses the power of Spektral, a Python library for graph deep learning, to build and analyze these graphs. Key node features include players' average positions, total passes, and expected threat of passes, while edges encapsulate the passing interactions and pass counts.
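Before any GNN enters the picture, the node/edge construction can be sketched as a weighted adjacency matrix over pass events. A toy example with invented passes (real StatsBomb event rows carry many more fields than the passer/receiver pairs used here):

```python
import numpy as np

# Toy pass events: (passer, receiver) pairs, stand-ins for event rows
passes = [("A", "B"), ("A", "B"), ("B", "C"), ("C", "A"), ("B", "A")]

players = sorted({p for pair in passes for p in pair})
idx = {p: i for i, p in enumerate(players)}

# Weighted adjacency: adj[i, j] = number of passes from player i to j
adj = np.zeros((len(players), len(players)), dtype=int)
for src, dst in passes:
    adj[idx[src], idx[dst]] += 1

out_degree = adj.sum(axis=1)   # passes attempted per player
in_degree = adj.sum(axis=0)    # passes received per player
print(players, out_degree.tolist(), in_degree.tolist())
```

This adjacency matrix (binary or weighted) is exactly the object the GNN models described below consume, with node features attached per player.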

The project explores two distinct models to calculate PassNetScore by predicting match outcomes. The first model is a basic GNN that employs a binary adjacency matrix to represent the presence or absence of passes between players. This model captures the fundamental structure of passing networks, highlighting key players and connections within the team. There are three variations of this model, each building on the binary model by adding new features to nodes or edges. The second model integrates a GNN with Long Short-Term Memory (LSTM) networks to account for temporal dependencies in passing sequences. This advanced model provides deeper insights into how passing patterns evolve over time and how these dynamics impact match outcomes. To evaluate the effectiveness of these models, a suite of graph theory metrics is employed. These metrics illuminate the dynamics of team play and the influence of individual players, offering a comprehensive assessment of the PassNetScore metric.

Through this innovative approach, the project demonstrates the powerful application of GNNs in sports analytics and offers a novel metric for evaluating passing networks based on match outcomes. This project paves the way for new strategies and insights that could revolutionize how teams analyze and improve their gameplay, showcasing the profound impact of data science in sports.



Amalu George

Enhancing the Robustness of Bloom Filters by Introducing Dynamicity

When & Where:


Zoom Defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Sumaiya Shomaji, Chair
Hongyang Sun
Han Wang


Abstract

A Bloom Filter (BF) is a compact and space-efficient data structure that efficiently handles membership queries on infinite streams with numerous unique items. BFs are probabilistic data structures that allow false positives in exchange for compactness. When querying for an item's membership, a true response means the item might or might not be present in the stream, but a false response guarantees the item's absence. Bloom filters are widely used in real-world applications such as networking, databases, web applications, email spam filtering, biometric systems, security, cloud computing, and distributed systems due to their space-efficient and time-efficient properties. Bloom filters offer several advantages, particularly in storage compression and time-efficient data lookup. Additionally, the use of hashing ensures data security: if the BF is accessed by an unauthorized entity, no enrolled data can be reversed or traced back to the original content. In summary, BFs are powerful structures for storing data in a storage-efficient manner with low time complexity and high security. However, a disadvantage of traditional Bloom filters is that they do not support dynamic operations, such as adding or deleting elements. Therefore, this project demonstrates a Dynamic Bloom Filter that supports the addition and deletion of items. By integrating dynamic capabilities into standard Bloom filters, their functionality and robustness are enhanced, making them suitable for more applications. For example, in a perpetual inventory system, inventory records are constantly updated after every inventory-related transaction, such as sales, purchases, or returns. In banking, dynamic data changes throughout the course of transactions. In the healthcare domain, hospitals can dynamically update and delete patients' medical histories.
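One common way to obtain the add/delete behavior described above is a counting Bloom filter, which replaces each bit with a small counter. A minimal Python sketch follows; the design in this project may differ in its details.

```python
import hashlib

class CountingBloomFilter:
    """Bloom filter variant whose counters (instead of bits) permit
    deletion; false positives remain possible, false negatives do not
    (absent counter overflow or deleting items never added)."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indices(self, item):
        # Derive num_hashes positions by salting one cryptographic hash
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for i in self._indices(item):
            self.counters[i] += 1

    def remove(self, item):
        # Caller must ensure the item was added, or counters corrupt
        for i in self._indices(item):
            self.counters[i] -= 1

    def __contains__(self, item):
        return all(self.counters[i] > 0 for i in self._indices(item))

bf = CountingBloomFilter()
bf.add("patient-42")
print("patient-42" in bf)
bf.remove("patient-42")
print("patient-42" in bf)
```

The space cost is the counter width (typically 4 bits) per slot instead of 1 bit, which is the usual price paid for deletion support.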


Asadullah Khan

A Triad of Approaches for PCB Component Segmentation and Classification using U-Net, SAM, and Detectron2

When & Where:


Zoom Defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

The segmentation and classification of Printed Circuit Board (PCB) components offer multifaceted applications: primarily design validation, assembly verification, quality control optimization, and enhanced recycling processes. However, this field of study presents numerous challenges, mainly stemming from the heterogeneity of PCB component morphology and dimensionality, variations in packaging methodologies for functionally equivalent components, and limitations in the availability of image data.

This study proposes a triad of approaches consisting of two segmentation-based and a classification-based architecture for PCB component detection. The first segmentation approach introduces an enhanced U-Net architecture with a custom loss function for improved multi-scale classification and segmentation accuracy. The second segmentation method leverages transfer learning, utilizing the Segment Anything Model (SAM) developed by Meta’s FAIR lab for both segmentation and classification. Lastly, Detectron2 with a ResNeXt-101 backbone, enhanced by Feature Pyramid Network (FPN), Region Proposal Network (RPN), and Region of Interest (ROI) Align has been proposed for multi-scale detection. The proposed methods are implemented on the FPIC dataset to detect the most commonly appearing components (resistor, capacitor, integrated circuit, LED, and button) in PCB. The first method outperforms existing state-of-the-art networks without pre-training, achieving a DICE score of 94.05%, an IoU score of 91.17%, and an accuracy of 94.90%. On the other hand, the second one surpasses both the previous state-of-the-art network and U-net in segmentation, attaining a DICE score of 97.08%, an IoU score of 93.95%, and an accuracy of 96.34%. Finally, the third one, being the first transfer learning-based approach to perform individual component classification on PCBs, achieves an average precision of 89.88%. Thus, the proposed triad of approaches will play a promising role in enhancing the robustness and accuracy of PCB quality assurance techniques.