Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to the presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Daniel Herr

Information Theoretic Waveform Design with Application to Physically Realizable Adaptive-on-Transmit Radar

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Committee Members:

James Stiles, Chair
Christopher Allen
Carl Leuschen
Chris Depcik

Abstract

The fundamental task of a radar system is to utilize the electromagnetic spectrum to sense a scattering environment and generate some estimate from this measurement. This task can be posed as a Bayesian problem of estimating random parameters (the scattering environment) through an imperfect sensor (the radar system). From this viewpoint, metrics such as error covariance and estimator precision (or information) can be leveraged to evaluate and improve the performance of radar systems. Here, physically realizable radar waveforms are designed to maximize the Fisher information (FI) (specifically, a derivative of FI known as marginal Fisher information (MFI)) extracted from a scattering environment, thereby minimizing the expected error covariance over an estimation parameter space. This information-theoretic framework, along with the high degree of design flexibility afforded by fully digital transmitter and receiver architectures, creates a high-dimensional design space for optimizing radar performance.
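For reference, the standard estimation-theoretic relations behind this framing (our notation, not necessarily the dissertation's): the Fisher information is the expected outer product of the score of the likelihood, and it bounds the error covariance of any unbiased estimator via the Cramer-Rao inequality:

J(\theta) = \mathbb{E}\!\left[\nabla_{\theta}\ln p(\mathbf{y}\mid\theta)\,\nabla_{\theta}\ln p(\mathbf{y}\mid\theta)^{H}\right], \qquad \operatorname{cov}(\hat{\theta}) \succeq J(\theta)^{-1}

Maximizing the information a waveform extracts therefore tightens the achievable error covariance, which is the sense in which the MFI-optimal designs described below minimize expected estimation error.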

First, the problem of joint-domain range-Doppler estimation using a pulse-agile radar is posed in an estimation-theoretic framework, and the minimum mean square error (MMSE) estimator is shown to suppress the range-sidelobe modulation (RSM) induced by pulse agility, which may improve the signal-to-interference-plus-noise ratio (SINR) in signal-limited scenarios. A computationally efficient implementation of the range-Doppler MMSE estimator is developed as a series of range-profile estimation problems, under specific modeling and statistical assumptions. Next, a transformation of the estimation parameterization is introduced that ameliorates the high noise gain typically associated with traditional MMSE estimation by sacrificing the super-resolution achieved by the MMSE estimator. Then, coordinate descent and gradient descent optimization methods are developed for designing MFI-optimal waveforms with respect to either the original or transformed estimation space. These MFI-optimal waveforms are extended to provide pulse agility, which produces high-dimensional radar emissions amenable to non-traditional receive processing techniques (such as MMSE estimation). Finally, informationally optimal waveform design and optimal estimation are extended into a cognitive radar concept capable of adaptive and dynamic sensing. The efficacy of the MFI waveform design and MMSE estimation is demonstrated via open-air hardware experimentation, where their performance is compared against traditional techniques.
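As a concrete reference point for the MMSE estimation discussed above: under a linear Gaussian measurement model \mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n} (our illustrative notation; the dissertation's range-Doppler model is more structured), the MMSE estimate takes the familiar closed form

\hat{\mathbf{x}}_{\mathrm{MMSE}} = \mathbf{R}_x \mathbf{H}^{H}\left(\mathbf{H}\mathbf{R}_x\mathbf{H}^{H} + \mathbf{R}_n\right)^{-1}\mathbf{y}

where \mathbf{R}_x and \mathbf{R}_n are the prior and noise covariances. The matrix inverse grows quickly with the joint range-Doppler parameter space, which is what motivates a computationally efficient implementation of the kind mentioned above.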


Past Defense Notices


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions into distinct and specialized radars that may communicate actionable information between them. Expedited by the growth in computational capabilities modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers throughout a volume of interest. Conversely, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar methodologies that seek to illuminate all space on transmit while forming separate but simultaneous directive beams on receive.

Waveform diversity has repeatedly been proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation. In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally continuous trade space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable optimal management of the radar’s resources, trading Signal-to-Noise Ratio (SNR) for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error minimization techniques. First, a gradient-based optimization of the Space-Frequency Template Error (SFTE) is applied to a high-fidelity model of a wideband array’s far-field emission. Second, a more efficient optimization is considered, based on the SFTE for narrowband arrays. Finally, a suboptimal solution based on alternating projections is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars employing pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived that suppresses both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. The proposed waveforms and filters are implemented in hardware to demonstrate performance, validate robustness, and reflect real-world application to the degree possible with laboratory experimentation.
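As a small illustration of the coherent-versus-incoherent trade space described above, the sketch below evaluates a narrowband uniform-linear-array pattern for two excitations: a phase-ramp (fully coherent, one directive beam) and random phases (MIMO-like broad illumination). All parameters are illustrative assumptions, unrelated to the dissertation's actual arrays.

import numpy as np

def array_factor(weights, d_over_lambda, thetas):
    """Far-field pattern of a uniform linear array (narrowband model).

    weights: complex per-element excitations (shape [N])
    d_over_lambda: element spacing in wavelengths
    thetas: angles (radians) at which to evaluate the pattern
    """
    n = np.arange(len(weights))
    # Steering matrix: one column per look angle
    A = np.exp(2j * np.pi * d_over_lambda * np.outer(n, np.sin(thetas)))
    return weights.conj() @ A

thetas = np.linspace(-np.pi / 2, np.pi / 2, 721)
N = 16
# Fully coherent phased array: linear phase ramp steers one beam to +20 deg
coherent = np.exp(2j * np.pi * 0.5 * np.arange(N) * np.sin(np.deg2rad(20)))
# MIMO-like incoherent excitation: random phases spread energy in angle
rng = np.random.default_rng(0)
incoherent = np.exp(2j * np.pi * rng.random(N))

for w, name in [(coherent, "coherent"), (incoherent, "incoherent")]:
    af = np.abs(array_factor(w, 0.5, thetas))
    print(name, "peak/mean power ratio:", (af.max() / af.mean()) ** 2)

The coherent excitation concentrates power (high peak-to-mean ratio, i.e., SNR on a single look direction), while the incoherent one trades that gain for broad spatial coverage, which is the resource trade the abstract describes.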


Anjana Lamsal

Self-homodyne Coherent Lidar System for Range and Velocity Detection

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Alessandro Salandrino
James Stiles


Abstract

Lidar systems are gaining popularity due to their benefits, including high resolution, high accuracy, and scalability. Here, an FMCW lidar based on a self-homodyne coherent detection technique is used for range and velocity measurement with a phase-diverse coherent receiver. In the self-homodyne scheme, the local oscillator (LO) signal is derived directly from the same laser source as the transmitted signal and carries the same linear chirp, thereby minimizing phase noise. A coherent receiver extracts the in-phase and quadrature components of the photocurrent and performs de-chirping. Since the LO has the same chirp as the transmitted signal, the mixing process in the photodiodes effectively cancels the chirp, or frequency modulation, from the received signal. The spectrum of the de-chirped complex waveform is then used to determine the range and velocity of the target. This lidar system simplifies signal processing by using the photodetectors themselves for de-chirping; additionally, the de-chirped signal has a much narrower bandwidth than the original chirp, so subsequent processing can be performed at lower frequencies.
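For intuition, the textbook triangular-sweep FMCW relations below show how de-chirped beat frequencies map to range and radial velocity. The actual system recovers a complex de-chirped waveform from a phase-diverse receiver rather than separate up/down tones, but the underlying arithmetic is of this general form; all parameter values and sign conventions here are illustrative assumptions.

import numpy as np

C = 3.0e8  # speed of light, m/s

def range_velocity_from_beats(f_beat_up, f_beat_down, bandwidth, t_chirp, wavelength):
    """Recover range and radial velocity from the de-chirped beat tones
    of a triangular FMCW sweep (standard textbook relations).

    On the up-chirp the range and Doppler contributions subtract; on the
    down-chirp they add, so the two beat tones separate the contributions.
    """
    slope = bandwidth / t_chirp            # chirp rate, Hz/s
    f_range = 0.5 * (f_beat_up + f_beat_down)
    f_doppler = 0.5 * (f_beat_down - f_beat_up)
    rng = C * f_range / (2.0 * slope)      # R = c * f_r / (2 * slope)
    vel = wavelength * f_doppler / 2.0     # v = lambda * f_d / 2
    return rng, vel

# Illustrative example: 1 GHz sweep over 100 us, 1550 nm source
r, v = range_velocity_from_beats(6.60e6, 6.73e6,
                                 bandwidth=1e9, t_chirp=100e-6,
                                 wavelength=1550e-9)
print(f"range = {r:.2f} m, velocity = {v:.3f} m/s")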


Michael Neises

VERIAL: Verification-Enabled Runtime Integrity Attestation of Linux

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Drew Davidson
Cuncong Zhong
Matthew Moore
Michael Murray

Abstract

Runtime attestation is a way to gain confidence in the current state of a remote target, and layered attestation is a way of extending that confidence from one component to another. Introspective solutions for layered attestation require strict isolation, and the seL4 microkernel is uniquely well-suited to offer kernel properties sufficient to achieve it. I design, implement, and evaluate introspective measurements and the layered runtime attestation of a Linux kernel hosted by seL4. VERIAL can detect Diamorphine-style rootkits with a performance cost comparable to previous work.

Ibikunle Oluwanisola

Towards Generalizable Deep Learning Algorithms for Echogram Layer Tracking

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Shannon Blunt, Chair
Carl Leuschen
James Stiles
Christopher Depcik

Abstract

The accelerated melting of ice sheets in Greenland and Antarctica, driven by climate warming, is significantly contributing to global sea level rise. To better understand this phenomenon, airborne radars have been deployed to create echogram images that map snow accumulation patterns in these regions. Utilizing advanced radar systems developed by the Center for Remote Sensing and Integrated Systems (CReSIS), around 1.5 petabytes of climate data have been collected. However, extracting ice-related information, such as accumulation rates, remains limited due to the largely manual and time-consuming process of tracking internal layers in radar echograms. This highlights the need for automated solutions.

Machine learning and deep learning algorithms are well-suited for this task, given their near-human performance on optical images. The overlap between classical radar signal processing and machine learning techniques suggests that combining concepts from both fields could lead to optimized solutions.

In this work, we developed custom deep learning algorithms for automatic layer tracking (both supervised and self-supervised) to address the challenge of limited annotated data and achieve accurate tracking of radiostratigraphic layers in echograms. We introduce an iterative multi-class classification algorithm, termed “Row Block,” which sequentially tracks internal layers from the top to the bottom of an echogram based on the surface location. This approach was used in an active learning framework to expand the labeled dataset. We also developed deep learning segmentation algorithms by framing the echogram layer tracking problem as a binary segmentation task, followed by post-processing to generate vector-layer annotations using a connected-component 1-D layer-contour extractor.
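A minimal sketch of the kind of post-processing described above, converting a binary segmentation mask into per-layer vector annotations via connected components; function names and thresholds are our own illustrative choices, not the dissertation's.

import numpy as np
from scipy import ndimage

def extract_layer_contours(mask, min_width=20):
    """Turn a binary layer-segmentation mask into vector annotations:
    label connected components, then reduce each component to one row
    index per column (the layer contour).
    """
    labels, n = ndimage.label(mask)           # 4-connected components by default
    contours = []
    for k in range(1, n + 1):
        rows, cols = np.nonzero(labels == k)
        if np.unique(cols).size < min_width:   # drop small fragments
            continue
        layer = {}
        for c in np.unique(cols):
            layer[int(c)] = float(rows[cols == c].mean())  # layer center per column
        contours.append(layer)
    return contours

mask = np.zeros((100, 200), dtype=bool)
mask[30:33, :] = True                          # a synthetic full-width layer
mask[60:63, 50:180] = True                     # a second, partial layer
print(len(extract_layer_contours(mask)), "layers found")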

Additionally, we aimed to provide the deep learning and scientific communities with a large, fully annotated dataset. This was achieved by synchronizing radar data with outputs from a regional climate model, creating what are currently the two largest machine-learning-ready Snow Radar datasets available, with 10,000 and 50,000 echograms, respectively.


Durga Venkata Suraj Tedla

AI DIETICIAN

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for defense link.

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Jennifer Lohoefener


Abstract

The AI Dietician Web application uses artificial intelligence to offer individualized nutritional guidance and assistance. It applies machine learning algorithms and natural language processing to provide users with personalized nutritional advice and help with meal planning, benefiting users who want to improve their eating habits. Through interactive conversations, the system collects relevant data about users' dietary choices and caloric intake, provides insights into body mass index (BMI) and basal metabolic rate (BMR), and produces tailored recommendations. To enhance its predictive capacity, several classification methods, including naive Bayes, neural networks, random forests, and support vector machines, were implemented and evaluated. Following this analysis, the most effective model, random forest, was selected for incorporation into the application. This study emphasizes the significance of the AI Dietician Web application as a versatile and intelligent instrument that encourages healthy eating habits and empowers users to make informed decisions about their dietary requirements.
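For context, the two quantities the system reports can be computed as follows. The abstract does not specify which BMR formula the application uses, so the Mifflin-St Jeor equation below is an assumption.

def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, sex):
    """Basal metabolic rate (kcal/day) via the Mifflin-St Jeor equation,
    one widely used choice (assumed here, not confirmed by the abstract).
    """
    s = 5 if sex == "male" else -161
    return 10 * weight_kg + 6.25 * height_cm - 5 * age_years + s

print(f"BMI: {bmi(70, 1.75):.1f}")                                      # ~22.9
print(f"BMR: {bmr_mifflin_st_jeor(70, 175, 30, 'male'):.0f} kcal/day")  # ~1649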


Mohammed Atif Siddiqui

Understanding Soccer Through Data Science

When & Where:


Learned Hall, Room 2133

Committee Members:

Zijun Yao, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

Data science is revolutionizing the world of sports by uncovering hidden patterns and providing profound insights that enhance performance, strategy, and decision-making. This project, "Understanding Soccer Through Data Science," exemplifies the transformative power of data analytics in sports. By leveraging Graph Neural Networks (GNNs), this project delves deep into the intricate passing dynamics within soccer teams. 

A key innovation of this project is the development of a novel metric called PassNetScore, which aims to contextualize and provide meaningful insights into passing networks, a popular application of graph network theory in soccer. Utilizing StatsBomb event data, which captures every event during a soccer match, including passes, shots, fouls, and substitutions, this project constructs detailed passing network graphs. Each player is represented as a node and each pass as an edge, creating a comprehensive representation of team interactions on the pitch. The project harnesses Spektral, a Python library for graph deep learning, to build and analyze these graphs. Key node features include players’ average positions, total passes, and expected threat of passes, while edges encapsulate passing interactions and pass counts.
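A minimal sketch of how such a passing network might be assembled from event data; the field names and values below are hypothetical stand-ins, not the StatsBomb schema or the project's actual feature set.

import numpy as np

# Hypothetical pass events: who passed to whom, and from where
events = [
    {"passer": "A", "receiver": "B", "x": 40.0, "y": 30.0},
    {"passer": "B", "receiver": "C", "x": 55.0, "y": 42.0},
    {"passer": "A", "receiver": "C", "x": 38.0, "y": 28.0},
]

players = sorted({e["passer"] for e in events} | {e["receiver"] for e in events})
idx = {p: i for i, p in enumerate(players)}
n = len(players)

adj = np.zeros((n, n))            # edge weights: pass counts between players
pos_sum = np.zeros((n, 2))        # accumulator for average pass-origin position
pos_cnt = np.zeros(n)

for e in events:
    i, j = idx[e["passer"]], idx[e["receiver"]]
    adj[i, j] += 1
    pos_sum[i] += (e["x"], e["y"])
    pos_cnt[i] += 1

node_feats = pos_sum / np.maximum(pos_cnt, 1)[:, None]  # average positions
print(adj)         # weighted adjacency; binarize with (adj > 0) for the basic GNN
print(node_feats)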

The project explores two distinct models for calculating PassNetScore by predicting match outcomes. The first model is a basic GNN that employs a binary adjacency matrix to represent the presence or absence of passes between players. This model captures the fundamental structure of passing networks, highlighting key players and connections within the team. Three variations of this model each build on the binary model by adding new features to nodes or edges. The second model integrates the GNN with Long Short-Term Memory (LSTM) networks to account for temporal dependencies in passing sequences. This advanced model provides deeper insights into how passing patterns evolve over time and how these dynamics impact match outcomes. To evaluate the effectiveness of these models, a suite of graph theory metrics is employed. These metrics illuminate the dynamics of team play and the influence of individual players, offering a comprehensive assessment of the PassNetScore metric.

Through this innovative approach, the project demonstrates the powerful application of GNNs in sports analytics and offers a novel metric for evaluating passing networks based on match outcomes. This project paves the way for new strategies and insights that could revolutionize how teams analyze and improve their gameplay, showcasing the profound impact of data science in sports.



Amalu George

Enhancing the Robustness of Bloom Filters by Introducing Dynamicity

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for defense link.

Committee Members:

Sumaiya Shomaji, Chair
Hongyang Sun
Han Wang


Abstract

A Bloom Filter (BF) is a compact, space-efficient data structure that efficiently handles membership queries on large streams with numerous unique items. BFs are probabilistic structures that admit false positives in exchange for compactness: when querying for an item's membership, a true response means the item might or might not be present in the stream, but a false response guarantees the item's absence. Bloom filters are widely used in real-world applications such as networking, databases, web applications, email spam filtering, biometric systems, security, cloud computing, and distributed systems due to their space- and time-efficient properties. They offer particular advantages in storage compression and fast data lookup. Additionally, the use of hashing provides a measure of data security: if the BF is accessed by an unauthorized entity, no enrolled data can be reversed or traced back to its original content. In summary, BFs are powerful structures for storing data in a storage-efficient manner with low time complexity and high security. A disadvantage of traditional Bloom filters, however, is that they do not support dynamic operations such as deleting enrolled items. Therefore, this project demonstrates a Dynamic Bloom Filter that supports the addition and deletion of items. By integrating dynamic capabilities into standard Bloom filters, their functionality and robustness are enhanced, making them more suitable for several applications. For example, in a perpetual inventory system, inventory records are constantly updated after every inventory-related transaction, such as sales, purchases, or returns. In banking, dynamic data changes throughout the course of transactions. In the healthcare domain, hospitals can dynamically update and delete patients' medical histories.
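One common way to support deletion, sketched below, is a counting Bloom filter that replaces each bit with a small counter. This is illustrative only; the project's Dynamic Bloom Filter design may differ.

import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter: per-slot counters instead of bits
    make deletion possible (a standard technique, assumed for illustration).
    """
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.counts = [0] * m

    def _slots(self, item):
        # Derive k slot indices from k salted hashes of the item
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for s in self._slots(item):
            self.counts[s] += 1

    def delete(self, item):
        if item in self:                       # only decrement plausible members
            for s in self._slots(item):
                self.counts[s] -= 1

    def __contains__(self, item):
        return all(self.counts[s] > 0 for s in self._slots(item))

bf = CountingBloomFilter()
bf.add("sale:1001")
print("sale:1001" in bf)   # True
bf.delete("sale:1001")
print("sale:1001" in bf)   # False (absence is always reported correctly)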


Asadullah Khan

A Triad of Approaches for PCB Component Segmentation and Classification using U-Net, SAM, and Detectron2

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for defense link.

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

The segmentation and classification of Printed Circuit Board (PCB) components offer multifaceted applications, primarily design validation, assembly verification, quality control optimization, and enhanced recycling processes. However, this field of study presents numerous challenges, mainly stemming from the heterogeneity of PCB component morphology and dimensionality, variations in packaging methodologies for functionally equivalent components, and limitations in the availability of image data.

This study proposes a triad of approaches, consisting of two segmentation-based architectures and one classification-based architecture, for PCB component detection. The first segmentation approach introduces an enhanced U-Net architecture with a custom loss function for improved multi-scale classification and segmentation accuracy. The second segmentation method leverages transfer learning, utilizing the Segment Anything Model (SAM) developed by Meta’s FAIR lab for both segmentation and classification. Lastly, Detectron2 with a ResNeXt-101 backbone, enhanced by a Feature Pyramid Network (FPN), a Region Proposal Network (RPN), and Region of Interest (ROI) Align, is proposed for multi-scale detection. The proposed methods are implemented on the FPIC dataset to detect the most commonly appearing components (resistor, capacitor, integrated circuit, LED, and button) on PCBs. The first method outperforms existing state-of-the-art networks without pre-training, achieving a DICE score of 94.05%, an IoU score of 91.17%, and an accuracy of 94.90%. The second surpasses both the previous state-of-the-art network and U-Net in segmentation, attaining a DICE score of 97.08%, an IoU score of 93.95%, and an accuracy of 96.34%. Finally, the third, being the first transfer-learning-based approach to perform individual component classification on PCBs, achieves an average precision of 89.88%. Thus, the proposed triad of approaches will play a promising role in enhancing the robustness and accuracy of PCB quality assurance techniques.
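For reference, the DICE and IoU scores reported above are computed from binary masks as follows (a minimal sketch):

import numpy as np

def dice_and_iou(pred, truth):
    """Dice and IoU for binary masks.

    Dice = 2*intersection / (|P| + |T|);  IoU = intersection / union.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    return dice, inter / union

pred = np.zeros((8, 8)); truth = np.zeros((8, 8))
pred[2:6, 2:6] = 1; truth[3:7, 3:7] = 1
print(dice_and_iou(pred, truth))  # (0.5625, ~0.391)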


Zeyan Liu

On the Security of Modern AI: Backdoors, Robustness, and Detectability

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Zijun Yao
John Symons

Abstract

The rapid development of AI has significantly impacted security and privacy, introducing both new cyber-attacks targeting AI models and challenges related to responsible use. As AI models become more widely adopted in real-world applications, attackers exploit adversarially altered samples to manipulate their behaviors and decisions. Simultaneously, the use of generative AI, like ChatGPT, has sparked debates about the integrity of AI-generated content.

In this dissertation, we investigate the security of modern AI systems and the detectability of AI-related threats, focusing on stealthy AI attacks and responsible AI use in academia. First, we reevaluate the stealthiness of 20 state-of-the-art attacks on six benchmark datasets, using 24 image quality metrics and over 30,000 user annotations. Our findings reveal that most attacks introduce noticeable perturbations and fail to remain stealthy. Motivated by this, we propose a novel model-poisoning neural Trojan, LoneNeuron, which minimally modifies the host neural network by adding a single neuron after the first convolution layer. LoneNeuron responds to feature-domain patterns that transform into invisible, sample-specific, and polymorphic pixel-domain watermarks, achieving a 100% attack success rate without compromising main-task performance while enhancing stealth and detection resistance. Additionally, we examine the detectability of ChatGPT-generated content in academic writing. We present GPABench2, a dataset of over 2.8 million abstracts across various disciplines, and use it to assess existing detection tools and the challenges faced by over 240 human evaluators. We also develop CheckGPT, a detection framework consisting of an attentive Bi-LSTM and a representation module, to capture subtle semantic and linguistic patterns in ChatGPT-generated text. Extensive experiments validate CheckGPT’s high applicability, transferability, and robustness.
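To make the detector architecture concrete, below is a generic attentive Bi-LSTM classification head of the kind described. All dimensions and details are assumptions for illustration, not CheckGPT's published configuration.

import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    """Generic attentive Bi-LSTM text classifier: a bidirectional LSTM over
    token embeddings, attention-weighted pooling, then a linear head.
    """
    def __init__(self, emb_dim=768, hidden=256, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each token's hidden state
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                      # x: [batch, seq, emb_dim]
        h, _ = self.lstm(x)                    # [batch, seq, 2*hidden]
        w = torch.softmax(self.attn(h), dim=1) # attention weights over tokens
        ctx = (w * h).sum(dim=1)               # weighted pooling -> [batch, 2*hidden]
        return self.head(ctx)

model = AttentiveBiLSTM()
logits = model(torch.randn(4, 128, 768))       # e.g., frozen LM token embeddings
print(logits.shape)                            # torch.Size([4, 2])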


Abhishek Doodgaon

Photorealistic Synthetic Data Generation for Deep Learning-based Structural Health Monitoring of Concrete Dams

When & Where:


LEEP2, Room 1415A

Committee Members:

Zijun Yao, Chair
Caroline Bennett
Prasad Kulkarni
Remy Lequesne

Abstract

Regular inspections are crucial for identifying and assessing damage in concrete dams, including a wide range of damage states. Manual inspections of dams are often constrained by cost, time, safety, and inaccessibility. Automating dam inspections using artificial intelligence has the potential to improve the efficiency and accuracy of data analysis. Computer vision and deep learning models have proven effective in detecting a variety of damage features using images, but their success relies on the availability of high-quality and diverse training data. This is because supervised learning, a common machine-learning approach for classification problems, uses labeled examples, in which each training data point includes features (damage images) and a corresponding label (pixel annotation). Unfortunately, public datasets of annotated images of concrete dam surfaces are scarce and inconsistent in quality, quantity, and representation.

To address this challenge, we present a novel approach that involves synthesizing a realistic environment using a 3D model of a dam. By overlaying this model with synthetically created photorealistic damage textures, we can render images to generate large and realistic datasets with high-fidelity annotations. Our pipeline uses NX and Blender for 3D model generation and assembly, Substance 3D Designer and Substance Automation Toolkit for texture synthesis and automation, and Unreal Engine 5 for creating a realistic environment and rendering images. This generated synthetic data is then used to train deep learning models in the subsequent steps. The proposed approach offers several advantages. First, it allows generation of large quantities of data that are essential for training accurate deep learning models. Second, the texture synthesis ensures generation of high-fidelity ground truths (annotations) that are crucial for making accurate detections. Lastly, the automation capabilities of the software applications used in this process provides flexibility to generate data with varied textures elements, colors, lighting conditions, and image quality overcoming the constraints of time. Thus, the proposed approach can improve the automation of dam inspection by improving the quality and quantity of training data.