Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Zhaohui Wang

Detection and Mitigation of Cross-App Privacy Leakage and Interaction Threats in IoT Automation

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to everyday life, enabling users to deploy automation rules and develop IoT apps tailored to their specific needs. However, modern IoT ecosystems consist of numerous devices, applications, and platforms that interact continuously. As a result, users are increasingly exposed to complex and subtle security and privacy risks that are difficult to fully comprehend. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats. In addition, violations of memory integrity can undermine the security guarantees on which IoT apps rely. This dissertation presents several complementary approaches to detect and mitigate these threats.

The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app interaction chains formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores to evaluate risk levels based on inferences. In addition, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks.

The second approach addresses cross-app interaction threats in IoT automation systems by leveraging a logic-based analysis model grounded in event relations. We formalize event relationships, detect event interferences, and classify rule conflicts, then generate risk scores and conflict rankings to enable comprehensive conflict detection and risk assessment. To mitigate the identified interaction threats, an optimization-based approach is employed to reduce risks while preserving system functionality. This approach ensures comprehensive coverage of cross-app interaction threats and provides a robust solution for detecting and resolving rule conflicts in IoT environments.

To support the development and rigorous evaluation of these security analyses, we further developed a large-scale, manually verified, and comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. All proposed approaches are evaluated using this dataset of real-world apps, collectively offering valuable insights and practical tools for enhancing IoT security and privacy against cross-app threats. Furthermore, we examine the integrity of the execution environment that supports IoT apps. We show that, even under non-privileged execution, carefully crafted memory access patterns can induce bit flips in physical memory, allowing attackers to corrupt data and compromise system integrity without requiring elevated privileges.


Shawn Robertson

A Low-Power Low-Throughput Communications Solution for At-Risk Populations in Resource Constrained Contested Environments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

In resource-constrained contested environments (RCCEs), communications are routinely censored, surveilled, or disrupted by nation-state adversaries, leaving at-risk populations, including protesters, dissidents, disaster-affected communities, and military units, without secure connectivity. This dissertation introduces MeshBLanket, a Bluetooth Mesh-based framework designed for low-power, low-throughput messaging with minimal electromagnetic spectrum exposure. Built on commercial off-the-shelf hardware, MeshBLanket extends the Bluetooth Mesh specification with automated provisioning and network-wide key refresh to enhance scalability and resilience.

We evaluated MeshBLanket through field experimentation (range, throughput, battery life, and security enhancements) and qualitative interviews with ten senior U.S. Army communications experts. Thematic analysis revealed priorities of availability, EMS footprint reduction, and simplicity of use, alongside adoption challenges and institutional skepticism. Results demonstrate that MeshBLanket maintains secure messaging under load, supports autonomous key refresh, and offers operational relevance at the forward edge of battlefields.

Beyond military contexts, parallels with protest environments highlight MeshBLanket’s broader applicability for civilian populations facing censorship and surveillance. By unifying technical experimentation with expert perspectives, this work contributes a proof-of-concept communications architecture that advances secure, resilient, and user-centric connectivity in environments where traditional infrastructure is compromised or weaponized.


Past Defense Notices


Jonathan Owen

Real-Time Cognitive Sense-and-Notch Radar

When & Where:


Nichols Hall, Room 129, Ron Evans Apollo Auditorium

Committee Members:

Shannon Blunt, Chair
Chris Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Spectrum sensing combined with transmit waveform frequency notching is a form of cognitive radar that seeks to reduce mutual interference with other spectrum users in a cohabitated band. With the reality of increasing radio frequency (RF) spectral congestion, radar systems capable of dynamic spectrum sharing are needed. The cognitive sense-and-notch (SAN) emission strategy is experimentally demonstrated as an effective way to reduce the interference that the spectrum-sharing radar causes to other in-band users. The physical radar emission is based on a random FM waveform structure possessing attributes that are inherently robust to range-Doppler sidelobes. To contend with dynamic interference, the transmit notch may be required to move during the coherent processing interval (CPI), which introduces a nonstationarity effect that results in increased residual clutter after cancellation. This nonstationarity effect is characterized and compensated for using computationally efficient processing methods. The steps from initial analysis of cognitive system performance to real-time implementation of sense-and-notch radar spectrum sharing are discussed.


Nick Kellerman

A MISO Frequency Diverse Array Implementation

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Chris Allen
Shannon Blunt
James Stiles

Abstract

Estimating the spatial angle of arrival for a received radar signal traditionally entails measurements across multiple antenna elements. Spatially diverse Multiple Input Multiple Output (MIMO) emission structures, such as the Frequency Diverse Array (FDA), provide waveform separability to achieve spatial estimation without the need for multiple receive antenna elements. A low-complexity Multiple Input Single Output (MISO) radar system leveraging the FDA emission structure coupled with the Linear Frequency Modulated Continuous Wave (LFMCW) waveform is experimentally demonstrated, estimating range, Doppler, and spatial angle information of the illuminated scene using a single receive antenna element. In comparison to well-known spatially diverse emission structures (i.e., Doppler Division Multiple Access (DDMA) and Time Division Multiple Access (TDMA)), LFMCW-FDA is shown to retain the full unambiguous range and Doppler spaces at the cost of a reduced range resolution. To combat the degraded range performance, an adaptive algorithm is introduced, with initial results showing improved separability of closely spaced scatterers in range and angle. With the persistent illumination achieved by the emission structure, the demonstrated performance, and the low-complexity architecture, the LFMCW-FDA system is shown to have attractive features for use in a low-resolution search radar context.


Christian Jones

Robust and Efficient Structure-Based Radar Receive Processing

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Shannon Blunt, Chair
Chris Allen
Suzanne Shontz
James Stiles
Zsolt Talata

Abstract

Legacy radar systems largely rely on repeated emission of a linear frequency modulated (LFM) or chirp waveform to ascertain scattering information from an environment. The prevalence of these chirp waveforms largely stems from their simplicity to generate, process, and the general robustness they provide towards hardware effects. However, this traditional design philosophy often lacks the flexibility and dimensionality needed to address the dynamic “complexification” of the modern radio frequency (RF) environment or achieve current operational requirements where unprecedented degrees of sensitivity, maneuverability, and adaptability are necessary.

Over the last couple of decades, analog-to-digital and digital-to-analog technologies have advanced exponentially, resulting in tremendous design degrees of freedom and arbitrary waveform generation (AWG) capabilities that enable sophisticated design of emissions to better suit operational requirements. However, radar systems typically require high-power amplifiers (HPAs) to contend with the two-way propagation. Thus, transmitter-amenable waveforms are effectively constrained to be both spectrally contained and constant amplitude, resulting in a non-convex, NP-hard design problem.

While determining the globally optimal waveform can be intractable for even modest time-bandwidth products (TB), locally optimal transmitter-amenable solutions that are “good enough” are often readily available. However, traditional matched filtering may not satisfy operational requirements for these sub-optimal emissions. Using knowledge of the transmitter-receiver chain, a discrete linear model can be formed to express the relationship between observed measurements and the complex scattering of the environment. This structured representation then enables more sophisticated least-squares and adaptive estimation techniques to better satisfy operational needs, improve estimate fidelity, and extend dynamic range.
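
As an illustration of the structured receive model described above, the following numpy sketch builds a discrete convolution matrix from a constant-amplitude random-phase waveform and compares matched filtering against a regularized least-squares estimate of the scene. The waveform, scene, and regularization value are invented for this toy example; it is not the dissertation's actual processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete linear receive model: y = A @ x + n, where each
# column of A is a delayed copy of the transmitted waveform and x is the
# complex scattering profile of the scene.
N, M = 64, 96                                      # waveform length, range cells
s = np.exp(1j * np.pi * rng.uniform(-1, 1, N))     # constant-amplitude waveform
A = np.zeros((N + M - 1, M), dtype=complex)
for k in range(M):
    A[k:k + N, k] = s                              # delayed waveform copies

x_true = np.zeros(M, dtype=complex)
x_true[10], x_true[40] = 1.0, 0.5j                 # two point scatterers
noise = 0.01 * (rng.standard_normal(N + M - 1)
                + 1j * rng.standard_normal(N + M - 1))
y = A @ x_true + noise

# Matched filter versus regularized least-squares estimate of the scene:
# the LS solve inverts the waveform's sidelobe structure that the matched
# filter leaves behind.
x_mf = A.conj().T @ y / N
x_ls = np.linalg.solve(A.conj().T @ A + 1e-3 * np.eye(M), A.conj().T @ y)
```

Here the least-squares estimate recovers the two scatterer amplitudes nearly exactly, while the matched filter output carries residual sidelobe leakage between them.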

However, radar dimensionality can be enormous and brute force implementations of these techniques may have unwieldy computational burden on even cutting-edge hardware. Additionally, a discrete linear representation is fundamentally an approximation of the dynamic continuous physical reality and model errors may induce bias, create false detections, and limit dynamic range. As such, these structure-based approaches must be both computationally efficient and robust to reality.

Here several generalized discrete radar receive models and structure-based estimation schemes are introduced. Modifications and alternative solutions are then proposed to improve estimate fidelity, reduce computational complexity, and provide further robustness to model uncertainty.


Archana Chalicheemala

A Machine Learning Study using Gene Expression Profiles to Distinguish Patients with Non-Small Cell Lung Cancer

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Early diagnosis enables effective treatment of non-small cell lung cancer (NSCLC). Lung cancer cells usually have altered gene expression patterns compared to normal cells, which can be exploited to predict cancer through gene expression tests. This study analyzed gene expression values measured from a 15,227-probe microarray for 290 patients spanning cancer and control groups to find relations between gene expression features and lung cancer. The study explored k-means clustering, statistical tests, and deep neural networks to obtain optimal feature representations and achieved a highest accuracy of 82%. Furthermore, a bipartite graph was built using the BioGRID database and gene expression values, where probe-to-probe relationships based on gene relevance were leveraged to enhance prediction performance.
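
As a rough illustration of the kind of pipeline the study describes (statistical-test feature selection feeding a classifier), the sketch below uses scikit-learn on synthetic data shaped like the study's cohort (290 samples, many probe-like features). The feature counts, selector, and classifier here are assumptions for the example, not the study's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an expression matrix: 290 patients, 1000 probe-like
# features, of which only a handful carry signal (as with microarray data).
X, y = make_classification(n_samples=290, n_features=1000, n_informative=20,
                           random_state=0)

# Statistical-test feature selection (ANOVA F-test) feeding a classifier,
# mirroring the "statistical tests for feature representation" idea.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),
                    LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 2))
```

Selecting features inside the pipeline keeps the F-test inside each cross-validation fold, avoiding the selection leakage that inflates accuracy on small cohorts.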


Yoganand Pitta

Insightful Visualization: An Interactive Dashboard Uncovering Disease Patterns in Patient Healthcare Data

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

As Electronic Health Records (EHRs) become more available, there is increasing interest in discovering hidden disease patterns by leveraging cutting-edge data visualization techniques, such as graph-based knowledge representation and interactive graphical user interfaces (GUIs). In this project, we have developed a web-based interactive EHR analytics and visualization tool to provide healthcare professionals with valuable insights that can ultimately improve the quality and cost-efficiency of patient care. Specifically, we have developed two visualization panels: one presenting insights about individual patients and the other showing relationships among diseases. For individual patients, we capture the similarity between them by linking patients based on their relatedness in diagnoses. By constructing a graph representation of patients based on this similarity, we can identify patterns and trends in patient data that may not be apparent through traditional methods. For disease relationships, we provide an ontology graph for a specific diagnosis (ICD-10 code), which helps to identify the ancestors and predecessors of a particular diagnosis. Through a demonstration of this dashboard, we show that this approach can provide valuable insights for better understanding patient outcomes with an informative and user-friendly web interface.



Brandon Ravenscroft

Spectral Cohabitation and Interference Mitigation via Physical Radar Emissions

When & Where:


Nichols Hall, Room 129, Ron Evans Apollo Auditorium

Committee Members:

Shannon Blunt, Chair
Chris Allen
Erik Perrins
James Stiles
Chris Depcik

Abstract

Auctioning of frequency bands to support growing demand for high bandwidth 5G communications is driving research into spectral cohabitation strategies for next generation radar systems. The loss of radio frequency (RF) spectrum once designated for radar operation is forcing radar systems to either learn how to coexist in these frequency spectrum bands, without causing mutual interference, or move to other bands of the spectrum, the latter being the more undesirable choice. Two methods of spectral cohabitation are presented in this work, each taking advantage of recent developments in non-repeating, random FM (RFM) waveforms. RFM waveforms are designed via one of many different optimization procedures to have favorable radar waveform properties while also readily incorporating agile spectrum notches. The first method of spectral cohabitation uses these spectral notches to avoid narrow-band RF interference (RFI) in the form of other spectrum users residing in the same band as the radar system, allowing both to operate while minimizing mutual interference. The second method of spectral cohabitation uses spectral notches, along with an optimization procedure, to embed a communications signal into a dual-function radar/communications (DFRC) emission, allowing one waveform to serve both functions simultaneously. Results of simulation and open-air experimentation with physically realized, spectrally notched and DFRC emissions are shown which demonstrate the efficacy of these two methods of spectral cohabitation.


Divya Harshitha Challa

Crop Prediction Based on Soil Classification using Machine Learning with Classifier Ensembling

When & Where:


Zoom Meeting, please email jgrisafe@ku.edu for defense link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hongyang Sun


Abstract

Agriculture is the backbone of many countries' economies and an active field of research. There are many different types of soil, and each type has different characteristics for crops. Different methods and models are used daily to increase yields. Crop growth depends largely on the soil's macronutrient and micronutrient content, together with climatic conditions such as rainfall, humidity, and temperature, and the soil's pH. Without this information, farmers are often unable to select the appropriate crops for their environmental and soil conditions, and manual selection of crops has frequently failed. In this system, we use machine learning techniques to recommend crops based on soil classification or soil series. A comparative analysis of several popular classification algorithms, including K-Nearest Neighbors (KNN), Random Forest (RF), Decision Tree (DT), Support Vector Machines (SVM), Gaussian Naive Bayes (GNB), Gradient Boosting (GB), Extreme Gradient Boosting (XGBoost), and Voting Ensemble classifiers, is carried out in this work to assist in recommending the cultivable crop(s) most suitable for a particular piece of land based on the characteristics of the soil and environment. To achieve our goal, we collected and preprocessed a large dataset of crop yield and environmental data from multiple sources. Our results show that the voting ensemble classifier outperforms the other classifiers in terms of prediction accuracy, achieving an accuracy of 94.67%. Feature importance analysis reveals that weather conditions such as temperature and rainfall, and fertilizer usage are the most critical factors in predicting crop yield.
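
A minimal sketch of the ensembling idea, using scikit-learn's VotingClassifier with a few of the listed base models on synthetic stand-in data; the features, labels, and chosen estimators are illustrative assumptions, not the study's dataset or full ensemble.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for soil/climate features and three crop classes.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the predicted class probabilities of the base
# models, so a confident model can outvote several uncertain ones.
vote = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("gnb", GaussianNB()),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft")
vote.fit(X_tr, y_tr)
print(round(vote.score(X_te, y_te), 2))
```

The same `estimators` list pattern extends to the remaining classifiers in the study (SVM with `probability=True`, XGBoost, etc.).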


Oluwanisola Ibikunle

Deep Learning Algorithms for Radar Echogram Layer Tracking

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Shannon Blunt, Chair
John Paden (Co-Chair)
Carl Leuschen
Jilu Li
James Stiles

Abstract

The accelerated melting of ice sheets in the polar regions of the world, specifically in Greenland and Antarctica, due to contemporary climate warming is contributing to global sea level rise. To understand and quantify this phenomenon, airborne radars have been deployed to create echogram images that map snow accumulation patterns in these regions. Using advanced radar systems developed by the Center for Remote Sensing and Integrated Systems (CReSIS), a significant amount (1.5 petabytes) of climate data has been collected. However, the process of extracting ice phenomenology information, such as accumulation rate, from the data is limited. This is because the radar echograms require tracking of the internal layers, a task that is still largely manual and time-consuming. Therefore, there is a need for automated tracking.

Machine learning and deep learning algorithms are well-suited to this problem given their near-human performance on optical images. Moreover, the significant overlap between classical radar signal processing and machine learning techniques suggests that fusing concepts from both fields can lead to optimized solutions. However, supervised deep learning algorithms face a circular problem: they first require large amounts of labeled training data, which do not currently exist for this task.

In this work, we propose custom algorithms, including supervised, semi-supervised, and self-supervised approaches, to deal with the limited annotated data problem to achieve accurate tracking of radiostratigraphic layers in echograms. Firstly, we propose an iterative multi-class classification algorithm, called “Row Block,” which sequentially tracks internal layers from the top to the bottom of an echogram given the surface location. We aim to use the trained iterative model in an active learning paradigm to progressively increase the labeled dataset. We also investigate various deep learning semantic segmentation algorithms by casting the echogram layer tracking problem as a binary and multiclass classification problem. These require post-processing to create the desired vector-layer annotations, hence, we propose a custom connected-component algorithm as a post-processing routine. Additionally, we propose end-to-end algorithms that avoid the post-processing to directly create annotations as vectors. Furthermore, we propose semi-supervised algorithms using weakly-labeled annotations and unsupervised algorithms that can learn the latent distribution of echogram snow layers while reconstructing echogram images from a sparse embedding representation.
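
The connected-component post-processing step mentioned above can be illustrated with scipy: group the pixels of a binary segmentation mask into components, then reduce each component to one depth value per column. This is a simplified stand-in for the custom routine proposed in the work, with a hand-made toy mask.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary segmentation output: 1s mark pixels classified as
# "layer" in a small echogram patch, containing two layer fragments.
mask = np.array([[0, 1, 1, 0, 0],
                 [0, 0, 1, 1, 0],
                 [0, 0, 0, 0, 0],
                 [1, 1, 0, 0, 0],
                 [0, 1, 1, 1, 0]])

# Group touching "layer" pixels into connected components, then turn each
# component into a vector annotation: one mean depth (row) per column.
labels, n = ndimage.label(mask)
layers = []
for comp in range(1, n + 1):
    rows, cols = np.nonzero(labels == comp)
    layers.append({int(c): rows[cols == c].mean() for c in np.unique(cols)})
print(n)  # number of tracked layer fragments
```

Each dictionary in `layers` maps a column index to a sub-pixel depth, i.e., the vector-layer annotation format the end-to-end approaches produce directly.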

A concurrent objective of this work is to provide the deep learning and science community with a large fully-annotated dataset. To achieve this, we propose synchronizing radar data with outputs from a regional climate model to provide a dataset with overlapping measurements that can enhance the performance of the trained models.


Jonathan Rogers

Faster than Thought Error Detection Using Machine Learning to Detect Errors in Brain Computer Interfaces

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Suzanne Shontz, Chair
Adam Rouse
Cuncong Zhong


Abstract

This thesis uses machine learning on data from invasive brain-computer interfaces (BCIs) in rhesus macaques to predict their state of movement during center-out tasks. Our research team breaks movements down into discrete states and analyzes the data using Linear Discriminant Analysis (LDA). We find that a simplified model that ignores the biological systems underpinning it can still detect the discrete state changes with a high degree of accuracy. Furthermore, when we account for the underlying systems, our model achieves high accuracy at speeds that ought to be imperceptible to the primate brain.
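
To illustrate the classification approach described (LDA separating discrete movement states from neural features), the sketch below trains scikit-learn's LDA on synthetic data; the feature dimensionality, state count, and class separability are invented for the example and do not reflect the actual recordings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for neural features across three movement states
# (e.g., rest, reaction, movement): each state shifts the mean feature
# values, as a change in population firing rates might.
n_per_state, n_features = 200, 16
means = rng.normal(0, 2.0, size=(3, n_features))
X = np.vstack([rng.normal(means[s], 1.0, size=(n_per_state, n_features))
               for s in range(3)])
y = np.repeat([0, 1, 2], n_per_state)

# LDA fits one Gaussian per state with a shared covariance, yielding a
# cheap linear decision rule fast enough for real-time BCI use.
lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(round(scores.mean(), 2))
```

The linear decision rule is the point: classification reduces to a matrix-vector product per sample, which is what makes sub-perceptual-latency state detection plausible.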


Abigail Davidow

Exploring the Gap Between Privacy and Utility in Automated Decision-Making

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Fengjun Li
Alexandra Kondyli


Abstract

The rapid rise of automated decision-making systems has left a gap in researchers’ understanding of how developers and consumers balance concerns about the privacy and accuracy of such systems against their utility. With the goal of covering a broad spectrum of concerns from various angles, we conducted two experiments on the perceived benefits and detriments of interacting with automated decision-making systems: the Patch Wave study and the Automated Driving study. This work approaches the study of automated decision-making from different perspectives to help address the gap in empirical data on consumer and developer concerns. In the Patch Wave study, we focus on developers’ interactions with automated pull requests that patch widespread vulnerabilities on GitHub. The Automated Driving study explores older adults’ perceptions of data privacy in highly automated vehicles. We find quantitative and qualitative differences in how our target populations view automated decision-making compared to human decision-making. We detail our methodology for these studies, experimental results, and recommendations for addressing consumer and developer concerns.