Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Pardaz Banu Mohammad

Towards Early Detection of Alzheimer’s Disease based on Speech using Reinforcement Learning Feature Selection

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Arvin Agah, Chair
David Johnson
Sumaiya Shomaji
Dongjie Wang
Sara Wilson

Abstract

Alzheimer’s Disease (AD) is a progressive, irreversible neurodegenerative disorder and the leading cause of dementia worldwide, affecting an estimated 55 million people globally. The window of opportunity for intervention is demonstrably narrow, making reliable early-stage detection a clinical and scientific imperative. While current diagnostic techniques such as neuroimaging and cerebrospinal fluid (CSF) biomarkers carry well-defined limitations in scalability, cost, and access equity, speech has emerged as a compelling non-invasive proxy for cognitive function evaluation.

This work presents a novel approach that frames acoustic feature selection as a sequential decision-making problem and implements it using deep reinforcement learning. Specifically, we use a Deep Q-Network (DQN) agent to navigate a high-dimensional space of over 6,000 acoustic features extracted with the openSMILE toolkit, dynamically constructing maximally discriminative and non-redundant feature subsets. To capture the latent structural dependencies among acoustic features, which filter and wrapper methods struggle to model, we introduce a Graph Convolutional Network (GCN)-based correlation-aware feature representation layer that serves as an auxiliary input to the DQN state encoder. Post-selection interpretability is reinforced through TF-IDF weighting and K-means clustering, which together yield both feature-level and cluster-level explanations that are clinically actionable. The framework is evaluated across five classifiers: support vector machines (SVM), logistic regression, XGBoost, random forest, and a feedforward neural network. We use 10-fold stratified cross-validation on established benchmark datasets, including the DementiaBank Pitt Corpus, Ivanova, and the ADReSS challenge data. The proposed approach is benchmarked against state-of-the-art feature selection methods such as LASSO, recursive feature elimination, and mutual-information selectors. This research contributes three primary intellectual advances: (1) a graph-augmented state representation that encodes inter-feature relational structure within a reinforcement learning agent, (2) a clinically interpretable pipeline that bridges the gap between algorithmic performance and translational utility, and (3) a multilingual data approach for the reinforcement learning agent framework. This study has direct implications for equitable, low-cost, and scalable AD screening in both clinical and community settings.
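As a rough, self-contained illustration of the selection loop described above (not the authors' DQN, which uses a neural state encoder over thousands of openSMILE features), the sketch below uses tabular Q-learning to build a small feature subset step by step; the feature count, the informative/redundant labels, and the reward function are all invented for the toy.

```python
import random

# Toy stand-in for the DQN selector: tabular Q-learning builds a feature
# subset one action at a time and receives a terminal reward, mimicking
# end-of-episode classifier feedback.  INFORMATIVE, REDUNDANT, and the
# reward are invented labels for illustration only.
N_FEATURES, BUDGET = 8, 3
INFORMATIVE = {0, 3, 6}       # features assumed to carry signal
REDUNDANT = {(3, 4)}          # pairs assumed to duplicate information

def reward(subset):
    """Discriminativeness minus a redundancy penalty (both invented)."""
    gain = len(set(subset) & INFORMATIVE)
    penalty = sum(1 for a, b in REDUNDANT if a in subset and b in subset)
    return gain - 0.5 * penalty

def train(episodes=5000, eps=0.3, alpha=0.5, seed=0):
    rng = random.Random(seed)
    Q = {}                                    # (state, action) -> value
    for _ in range(episodes):
        state = frozenset()
        while len(state) < BUDGET:
            actions = [f for f in range(N_FEATURES) if f not in state]
            if rng.random() < eps:
                a = rng.choice(actions)       # explore
            else:                             # exploit current estimates
                a = max(actions, key=lambda f: Q.get((state, f), 0.0))
            nxt = state | {a}
            done = len(nxt) == BUDGET
            r = reward(nxt) if done else 0.0  # terminal-only feedback
            boot = 0.0 if done else max(
                Q.get((nxt, f), 0.0)
                for f in range(N_FEATURES) if f not in nxt)
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (r + boot - old)
            state = nxt
    return Q

def greedy_subset(Q):
    """Roll out the learned policy without exploration."""
    state = frozenset()
    while len(state) < BUDGET:
        actions = [f for f in range(N_FEATURES) if f not in state]
        state = state | {max(actions, key=lambda f: Q.get((state, f), 0.0))}
    return state

chosen = greedy_subset(train())
```

The real framework replaces the Q-table with a DQN whose state encoding is augmented by the GCN layer, and the toy reward with classifier feedback.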


Arnab Mukherjee

Attention-Based Solutions for Occlusion Challenges in Person Tracking

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Sumaiya Shomaji
Hongyang Sun
Jian Li

Abstract

Person re-identification (Re-ID) and multi-object tracking in unconstrained surveillance environments pose significant challenges in computer vision, stemming mainly from occlusion, variability in appearance, and identity switching across camera views. This research outlines a comprehensive agenda for tackling these issues through a series of increasingly capable deep learning architectures, culminating in an occlusion-aware Vision Transformer framework.

At the heart of this work is Deep SORT with Multiple Inputs (Deep SORT-MI), a real-time Re-ID system featuring a dual-metric association strategy that combines Mahalanobis distance for motion-based tracking with cosine similarity for appearance-based re-identification. This method significantly decreases identity switching compared to the baseline SORT algorithm on the MOT-16 benchmark, establishing a robust foundation for the metric learning used in subsequent work.
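The dual-metric association can be sketched as follows; the 2-D motion state, the diagonal covariance, the blend weight `lam`, and the example track and detections are illustrative assumptions, not values from the thesis.

```python
import math

# Two complementary metrics, gated and blended: squared Mahalanobis
# distance checks motion plausibility against the track's predicted
# state, and cosine distance compares appearance embeddings.

def mahalanobis_sq(predicted, detection, var):
    """Squared Mahalanobis distance under a diagonal covariance."""
    return sum((p - d) ** 2 / v for p, d, v in zip(predicted, detection, var))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def association_cost(track, det, lam=0.5, gate=5.991):
    """Blend motion and appearance; 5.991 is the chi-square 95% quantile
    for 2 degrees of freedom, used here as a motion gate."""
    m = mahalanobis_sq(track["pos"], det["pos"], track["var"])
    if m > gate:
        return float("inf")               # motion-implausible: never match
    return lam * m + (1 - lam) * cosine_distance(track["feat"], det["feat"])

track = {"pos": (10.0, 5.0), "var": (4.0, 4.0), "feat": (0.9, 0.1, 0.4)}
near = {"pos": (11.0, 5.5), "feat": (0.88, 0.12, 0.41)}   # plausible match
far  = {"pos": (40.0, 30.0), "feat": (0.9, 0.1, 0.4)}     # gated out
```

A Hungarian (or greedy) assignment over the resulting cost matrix then produces the track-to-detection matching.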

Expanding on this foundation, a novel pose-estimation framework integrates 2D skeletal keypoint features extracted via OpenPose directly into the association pipeline. By capturing the spatial relationships among body joints along with appearance features, this system enhances robustness against posture variations and partial occlusion. Consequently, it achieves substantial reductions in false positives and identity switches compared to earlier methods, showcasing its practical viability.

Furthermore, a Diverse Detector Integration (DDI) study assessed how the choice of detector (YOLO v4, Faster R-CNN, or MobileNet SSD v2, each paired with Deep SORT) affects metric-learning-based tracking. The results show that YOLO v4 consistently delivers the strongest tracking accuracy on both the MOT-16 and MOT-17 datasets.

In conclusion, this body of research advances occlusion-aware person Re-ID by tracing a clear progression from metric learning to pose-guided feature extraction and, ultimately, to transformer-based global attention modeling. The findings show that lightweight, carefully parameterized Vision Transformers can generalize well for occlusion detection even under constrained data, opening prospects for integrated detection, localization, and re-identification in real-world surveillance systems.


Ertewaa Saud Alsahayan

Toward Reliable LLM-Assisted Design Space Exploration under Performance, Cost, and Dependability Constraints

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Tamzidul Hoque, Chair
Prasad Kulkarni
Sumaiya Shomaji
Hongyang Sun
Huijeong Kim

Abstract

Architectural design space exploration (DSE) requires navigating large configuration spaces while satisfying multiple conflicting objectives, including performance, cost, and system dependability. Large language models (LLMs) have shown promise in assisting DSE by proposing candidate designs and interpreting simulation feedback. However, extending LLM-based DSE to realistic multi-objective settings introduces structural challenges. A naive multi-objective extension of prior LLM-based DSE approaches, which we term Co-Pilot2, exhibits reasoning instability, candidate degeneration, feasibility violations, and lack of progressive improvement. These limitations arise not from insufficient model capacity, but from the absence of structured control, verification, and decision integrity within the exploration process. 

To address these challenges, this research introduces REMODEL, a structured LLM-controlled DSE framework that transforms free-form reasoning into a constrained, verifiable, and iterative optimization process. REMODEL incorporates candidate pooling across parallel reasoning instances, strict state isolation via history snapshotting, deterministic feasibility verification, canonical design representation and deduplication, explicit decision stages, and structured reasoning to enforce complete parameter coverage and consistent trend analysis. These mechanisms enable reliable and stable exploration under complex multi-objective constraints. 

To support dependability-aware evaluation, the framework is integrated with cycle-accurate simulation using gem5 and its reliability-focused extension GemV, enabling detailed analysis of performance, power, and fault tolerance through vulnerability metrics. This integration allows the system to reason not only about performance–cost trade-offs, but also about reliability-aware design decisions under realistic execution conditions. 

Experimental evaluation demonstrates that REMODEL identifies near-optimal designs within a small number of simulations, achieving significantly higher solution quality per simulation compared to baseline methods such as random search and genetic algorithms, while maintaining low computational overhead. 

This work establishes a foundation for dependable LLM-assisted DSE by incorporating reliability constraints into the exploration loop. As a future direction, this framework will be extended to incorporate security-aware design considerations, enabling unified reasoning over performance, cost, reliability, and system security. 


Bretton Scarbrough

Structured Light for Particle Manipulation: Hologram Generation and Optical Binding Simulation

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shima Fardad, Chair
Rongqing Hui
Alessandro Salandrino


Abstract

This thesis addresses two related problems in the optical manipulation of microscopic particles: the efficient generation of holograms for holographic optical tweezers and the simulation of multi-particle optical binding. Holographic optical tweezers use phase-only spatial light modulators to create programmable optical trapping fields, enabling dynamic control over the number, position, and relative strength of optical traps. Because the quality of the trapping field depends strongly on the computed hologram, the first part of this work focuses on improving hologram-generation methods used in these systems.

A new phase-induced compressive sensing algorithm is presented for holographic optical tweezers, along with weighted and unweighted variants. These methods are developed from the Gerchberg-Saxton framework and are designed to improve computational efficiency while preserving favorable trapping characteristics such as uniformity and optical efficiency. By combining compressive sensing with phase induction, the proposed algorithms reduce the computational burden associated with iterative hologram generation while maintaining strong performance across a variety of trapping arrangements. Comparative simulations are used to evaluate these methods against several established hologram-generation algorithms, and the results show that the proposed approaches offer meaningful improvements in convergence behavior and overall performance.
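The baseline these variants build on, the Gerchberg-Saxton loop, can be sketched in one dimension; a naive unitary DFT stands in for the optical propagation, and the grid size, the two trap positions, and the iteration count are illustrative assumptions rather than the thesis's configuration.

```python
import cmath
import math

# One-dimensional Gerchberg-Saxton: alternate between the trap plane,
# where the target amplitude is enforced, and the SLM plane, where the
# field must be phase-only.
N = 16
TARGET = [0.0] * N
TARGET[3] = TARGET[11] = 1.0          # two desired trap amplitudes

def dft(x, sign=-1):
    """Naive unitary DFT (sign=-1) or inverse DFT (sign=+1)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * math.pi * j * k / n)
                for k in range(n)) / math.sqrt(n) for j in range(n)]

def gerchberg_saxton(iters=50):
    field = [cmath.exp(2j * math.pi * 0.37 * k) for k in range(N)]  # seed
    for _ in range(iters):
        far = dft(field)
        # trap plane: impose the target amplitude, keep the phase
        far = [TARGET[j] * cmath.exp(1j * cmath.phase(far[j]))
               for j in range(N)]
        near = dft(far, sign=+1)
        # SLM plane: phase-only constraint (unit amplitude everywhere)
        field = [cmath.exp(1j * cmath.phase(v)) for v in near]
    return field

holo = gerchberg_saxton()
power = [abs(v) ** 2 for v in dft(holo)]
efficiency = (power[3] + power[11]) / sum(power)  # fraction in the traps
```

The phase-induced compressive-sensing variants modify how this iteration is seeded and how much of the field must be recomputed per iteration.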

The second part of this thesis examines optical binding, a phenomenon in which multiple particles interact through both the incident optical field and the fields scattered by neighboring particles. To study this process, a numerical simulation is developed that incorporates gradient forces, radiation pressure, and light-mediated particle-particle interactions in both two- and three-dimensional configurations. The simulation is used to investigate how particles evolve under different initial conditions and illumination states, and how collective effects influence the formation of stable or semi-stable arrangements. These results provide insight into the role of scattering-mediated forces in many-particle optical systems and highlight differences between two-dimensional and three-dimensional behavior.

Although hologram generation and optical binding are treated as separate problems in this work, they are connected by a common goal: understanding how structured optical fields can be designed and applied to control microscopic matter. Together, the results of this thesis contribute to the broader study of computational beam shaping and many-body optical interactions, with relevance to advanced optical trapping, particle organization, and dynamically reconfigurable light-driven systems.


Sai Rithvik Gundla

Beyond Regression Accuracy: Evaluating Runtime Prediction for Scheduling Input Sensitive Workloads

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Arvin Agah
David Johnson


Abstract

Runtime estimation plays a structural role in reservation-based scheduling for High Performance Computing (HPC) systems, where predicted walltimes directly influence reservation timing, backfilling feasibility, and overall queue dynamics. This raises a fundamental question of whether improved runtime prediction accuracy necessarily translates into improved scheduling performance. In this work, we conduct an empirical study of runtime estimation under EASY Backfilling using an application-driven workload consisting of MRI-based brain segmentation jobs. Despite identical configurations and uniform metadata, runtimes exhibit substantial variability driven by intrinsic input structure. To capture this variability, we develop a feature-driven machine learning (ML) framework that extracts region-wise features from MRI volumes to predict job runtimes without relying on historical execution traces or scheduling metadata. We integrate these ML-derived predictions into an EASY Backfilling scheduler implemented in the Batsim simulation framework. Our results show that regression accuracy alone does not determine scheduling performance. Instead, scheduling performance depends strongly on estimation bias and its effect on reservation timing and runtime exceedances. In particular, mild multiplicative calibration of ML-based runtime estimates stabilizes scheduler behavior and yields consistently competitive performance across workload and system configurations. Comparable performance can also be observed with certain levels of uniform overestimation; however, calibrated ML predictions provide a systematic mechanism to control estimation bias without relying on arbitrary static inflation. In contrast, underestimation consistently leads to severe performance degradation and cascading job terminations. 
These findings highlight runtime estimation as a structural control input in backfilling-based HPC scheduling and demonstrate the importance of evaluating prediction models jointly with scheduling dynamics rather than through regression metrics alone.
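One simple way to realize the multiplicative calibration described above is to pick a single scale factor on a held-out set so that at most a small fraction of jobs exceed their scaled estimate; the example predictions, actual runtimes, and the 5% underestimation budget below are invented.

```python
# Single-factor multiplicative calibration of runtime predictions:
# controls estimation bias without arbitrary static inflation.

def calibration_factor(preds, actuals, max_under_frac=0.05):
    """Smallest multiplier c such that at most `max_under_frac` of the
    calibration jobs run longer than c * prediction."""
    ratios = sorted(a / p for p, a in zip(preds, actuals))
    n_allowed = int(max_under_frac * len(ratios))   # worst jobs tolerated
    return ratios[len(ratios) - 1 - n_allowed]

preds   = [100, 200, 150, 300, 120, 250, 180, 90, 400, 220]
actuals = [110, 190, 180, 310, 130, 240, 200, 95, 420, 260]

c = calibration_factor(preds, actuals)
walltimes = [c * p for p in preds]                  # requested reservations
underestimates = sum(a > w for a, w in zip(actuals, walltimes))
```

Under this policy, near-total elimination of runtime exceedances follows from how the factor is chosen, which is exactly the bias control the study found to matter more than raw regression accuracy.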


Pavan Sai Reddy Pendry

BabyJay - A RAG Based Chatbot for the University of Kansas

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The University of Kansas maintains hundreds of departmental and unit websites, leaving students without a unified way to find information. General-purpose chatbots hallucinate KU-specific facts, and static FAQ pages cannot hold a conversation. This work presents BabyJay, a Retrieval-Augmented Generation chatbot that answers student questions using content scraped from official KU sources, with inline citations on every response. The pipeline combines query preprocessing and decomposition, an intent classifier that routes most queries to fast JSON lookups, hybrid retrieval (BM25 and ChromaDB vector search merged via Reciprocal Rank Fusion), a cross-encoder re-ranker, and generation by Claude Sonnet 4.6 under a context-only system prompt. Evaluation on 46 question-answer pairs across five difficulty tiers and eight domains produced a composite score of 0.72, entity precision of 93%, and zero runtime errors. Retrieval, rather than generation, emerged as the primary bottleneck, motivating future work on multi-domain query handling.
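The Reciprocal Rank Fusion step that merges the BM25 and vector rankings can be sketched as follows; the document ids and the two input rankings are invented, and k = 60 is the constant commonly used with RRF rather than a value from this work.

```python
# Reciprocal Rank Fusion: each retriever contributes 1 / (k + rank) per
# document, and documents are re-sorted by the summed score.

def rrf(rankings, k=60):
    """Fuse ranked lists of document ids; best documents come first."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits   = ["housing", "tuition", "advising", "parking"]
vector_hits = ["housing", "dining", "tuition", "advising"]

fused = rrf([bm25_hits, vector_hits])
```

Documents favored by both retrievers rise to the top even though the BM25 and vector scores live on incomparable scales, which is why RRF needs only ranks.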


Ye Wang

Toward Practical and Stealthy Sensor Exploitation: Physical, Contextual, and Control-Plane Attack Paradigms

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Rongqing Hui
Bo Luo
Haiyang Chao

Abstract

Modern intelligent systems increasingly rely on continuous sensor data streams for perception, decision-making, and control, making sensors a critical yet underexplored attack surface. While prior research has demonstrated the feasibility of sensor-based attacks, recent advances in mobile operating systems and machine learning-based defenses have significantly reduced their practicality, rendering them more detectable, resource-intensive, and constrained by evolving permission and context-aware security models.

This dissertation revisits sensor exploitation under these modern constraints and develops a unified, cross-layer perspective that improves both practicality and stealth of sensor-enabled attacks. We identify three fundamental challenges: (i) the difficulty of reliably manipulating physical sensor signals in noisy, real-world environments; (ii) the effectiveness of context-aware defenses in detecting anomalous sensor behavior on mobile devices, and (iii) the lack of lightweight coordination for practical sensor-based side- and covert-channels.

To address the first challenge, we propose a physical-domain attack framework that integrates signal modeling, simulation-guided attack synthesis, and real-time adaptive targeting, enabling robust adversarial perturbations with high attack success rates even under environmental uncertainty. As a case study, we demonstrate an infrared laser-based adversarial example attack against face recognition systems, which achieves consistently high success rates across diverse conditions with practical execution overhead.

To improve attack stealth against context-aware defenses, we introduce an auto-contextualization mechanism that synchronizes malicious sensor actuation with legitimate application activity. By aligning injected signals with both statistical patterns and semantic context of benign behavior, the approach renders attacks indistinguishable from normal system operations and benign sensor usage. We validate this design using three Android logic bombs, showing that auto-contextualized triggers can evade both rule-based and learning-based detection mechanisms.

Finally, we extend sensor exploitation beyond the traditional attack-channel plane by introducing a lightweight control-plane protocol embedded within sensor data streams. This protocol encodes control signals directly into sensor observations and leverages simple signal-processing primitives to coordinate multi-stage attacks without relying on privileged APIs or explicit inter-process communication. The resulting design enables low-overhead, stealthy coordination of cross-device side- and covert-channels.

Together, these contributions establish a new paradigm for sensor exploitation that spans physical, contextual, and control-plane dimensions. By bridging these layers, this dissertation demonstrates that sensor-based attacks remain not only feasible but also practical and stealthy in modern computer systems.


Jamison Bond

Mutual Coupling Array Calibration Utilizing Decomposition of Modeled Scattering Matrix

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Carl Leuschen


Abstract

***Currently being reviewed, unavailable***


Kevin Likcani

Use of Machine Learning to Predict Drug Court Success

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Substance use remains a major public health issue in the United States that significantly impacts individuals, families, and society. Many individuals who suffer from substance use disorder (SUD) face incarceration due to drug-related offenses. Drug courts have emerged as an alternative to imprisonment, offering individuals the opportunity to participate in a drug rehabilitation program instead. Drug courts mainly focus on those with non-violent drug-related offenses. One of the challenges of decision-making in drug courts is assessing the likelihood that participants will graduate from the drug court and avoid recidivism after graduation. This study investigates the use of machine learning models to predict success in drug courts using data from a substance use drug court in Missouri. Success is measured in terms of graduation from the program, and the model includes a wide range of potential predictors, including demographic characteristics, family and social factors, substance use history, legal involvement, physical and mental health history, and employment history, as well as drug court participation data. The results will be beneficial to drug court teams and presiding judges in predicting client success, evaluating risk factors during treatment, informing person-centered treatment planning, and developing after-care plans for high-risk participants to reduce the likelihood of recidivism.


Peter Tso

Implementation of Free-Space Optical Networks based on Resonant Semiconductor Saturable Absorber and Phase Light Modulator

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad


Abstract

Optical Neural Networks (ONNs) have gained traction as an alternative to the conventional computing architectures used in modern CPUs and GPUs, largely because light enables massive parallelism, ultrafast inference, and minimal power consumption. 

As with conventional deep neural networks (DNNs), free-space ONNs require two main layers: (1) a nonlinear activation function that separates adjacent linear layers, and (2) weighting layers that apply a linear transformation to an input.

Firstly, a Resonant Semiconductor Saturable Absorber Mirror (RSAM) was investigated as a viable nonlinear activation function. Several mechanisms have been used to realize optical nonlinear activation functions, such as cold atoms, vapor absorption cells, and polaritons, but these implementations are bulky and must operate under tightly controlled environments, whereas the RSAM is a passive device. The resonant structure of the RSAM also reduces the saturation fluence relative to non-resonant SESAMs, allowing low-power laser sources to be used. A fiber-based optical testbed demonstrated a notable improvement of 8.1% in classification accuracy over a linear-only network trained on the MNIST dataset.

Secondly, micro-electromechanical-system (MEMS)-based phase light modulators (PLMs) were evaluated as an alternative to LC-SLMs for in-situ reinforcement learning. PLMs can operate at kilohertz-scale frame rates at substantially lower cost than LC-SLMs, but have lower phase resolution and non-uniform quantization, which impacts fidelity. Despite these disadvantages, the high-speed nature of PLMs significantly decreases optimization time, which not only reduces training time but also allows larger datasets and more complex models with more learnable parameters. A single-layer optical network was implemented using policy-based learning with a discrete action space to minimize the impact of quantization. The testbed achieves 90.1%, 79.7%, and 76.9% training, validation, and test accuracy, respectively, on 3,000 images from the MNIST dataset. Additionally, we achieved 79.9%, 72.1%, and 71.7% accuracy on 3,000 images from the Fashion MNIST dataset. At 14 minutes per epoch during training, this is at least an order of magnitude faster than LC-SLM-based models.


Joseph Vinduska

Fault-Frequency Agnostic Checkpointing Strategies

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Arvin Agah
Drew Davidson


Abstract

Checkpointing strategies in high-performance computing traditionally employ the Young-Daly formula to determine the (first-order) optimal duration between checkpoints, which assumes a known mean time between faults (MTBF). However, in practice, the MTBF may not be known accurately or may vary, causing Young-Daly checkpointing to perform sub-optimally. In 2021, Sigdel et al. introduced the CHORE (CHeckpointing Overhead and Rework Equated) checkpointing strategy, which is MTBF-agnostic yet demonstrates a bounded increase in overhead compared to the optimal strategy. This thesis analyzes and extends the CHORE framework in several ways. First, it verifies Sigdel et al.'s claims about the relative overhead of the CHORE strategy through both event-driven simulations and expected runtimes derived from the underlying probabilistic model. Second, it extends the CHORE strategy to silent errors, which must be deliberately checked for to be detected. In this scenario, the overhead compared to optimal checkpointing is once more analyzed through simulations and expected runtimes. Third, a heuristic is proposed to improve the performance of the CHORE algorithm under typical runtime scenarios by interpreting CHORE as an additive-increase multiplicative-decrease model and tuning its parameters.
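The two ingredients above, the MTBF-dependent Young-Daly period and a fault-frequency-agnostic additive-increase multiplicative-decrease (AIMD) update, can be sketched side by side; the checkpoint cost, MTBF, and AIMD parameters below are illustrative, and the AIMD rule is a toy in the spirit of the heuristic, not CHORE's actual update.

```python
import math

# Young-Daly needs the MTBF; the AIMD period does not: it grows the
# checkpoint period while fault-free and shrinks it after each fault.

def young_daly(checkpoint_cost, mtbf):
    """First-order optimal period: W = sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def aimd_period(fault_events, start=60.0, add=30.0, mult=0.5):
    """Additive increase per fault-free interval, multiplicative
    decrease on each fault; no MTBF estimate is required."""
    period = start
    for faulted in fault_events:
        period = period * mult if faulted else period + add
    return period
```

With a 30 s checkpoint cost and a 6-hour MTBF, `young_daly(30.0, 6 * 3600)` gives roughly 1138 s, while the AIMD period instead adapts online as faults are (or are not) observed.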


Lee Taylor

Ultrawideband Single-Pass Interferometric SAR Integrated with Multi-Rotor UAV

When & Where:


Nichols Hall, Room 317 (Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Shannon Blunt
Patrick McCormick
John Paden
Fernando Rodriguez-Morales

Abstract

Ultrawideband (UWB) Interferometric Synthetic Aperture Radar (InSAR) integrated with a multi-rotor Uncrewed Aerial Vehicle (UAV), referred to here as UIMU for brevity, provides ultrafine-resolution, all-weather, 3D surface imagery at any time of day. UIMU can be rapidly deployed at low cost, making it a critical new tool for low-altitude remote sensing applications such as disaster response, environmental monitoring, and intelligence, surveillance, and reconnaissance (ISR). Traditional repeat-pass data collection methods reduce the phase coherence required for InSAR processing of ultrafine-resolution datasets due to the unstable flight behavior of multi-rotor UAVs. Collecting Synthetic Aperture Radar (SAR) datasets with two receive channels during a single pass will improve phase coherence and the ability to produce ultrafine-resolution 3D InSAR imagery.

This work proposes to quantify and characterize 3D target-position accuracy for a dual-channel, 6 GHz-bandwidth (2 cm range resolution) frequency modulated continuous wave (FMCW) radar integrated with the Aurela X6 hexacopter, in order to establish novel single-pass UWB InSAR data collection methods and processing algorithms for multi-rotor UAVs. The feasibility of the proposed investigation is demonstrated by the preliminary qualitative analysis of single-pass InSAR imagery presented in this proposal. Fieldwork will be conducted to measure the positions of GPS-located corner reflectors using the UIMU system. Algorithms for motion-tolerant Time-Domain Backprojection (TDBP), InSAR coregistration, and digital elevation mapping, novel to multi-rotor UAVs at UWB, will be developed and presented. An analysis of vehicle-motion-induced phase decoherence and InSAR imagery signal-to-noise ratio (SNR) will also be presented. The TDBP SNR performance will be compared to the Open Polar Radar Omega-K algorithm to quantify the motion tolerance of the different SAR processing algorithms.

This work will establish a foundation for future investigations of real-time image processing, separated transmission and receive platforms (bistatic), or swarm configurations for UIMU systems.


Hao Xuan

Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.

These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.

First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
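The seeding stage underlying such an index can be sketched with a basic single-view k-mer index; the sequences and k below are invented, and VAT's asymmetric multi-view indexing and LCP-based seed-length adjustment go well beyond this sketch.

```python
# Basic single-view k-mer seeding: hash every k-mer of the reference,
# then report (query_pos, ref_pos) seed hits for a query.

def build_index(reference, k):
    """Map each k-mer to the list of reference positions where it occurs."""
    index = {}
    for pos in range(len(reference) - k + 1):
        index.setdefault(reference[pos:pos + k], []).append(pos)
    return index

def seed_matches(query, index, k):
    """All (query_pos, ref_pos) pairs sharing an exact k-mer."""
    return [(qpos, rpos)
            for qpos in range(len(query) - k + 1)
            for rpos in index.get(query[qpos:qpos + k], [])]

ref = "ACGTACGGTACGT"
idx = build_index(ref, k=4)
hits = seed_matches("TACGG", idx, k=4)
```

A seed-chaining step would then link the collinear hits (0, 3) and (1, 4) into a single candidate alignment, which is the role of VAT's flexible chaining mechanism.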

Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.


Devin Setiawan

Concept-Driven Interpretability in Graph Neural Networks: Applications in Neuroscientific Connectomics and Clinical Motor Analysis

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Sankha Guria
Han Wang


Abstract

Graph Neural Networks (GNNs) achieve state-of-the-art performance in modeling complex biological and behavioral systems, yet their "black-box" nature limits their utility for scientific discovery and clinical translation. Standard post-hoc explainability methods typically attribute importance to low-level features, such as individual nodes or edges, which often fail to map onto the high-level, domain-specific concepts utilized by experts. To address this gap, this thesis explores diverse methodological strategies for achieving Concept-Level Interpretability in GNNs, demonstrating how deep learning models can be structurally and analytically aligned with expert domain knowledge. This theme is explored through two distinct methodological paradigms applied to critical challenges in neuroscience and clinical psychology. First, we introduce an interpretable-by-design approach for modeling brain structure-function coupling. By employing an ensemble of GNNs conceptually biased via input graph filtering, the model enforces verifiably disentangled node embeddings. This allows for the quantitative testing of specific structural hypotheses, revealing that a minority of strong anatomical connections disproportionately drives functional connectivity predictions. Second, we present a post-hoc conceptual alignment paradigm for quantifying atypical motor signatures in Autism Spectrum Disorder (ASD). Utilizing a Spatio-Temporal Graph Autoencoder (STGCN-AE) trained on normative skeletal data, we establish an unsupervised anomaly detection system. To provide clinical interpretability, the model's reconstruction error is systematically aligned with a library of human-interpretable kinematic features, such as postural sway and limb jerk. Explanatory meta-modeling via XGBoost and SHAP analysis further translates this abstract loss into a multidimensional clinical signature. 
Together, these applications demonstrate that integrating concept-level interpretability through either architectural design or systematic post-hoc alignment enables GNNs to serve as robust tools for hypothesis testing and clinical assessment.


Mahmudul Hasan

Trust Assurance of Commercial Off-The-Shelf (COTS) Hardware Through Verification and Runtime Resilience

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Tamzidul Hoque, Chair
Esam El-Araby
Prasad Kulkarni
Hongyang Sun
Huijeong Kim

Abstract

The adoption of Commercial off-the-shelf (COTS) components has become a dominant paradigm in modern system design due to their reduced development cost, faster time-to-market, and widespread availability. However, reliance on globally distributed and untrusted supply chains introduces significant security risks, particularly the possibility of malicious hardware modifications, such as Trojans, embedded during design or fabrication. In such settings, traditional methods that depend on golden models, full design visibility, or trusted fabrication are no longer sufficient, creating the need for new security assurance approaches under a zero-trust model. This proposed research addresses security challenges in COTS microprocessors through two complementary solutions: runtime resilience and pre-deployment trust verification. First, a multi-variant-execution-based framework is developed that leverages functionally equivalent program variants to induce diverse microarchitectural execution patterns. By comparing intermediate outputs across variants, the framework enables runtime detection and tolerance of Trojan-induced payload effects without requiring hardware redundancy or architectural modifications. To enhance the effectiveness of variant generation, a reinforcement-learning-assisted framework is introduced, in which the reward function is defined by security objectives rather than traditional performance optimization, enabling the generation of variants that are more robust against repeated Trojan activation. Second, to enable black-box trust verification prior to deployment, this work presents a framework that can efficiently test for the presence of hardware Trojans by identifying microarchitectural rare events and transferring activation knowledge from existing processor designs to trigger highly susceptible internal nodes.
By leveraging ISA-level knowledge, open-source RTL references, and LLM-guided test generation, the framework achieves high trigger coverage without requiring access to proprietary designs or golden references. Building on these two scenarios, a future research direction is outlined for evolving trust in COTS hardware through continuous runtime observation, where multi-variant execution is extended with lightweight monitoring mechanisms that capture key microarchitectural events and execution traces. These observations are accumulated as hardware trust counters, enabling the system to progressively establish confidence in the underlying hardware by verifying consistent behavior across diverse execution patterns over time. Together, these directions establish a foundation for analyzing and mitigating security risks across zero-trust COTS supply chains.
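As a rough illustration of the multi-variant idea (not the thesis's actual framework), the sketch below runs functionally equivalent "variants" on the same input, compares their outputs by majority vote, and flags any variant that diverges; the toy computations and the simulated payload effect are made up for illustration.

```python
from collections import Counter

# Run each functionally equivalent variant on the same inputs and collect outputs.
def run_variants(variants, inputs):
    return [variant(inputs) for variant in variants]

# Majority vote across variants: flag any variant whose output differs,
# and tolerate the fault by returning the majority result.
def detect_divergence(outputs):
    majority, _ = Counter(outputs).most_common(1)[0]
    flagged = [i for i, out in enumerate(outputs) if out != majority]
    return majority, flagged

# Three toy "variants" of the same computation (sum of a list);
# the third simulates a Trojan payload corrupting its result.
v1 = lambda xs: sum(xs)
v2 = lambda xs: sum(sorted(xs))   # different execution pattern, same result
v3 = lambda xs: sum(xs) + 1       # simulated Trojan-induced payload effect

outputs = run_variants([v1, v2, v3], [1, 2, 3])
result, suspicious = detect_divergence(outputs)
# result == 6 (tolerated output), suspicious == [2] (the corrupted variant)
```

In the real framework the diversity lies in microarchitectural execution patterns rather than source-level tricks, but the detect-and-tolerate comparison step has this shape.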


Mohsen Nayebi Kerdabadi

Representation Augmentation for Electronic Health Records via Knowledge Graphs, Large Language Models, and Contrastive Learning

When & Where:


Learned Hall, Room 3150

Committee Members:

Zijun Yao, Chair
Sumaiya Shomaji
Hongyang Sun
Dongjie Wang
Shawn Keshmiri

Abstract

Electronic Health Records (EHRs) provide rich longitudinal patient information, but their high dimensionality, sparsity, heterogeneity, and temporal complexity make robust representation learning difficult. This dissertation studies how to improve patient and medical concept representation learning in EHRs and consequently enhance healthcare predictive tasks by integrating domain knowledge, knowledge graphs, large language models (LLMs), and contrastive learning. First, it introduces an ontology-aware temporal contrastive framework for survival analysis that learns discriminative patient representations from censored and observed trajectories by modeling temporal distinctiveness in longitudinal EHR data. Second, it proposes a multi-ontology representation learning framework that jointly propagates knowledge within and across diagnosis, medication, and procedure ontologies, enabling richer medical concept embeddings, especially under limited data and for rare conditions. Third, it develops an LLM-enriched, text-attributed medical knowledge graph framework that combines EHR-derived statistical evidence with type-constrained LLM reasoning to infer semantic relations, generate contextual node and edge descriptions, and co-learn concept embeddings through joint language-model and graph-neural-network training. Together, these studies advance a unified view of EHR representation learning in which structured medical knowledge, textual semantics, and temporal patient trajectories are jointly leveraged to build more accurate, interpretable, and robust healthcare prediction models.


Moh Absar Rahman

Permissions vs Promises: Assessing Over-privileged Android Apps via Local LLM-based Description Validation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Sankha Guria
David Johnson


Abstract

Android is the most widely adopted mobile operating system, supporting billions of devices and driven by a robust app ecosystem. Its permission-based security model aims to enforce the Principle of Least Privilege (PoLP), restricting apps to only the permissions they need. However, many apps still request excessive permissions, increasing the risk of data leakage and malicious exploitation. Previous research on permission overprivilege has become ineffective due to outdated methods and increasing technical complexity. The introduction of runtime permissions and scoped storage has made some traditional analysis techniques obsolete. Additionally, developers are often not transparent in explaining how their apps use permissions on the Play Store, leading users to unknowingly and unwillingly grant unnecessary permissions. This combination of overprivilege and poor transparency poses significant security threats to Android users. Recently, the rise of local large language models (LLMs) has shown promise in various security fields. The main focus of this study is to use a local LLM to analyze whether an app is overprivileged based on the app description provided on the Play Store. Finally, we conduct a manual evaluation to validate the LLM’s findings, comparing its results against human-verified responses.
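The core check reduces to a set difference between declared permissions and the permissions the description justifies. In the sketch below the LLM's description-validation step is stubbed with a hypothetical keyword lookup, and the mapping and permission names chosen are illustrative only.

```python
# Assumed keyword-to-permission mapping; a local LLM would replace this stub.
KEYWORD_TO_PERMISSION = {
    "photo": "android.permission.CAMERA",
    "location": "android.permission.ACCESS_FINE_LOCATION",
    "contacts": "android.permission.READ_CONTACTS",
}

def expected_permissions(description):
    """Stand-in for the local-LLM description-validation step."""
    desc = description.lower()
    return {perm for kw, perm in KEYWORD_TO_PERMISSION.items() if kw in desc}

def overprivileged(declared, description):
    """Declared permissions not justified by the description are flagged."""
    return sorted(set(declared) - expected_permissions(description))

declared = ["android.permission.CAMERA", "android.permission.READ_CONTACTS"]
desc = "Edit and share your photos with friends."
flags = overprivileged(declared, desc)
# READ_CONTACTS is not justified by the description, so it is flagged
```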


Brinley Hull

Mist – An Interactive Virtual Pet for Autism Spectrum Disorder Stress Onset Detection & Mitigation

When & Where:


Nichols Hall, Room 317 (Moore Conference Room)

Committee Members:

Arvin Agah, Chair
Perry Alexander
David Johnson
Sumaiya Shomaji

Abstract

Individuals with Autism Spectrum Disorder (ASD) frequently experience elevated stress and are at higher risk for mood disorders such as anxiety and depression. Sensory over-responsivity, social challenges, and difficulties with emotional recognition and regulation contribute to such heightened stress. This study presents a proof-of-concept system that detects and mitigates stress through interactions with a virtual pet. Designed for young adults with high-functioning autism, and potentially useful for people beyond that group, the system monitors simulated heart rate, skin resistance, body temperature, and environmental sound and light levels. Upon detection of stress or potential triggers, the system alerts the user and offers stress-reduction activities via a virtual pet, including guided deep-breathing exercises and interactive engagement with the virtual companion. Through combining real-time stress detection with interactive interventions on a single platform, the system aims to help autistic individuals recognize and manage stress more effectively.
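A minimal sketch of the monitoring-and-intervention loop follows; the thresholds and signal names are illustrative, not clinically validated, and the real system combines more signals than shown here.

```python
# Assumed example limits for two of the monitored signals.
THRESHOLDS = {
    "heart_rate_bpm": 100,
    "sound_level_db": 85,
}

def check_stress(reading):
    """Return the list of signals exceeding their thresholds."""
    return [k for k, limit in THRESHOLDS.items() if reading.get(k, 0) > limit]

def pet_response(triggers):
    """Map detected stress onto one of the virtual pet's mitigation activities."""
    if not triggers:
        return "idle"
    return "offer deep-breathing exercise"

reading = {"heart_rate_bpm": 112, "sound_level_db": 70}
triggers = check_stress(reading)   # elevated simulated heart rate
action = pet_response(triggers)    # pet offers a stress-reduction activity
```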


Past Defense Notices

Dates

Bayn Schrader

Implementation and Analysis of an Efficient Dual-Beam Radar-Communications Technique

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Fully digital arrays enable realization of dual-function radar-communications systems that generate multiple simultaneous transmit beams with different modulation structures in different spatial directions. These spatially diverse transmissions are produced by designing the individual waveforms transmitted at each antenna element so that they combine in the far field to synthesize the desired modulations in the specified directions. This thesis derives a look-up table (LUT) implementation of the existing Far-Field Radiated Emissions Design (FFRED) optimization framework. The LUT implementation requires a single optimization routine for a set of desired signals, rather than the pulse-to-pulse optimization of the previous implementation, making the LUT approach more efficient. The LUT is generated by representing the waveforms transmitted by each element in the array as a sequence of beamformers, where the LUT contains beamformers indexed by the phase difference between the desired signal modulations. The globally optimal beamformers, in terms of power efficiency, can be obtained via the Lagrange dual problem for most beam locations and powers. The Phase-Attached Radar-Communications (PARC) waveform is selected for the communications signal alongside a linear frequency modulated (LFM) waveform for the radar signal. A set of FFRED LUTs is then used to simulate a radar transmission to verify the utility of the radar system. The same LUTs are then used to estimate the communications performance of a system with varying levels of array knowledge uncertainty.


Will Thomas

Static Analysis and Synthesis of Layered Attestation Protocols

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Trust is a fundamental issue in computer security. Frequently, systems implicitly trust other systems, especially if configured by the same administrator. This fallacious reasoning stems from the belief that systems starting from a known, presumably good, state can be trusted. However, this statement only holds for boot-time behavior; most non-trivial systems change state over time, and thus runtime behavior is an important, oft-overlooked aspect of implicit trust in system security.

To address this, attestation was developed, allowing a system to provide evidence of its runtime behavior to a verifier. This evidence allows a verifier to make an explicit, informed decision about the system’s trustworthiness. As systems grow more complex, scalable attestation mechanisms become increasingly important. To apply attestation to non-trivial systems, layered attestation was introduced, allowing attestation of individual components or layers, combined into a unified report about overall system behavior. This approach enables more granular trust assessments and facilitates attestation in complex, multi-layered architectures. With the complexity of layered attestation, discerning whether a given protocol sufficiently measures a system, is executable, or properly reports all measurements becomes increasingly challenging.

In this work, we will develop a framework for the static analysis and synthesis of layered attestation protocols, enabling more robust and adaptable attestation mechanisms for dynamic systems. A key focus will be the static verification of protocol correctness, ensuring the protocol behaves as intended and provides reliable evidence of the underlying system state. A type system will be added to the Copland layered attestation protocol description language to allow basic static checks, and extended static analysis techniques will be developed to verify more complex properties of protocols for a specific target system. Further, protocol synthesis will be explored, enabling the automatic generation of correct-by-construction protocols tailored to system requirements.


David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

In high dynamic-range environments, matched-filter radar performance is often sidelobe-limited, with correlation error being fundamentally constrained by the time-bandwidth product (TB) of the collective emission. To contend with the regulatory necessity of spectral containment, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining responses from distinct pulses within a pulse-agile emission. In contrast to most complementary subsets, which were discovered via brute force under the notion of phase coding, these comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM. Although comp-FM addressed a primary limitation of complementary signals (i.e., hardware distortion), CSC hinges on the exact reconstruction of autocorrelation terms to suppress sidelobes, so optimality is broken for Doppler-shifted signals. This work introduces a Doppler-generalized comp-FM (DG-comp-FM) framework that extends the cancellation condition to account for the anticipated unambiguous Doppler span after post-summing. While this framework is developed for use in a combine-before-Doppler-processing manner, it can likewise be employed to design an entire coherent processing interval (CPI) to minimize range-sidelobe modulation (RSM) within the radar point-spread function (PSF), thereby introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.

Some radar systems operate with multiple emitters, as in the case of Multiple-input-multiple-output (MIMO) radar. Whereas a single emitter must contend with the self-inflicted autocorrelation sidelobes, MIMO systems must likewise contend with the cross-correlation with coincident (in time and spectrum) emissions from other emitters. As such, the determination of "orthogonal waveforms" comprises a large portion of research within the MIMO space, with a small majority now recognizing that true orthogonality is not possible for band-limited signals (albeit, with the exclusion of TDMA). The notion of complementary-FM is proposed for exploration within a MIMO context, whereby coherently combining responses can achieve CSC as well as cross-correlation cancellation for a wide Doppler space. By effectively minimizing cross-correlation terms, this enables improved channel separation on receive as well as improved estimation capability due to reduced correlation error. Proposal items include further exploration/characterization of the space, incorporating an explicit spectral 


Jigyas Sharma

SEDPD: Sampling-Enhanced Differentially Private Defense against Backdoor Poisoning Attacks of Image Classification

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Han Wang, Chair
Drew Davidson
Dongjie Wang


Abstract

Recent advancements in explainable artificial intelligence (XAI) have brought significant transparency to machine learning by providing interpretable explanations alongside model predictions. However, this transparency has also introduced vulnerabilities, enhancing adversaries’ ability to probe model decision processes through explanation-guided attacks. In this paper, we propose a robust, model-agnostic defense framework that mitigates these explanation-borne vulnerabilities while preserving the utility of XAI. Our framework employs a multinomial sampling approach that perturbs explanation values generated by techniques such as SHAP and LIME. These perturbations ensure differential privacy (DP) bounds, disrupting adversarial attempts to embed malicious triggers while maintaining explanation quality for legitimate users. To validate our defense, we introduce a threat model tailored to image classification tasks. By applying our defense framework, we train models with pixel-sampling strategies that integrate DP guarantees, enhancing robustness against XAI-guided backdoor poisoning attacks. Extensive experiments on widely used datasets (CIFAR-10, MNIST, CIFAR-100, and Imagenette) and models, including ConvMixer and ResNet-50, show that our approach effectively mitigates explanation-guided attacks without compromising model accuracy. We also test our defense against other backdoor attacks and find that the framework detects other types of backdoor triggers well. This work highlights the potential of DP in securing XAI systems and ensures safer deployment of machine learning models in real-world applications.
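An illustrative sketch of multinomial resampling of an explanation vector follows; this is not the exact SEDPD mechanism, and the attribution values are invented. Each draw picks one feature with probability proportional to its attribution magnitude, and the empirical frequencies become the released explanation, so fewer draws means more noise, loosely mirroring a tighter privacy budget.

```python
import random

def resample_explanation(attributions, n_draws, rng=None):
    """Perturb an attribution vector by multinomial resampling."""
    rng = rng or random.Random(0)
    total = sum(abs(a) for a in attributions)
    probs = [abs(a) / total for a in attributions]
    counts = [0] * len(attributions)
    for _ in range(n_draws):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:        # inverse-CDF draw from the multinomial
                counts[i] += 1
                break
        else:                   # guard against floating-point round-off
            counts[-1] += 1
    return [c / n_draws for c in counts]

shap_like = [0.5, 0.3, 0.15, 0.05]   # hypothetical SHAP magnitudes
noisy = resample_explanation(shap_like, n_draws=100)
# noisy preserves the rough feature ranking while obscuring exact values
```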


Dimple Galla

Intelligent Application for Cold Email Generation: Business Outreach

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Cold emailing remains an effective strategy for software service companies to improve organizational reach by acquiring clients. Generic emails often fail to get a response.
This project leverages Generative AI to automate cold email generation. It is built with the Llama-3.1 model and a Chroma vector database that supports semantic search, matching keywords in a job description to the project portfolio links of a software service company. The application automatically extracts technology-related job openings at Fortune 500 companies. Users can either select from these extracted job postings or manually enter the URL of a job posting, after which the system generates an email and sends it upon approval. Advanced techniques such as Chain-of-Thought prompting and Few-Shot learning were applied to improve relevance, making the emails more likely to receive a response. This AI-driven approach improves engagement and simplifies the business development process for software service companies.
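As a simplified stand-in for the Chroma vector search, the sketch below scores each portfolio entry against a job description with bag-of-words cosine similarity; the real system uses learned embeddings, and the portfolio URLs and texts here are made up.

```python
import math
import re

def vectorize(text):
    """Bag-of-words term counts (stand-in for an embedding)."""
    vec = {}
    for w in re.findall(r"[a-z]+", text.lower()):
        vec[w] = vec.get(w, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

portfolio = {  # hypothetical portfolio links and descriptions
    "https://example.com/ml-portfolio": "machine learning model deployment",
    "https://example.com/web-portfolio": "react frontend web development",
}
job = "Looking for machine learning engineer to deploy models"
jv = vectorize(job)
# Retrieve the portfolio entry most similar to the job description.
best = max(portfolio, key=lambda url: cosine(jv, vectorize(portfolio[url])))
# best points at the machine-learning portfolio entry
```

The selected link would then be inserted into the generated email alongside the job-specific pitch.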


Shahima Kalluvettu Kuzhikkal

Machine Learning Based Predictive Maintenance for Automotive Systems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni
Hongyang Sun

Abstract

Predictive maintenance plays a central role in reducing vehicle downtime and improving operational efficiency by using data-driven methods to classify the condition of automotive engines. Rather than relying on fixed service schedules or reacting to unexpected breakdowns, this approach leverages machine learning to distinguish between healthy and failed engines based on operational data.

In this project, engine telemetry data capturing key parameters such as engine speed, fuel pressure, and coolant temperature was used to train and evaluate several machine learning models, including logistic regression, random forest, k-nearest neighbors, and a neural network. To further enhance predictive performance, ensemble strategies such as soft voting and stacking were applied. The stacking ensemble, which combines the strengths of multiple classifiers through a meta-learning approach, demonstrated particularly effective results.
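A hand-rolled illustration of the stacking idea follows (the project itself uses standard library models): base classifiers emit scores and a meta-learner combines them. The threshold rules, telemetry fields, and meta-weights below are invented for illustration; a trained meta-learner would fit the weights from base-model outputs.

```python
# Level-0 "models": trivial threshold rules on made-up engine telemetry.
def rule_rpm(x):
    return 1.0 if x["rpm"] > 3000 else 0.0

def rule_coolant(x):
    return 1.0 if x["coolant_c"] > 110 else 0.0

def rule_fuel(x):
    return 1.0 if x["fuel_kpa"] < 250 else 0.0

BASE_MODELS = [rule_rpm, rule_coolant, rule_fuel]
META_WEIGHTS = [0.5, 0.3, 0.2]   # assumed; a meta-learner would fit these

def stack_predict(x, threshold=0.5):
    scores = [m(x) for m in BASE_MODELS]                     # level-0 outputs
    meta = sum(w * s for w, s in zip(META_WEIGHTS, scores))  # level-1 combine
    return "failed" if meta >= threshold else "healthy"

engine = {"rpm": 3400, "coolant_c": 118, "fuel_kpa": 300}
# two of three base rules fire -> meta score 0.8 -> classified "failed"
label = stack_predict(engine)
```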

This classification-based framework demonstrates how data-driven fault detection can enhance automotive maintenance operations. By identifying engine failures more reliably, machine learning enables safer transportation, reduces maintenance costs, and enhances overall vehicle dependability. Beyond individual vehicles, such approaches have broader applications in fleet management, where proactive decision-making can improve service continuity, reduce operational risks, and increase customer satisfaction.


Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” sidelobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements. 

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
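The list-scheduling loop can be sketched as follows under the Amdahl speedup model t(p) = t1 * (s + (1 - s) / p). The allocation strategy here, a fixed cap of processors per task, merely stands in for the tuned strategies evaluated in the thesis, and the DAG and task times are made up.

```python
import heapq

def exec_time(t1, p, serial=0.1):
    """Amdahl-model execution time on p processors."""
    return t1 * (serial + (1 - serial) / p)

def list_schedule(tasks, deps, P, cap):
    """tasks: {name: t1}; deps: {name: set(predecessors)}; returns makespan."""
    indeg = {t: len(deps.get(t, set())) for t in tasks}
    ready = [t for t in tasks if indeg[t] == 0]
    events = []                      # min-heap of (finish_time, task, procs)
    now, free = 0.0, P
    while ready or events:
        while ready and free > 0:    # greedily start ready tasks
            t = ready.pop()
            p = min(cap, free)       # processor-allocation strategy
            free -= p
            heapq.heappush(events, (now + exec_time(tasks[t], p), t, p))
        now, t, p = heapq.heappop(events)   # advance to next completion
        free += p
        for succ, preds in deps.items():    # tasks discovered online
            if t in preds:
                indeg[succ] -= 1
                if indeg[succ] == 0:
                    ready.append(succ)
    return now

tasks = {"A": 4.0, "B": 2.0, "C": 3.0}
deps = {"C": {"A", "B"}}             # C is only discovered once A and B finish
makespan = list_schedule(tasks, deps, P=4, cap=2)
```

With P = 4 and cap = 2, A and B run concurrently on 2 processors each, and C starts when the slower predecessor (A) completes.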


Aidan Schmelzle

Exploration of Human Design with Genetic Algorithms as Artistic Medium for Color Images

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Arvin Agah, Chair
David Johnson
Jennifer Lohoefener


Abstract

Genetic Algorithms (GAs), a subclass of evolutionary algorithms, apply the concept of natural selection to optimize and improve “something” designated by the user. GAs generate a population of chromosomes represented as value strings, score each chromosome with a “fitness function” on a defined set of criteria, and mutate future generations depending on the scores ascribed to each chromosome. In this project, each chromosome is a bitstring representing one color artwork on a canvas. Artworks are scored on a variety of design fundamentals and user preference. The artworks are then evolved through thousands of generations, and the final piece is computationally drawn for analysis. While the rise of gradient-based optimization has narrowed the use cases of GAs, genetic algorithms still have applications in settings such as hyperparameter tuning, mathematical optimization, reinforcement learning, and black-box scenarios. Neural networks are presently favored for image generation due to their pattern recognition and ability to produce new content; however, where a user seeks to implement their own vision through careful algorithmic refinement, genetic algorithms still find a place in visual computing.
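A bare-bones GA of the kind described above is sketched below with assumed toy parameters: each chromosome is a bitstring "artwork", and the fitness function (rewarding alternating bits) stands in for the design-fundamentals-plus-user-preference scoring used in the project.

```python
import random

def fitness(bits):
    """Toy criterion: count adjacent bit pairs that differ."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def mutate(bits, rate, rng):
    """Flip each bit independently with the given probability."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

def evolve(length=32, pop_size=20, generations=200, rate=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]   # selection: keep the fittest half
        pop = elite + [mutate(rng.choice(elite), rate, rng) for _ in elite]
    return max(pop, key=fitness)

best = evolve()
# fitness(best) climbs toward the maximum of length - 1 over the generations
```

A full implementation would add crossover and render the winning bitstring to a canvas; this loop only shows the generate-score-mutate cycle.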


Zara Safaeipour

Task-Aware Communication Computation Co-Design for Wireless Edge AI Systems

When & Where:


Nichols Hall, Room 246

Committee Members:

Morteza Hashemi, Chair
Van Ly Nguyen
Dongjie Wang


Abstract

Wireless edge systems typically need to complete timely computation and inference tasks under strict power, bandwidth, latency, and processing constraints. As AI models and datasets grow in size and complexity, the traditional model of sending all data to a remote cloud or running full inference on the edge device becomes impractical. This creates a need for communication-computation co-design to enable efficient AI task processing at the wireless edge. To address this problem, we investigate task-aware communication-computation optimization in two specific problem settings.

First, we explore semantic communication that transmits only the information essential for the receiver’s computation tasks. We propose a semantic-aware and goal-oriented communication method for object detection. Our approach is built upon an autoencoder, with the encoder and decoder implemented at the transmitter and receiver, respectively, to extract semantic information for the specific computation goal (e.g., object detection). Numerical results show that transmitting only the necessary semantic features significantly improves overall system efficiency.

Second, we study collaborative inference in wireless edge networks, where energy-constrained devices aim to complete delay-sensitive inference tasks. The inference computation is split between the device and an edge server, thereby achieving collaborative inference. We formulate a utility maximization problem under energy and delay constraints and propose Bayes-Split-Edge, which uses Bayesian optimization to determine the optimal transmission power and neural network split point. The proposed framework introduces a hybrid acquisition function that balances inference task utility, sample efficiency, and constraint violation penalties. We evaluate our approach using the VGG19 model, the ImageNet-Mini dataset, and real-world mMobile wireless channel datasets.
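The utility-maximization step can be illustrated with a brute-force stand-in for the Bayes-Split-Edge search: enumerate candidate (split point, transmit power) pairs and keep the feasible pair with the highest utility. All cost models and constants below are made-up toys; the thesis replaces this enumeration with sample-efficient Bayesian optimization.

```python
SPLITS = range(1, 6)            # candidate split layers of an assumed 6-layer net
POWERS = [0.1, 0.2, 0.4, 0.8]   # candidate transmit powers (W), illustrative

def delay(split, power):
    device = 0.02 * split            # deeper split -> more on-device compute
    upload = 0.5 / (1 + 10 * power)  # higher power -> faster feature upload
    server = 0.01 * (6 - split)      # remaining layers run at the edge server
    return device + upload + server

def energy(split, power):
    return 0.05 * split + 0.3 * power

def utility(split, power):
    return 1.0 / delay(split, power)  # reward faster end-to-end inference

def best_config(max_delay=0.35, max_energy=0.4):
    """Exhaustive search under delay and energy constraints."""
    feasible = [(s, p) for s in SPLITS for p in POWERS
                if delay(s, p) <= max_delay and energy(s, p) <= max_energy]
    return max(feasible, key=lambda sp: utility(*sp), default=None)

config = best_config()
# with these toy models, a shallow split at high power wins: config == (1, 0.8)
```

Bayesian optimization reaches a comparable configuration with far fewer evaluations of the delay and energy models, which matters when each evaluation is a real on-device measurement.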

Overall, this research aims to develop efficient edge AI systems by incorporating the underlying wireless communication limitations and challenges into AI task processing.