Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Arnab Mukherjee

Attention-Based Solutions for Occlusion Challenges in Person Tracking

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Sumaiya Shomaji
Hongyang Sun
Jian Li

Abstract

Person re-identification (Re-ID) and multi-object tracking in unconstrained surveillance environments pose significant challenges within the field of computer vision. These complexities stem mainly from occlusion, variability in appearance, and identity switching across camera views. This research outlines a comprehensive agenda aimed at tackling these issues, employing a series of increasingly advanced deep learning architectures and culminating in an occlusion-aware Vision Transformer framework.

At the heart of this work is the introduction of Deep SORT with Multiple Inputs (Deep SORT-MI), a real-time Re-ID system featuring a dual-metric association strategy that combines Mahalanobis distance for motion-based tracking with cosine similarity for appearance-based re-identification. This method significantly decreases identity switching compared to the baseline SORT algorithm on the MOT-16 benchmark, establishing a robust foundation for metric learning in subsequent research.
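The dual-metric association described above can be sketched in a few lines. This is a minimal illustration assuming a diagonal motion covariance; the weighting `lam` and the chi-square gate value are illustrative defaults, not the parameters used in the thesis.

```python
import math

def mahalanobis_sq(z, mean, var):
    """Squared Mahalanobis distance under a diagonal covariance (per-dim variances)."""
    return sum((zi - mi) ** 2 / vi for zi, mi, vi in zip(z, mean, var))

def cosine_distance(a, b):
    """Appearance distance: 1 - cosine similarity of two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def association_cost(z, mean, var, det_emb, trk_emb, lam=0.5, gate=9.4877):
    """Weighted combination of the motion (Mahalanobis) and appearance (cosine)
    metrics; pairs failing the chi-square motion gate are ruled out entirely."""
    d_m = mahalanobis_sq(z, mean, var)
    if d_m > gate:
        return math.inf
    return lam * d_m + (1.0 - lam) * cosine_distance(det_emb, trk_emb)
```

In a full tracker, this cost would feed a Hungarian-style assignment between detections and predicted track states.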

Expanding on this foundation, a novel pose-estimation framework integrates 2D skeletal keypoint features extracted via OpenPose directly into the association pipeline. By capturing the spatial relationships among body joints along with appearance features, this system enhances robustness against posture variations and partial occlusion. Consequently, it achieves substantial reductions in false positives and identity switches compared to earlier methods, showcasing its practical viability.

Furthermore, a Diverse Detector Integration (DDI) study assessed the influence of detector choice (including YOLO v4, Faster R-CNN, MobileNet SSD v2, and Deep SORT) on the efficacy of metric learning-based tracking. The results show that YOLO v4 consistently delivers the highest tracking accuracy on both the MOT-16 and MOT-17 datasets among the detectors evaluated.

In conclusion, this body of research advances occlusion-aware person Re-ID by illustrating a clear progression from metric learning to pose-guided feature extraction and ultimately to transformer-based global attention modeling. The findings indicate that lightweight, carefully parameterized Vision Transformers can achieve strong generalization for occlusion detection, even under constrained data scenarios. This opens prospects for integrated detection, localization, and re-identification in real-world surveillance systems.


Bretton Scarbrough

Structured Light for Particle Manipulation: Hologram Generation and Optical Binding Simulation

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shima Fardad, Chair
Rongqing Hui
Alessandro Salandrino


Abstract

This thesis addresses two related problems in the optical manipulation of microscopic particles: the efficient generation of holograms for holographic optical tweezers and the simulation of multi-particle optical binding. Holographic optical tweezers use phase-only spatial light modulators to create programmable optical trapping fields, enabling dynamic control over the number, position, and relative strength of optical traps. Because the quality of the trapping field depends strongly on the computed hologram, the first part of this work focuses on improving hologram-generation methods used in these systems.

A new phase-induced compressive sensing algorithm is presented for holographic optical tweezers, along with weighted and unweighted variants. These methods are developed from the Gerchberg-Saxton framework and are designed to improve computational efficiency while preserving favorable trapping characteristics such as uniformity and optical efficiency. By combining compressive sensing with phase induction, the proposed algorithms reduce the computational burden associated with iterative hologram generation while maintaining strong performance across a variety of trapping arrangements. Comparative simulations are used to evaluate these methods against several established hologram-generation algorithms, and the results show that the proposed approaches offer meaningful improvements in convergence behavior and overall performance.
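For context, the Gerchberg-Saxton loop that these variants build on alternates between the SLM plane and the trap plane, keeping the computed phase while re-imposing the known amplitude in each plane. Below is a 1D toy sketch using a naive DFT; the phase-induced compressive-sensing and weighted variants developed in the thesis are not reproduced here.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2)); stands in for an FFT."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def gerchberg_saxton(target_amp, iters=100):
    """Return a phase-only SLM pattern whose far field approximates target_amp."""
    n = len(target_amp)
    field = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]  # arbitrary start phase
    for _ in range(iters):
        far = dft(field)
        # trap plane: keep the computed phase, impose the target amplitude
        far = [t * cmath.exp(1j * cmath.phase(f)) for t, f in zip(target_amp, far)]
        near = idft(far)
        # SLM plane: phase-only constraint (unit amplitude everywhere)
        field = [cmath.exp(1j * cmath.phase(v)) for v in near]
    return [cmath.phase(v) for v in field]
```

Multi-trap targets work the same way, with trap uniformity and efficiency then becoming the quantities the improved algorithms optimize.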

The second part of this thesis examines optical binding, a phenomenon in which multiple particles interact through both the incident optical field and the fields scattered by neighboring particles. To study this process, a numerical simulation is developed that incorporates gradient forces, radiation pressure, and light-mediated particle-particle interactions in both two- and three-dimensional configurations. The simulation is used to investigate how particles evolve under different initial conditions and illumination states, and how collective effects influence the formation of stable or semi-stable arrangements. These results provide insight into the role of scattering-mediated forces in many-particle optical systems and highlight differences between two-dimensional and three-dimensional behavior.
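A minimal flavor of such a simulation, reduced to one dimension and overdamped dynamics: each particle feels a restoring (gradient) force toward the beam axis plus a toy short-range pairwise term standing in for scattering-mediated interactions. All constants are illustrative placeholders, not physical parameters from the thesis.

```python
def step_positions(positions, dt=0.01, k_trap=1.0, k_pair=0.1, drag=1.0):
    """One overdamped Euler step for particles on a line: a restoring
    (gradient) force toward the beam axis at 0, plus a toy short-range
    pairwise repulsion standing in for light-mediated interaction."""
    new = []
    for i, x in enumerate(positions):
        force = -k_trap * x
        for j, y in enumerate(positions):
            if i != j:
                d = x - y
                force += k_pair * d / (abs(d) ** 3 + 1e-9)
        new.append(x + dt * force / drag)
    return new
```

Iterating this step from different initial conditions is what reveals stable or semi-stable arrangements; the actual simulations additionally include radiation pressure and full 2D/3D field scattering.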

Although hologram generation and optical binding are treated as separate problems in this work, they are connected by a common goal: understanding how structured optical fields can be designed and applied to control microscopic matter. Together, the results of this thesis contribute to the broader study of computational beam shaping and many-body optical interactions, with relevance to advanced optical trapping, particle organization, and dynamically reconfigurable light-driven systems.


Sai Rithvik Gundla

Beyond Regression Accuracy: Evaluating Runtime Prediction for Scheduling Input Sensitive Workloads

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Arvin Agah
David Johnson


Abstract

Runtime estimation plays a structural role in reservation-based scheduling for High Performance Computing (HPC) systems, where predicted walltimes directly influence reservation timing, backfilling feasibility, and overall queue dynamics. This raises a fundamental question of whether improved runtime prediction accuracy necessarily translates into improved scheduling performance. In this work, we conduct an empirical study of runtime estimation under EASY Backfilling using an application-driven workload consisting of MRI-based brain segmentation jobs. Despite identical configurations and uniform metadata, runtimes exhibit substantial variability driven by intrinsic input structure. To capture this variability, we develop a feature-driven machine learning (ML) framework that extracts region-wise features from MRI volumes to predict job runtimes without relying on historical execution traces or scheduling metadata. We integrate these ML-derived predictions into an EASY Backfilling scheduler implemented in the Batsim simulation framework. Our results show that regression accuracy alone does not determine scheduling performance. Instead, scheduling performance depends strongly on estimation bias and its effect on reservation timing and runtime exceedances. In particular, mild multiplicative calibration of ML-based runtime estimates stabilizes scheduler behavior and yields consistently competitive performance across workload and system configurations. Comparable performance can also be observed with certain levels of uniform overestimation; however, calibrated ML predictions provide a systematic mechanism to control estimation bias without relying on arbitrary static inflation. In contrast, underestimation consistently leads to severe performance degradation and cascading job terminations. 
These findings highlight runtime estimation as a structural control input in backfilling-based HPC scheduling and demonstrate the importance of evaluating prediction models jointly with scheduling dynamics rather than through regression metrics alone.
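The role of multiplicative calibration can be illustrated in a few lines; the inflation factor `alpha` and the kill rule are a simplified sketch of how reservation-based schedulers treat walltime requests, not the exact Batsim configuration used in this work.

```python
def walltime_request(predicted, alpha=1.2, system_max=86400):
    """Multiplicatively calibrated walltime request from an ML runtime
    prediction, capped at the system's maximum allowed walltime (seconds)."""
    return min(alpha * predicted, system_max)

def outcome(actual, requested):
    """A job that exceeds its requested walltime is killed by the scheduler,
    which is why underestimation is far more damaging than overestimation."""
    return "completed" if actual <= requested else "killed"
```

With `alpha > 1`, mild underprediction is absorbed instead of triggering a termination, while the request stays tight enough to keep backfilling effective.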


Ye Wang

Toward Practical and Stealthy Sensor Exploitation: Physical, Contextual, and Control-Plane Attack Paradigms

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Rongqing Hui
Bo Luo
Haiyang Chao

Abstract

Modern intelligent systems increasingly rely on continuous sensor data streams for perception, decision-making, and control, making sensors a critical yet underexplored attack surface. While prior research has demonstrated the feasibility of sensor-based attacks, recent advances in mobile operating systems and machine learning-based defenses have significantly reduced their practicality, rendering them more detectable, resource-intensive, and constrained by evolving permission and context-aware security models.

This dissertation revisits sensor exploitation under these modern constraints and develops a unified, cross-layer perspective that improves both the practicality and stealth of sensor-enabled attacks. We identify three fundamental challenges: (i) the difficulty of reliably manipulating physical sensor signals in noisy, real-world environments; (ii) the effectiveness of context-aware defenses in detecting anomalous sensor behavior on mobile devices; and (iii) the lack of lightweight coordination for practical sensor-based side- and covert-channels.

To address the first challenge, we propose a physical-domain attack framework that integrates signal modeling, simulation-guided attack synthesis, and real-time adaptive targeting, enabling robust adversarial perturbations with high attack success rates even under environmental uncertainty. As a case study, we demonstrate an infrared laser-based adversarial example attack against face recognition systems, which achieves consistently high success rates across diverse conditions with practical execution overhead.

To improve attack stealth against context-aware defenses, we introduce an auto-contextualization mechanism that synchronizes malicious sensor actuation with legitimate application activity. By aligning injected signals with both statistical patterns and semantic context of benign behavior, the approach renders attacks indistinguishable from normal system operations and benign sensor usage. We validate this design using three Android logic bombs, showing that auto-contextualized triggers can evade both rule-based and learning-based detection mechanisms.

Finally, we extend sensor exploitation beyond the traditional attack-channel plane by introducing a lightweight control-plane protocol embedded within sensor data streams. This protocol encodes control signals directly into sensor observations and leverages simple signal-processing primitives to coordinate multi-stage attacks without relying on privileged APIs or explicit inter-process communication. The resulting design enables low-overhead, stealthy coordination of cross-device side- and covert-channels.

Together, these contributions establish a new paradigm for sensor exploitation that spans physical, contextual, and control-plane dimensions. By bridging these layers, this dissertation demonstrates that sensor-based attacks remain not only feasible but also practical and stealthy in modern computer systems.


Jamison Bond

Mutual Coupling Array Calibration Utilizing Decomposition of Modeled Scattering Matrix

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Carl Leuschen


Abstract

***Currently being reviewed, unavailable***


Kevin Likcani

Use of Machine Learning to Predict Drug Court Success

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Substance use remains a major public health issue in the United States that significantly impacts individuals, families, and society. Many individuals who suffer from substance use disorder (SUD) face incarceration due to drug-related offenses. Drug courts have emerged as an alternative to imprisonment and offer the opportunity for individuals to participate in a drug rehabilitation program instead. Drug courts mainly focus on those with non-violent drug-related offenses. One of the challenges of decision making in drug courts is assessing the likelihood of participants graduating from the drug court and avoiding recidivism after graduation. This study investigates the use of machine learning models to predict success in drug courts using data from a substance use drug court in Missouri. Success is measured in terms of graduation from the program, and the model includes a wide range of potential predictors including demographic characteristics, family and social factors, substance use history, legal involvement, physical and mental health history, employment history as well as drug court participation data. The results will be beneficial to drug court teams and presiding judges in predicting client success, evaluating risk factors during treatment for participants, informing person-centered treatment planning, and the development of after-care plans for high-risk participants to reduce the likelihood of recidivism. 


Peter Tso

Implementation of Free-Space Optical Networks based on Resonant Semiconductor Saturable Absorber and Phase Light Modulator

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad


Abstract

Optical Neural Networks (ONNs) have gained traction as an alternative to the conventional computing architectures used in modern CPUs and GPUs, largely because light enables massive parallelism, ultrafast inference, and minimal power consumption. 

As with conventional deep neural networks (DNNs), free-space ONNs require two main types of layer: (1) a nonlinear activation function that separates adjacent linear layers, and (2) weighting layers that apply a linear transformation to an input.

First, a Resonant Semiconductor Saturable Absorption Mirror (RSAM) was investigated as a viable nonlinear activation function. Several mechanisms have been used to create nonlinear activation functions, such as cold atoms, vapor absorption cells, and polaritons, but these implementations are bulky and must operate in tightly controlled environments, whereas the RSAM is a passive device. Compared to typical SESAMs, the resonant structure of the RSAM also reduces the saturation fluence, allowing low-power laser sources to be used. A fiber-based optical testbed demonstrated a notable improvement of 8.1% in classification accuracy compared to a linear-only network trained on the MNIST dataset.
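As a toy model of why a saturable absorber acts as a nonlinear activation: transmission starts at a linear floor and rises as the input intensity saturates the absorber, making the input-output curve superlinear around the saturation intensity. The constants below are illustrative, not measured RSAM parameters.

```python
def saturable_absorber(intensity, i_sat=1.0, t_floor=0.3, t_gain=0.6):
    """Toy saturable-absorption transfer: transmission grows from t_floor
    toward t_floor + t_gain as the input intensity saturates the absorber;
    output is transmission times input intensity."""
    transmission = t_floor + t_gain * intensity / (intensity + i_sat)
    return transmission * intensity
```

It is this departure from straight-line behavior that lets stacked optical layers represent functions a purely linear network cannot.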

Second, Micro-electromechanical-system-based phase light modulators (PLMs) were evaluated as an alternative to LC-SLMs for in-situ reinforcement learning. PLMs can operate at kilohertz-scale frame rates at a substantially lower cost than LC-SLMs, but have lower phase resolution and non-uniform quantization, which impacts fidelity. Despite these disadvantages, the high speed of PLMs significantly decreases optimization time, which not only reduces training time but also enables larger datasets and more complex models with more learnable parameters. A single-layer optical network was implemented using policy-based learning with a discrete action space to minimize the impact of quantization. The testbed achieves 90.1%, 79.7%, and 76.9% training, validation, and test accuracy, respectively, on 3,000 images from the MNIST dataset. Additionally, we achieved 79.9%, 72.1%, and 71.7% accuracy on 3,000 images from the Fashion MNIST dataset. At 14 minutes per epoch during training, it is at least an order of magnitude faster to train than LC-SLM-based models.


Joseph Vinduska

Fault-Frequency Agnostic Checkpointing Strategies

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Arvin Agah
Drew Davidson


Abstract

Checkpointing strategies in high-performance computing traditionally employ the Young-Daly formula to determine the (first-order) optimal duration between checkpoints, which assumes a known mean time between faults (MTBF). In practice, however, the MTBF may not be known accurately or may vary, causing Young-Daly checkpointing to perform sub-optimally. In 2021, Sigdel et al. introduced the CHORE (CHeckpointing Overhead and Rework Equated) checkpointing strategy, which is MTBF-agnostic yet demonstrates a bounded increase in overhead compared to the optimal strategy. This thesis analyzes and extends the CHORE framework in several ways. First, it verifies Sigdel et al.'s claims about the relative overhead of the CHORE strategy through both event-driven simulations and expected runtimes derived from the underlying probabilistic model. Second, it extends the CHORE strategy to silent errors, which must be deliberately checked for to be detected. In this scenario, the overhead compared to optimal checkpointing is once more analyzed through simulations and expected runtimes. Third, a heuristic is proposed to improve the performance of the CHORE algorithm under typical runtime scenarios by interpreting CHORE as an additive-increase multiplicative-decrease model and tuning its parameters.
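For reference, the Young-Daly interval and the additive-increase multiplicative-decrease reading of an MTBF-agnostic strategy can be sketched as follows. The AIMD constants are illustrative placeholders, not the tuned values from the thesis.

```python
import math

def young_daly_interval(mtbf, checkpoint_cost):
    """First-order optimal compute time between checkpoints: sqrt(2 * C * MTBF).
    Requires knowing the MTBF, which CHORE deliberately avoids."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def aimd_next_interval(interval, fault_occurred, add=60.0, mult=0.5,
                       lo=1.0, hi=86400.0):
    """MTBF-agnostic update in the AIMD spirit: grow the checkpoint interval
    additively while fault-free, shrink it multiplicatively on a fault."""
    nxt = interval * mult if fault_occurred else interval + add
    return min(max(nxt, lo), hi)
```

Tuning `add` and `mult` is precisely the kind of parameter adjustment the proposed heuristic performs.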


Hao Xuan

Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.

These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.

First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
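The seeding layer that VAT generalizes can be illustrated with a plain single-view k-mer index; the asymmetric multi-view index, LCP-based seed-length inference, and seed chaining themselves are beyond this sketch.

```python
def build_kmer_index(ref, k):
    """Map every k-mer of the reference string to its start positions."""
    idx = {}
    for i in range(len(ref) - k + 1):
        idx.setdefault(ref[i:i + k], []).append(i)
    return idx

def seed_matches(read, idx, k):
    """Exact-match seeds between a read and the indexed reference,
    as (read_offset, ref_offset) pairs ready for chaining."""
    hits = []
    for j in range(len(read) - k + 1):
        for pos in idx.get(read[j:j + k], []):
            hits.append((j, pos))
    return hits
```

In a full aligner, these seed pairs would then be chained (collinearly or otherwise) and extended into alignments.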

Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.


Devin Setiawan

Concept-Driven Interpretability in Graph Neural Networks: Applications in Neuroscientific Connectomics and Clinical Motor Analysis

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Sankha Guria
Han Wang


Abstract

Graph Neural Networks (GNNs) achieve state-of-the-art performance in modeling complex biological and behavioral systems, yet their "black-box" nature limits their utility for scientific discovery and clinical translation. Standard post-hoc explainability methods typically attribute importance to low-level features, such as individual nodes or edges, which often fail to map onto the high-level, domain-specific concepts utilized by experts. To address this gap, this thesis explores diverse methodological strategies for achieving Concept-Level Interpretability in GNNs, demonstrating how deep learning models can be structurally and analytically aligned with expert domain knowledge. This theme is explored through two distinct methodological paradigms applied to critical challenges in neuroscience and clinical psychology. First, we introduce an interpretable-by-design approach for modeling brain structure-function coupling. By employing an ensemble of GNNs conceptually biased via input graph filtering, the model enforces verifiably disentangled node embeddings. This allows for the quantitative testing of specific structural hypotheses, revealing that a minority of strong anatomical connections disproportionately drives functional connectivity predictions. Second, we present a post-hoc conceptual alignment paradigm for quantifying atypical motor signatures in Autism Spectrum Disorder (ASD). Utilizing a Spatio-Temporal Graph Autoencoder (STGCN-AE) trained on normative skeletal data, we establish an unsupervised anomaly detection system. To provide clinical interpretability, the model's reconstruction error is systematically aligned with a library of human-interpretable kinematic features, such as postural sway and limb jerk. Explanatory meta-modeling via XGBoost and SHAP analysis further translates this abstract loss into a multidimensional clinical signature. 
Together, these applications demonstrate that integrating concept-level interpretability through either architectural design or systematic post-hoc alignment enables GNNs to serve as robust tools for hypothesis testing and clinical assessment.
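The anomaly-detection step in the second study rests on a simple principle: a model trained only on normative data reconstructs typical inputs well and atypical ones poorly. A minimal sketch follows; the STGCN-AE is a graph model over skeletal sequences, while here `reconstruct` is any stand-in reconstruction function.

```python
def anomaly_score(x, reconstruct):
    """Mean squared reconstruction error; high values flag atypical inputs."""
    x_hat = reconstruct(x)
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def flag_atypical(x, reconstruct, threshold):
    """Inputs whose reconstruction error exceeds a normative threshold are
    flagged and then explained via interpretable (e.g., kinematic) features."""
    return anomaly_score(x, reconstruct) > threshold
```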


Mahmudul Hasan

Trust Assurance of Commercial Off-The-Shelf (COTS) Hardware Through Verification and Runtime Resilience

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Tamzidul Hoque, Chair
Esam El-Araby
Prasad Kulkarni
Hongyang Sun
Huijeong Kim

Abstract

The adoption of Commercial off-the-shelf (COTS) components has become a dominant paradigm in modern system design due to their reduced development cost, faster time-to-market, and widespread availability. However, reliance on globally distributed and untrusted supply chains introduces significant security risks, particularly the possibility of malicious hardware modifications, such as Trojans embedded during design or fabrication. In such settings, traditional methods that depend on golden models, full design visibility, or trusted fabrication are no longer sufficient, creating the need for new security assurance approaches under a zero-trust model. This proposed research addresses security challenges in COTS microprocessors through two complementary solutions: runtime resilience and pre-deployment trust verification. First, a multi-variant-execution-based framework is developed that leverages functionally equivalent program variants to induce diverse microarchitectural execution patterns. By comparing intermediate outputs across variants, the framework enables runtime detection and tolerance of Trojan-induced payload effects without requiring hardware redundancy or architectural modifications. To enhance the effectiveness of variant generation, a reinforcement-learning-assisted framework is introduced, in which the reward function is defined by security objectives rather than traditional performance optimization, enabling the generation of variants that are more robust against repeated Trojan activation. Second, to enable black-box trust verification prior to deployment, this work presents a framework that can efficiently test for the presence of hardware Trojans by identifying microarchitectural rare events and transferring activation knowledge from existing processor designs to trigger highly susceptible internal nodes.
By leveraging ISA-level knowledge, open-source RTL references, and LLM-guided test generation, the framework achieves high trigger coverage without requiring access to proprietary designs or golden references. Building on these two scenarios, a future research direction is outlined for evolving trust in COTS hardware through continuous runtime observation, where multi-variant execution is extended with lightweight monitoring mechanisms that capture key microarchitectural events and execution traces. These observations are accumulated as hardware trust counters, enabling the system to progressively establish confidence in the underlying hardware by verifying consistent behavior across diverse execution patterns over time. Together, these directions establish a foundation for analyzing and mitigating security risks across zero-trust COTS supply chains.
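The multi-variant idea can be conveyed with a software analogy: two functionally equivalent variants with different execution patterns should agree on intermediate outputs, and any divergence signals a payload effect. This is a conceptual illustration only; the actual framework compares outputs of variants whose microarchitectural behavior differs on real hardware.

```python
def variant_a(xs):
    """Sum of squares, computed forward."""
    return sum(x * x for x in xs)

def variant_b(xs):
    """Functionally equivalent variant with a different execution pattern."""
    total = 0
    for x in reversed(xs):
        total += x ** 2
    return total

def mve_check(xs):
    """Multi-variant execution check: divergent outputs indicate tampering
    with one variant's execution, so the result is withheld."""
    a, b = variant_a(xs), variant_b(xs)
    return ("ok", a) if a == b else ("divergence", None)
```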


Moh Absar Rahman

Permissions vs Promises: Assessing Over-privileged Android Apps via Local LLM-based Description Validation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Sankha Guria
David Johnson


Abstract

Android is the most widely adopted mobile operating system, supporting billions of devices and driven by a robust app ecosystem. Its permission-based security model aims to enforce the Principle of Least Privilege (PoLP), restricting apps to only the permissions they need. However, many apps still request excessive permissions, increasing the risk of data leakage and malicious exploitation. Previous research on overprivileged permissions has become ineffective due to outdated methods and increasing technical complexity. The introduction of runtime permissions and scoped storage has made some of the traditional analysis techniques obsolete. Additionally, developers often are not transparent in explaining the usage of app permissions on the Play Store, misleading users into unknowingly and unwillingly granting unnecessary permissions. This combination of overprivilege and poor transparency poses significant security threats to Android users. Recently, the rise of local large language models (LLMs) has shown promise in various security fields. The main focus of this study is to analyze whether an app is overprivileged, based on the app description provided on the Play Store, using a local LLM. Finally, we conduct a manual evaluation to validate the LLM's findings, comparing its results against human-verified responses.


Mohsen Nayebi Kerdabadi

Representation Augmentation for Electronic Health Records via Knowledge Graphs, Large Language Models, and Contrastive Learning

When & Where:


Learned Hall, Room 3150

Committee Members:

Zijun Yao, Chair
Sumaiya Shomaji
Hongyang Sun
Dongjie Wang
Shawn Keshmiri

Abstract

Electronic Health Records (EHRs) provide rich longitudinal patient information, but their high dimensionality, sparsity, heterogeneity, and temporal complexity make robust representation learning difficult. This dissertation studies how to improve patient and medical concept representation learning in EHRs and consequently enhance healthcare predictive tasks by integrating domain knowledge, knowledge graphs, large language models (LLMs), and contrastive learning. First, it introduces an ontology-aware temporal contrastive framework for survival analysis that learns discriminative patient representations from censored and observed trajectories by modeling temporal distinctiveness in longitudinal EHR data. Second, it proposes a multi-ontology representation learning framework that jointly propagates knowledge within and across diagnosis, medication, and procedure ontologies, enabling richer medical concept embeddings, especially under limited data and for rare conditions. Third, it develops an LLM-enriched, text-attributed medical knowledge graph framework that combines EHR-derived statistical evidence with type-constrained LLM reasoning to infer semantic relations, generate contextual node and edge descriptions, and co-learn concept embeddings through joint language-model and graph-neural-network training. Together, these studies advance a unified view of EHR representation learning in which structured medical knowledge, textual semantics, and temporal patient trajectories are jointly leveraged to build more accurate, interpretable, and robust healthcare prediction models.
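The contrastive component shared by these frameworks reduces, in its generic form, to an InfoNCE-style objective: for each anchor, pull one positive pair together and push negatives apart. The sketch below is that generic loss for a single anchor, not the ontology-aware temporal variant developed in the dissertation; similarity scores and the temperature are illustrative.

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.1):
    """Generic contrastive (InfoNCE) loss for one anchor: negative
    log-softmax of the positive similarity against all candidates,
    computed with a max-shift for numerical stability."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - sim_pos / temperature
```

Lower loss means the anchor's representation is closer to its positive than to the negatives, which is what drives discriminative patient embeddings.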


Brinley Hull

Mist – An Interactive Virtual Pet for Autism Spectrum Disorder Stress Onset Detection & Mitigation

When & Where:


Nichols Hall, Room 317 (Moore Conference Room)

Committee Members:

Arvin Agah, Chair
Perry Alexander
David Johnson
Sumaiya Shomaji

Abstract

Individuals with Autism Spectrum Disorder (ASD) frequently experience elevated stress and are at higher risk for mood disorders such as anxiety and depression. Sensory over-responsivity, social challenges, and difficulties with emotional recognition and regulation contribute to such heightened stress. This study presents a proof-of-concept system that detects and mitigates stress through interactions with a virtual pet. Designed for young adults with high-functioning autism, and potentially useful for people beyond that group, the system monitors simulated heart rate, skin resistance, body temperature, and environmental sound and light levels. Upon detection of stress or potential triggers, the system alerts the user and offers stress-reduction activities via a virtual pet, including guided deep-breathing exercises and interactive engagement with the virtual companion. Through combining real-time stress detection with interactive interventions on a single platform, the system aims to help autistic individuals recognize and manage stress more effectively.


Harun Khan

Identifying Weight Surgery Attacks in Siamese Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Alex Bardas
Bo Luo


Abstract

Facial recognition systems increasingly rely on machine learning services, yet they remain vulnerable to cyber-attacks. While traditional adversarial attacks target input data, an underexplored threat comes from weight manipulation attacks, which directly modify model parameters and can compromise deployed systems in cyber-physical settings. This paper investigates defenses against Weight Surgery, a weight manipulation attack that modifies the final linear layer of neural networks to merge or shatter classes without requiring access to training data. We propose a computationally lightweight defense capable of detecting sample pairs affected by Weight Surgery at low false-positive rates. The defense is designed to operate in realistic deployment scenarios, selecting its sensitivity parameter 𝛾 using only benign samples to meet a target false-positive rate. Evaluation on 1000 independently attacked models demonstrates that our method achieves over 95% recall at a target false-positive rate of 0.001. Performance remains strong even under stricter conditions: at FPR = 0.0001, recall is 92.5%, and at 𝛾=0.98, FPR drops to 0.00001 while maintaining 88.9% recall. These results highlight the robustness and practicality of the defense, offering an effective safeguard for neural networks against model-targeted attacks.
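The benign-only threshold selection can be sketched as a quantile rule: given only benign-pair scores, choose the sensitivity parameter so that approximately the target fraction of benign pairs crosses it. The score direction here (higher = more suspicious) is an assumption of this sketch, and `choose_gamma`/`flag_pair` are illustrative names, not the paper's implementation.

```python
def choose_gamma(benign_scores, target_fpr):
    """Pick the threshold gamma from benign-pair scores only, so that
    roughly target_fpr of benign pairs score at or above it."""
    scores = sorted(benign_scores)
    k = min(len(scores) - 1, round((1.0 - target_fpr) * len(scores)))
    return scores[k]

def flag_pair(score, gamma):
    """Flag a sample pair as potentially affected by weight manipulation."""
    return score >= gamma
```

Because gamma is set from benign data alone, the defense needs no examples of attacked models to meet its false-positive budget.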


Tanvir Hossain

Security Solutions for Zero-Trust Microelectronics Supply Chains

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Tamzidul Hoque, Chair
Drew Davidson
Prasad Kulkarni
Heechul Yun
Huijeong Kim

Abstract

Microelectronics supply chains increasingly rely on globally distributed design, fabrication, integration, and deployment processes, making traditional assumptions of trusted hardware inadequate. Security in this setting can be understood through a zero-trust microelectronics supply-chain model, in which neither manufacturing partners nor procured hardware platforms are assumed trustworthy by default. Two complementary threat scenarios are considered in the proposed research. In the first scenario, custom Integrated Circuits (ICs) fabricated through potentially untrusted foundries are examined, where design-for-security protections intended to prevent piracy, overproduction, and intellectual-property theft can themselves become vulnerable to attacks. In this scenario, hardware Trojan-assisted meta-attacks are used to show that such protections can be systematically identified and subverted by fabrication-stage adversaries. In the second scenario, commercial off-the-shelf ICs are considered from the perspective of end users and procurers, where internal design visibility is unavailable and hardware trustworthiness cannot be directly verified. For this setting, runtime-oriented protection mechanisms are developed to safeguard sensitive computation against malicious hardware behavior and side-channel leakage. Building on these two scenarios, a future research direction is outlined for side-channel-driven vulnerability discovery in off-the-shelf devices, motivated by the need to evaluate and test such platforms prior to deployment when no design information is available. The proposed direction explores gray-box security evaluation using power and electromagnetic side-channel analysis to identify anomalous behaviors and potential vulnerabilities in opaque hardware platforms. Together, these directions establish a foundation for analyzing and mitigating security risks across zero-trust microelectronics supply chains.


Past Defense Notices

Dates

Krishna Chaitanya Reddy Chitta

A Dynamic Resource Management Framework and Reconfiguration Strategies for Cloud-native Bulk Synchronous Parallel Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Sumaiya Shomaji


Abstract

Many High Performance Computing (HPC) applications following the Bulk Synchronous Parallel (BSP) model are increasingly deployed in cloud-native, multi-tenant container environments such as Kubernetes. Unlike dedicated HPC clusters, these shared platforms introduce resource virtualization and variability, making BSP applications more susceptible to performance fluctuations. Workload imbalance across supersteps can trigger the straggler effect, where faster tasks wait at synchronization barriers for slower ones, increasing overall execution time. Existing BSP resource management approaches typically assume static workloads and reuse a single configuration throughout execution. However, real-world workloads vary due to dynamic data and system conditions, making static configurations suboptimal. This limitation underscores the need for adaptive resource management strategies that respond to workload changes while considering reconfiguration costs.

To address these limitations, we evaluate a dynamic, data-driven resource management framework tailored for cloud-native BSP applications. This approach integrates workload profiling, time-series forecasting, and predictive performance modeling to estimate task execution behavior under varying workload and resource conditions. The framework explicitly models the trade-off between performance gains achieved through reconfiguration and the associated checkpointing and migration costs incurred during container reallocation. Multiple reconfiguration strategies are evaluated, spanning simple window-based heuristics, dynamic programming methods, and reinforcement learning approaches. Through extensive experimental evaluation, this framework demonstrates up to 24.5% improvement in total execution time compared to a baseline static configuration. Furthermore, we systematically analyze the performance of each strategy under varying workload characteristics, simulation lengths, and checkpoint penalties, and provide guidance on selecting the most appropriate strategy for a given workload environment.
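The reconfiguration trade-off the framework models can be illustrated with a minimal window-based heuristic; the function name and cost model here are illustrative sketches, not the thesis's implementation:

```python
def should_reconfigure(remaining_supersteps, t_current, t_candidate, reconfig_cost):
    # Switch configurations only when the predicted saving over the
    # remaining supersteps exceeds the one-time checkpoint + migration
    # cost incurred during container reallocation.
    predicted_gain = remaining_supersteps * (t_current - t_candidate)
    return predicted_gain > reconfig_cost

# 40 supersteps left at 2.0 s vs 1.6 s each: a 16 s gain beats a 10 s cost.
go = should_reconfigure(40, 2.0, 1.6, 10.0)
# Near the end of the run, the same kind of switch no longer pays off.
stay = should_reconfigure(10, 2.0, 1.9, 5.0)
```

The dynamic programming and reinforcement learning strategies evaluated in the thesis generalize this one-step comparison by planning over sequences of such decisions.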


Smriti Pranjal

NoBIAS: Non-coding RNA Base Interaction Annotation using Visual Snapshot

When & Where:


Slawson Hall, Room 198

Committee Members:

Cuncong Zhong, Chair
Sumaiya Shomaji
Hongyang Sun
Zijun Yao
Xiaoqing Wu

Abstract

Non-coding RNAs fold into complex 3D structures that govern their biological functions, with RNA structural motifs (RSMs) serving as conserved building blocks of this architecture.
These motifs are defined by characteristic base-interaction patterns, making accurate identification and classification of RNA interactions essential for understanding RNA structure and function.

Despite their biological importance, accurately identifying and classifying these interactions remains challenging because the available data are highly variable in quality and scarce in quantity. This compromises annotation reliability, hinders the construction of trustworthy ground truth for systematic assessment, and restricts the supply of reliable training examples needed for supervised learning.

To address this, we introduce NoBIAS, the first resolution-aware, integrated machine learning-based suite for annotating base interactions from 3D RNA structures, inspired by human pattern recognition, augmented with structure prediction for data enrichment, and evaluated on a carefully curated, stratified benchmark.

NoBIAS is a hierarchical framework for RNA base-interaction annotation that integrates interaction-specific inductive biases with multimodal representation learning. By combining a convolution-augmented, rule-guided module for stacking interactions with complementary graph and image encoders for pairing interactions, NoBIAS captures both structural priors and local visual cues of RNA base doublets. A performance-calibrated logit fusion scheme then adaptively integrates modality-specific predictions based on local-structural resolution, enabling robust inference across heterogeneous 3D RNA structures.

Evaluation across multiple benchmark tiers, spanning consensus, homolog-supported, and manually verified cases, shows that NoBIAS consistently outperforms existing methods under increasingly challenging conditions. Together, the NoBIAS design and its evaluation framework provide a systematic foundation for robust RNA base-interaction annotation, enabling more reliable analysis of RNA structure under realistic uncertainty.


Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby solving the longstanding problem of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold, over 10^5× faster query processing, and significantly reduced storage requirements compared to large-scale tracking.
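The probabilistic membership structure underlying DHBF and PHBF is the classic Bloom filter. A textbook sketch (deliberately not the hierarchical, dynamically updatable variants proposed in the thesis) shows the insert/query mechanics on which those designs build:

```python
import hashlib

class BloomFilter:
    """Textbook Bloom filter: no false negatives, rare false positives."""

    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def insert(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def query(self, item):
        # True means "probably enrolled"; False means "definitely not".
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.insert("template-001")
enrolled = bf.query("template-001")    # always True once inserted
stranger = bf.query("template-999")    # False barring a rare collision
```

A plain Bloom filter supports neither deletion nor noise tolerance, which is precisely the gap the hierarchical structures above are designed to close.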


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques to extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency modulation continuous wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by interaction between the forward propagated optical signal and the acoustic standing waves in the radial direction resonating between the center of the core and the cladding circumference of the fiber. The response of electrostriction is dependent on fiber parameters, especially the mode field radius. We demonstrated a novel technique of identifying fiber types through the measurement of intensity modulation induced electrostriction response. As the spectral envelope of electrostriction induced propagation loss is anti-symmetrical, the signal to noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.        

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous multi-target range and velocity detection is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
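The de-chirping described above reduces ranging to measuring a beat frequency. The standard textbook FMCW relation (parameter values below are illustrative, not the experimental configuration) is:

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_bw_hz, chirp_time_s):
    # De-chirping maps the round-trip delay tau = 2R/c to a beat
    # frequency f_beat = (B / T) * tau, so R = c * f_beat * T / (2 * B).
    return C * f_beat_hz * chirp_time_s / (2.0 * chirp_bw_hz)

# With a 1 GHz chirp swept over 100 us, a 100 kHz beat tone
# corresponds to a target at 1.5 m.
r = fmcw_range(100e3, 1e9, 100e-6)
```

Because the beat frequency is proportional to T/B times range, a receiver only needs bandwidth proportional to the maximum beat frequency, far below the chirp bandwidth itself, which is the simplification the self-homodyne architecture exploits.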

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Fairuz Shadmani Shishir

Toward Trustworthy Biomedical AI: Efficient Protein Language Models and Privacy-Aware Clinical Representations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Cuncong Zhong
Bishnu Sarker
Michael Hageman

Abstract

Accurate biological sequence annotation and privacy-aware clinical modeling are central challenges in modern computational biology and biomedical AI. This dissertation presents scalable and interpretable deep learning frameworks spanning protein family classification, metal-ion binding prediction, and privacy-preserving electrocardiogram (ECG) representation learning. First, we introduce GPCR-SLM, a lightweight transformer-based framework for high-resolution classification of G-protein coupled receptors (GPCRs), one of the largest and most pharmacologically important protein families, targeted by approximately 35% of FDA-approved drugs. Unlike traditional homology-based tools such as BLAST and HMMER, which struggle to distinguish closely related families with low sequence similarity, our knowledge-distilled small language model achieves 99% accuracy across 86 GPCR families. The framework significantly outperforms BLAST (86.4%) and HMMER (91%) while delivering a 33.5× computational speedup compared to large protein language models, enabling scalable functional annotation as protein databases continue to expand. 

Second, we present an end-to-end deep learning pipeline for protein–metal-ion binding prediction. Binding site annotation is traditionally labor-intensive and limited by handcrafted features or predefined residue sets. We systematically evaluate five state-of-the-art protein language models and incorporate positional encoding to capture long-range residue dependencies. Our approach achieves a Matthews Correlation Coefficient (MCC) of 0.89 with precision, recall, and F1 scores exceeding 95% for six major metal ions under 10-fold cross-validation, demonstrating robust predictive performance and improved biological interpretability. Finally, we address fairness and privacy in clinical AI through a variational autoencoder (VAE) framework for ECG representation learning. Because ECGs inherently encode sensitive soft biometrics such as sex, age, and race, we design a dual-discriminator architecture that suppresses demographic information while preserving clinically relevant signals. The reconstructed ECGs substantially reduce demographic identifiability while maintaining strong predictive performance for reduced left ventricular ejection fraction, left ventricular hypertrophy, and 5-year mortality. 
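For reference, the MCC reported above is the standard binary-classification measure computed from a confusion matrix; the numbers in this sketch are made up for illustration, not the paper's results:

```python
import math

def mcc(tp, tn, fp, fn):
    # Standard Matthews Correlation Coefficient over a binary
    # confusion matrix; returns 0 when any marginal count is zero.
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical confusion matrix for a binding-site predictor.
score = mcc(tp=95, tn=90, fp=5, fn=10)
```

Unlike accuracy, MCC stays informative on the heavily imbalanced residue-level data typical of binding-site annotation, which is why it is a common headline metric for this task.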

Collectively, this work advances parameter-efficient, scalable, and privacy-conscious deep learning methodologies for both molecular and clinical domains, bridging computational protein science and trustworthy biomedical AI. 


Shailesh Pandey

Vision-Based Motor Assessment in Autism: Deep Learning Methods for Detection, Classification, and Tracking

When & Where:


Zoom defense, please email jgrisafe@ku.edu for defense information

Committee Members:

Sumaiya Shomaji, Chair
Shima Fardad
Zijun Yao
Cuncong Zhong
Lisa Dieker

Abstract

Motor difficulties show up in as many as 90% of people with autism, but surprisingly few, somewhere between 13% and 32%, ever get motor-focused help. A big part of the problem is that the tools we have for measuring motor skills either rely on a clinician's subjective judgment or require expensive lab equipment that most families will never have access to. This dissertation tries to close that gap with three projects, all built around the idea that a regular webcam and some well-designed deep learning models can do much of what costly motion-capture labs do today.

The first project asks a straightforward question: can a computer tell the difference between how someone with autism moves and how a typically developing person moves, just by watching a short video? The answer, it turns out, is yes. We built an ensemble of three neural networks, each one tuned to notice something different. One focuses on how joints coordinate with each other spatially, another zeroes in on the timing of movements, and the third learns which body-part relationships matter most for a given clip. We tested the system on 582 videos from 118 people (69 with ASD and 49 without) performing simple everyday actions like stirring or hammering. The ensemble correctly classifies 95.65% of cases. The timing-focused model on its own hits 92%, which is nearly 10 points better than a standard recurrent network baseline. And when all three models agree, accuracy climbs above 98%.

The second project deals with stimming, the repetitive behaviors like arm flapping, head banging, and spinning that are common in autism. Working with 302 publicly available videos, we trained a skeleton-based model that reaches 91% accuracy using body pose alone. That is more than double the 47% that previous work managed on the same benchmark. When we combine the pose information with what the raw video shows through a late fusion approach, accuracy jumps to 99.9%. Across the entire test set, only a single video was misclassified.
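The late-fusion step can be sketched in a few lines; the weight, class count, and probabilities below are illustrative placeholders, not the trained model's outputs:

```python
import numpy as np

def late_fusion(pose_probs, rgb_probs, w=0.5):
    # Weighted average of per-class probabilities from the skeleton
    # stream and the raw-video stream (weight chosen arbitrarily here).
    return w * np.asarray(pose_probs) + (1 - w) * np.asarray(rgb_probs)

# The pose stream is confident in class 0; the RGB stream is ambivalent.
fused = late_fusion([0.7, 0.2, 0.1], [0.5, 0.4, 0.1])
pred = int(np.argmax(fused))
```

Fusing at the probability level, rather than concatenating features, lets each stream be trained and tuned independently, which is one reason late fusion is a common choice when the two modalities differ as much as skeleton sequences and raw video.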

The third project is E-MotionSpec, a web platform designed for clinicians and researchers who want to track motor development over time. It runs in any browser, uses MediaPipe to estimate body pose in real time, and extracts 44 movement features grouped into seven domains covering things like how smoothly someone moves, how quickly they initiate actions, and how coordinated their limbs are. We validated the platform on the same 118-participant dataset and found 36 features with statistically significant differences between the ASD and typically developing groups. Smoothness and initiation timing stood out as the strongest discriminators. The platform also includes tools for comparing sessions over time using frequency analysis and dynamic time warping, so a clinician can actually see whether someone's motor patterns are changing across weeks or months.
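The session-comparison tooling mentioned above relies on dynamic time warping; a textbook sketch (not E-MotionSpec's implementation) shows how it tolerates speed differences between two recordings of the same movement:

```python
import math

def dtw_distance(a, b):
    # Classic O(n*m) dynamic time warping over 1-D feature sequences:
    # D[i][j] holds the cheapest alignment cost of a[:i] with b[:j].
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# The same movement performed at two speeds aligns with zero cost:
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
d = dtw_distance(slow, fast)
```

Because the alignment absorbs timing differences, a nonzero DTW distance between two sessions reflects a change in the movement pattern itself rather than in how fast it was performed.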

Taken together, these three projects offer a practical path toward earlier identification and better ongoing monitoring of motor difficulties in autism. Everything runs on a webcam and a web browser. No motion-capture suits, no force plates, no specialized labs. That matters most for the families, schools, and clinics that need these tools the most and can least afford the alternatives.


Md Abu Saeed

Comparative Analysis of Deep Learning Models for Guava Leaf Disease Diagnosis

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
David Johnson
Hongyang Sun


Abstract

Guava leaf diseases significantly affect crop yield and quality, making timely detection essential for effective disease management. This project presents an end-to-end software system for automated guava leaf disease detection using deep learning and transfer learning techniques. Multiple pretrained convolutional neural network (CNN) architectures, including ResNet, AlexNet, VGG, SqueezeNet, DenseNet, Inception-v3, and EfficientNet, were adapted through feature extraction and trained on a guava leaf image dataset.
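As a rough illustration of the feature-extraction style of transfer learning described above, the sketch below freezes a stand-in "backbone" and trains only a simple classification head. The random projection and toy data are placeholders for the project's pretrained torchvision models and guava image dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "backbone": a fixed random projection stands in for a
# pretrained CNN's convolutional layers (illustrative only).
W_backbone = rng.normal(size=(64, 16))

def features(x):
    return np.maximum(x @ W_backbone, 0.0)  # frozen, never updated

# Toy 3-class data: classes are well-separated Gaussian clouds.
X = rng.normal(size=(300, 64))
y = rng.integers(0, 3, size=300)
X[y == 1] += 4.0
X[y == 2] -= 4.0

# Only the head is trained: a nearest-centroid rule in feature space
# stands in for the new fully connected classification layer.
F = features(X)
centroids = np.stack([F[y == c].mean(axis=0) for c in range(3)])
pred = np.argmin(((F[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
train_acc = float((pred == y).mean())
```

The design point is the same as in the project: the expensive representation is reused as-is, and only a small task-specific head is fit to the new labels.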

The system allows users to either capture an image using a camera or upload an existing leaf image through a software interface. The input image is processed and classified by the trained deep learning model, and the predicted disease class is displayed to the user. The dataset was divided into training, validation, and test sets to ensure robust performance evaluation, and final test accuracy was used to measure generalization on unseen data.

Experimental results demonstrate that transfer learning enables accurate and efficient guava leaf disease classification. Among the evaluated models, the best-performing architecture achieved an accuracy between 97% and 99%. Overall, the developed software provides a practical and user-friendly solution for real-world agricultural disease diagnosis.


Zhaohui Wang

Detection and Mitigation of Cross-App Privacy Leakage and Interaction Threats in IoT Automation

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to everyday life, enabling users to deploy automation rules and develop IoT apps tailored to their specific needs. However, modern IoT ecosystems consist of numerous devices, applications, and platforms that interact continuously. As a result, users are increasingly exposed to complex and subtle security and privacy risks that are difficult to fully comprehend. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats. In addition, violations of memory integrity can undermine the security guarantees on which IoT apps rely.

The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app interaction chains formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores to evaluate risk levels based on inferences. In addition, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks.

The second approach addresses cross-app interaction threats in IoT automation systems by leveraging a logic-based analysis model grounded in event relations. We formalize event relationships, detect event interferences, and classify rule conflicts, then generate risk scores and conflict rankings to enable comprehensive conflict detection and risk assessment. To mitigate the identified interaction threats, an optimization-based approach is employed to reduce risks while preserving system functionality. This approach ensures comprehensive coverage of cross-app interaction threats and provides a robust solution for detecting and resolving rule conflicts in IoT environments.
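One simple kind of interference a logic-based event-relation model can formalize is two rules that fire on the same event yet command different actions on the same device. The rule encoding below is hypothetical, not the paper's formalism:

```python
# Hypothetical rule encoding: (trigger_event, device, action).
rules = [
    ("motion_detected", "light", "on"),
    ("sunrise", "light", "off"),
    ("motion_detected", "light", "off"),
]

def find_action_conflicts(rules):
    # Flag rule pairs that fire on the same event yet command
    # different actions on the same device.
    conflicts = []
    for i, (e1, d1, a1) in enumerate(rules):
        for e2, d2, a2 in rules[i + 1:]:
            if e1 == e2 and d1 == d2 and a1 != a2:
                conflicts.append(((e1, d1, a1), (e2, d2, a2)))
    return conflicts

conflicts = find_action_conflicts(rules)
```

The full analysis goes well beyond this pairwise check, covering chained triggers and interferences across apps, and then ranks the detected conflicts by risk.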

To support the development and rigorous evaluation of these security analyses, we further developed a large-scale, manually verified, and comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. All proposed approaches are evaluated using this dataset of real-world apps, collectively offering valuable insights and practical tools for enhancing IoT security and privacy against cross-app threats. Furthermore, we examine the integrity of the execution environment that supports IoT apps. We show that, even under non-privileged execution, carefully crafted memory access patterns can induce bit flips in physical memory, allowing attackers to corrupt data and compromise system integrity without requiring elevated privileges.


Shawn Robertson

A Low-Power Low-Throughput Communications Solution for At-Risk Populations in Resource Constrained Contested Environments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

In resource-constrained contested environments (RCCEs), communications are routinely censored, surveilled, or disrupted by nation-state adversaries, leaving at-risk populations (including protesters, dissidents, disaster-affected communities, and military units) without secure connectivity. This dissertation introduces MeshBLanket, a Bluetooth Mesh-based framework designed for low-power, low-throughput messaging with minimal electromagnetic spectrum exposure. Built on commercial off-the-shelf hardware, MeshBLanket extends the Bluetooth Mesh specification with automated provisioning and network-wide key refresh to enhance scalability and resilience.

We evaluated MeshBLanket through field experimentation (range, throughput, battery life, and security enhancements) and qualitative interviews with ten senior U.S. Army communications experts. Thematic analysis revealed priorities of availability, EMS footprint reduction, and simplicity of use, alongside adoption challenges and institutional skepticism. Results demonstrate that MeshBLanket maintains secure messaging under load, supports autonomous key refresh, and offers operational relevance at the forward edge of battlefields.

Beyond military contexts, parallels with protest environments highlight MeshBLanket's broader applicability for civilian populations facing censorship and surveillance. By unifying technical experimentation with expert perspectives, this work contributes a proof-of-concept communications architecture that advances secure, resilient, and user-centric connectivity in environments where traditional infrastructure is compromised or weaponized.


Shravan Kaundinya

Design, Development, And Deployment of Airborne and Ground-Based High-Power, UHF Radars With Multichannel, Polarimetric Antenna Arrays for Radioglaciology

When & Where:


Nichols Hall, Room 317 (Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Rachel Jarvis
John Paden
Jim Stiles
Richard Hale

Abstract

This work describes the building and deployment of airborne and ground-based high-power, UHF radars from a systems engineering perspective. Its primary focus is on the design and development of compact, low-profile, polarimetric antenna arrays for these radars using a rapid prototyping methodology. The overarching goal of this effort is to aid the Center for Oldest Ice Exploration (COLDEX), a multi-institution collaboration to explore Antarctica using airborne and ground radars for the identification of a drill site to retrieve the oldest possible continuous ice record. A multichannel 600-900 MHz, pulsed frequency-modulated (FM) radar with up to 1.6 kW of peak output power per channel is designed and implemented. The ground-based frontend is a 16-element antenna array power-combined into a single channel per polarization in a sled platform. The airborne frontend has a 64-element fuselage-mounted antenna array power-combined into 16 independent channels and two 12-element wing arrays power-combined into 6 channels for operation on a Basler aircraft.

Three major design revisions of the antenna element design are presented. The first two design revisions of the dual-polarized, microstrip dipole antenna have the typical vertically integrated aperture-coupled microstrip baluns. The third and newly proposed design is a near-planar, integrated feed which combines a 2-sided microstrip balun board (one balun for each polarization) and a custom 6-layer balanced-to-balanced feed board. A microstrip matching network 2-layer board with two order-4 LC-filters is directly connected using micro-coaxial (MCX) connectors. The total antenna height of the proposed design is reduced by nearly one-third relative to the first two design revisions while improving electrical performance.

A novel methodology for efficient wideband tuning of the active impedance of the elements of an antenna array using lumped components is demonstrated. The goal of the method is to achieve >10 dB active return loss with a single order-4 LC-circuit for all four power-combined channels of the 16-element antenna array with minimal iteration loops. It combines the simulation and measurement spaces at different stages to account for platform scattering, mutual coupling, and non-ideal behavior of the lumped components and circuit board parasitic effects in the UHF range.

Each antenna array design is fed using 1:2 and 1:4 microstrip, Wilkinson high-power dividers. Two major design revisions of the high-power divider are presented. The first design has three implementations: ground-based, airborne fuselage-mounted, and airborne wing-mounted. It uses a 100-ohm flange resistor under the requirements of fire safety in the case of all transmitted power reflected from the antenna port. Two drawbacks of the flange design feature are high parasitic capacitance (which results in sub-optimal performance) and large profile. The second and newly proposed design uses chemical vapor deposition (CVD) diamond resistors on a custom copper flange. The resistors are wire-bonded between the resistor’s gold contacts and soft gold pads on the circuit board using 25 µm gold wire. Results for an ideal prototype and the first implemented version on a ground-based array are presented. System engineering aspects such as thermal cycling, high-power RF tests, and bond integrity are explored.

The effectiveness of the circuits developed in the context of this work is demonstrated in real field environments. This includes the operation of the airborne version of the UHF multichannel radar for surveys near Dome A in Antarctica during the 2022-2023 and 2023-2024 Austral summer seasons, the five-fold deployment of the ground-based versions of the UHF multielement radar for surveys in Greenland and Antarctica from 2022 to 2024, and the operation of the newly proposed version at Taylor Dome in Antarctica during the 2025 Austral summer season, currently underway.