Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, rapid population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuilds. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments.

Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the open problem of authenticating noisy data on a blockchain. Moreover, deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice.

Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold, along with substantial system-level improvements: over 10^5 times faster query processing and significantly reduced storage requirements compared to large-scale tracking.
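For readers unfamiliar with the underlying data structure, a hierarchical Bloom filter can be sketched in a few lines of Python. This is an illustrative toy under our own assumptions (the class names, filter sizes, and two-level grouping are ours), not the DHBF implementation described in the thesis:

```python
import hashlib

class BloomFilter:
    """Basic Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k positions from one SHA-256 digest, 4 bytes per position.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        # May return a false positive, but never a false negative.
        return all(self.bits[p] for p in self._positions(item))

class HierarchicalBloomFilter:
    """Two-level hierarchy: a coarse root filter rejects most
    non-enrolled queries before any leaf filter is consulted."""
    def __init__(self, groups=8):
        self.root = BloomFilter()
        self.leaves = [BloomFilter() for _ in range(groups)]

    def _leaf(self, template_id):
        # Illustrative grouping rule; real systems would derive the
        # group from template features.
        return self.leaves[hash(template_id) % len(self.leaves)]

    def enroll(self, template_id):
        self.root.add(template_id)
        self._leaf(template_id).add(template_id)

    def query(self, template_id):
        if not self.root.query(template_id):  # fast rejection path
            return False
        return self._leaf(template_id).query(template_id)
```

The hierarchy is what buys query speed at scale: a negative lookup usually terminates at the root instead of touching every leaf.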


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves that resonate radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. Because the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
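For reference, the de-chirping described above maps a target's range and radial velocity onto beat frequencies. With a triangular chirp of bandwidth $B$ and ramp duration $T$, carrier wavelength $\lambda$, and speed of light $c$ (symbols ours, not the thesis notation), the standard FMCW relations for a target at range $R$ moving at radial velocity $v$ are:

```latex
f_{\mathrm{up}} = \frac{2RB}{cT} - \frac{2v}{\lambda}, \qquad
f_{\mathrm{down}} = \frac{2RB}{cT} + \frac{2v}{\lambda}
```

so that range and velocity are recovered from the two measured beat frequencies as

```latex
R = \frac{cT}{4B}\left(f_{\mathrm{up}} + f_{\mathrm{down}}\right), \qquad
v = \frac{\lambda}{4}\left(f_{\mathrm{down}} - f_{\mathrm{up}}\right).
```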

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity of the chirp and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.
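As a rough consistency check on the quoted figures, the standard OFDR two-point spatial-resolution formula, assuming a fiber group index of $n_g \approx 1.47$ (our assumption), gives

```latex
\Delta z = \frac{c}{2 n_g B}
         = \frac{3\times 10^{8}\ \mathrm{m/s}}{2 \times 1.47 \times 26\times 10^{9}\ \mathrm{Hz}}
         \approx 3.9\ \mathrm{mm},
```

which is of the same order as the approximately 5 mm reported; practical resolution is typically somewhat coarser once window-function broadening is included.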


Past Defense Notices

George Steven Muvva

Automated Fake Content Detection Using TF-IDF-Based Machine Learning and LSTM-Driven Deep Learning Models

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid spread of misinformation across online platforms has made automated fake news detection essential. This project develops and compares machine learning (SVM, Decision Tree) and deep learning (LSTM) models to classify news headlines from the GossipCop and PolitiFact datasets as real or fake. After extensive preprocessing—including text cleaning, lemmatization, TF-IDF vectorization, and sequence tokenization—the models are trained and evaluated using standard performance metrics. Results show that SVM provides a strong baseline, but the LSTM model achieves higher accuracy and F1-scores by capturing deeper semantic and contextual patterns in the text. The study highlights the challenges of domain variation and subtle linguistic cues, while demonstrating that context-aware deep learning methods offer superior capability for automated fake content detection.
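The TF-IDF weighting mentioned above can be illustrated with a minimal pure-Python sketch. We use the smoothed-IDF variant (as in scikit-learn's default); the tokenization, example headlines, and function name are our own illustrative assumptions, not the project's code:

```python
import math
from collections import Counter

def tfidf(docs):
    """Smoothed TF-IDF: weight = tf * (ln((1 + N) / (1 + df)) + 1)."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    # Document frequency: number of documents containing each word.
    df = Counter(w for toks in tokenized for w in set(toks))
    idf = {w: math.log((1 + n) / (1 + df[w])) + 1 for w in df}
    # One sparse vector (dict) of tf * idf weights per document.
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: tf[w] * idf[w] for w in tf})
    return vectors, idf

docs = ["breaking celebrity rumor",
        "celebrity wedding confirmed",
        "senate passes budget bill"]
vecs, idf = tfidf(docs)
# "celebrity" appears in 2 of 3 documents, so its IDF is lower than
# that of document-specific words like "budget".
```

A classifier such as an SVM then consumes these per-document weight vectors as features.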


Babak Badnava

Joint Communication and Computation for Emerging Applications in Next-Generation Wireless Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri

Abstract

Emerging applications in next-generation wireless networks, such as augmented and virtual reality (AR/VR) and autonomous vehicles, demand significant computational and communication resources at the network edge. This PhD research focuses on developing joint communication–computation solutions while incorporating various network-, application-, and user-imposed constraints. In the first thrust, we examine the problem of energy-constrained computation offloading to edge servers in a multi-user, multi-channel wireless network. To develop a decentralized offloading policy for each user, we model the problem as a partially observable Markov decision process (POMDP). Leveraging bandit learning methods, we introduce a decentralized task offloading solution in which edge users offload their computation tasks to nearby edge servers over selected communication channels. 
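The bandit-learning flavor of the offloading decision can be illustrated with a simple epsilon-greedy learner over (server, channel) arms. This is a generic sketch under our own assumptions (arm encoding, reward shape, and class name are ours), not the POMDP formulation or the algorithm developed in the dissertation:

```python
import random

class EpsilonGreedyOffloader:
    """Each user independently learns which (server, channel) arm
    yields the best reward (e.g., negative task completion latency)."""
    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}  # running mean reward

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.arms)          # explore
        return max(self.arms, key=self.values.get)   # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental update of the mean observed reward for this arm.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Because each user runs its own learner on local observations only, the policy is decentralized, which is the property the thrust above emphasizes.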

The second thrust focuses on user-driven requirements for resource-intensive applications, specifically the Quality of Experience (QoE) in 2D and 3D video streaming. Given the unique characteristics of millimeter-wave (mmWave) networks, we develop a beam alignment and buffer-predictive multi-user scheduling algorithm for 2D video streaming applications. This algorithm balances the trade-off between beam alignment overhead and playback buffer levels for optimal resource allocation across multiple users. We then extend our investigation to develop a joint rate adaptation and computation distribution framework for 3D video streaming in mmWave-based VR systems. Numerical results using real-world mmWave traces and 3D video datasets demonstrate significant improvements in video quality, rebuffering time, and quality variations perceived by users.

Finally, we develop novel edge computing solutions for multi-layer immersive video processing systems. By exploring and exploiting the elastic nature of computation tasks in these systems, we propose a multi-agent reinforcement learning (MARL) framework that incorporates two learning-based methods: the centralized phasic policy gradient (CPPG) and the independent phasic policy gradient (IPPG). IPPG leverages shared information and model parameters to learn edge offloading policies; however, during execution, each user acts independently based only on its local state information. This decentralized execution reduces the communication and computation overhead of centralized decision-making and improves scalability. We leverage real-world 4G, 5G, and WiGig network traces, along with 3D video datasets, to investigate the performance trade-offs of CPPG and IPPG when applied to elastic task computing.


Sri Dakshayani Guntupalli

Customer Churn Prediction for Subscription-Based Businesses

When & Where:


LEEP2, Room 2420

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

Customer churn is a critical challenge for subscription-based businesses, as it directly impacts revenue, profitability, and long-term customer loyalty. Because retaining existing customers is more cost-effective than acquiring new ones, accurate churn prediction is essential for sustainable growth. This work presents a machine learning-based framework for predicting and analyzing customer churn, coupled with an interactive Streamlit web application that supports real-time decision making. Using historical customer data that includes demographic attributes, usage behavior, transaction history, and engagement patterns, the system applies extensive data preprocessing and feature engineering to construct a modeling-ready dataset. Multiple models (Logistic Regression, Random Forest, and XGBoost) are trained and evaluated using the Scikit-Learn framework. Model performance is assessed with metrics such as accuracy, precision, recall, F1-score, and ROC-AUC to identify the most effective predictor of churn. The top-performing models are serialized and deployed within a Streamlit interface that accepts individual customer inputs or batch data files to generate immediate churn predictions and summaries. Overall, this project demonstrates how machine learning can transform raw customer data into actionable business intelligence and provides a scalable approach to proactive customer retention management.
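The evaluation metrics named above all reduce to confusion-matrix counts; a minimal sketch for a binary churn label (illustrative only, not the project's evaluation code):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = churned)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    accuracy = (tp + tn) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy example: 2 true positives, 1 false positive, 1 false negative.
m = classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

On imbalanced churn data, precision and recall (and ROC-AUC over scores) are more informative than accuracy alone, which motivates reporting all of them.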


QiTao Weng

Anytime Computer Vision for Autonomous Driving

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Heechul Yun, Chair
Drew Davidson
Shawn Keshmiri


Abstract

Latency–accuracy tradeoffs are fundamental in real-time applications of deep neural networks (DNNs) for cyber-physical systems. In autonomous driving, in particular, safety depends on both prediction quality and the end-to-end delay from sensing to actuation. We observe that (1) when latency is accounted for, the latency-optimal network configuration varies with scene context and compute availability; and (2) a single fixed-resolution model becomes suboptimal as conditions change.

We present a multi-resolution, end-to-end deep neural network for the CARLA urban driving challenge using monocular camera input. Our approach employs a convolutional neural network (CNN) that supports multiple input resolutions through per-resolution batch normalization, enabling runtime selection of an ideal input scale under a latency budget, as well as resolution retargeting, which allows multi-resolution training without access to the original training dataset.

We implement and evaluate our multi-resolution end-to-end CNN in CARLA to explore the latency–safety frontier. Results show consistent improvements in per-route safety metrics—lane invasions, red-light infractions, and collisions—relative to fixed-resolution baselines.


Sherwan Jalal Abdullah

A Versatile and Programmable UAV Platform for Integrated Terrestrial and Non-Terrestrial Network Measurements in Rural Areas

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Shawn Keshmiri


Abstract

Reliable cellular connectivity is essential for modern services such as telehealth, precision agriculture, and remote education; yet measuring network performance in rural areas presents significant challenges. Traditional drive testing cannot access large geographic areas between roads, while crowdsourced data provides insufficient spatial resolution in low-population regions. To address these limitations, we develop an open-source UAV-based measurement platform that integrates an onboard computation unit, a commercial cellular modem, and automated flight control to systematically capture Radio Access Network (RAN) signals and end-to-end network performance metrics at different altitudes. Our platform collects synchronized measurements of signal strength (RSRP, RSSI), signal quality (RSRQ, SINR), latency, and bidirectional throughput, with each measurement tagged with GPS coordinates and altitude. Experimental results from a semi-rural deployment reveal a fundamental altitude-dependent trade-off: received signal power improves at higher altitudes due to enhanced line-of-sight conditions, while signal quality degrades from increased interference with neighboring cells. Our analysis indicates that most of the measurement area maintains acceptable signal quality, along with adequate throughput performance, for both uplink and downlink communications. We further demonstrate that strong radio-signal metrics for individual cells do not necessarily translate into spatial coverage dominance: the cell serving the majority of our test area exhibited only moderate performance, while cells with superior metrics contributed minimally to overall coverage. Next, we develop several machine learning (ML) models to improve the prediction accuracy of signal strength at unmeasured altitudes.
Finally, we extend our measurement platform by integrating non-terrestrial network (NTN) user terminals with the UAV components to investigate the performance of Low-Earth Orbit (LEO) satellite networks under UAV mobility. Our measurement results demonstrate that NTN offers a viable fallback option by achieving acceptable latency and throughput performance during flight operations. Overall, this work establishes a reproducible methodology for three-dimensional rural network characterization and provides practical insights for network operators, regulators, and researchers addressing connectivity challenges in underserved areas.


Satya Ashok Dowluri

Comparison of Copy-and-Patch and Meta-Tracing Compilation Techniques in the Context of Python

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hossein Saiedian


Abstract

Python's dynamic nature makes performance enhancement challenging. Recently, a JIT compiler using a novel copy-and-patch compilation approach was implemented in the reference Python implementation, CPython. Our goal in this work is to study and understand the performance properties of CPython's new JIT compiler. To facilitate this study, we compare the quality and performance of the code generated by this new JIT compiler with a more mature and traditional meta-tracing based JIT compiler implemented in PyPy (another Python implementation). Our thorough experimental evaluation reveals that, while it achieves the goal of fast compilation speed, CPython's JIT severely lags in code quality/performance compared with PyPy. While this observation is a known and intentional property of the copy-and-patch approach, it results in the new JIT compiler failing to elevate Python code performance beyond that achieved by the default interpreter, despite significant added code complexity. In this thesis, we report and explain our novel experiments, results, and observations.


Arya Hadizadeh Moghaddam

Learning Personalized and Robust Patient Representations across Graphical and Temporal Structures in Electronic Health Records

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Bo Luo
Fengjun Li
Dongjie Wang
Xinmai Yang

Abstract

Recent research in Electronic Health Records (EHRs) has enabled personalized and longitudinal modeling of patient trajectories for health outcome improvement. Despite this progress, existing methods often struggle to capture the dynamic, heterogeneous, and interdependent nature of medical data. Specifically, many representation methods learn a rich set of EHR features independently but overlook the intricate relationships among them. Moreover, data scarcity and bias, such as cold-start scenarios where patients have only a few visits or rare conditions, remain fundamental challenges for real-life clinical decision support. To address these challenges, this dissertation introduces an integrated machine learning framework for sophisticated, interpretable, and adaptive EHR representation modeling. Specifically, the dissertation comprises three thrusts:

  1. A time-aware graph transformer model that dynamically constructs personalized temporal graph representations that capture patient trajectory over different visits.

  2. A contrastive multi-intent recommender system that can disentangle the multiple temporal patterns coexisting in a patient’s long medical history, while considering distinct health profiles.

  3. A few-shot meta-learning framework that can address the patient cold-start issue through a self- and peer-adaptive model enhanced by uncertainty-based filtering.

Together, these contributions advance a data-efficient, generalizable, and interpretable foundation for large-scale clinical EHR mining toward truly personalized medical outcome prediction.


Junyi Zhao

On the Security of Speech-based Machine Translation Systems: Vulnerabilities and Attacks

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Bo Luo, Chair
Fengjun Li
Zijun Yao


Abstract

In light of the rapid advancement of global connectivity and the increasing reliance on multilingual communication, speech-based Machine Translation (MT) systems have emerged as essential technologies for facilitating seamless cross-lingual interaction. These systems enable individuals and organizations to overcome linguistic boundaries by automatically translating spoken language in real time. However, despite their growing ubiquity in applications such as virtual assistants, international conferencing, and accessibility services, the security and robustness of speech-based MT systems remain underexplored. In particular, limited attention has been given to understanding their vulnerabilities under adversarial conditions, where malicious actors intentionally craft or manipulate speech inputs to mislead or degrade translation performance.

This thesis presents a comprehensive investigation into the security landscape of speech-based machine translation systems from an adversarial perspective. We systematically categorize and analyze potential attack vectors, evaluate their success rates across diverse system architectures and environmental settings, and explore the practical implications of such attacks. Furthermore, through a series of controlled experiments and human-subject evaluations, we demonstrate that adversarial manipulations can significantly distort translation outputs in realistic use cases, thereby posing tangible risks to communication reliability and user trust.

Our findings reveal critical weaknesses in current MT models and underscore the urgent need for developing more resilient defense strategies. We also discuss open research challenges and propose directions for building secure, trustworthy, and ethically responsible speech translation technologies. Ultimately, this work contributes to a deeper understanding of adversarial robustness in multimodal language systems and provides a foundation for advancing the security of next-generation machine translation frameworks.


Kyrian C. Adimora

Machine Learning-Based Multi-Objective Optimization for HPC Workload Scheduling: A GNN-RL Approach

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni
Zijun Yao
Michael J. Murray

Abstract

As high-performance computing (HPC) systems achieve exascale capabilities, traditional single-objective schedulers that optimize solely for performance prove inadequate for environments requiring simultaneous optimization of energy efficiency and system resilience. Current scheduling approaches result in suboptimal resource utilization, excessive energy consumption, and reduced fault tolerance under the demanding requirements of large-scale scientific applications. This dissertation proposes a novel multi-objective optimization framework that integrates graph neural networks (GNNs) with reinforcement learning (RL) to jointly optimize performance, energy efficiency, and system resilience in HPC workload scheduling. The central hypothesis posits that graph-structured representations of workloads and system states, combined with adaptive learning policies, can significantly outperform traditional scheduling methods in complex, dynamic HPC environments. The proposed framework comprises three integrated components: (1) GNN-RL, which combines graph neural networks with reinforcement learning for adaptive policy development; (2) EA-GATSched, an energy-aware scheduler leveraging Graph Attention Networks; and (3) HARMONIC (Holistic Adaptive Resource Management for Optimized Next-generation Interconnected Computing), a probabilistic model for workload uncertainty quantification. The proposed methodology encompasses novel uncertainty modeling techniques, scalable GNN-based scheduling algorithms, and comprehensive empirical evaluation using production supercomputing workload traces. Preliminary results demonstrate 10-19% improvements in energy efficiency while maintaining comparable performance metrics. The framework will be evaluated on makespan reduction, energy consumption, resource utilization efficiency, and fault tolerance across various operational scenarios.
This research advances sustainable and resilient HPC resource management, providing critical infrastructure support for next-generation scientific computing applications.


Sarah Johnson

Ordering Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Michael Branicky
Sankha Guria
Emily Witt
Eileen Nutting

Abstract

Remote attestation is a process of obtaining verifiable evidence from a remote party to establish trust. A relying party makes a request of a remote target, which responds by executing an attestation protocol that produces evidence reflecting the target's system state and meta-evidence reflecting the evidence’s integrity and provenance. This process occurs in the presence of adversaries intent on misleading the relying party into trusting a target it should not. This research introduces a robust approach for evaluating and comparing attestation protocols based on their relative resilience against such adversaries. I develop a Rocq-based, formally verified mathematical model describing the difficulty for an active adversary to successfully compromise the attestation. The model supports systematically ranking attestation protocols by the level of adversary effort required to produce evidence that does not accurately reflect the target’s state. My work aims to facilitate the selection of a protocol resilient to adversarial attack.