Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 129 (Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Currently under security review


Hao Xuan

Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. For the former, we propose a reference-free neural network that captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.

These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.

First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
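The seed-and-extend idea underlying VAT's k-mer indexing can be illustrated with a minimal sketch. The index below is a plain exact-match k-mer table over an illustrative sequence, not VAT's asymmetric multi-view structure or its LCP-based seed-length inference:

```python
# Toy k-mer seed index: the basic lookup step behind seed-and-extend aligners.
# Illustrative only; VAT's multi-view indexing and dynamic seed-length
# adjustment are far more involved.
from collections import defaultdict

def build_kmer_index(reference: str, k: int) -> dict:
    """Map every k-mer in the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def seed_hits(read: str, index: dict, k: int):
    """Yield (read_offset, ref_position) pairs for every exact k-mer match."""
    for j in range(len(read) - k + 1):
        for pos in index.get(read[j:j + k], []):
            yield (j, pos)

ref = "ACGTACGTGGTACGT"          # hypothetical reference sequence
idx = build_kmer_index(ref, k=4)
hits = list(seed_hits("TACGT", idx, k=4))
print(hits)  # seed matches that a chaining stage would then link into alignments
```

In a full aligner, these hits would feed the seed-chaining stage, which is where VAT's flexible handling of collinear, rearranged, and split alignments comes in.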

Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.


Pramil Paudel

Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao

Abstract

Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference. 

We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks. 
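As a rough illustration of the measurement model discussed above, the sketch below simulates a simplified linear lensless camera, where the sensor records the scene convolved with the PSF, and a naive Wiener inversion stands in for the reconstruction attacks the framework defends against. The sizes, random PSF, and SNR constant are illustrative assumptions, not the dissertation's actual setup:

```python
# Minimal linear forward model for lensless imaging: the sensor measurement is
# (approximately) the scene convolved with the point spread function (PSF).
# Real systems add noise, cropping, and nonlinearity; this is a sketch only.
import numpy as np

def lensless_measure(scene: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Circular convolution of the scene with the PSF via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def wiener_reconstruct(meas, psf, snr=1e6):
    """Naive Wiener deconvolution with a known PSF -- a stand-in for the
    learning-based inversions whose feasibility the dissertation studies."""
    H = np.fft.fft2(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(meas) * G))

rng = np.random.default_rng(0)
scene = np.zeros((32, 32)); scene[12:20, 12:20] = 1.0   # simple square target
psf = rng.random((32, 32)); psf /= psf.sum()            # diffuser-like PSF
y = lensless_measure(scene, psf)                         # unintelligible to humans
x_hat = wiener_reconstruct(y, psf)                       # recoverable if PSF is known
print(float(np.corrcoef(scene.ravel(), x_hat.ravel())[0, 1]))
```

The measurement `y` carries the scene's information in a globally scrambled form, which is exactly what reconstruction-free inference exploits and what a known-PSF (or learned) inversion can undo.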


Sharmila Raisa

Digital Coherent Optical System: Investigation and Monitoring

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Erik Perrins
Alessandro Salandrino
Jie Han

Abstract

Coherent wavelength-division multiplexed (WDM) optical fiber systems have become the primary transmission technology for high-capacity data networks, driven by the explosive bandwidth demand of cloud computing, streaming services, and large-scale artificial intelligence training infrastructure. This dissertation investigates two fundamental aspects of digital coherent fiber optic systems under the unifying theme of source and monitoring: the design of multi-wavelength optical sources compatible with high-order coherent detection, and the leveraging of fiber Kerr-effect nonlinearity at the coherent receiver to perform physical-layer link health monitoring and to assess inherent security vulnerabilities — both achieved through digital signal processing of the received complex optical field without dedicated hardware.

We begin by addressing the multi-wavelength transmitter challenge in WDM coherent systems. Existing quantum-dot, quantum-dash, and quantum-well based optical frequency comb (OFC) sources share a common limitation: individual comb line linewidths in the tens of MHz range caused by low output power levels of 1–20 mW, making them incompatible with high-order coherent detection. We demonstrate coherent system application of a single-section InGaAsP QW Fabry-Perot laser diode with greater than 120 mW optical power at the fiber pigtail and 36.14 GHz mode spacing. The high optical power per mode produces Lorentzian equivalent linewidths below 100 kHz — compatible with 16-QAM carrier phase recovery without optical phase locking. Experimental results obtained using a commercial Ciena WaveLogic-Ai coherent transceiver demonstrate 20-channel WDM transmission over 78.3 km of standard single-mode fiber with all channels below the HD-FEC threshold of 3.8 × 10⁻³ at 30 GBaud differential-coded 16-QAM, corresponding to an aggregate capacity of 2.15 Tb/s from a single laser device.

After investigating the QW Fabry-Perot laser as a multi-wavelength source for coherent WDM transmission, we leverage the coherent receiver DSP to exploit fiber Kerr-effect nonlinearity for longitudinal power profile estimation, enabling reconstruction of the signal power distribution P(z) along the full multi-span link without dedicated hardware or traffic interruption. We propose a modified enhanced regular perturbation (ERP) method that corrects two independent physical error sources of the standard RP1 least-squares baseline: the accumulated nonlinear phase rotation, and the dispersion-mediated phase-to-intensity conversion — a second bias source not addressed by prior methods. The RP1 method produces mean absolute error (MAE) that scales quadratically with span count, growing to 1.656 dB at 10 spans and 3 dBm. The modified ERP reduces this to 0.608 dB — an improvement that grows consistently with link length, confirming increasing advantage in the long-haul regime. Extension to WDM through an XPM-aware per-channel formulation achieves MAE of 0.113–0.419 dB across 150–500 km link lengths.

In addition to its role in enabling DSP-based longitudinal power profile estimation, the fiber Kerr-effect nonlinearity is shown to give rise to an inherent physical-layer security vulnerability in coherent WDM systems. We show that an eavesdropper co-tenanting a shared fiber — transmitting a continuous-wave probe at a wavelength adjacent to the legitimate signal — can capture the XPM-induced waveform at the fiber output and apply a bidirectional gated recurrent unit neural network, trained on split-step Fourier method simulation data, to reconstruct the transmitted symbol sequence without physical fiber access and without perturbing the legitimate signal. This eavesdropping mechanism is experimentally validated using a commercial Ciena WaveLogic-Ai coherent transceiver for ASK, BPSK, QPSK, and 16-QAM modulation formats at 4.26 GBaud and 8.56 GBaud over one- and two-span 75 km fiber systems, achieving zero symbol errors under high-OSNR conditions. Noise-aware training over OSNR from 20 to 60 dB maintains symbol error rate below 10⁻² for OSNR above 25–30 dB.

Together, these three contributions demonstrate that the coherent fiber optic system is a versatile physical instrument extending well beyond its role as a data transmission medium. The coherent receiver infrastructure — deployed for high-order modulation and data recovery — simultaneously enables the high-power OFC laser to serve as a practical multi-wavelength transmitter source, and provides the complex field measurement capability through which fiber Kerr-effect nonlinearity can be exploited constructively for distributed link monitoring and, as a direct consequence, reveals an inherent physical-layer security exposure in shared fiber infrastructure. This unified perspective on the coherent system as both a transmission platform and a general-purpose measurement instrument has direct relevance to the design of spectrally efficient, self-monitoring, and physically secure optical interconnects for next-generation AI computing networks.


Arman Ghasemi

Task-Oriented Data Communication and Compression for Timely Forecasting and Control in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alexandru Bardas
Prasad Kulkarni
Taejoon Kim
Zsolt Talata

Abstract

Advances in sensing, communication, and intelligent control have transformed power systems into data-driven smart grids, where forecasting and intelligent decision-making are essential components. Modern smart grids include distributed energy resources (DERs), renewable generation, battery energy storage systems, and large numbers of grid-edge devices that continuously generate time-series data. At the same time, increasing renewable penetration introduces substantial uncertainty in generation, net load, and market operations, while communication networks impose bandwidth, latency, and reliability constraints on timely data delivery. This dissertation addresses how time-series forecasting, data compression, and task-oriented wireless communication can be jointly designed for smart grid applications.

First, we study weather-aware distributed energy management in prosumer-centric microgrids and show that incorporating day-ahead weather information into decision-making improves battery dispatch and reduces the impact of renewable uncertainty. Second, we introduce forecasting-aware energy management in both wholesale and retail electricity markets, highlighting how renewable generation forecasting affects pricing, scheduling, and uncertainty mitigation. Third, we develop and evaluate deep learning methods for renewable generation forecasting, showing that Transformer-based models outperform recurrent baselines such as RNN and LSTM for wind and solar prediction tasks.

Building on this forecasting foundation, we develop a communication-efficient forecasting framework in which high-dimensional smart grid measurements are compressed into low-dimensional latent representations before transmission. This framework is extended into a task-oriented communication system that jointly optimizes data relevance and information timeliness, so that the receiver obtains compressed updates that remain useful for downstream forecasting tasks. Finally, we extend this framework to a distributed multi-node uplink setting, where multiple grid sensors share a bandwidth-limited channel, and develop a scheduling policy that improves both the timeliness and task-relevance of received updates.
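The timeliness/relevance trade-off described above can be sketched with a toy scheduler. The relevance-weighted max-age rule and all numbers below are illustrative assumptions, not the dissertation's actual policy:

```python
# Illustrative relevance-weighted max-age scheduler for a shared uplink:
# each slot, the single channel is granted to the sensor whose update is
# most overdue, weighted by its task relevance. This sketches the general
# idea of trading timeliness (age of information) against task relevance.
def schedule(ages, relevance):
    """Return the index of the sensor to transmit this slot."""
    return max(range(len(ages)), key=lambda i: relevance[i] * ages[i])

def simulate(relevance, slots):
    ages = [1] * len(relevance)          # slots since each sensor last updated
    history = []
    for _ in range(slots):
        k = schedule(ages, relevance)
        history.append(k)
        ages = [1 if i == k else a + 1 for i, a in enumerate(ages)]
    return history

# A highly task-relevant sensor is served more often than marginal ones,
# but the marginal sensors still transmit before their ages grow unbounded.
hist = simulate(relevance=[3.0, 1.0, 1.0], slots=12)
print(hist.count(0), hist.count(1), hist.count(2))
```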


Pardaz Banu Mohammad

Towards Early Detection of Alzheimer’s Disease Based on Speech Using Reinforcement Learning Feature Selection

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Arvin Agah, Chair
David Johnson
Sumaiya Shomaji
Dongjie Wang
Sara Wilson

Abstract

Alzheimer’s Disease (AD) is a progressive, irreversible neurodegenerative disorder and the leading cause of dementia worldwide, affecting an estimated 55 million people globally. The window of opportunity for intervention is demonstrably narrow, making reliable early-stage detection a clinical and scientific imperative. While current diagnostic techniques such as neuroimaging and cerebrospinal fluid (CSF) biomarkers carry well-defined limitations in scalability, cost, and access equity, speech has emerged as a compelling non-invasive proxy for cognitive function evaluation.

This work presents a novel approach that frames acoustic feature selection as a sequential decision-making problem and implements it using deep reinforcement learning. Specifically, we use a Deep Q-Network (DQN) agent to navigate a high-dimensional space of over 6,000 acoustic features extracted with the openSMILE toolkit, dynamically constructing maximally discriminative and non-redundant feature subsets. To capture the latent structural dependencies among acoustic features, which classifier- and wrapper-based methods have difficulty modeling, we introduce a Graph Convolutional Network (GCN)-based correlation-aware feature representation layer that operates as an auxiliary input to the DQN state encoder. Post-selection interpretability is reinforced through TF-IDF weighting and K-means clustering, which together yield both feature-level and cluster-level explanations that are clinically actionable. The framework is evaluated across five classifiers: support vector machines (SVM), logistic regression, XGBoost, random forest, and a feedforward neural network. We use 10-fold stratified cross-validation on established benchmark datasets, including the DementiaBank Pitt Corpus, Ivanova, and the ADReSS challenge data. The proposed approach is benchmarked against state-of-the-art feature selection methods such as LASSO, recursive feature elimination, and mutual information selectors. This research contributes three primary intellectual advances: (1) a graph-augmented state representation that encodes inter-feature relational structure within a reinforcement learning agent, (2) a clinically interpretable pipeline that bridges the gap between algorithmic performance and translational utility, and (3) a multilingual data approach for the reinforcement learning agent framework. This study has direct implications for equitable, low-cost, and scalable AD screening in both clinical and community settings.
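The sequential decision-making formulation described above can be sketched in miniature. The toy below replaces the DQN with a Q-table and the 6,000 openSMILE features with four synthetic ones, so only the MDP structure (state = selected subset, action = add a feature, reward = marginal gain) is representative:

```python
# Toy formulation of feature selection as sequential decision-making.
# The dissertation trains a Deep Q-Network over ~6,000 acoustic features;
# here the network is a Q-table and the reward is synthetic, so only the
# MDP structure is illustrated.
import random
random.seed(0)

N_FEATURES, BUDGET = 4, 2
# Synthetic usefulness: features 0 and 2 are discriminative and
# non-redundant with each other; 1 and 3 only add noise.
def reward(subset):
    return len({0, 2} & subset) - 0.5 * len({1, 3} & subset)

Q = {}  # (frozenset(state), action) -> value
def q(s, a): return Q.get((frozenset(s), a), 0.0)

for episode in range(2000):
    state, eps = set(), 0.1
    for step in range(BUDGET):
        actions = [a for a in range(N_FEATURES) if a not in state]
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda x: q(state, x)))
        nxt = state | {a}
        r = reward(nxt) - reward(state)           # marginal gain as reward
        future = (max(q(nxt, b) for b in range(N_FEATURES) if b not in nxt)
                  if len(nxt) < BUDGET else 0.0)
        key = (frozenset(state), a)
        Q[key] = Q.get(key, 0.0) + 0.2 * (r + future - Q.get(key, 0.0))
        state = nxt

# Greedy rollout after training recovers the discriminative, non-redundant pair.
state = set()
for _ in range(BUDGET):
    actions = [a for a in range(N_FEATURES) if a not in state]
    state.add(max(actions, key=lambda x: q(state, x)))
print(sorted(state))
```

In the actual framework, the Q-table is replaced by a deep network whose state encoding is augmented by the GCN layer, and the reward comes from classifier performance rather than a fixed function.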


Zhou Ni

Bridging Federated Learning and Wireless Networks: From Adaptive Learning to FL-Driven System Optimization

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Fengjun Li
Van Ly Nguyen
Han Wang
Shawn Keshmiri

Abstract

Federated learning (FL) has emerged as a promising distributed machine learning framework that enables multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized data collection. However, deploying FL in practical wireless environments introduces two major challenges. First, the data generated across distributed devices are often heterogeneous and non-IID, which makes a single global model insufficient for many users. Second, learning performance in wireless systems is strongly affected by communication constraints such as interference, unreliable channels, and dynamic resource availability. This PhD research aims to address these challenges by bridging FL methods and wireless networks.

In the first thrust, we develop personalized and adaptive FL methods given the underlying wireless link conditions. To this end, we propose channel-aware neighbor selection and similarity-aware aggregation in wireless device-to-device (D2D) learning environments. We further investigate the impacts of partial model update reception on FL performance. The overarching goal of the first thrust is to enhance FL performance under wireless constraints.

Next, we investigate the opposite direction and raise the question: How can FL-based distributed optimization be used for the design of next-generation wireless systems? To this end, we investigate communication-aware participation optimization in vehicular networks, where wireless resource allocation affects the number of clients that can successfully contribute to FL. We further extend this direction to integrated sensing and communication (ISAC) systems, where personalized FL (PFL) is used to support distributed beamforming optimization with joint sensing and communication objectives.

Overall, this research establishes a unified framework for bridging FL and wireless networks. As a future direction, this work will be extended to more realistic ISAC settings with dynamic spectrum access, where communication, sensing, scheduling, and learning performance must be considered jointly.


Arnab Mukherjee

Attention-Based Solutions for Occlusion Challenges in Person Tracking

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Sumaiya Shomaji
Hongyang Sun
Jian Li

Abstract

Person re-identification (Re-ID) and multi-object tracking in unconstrained surveillance environments pose significant challenges within the field of computer vision. These complexities stem mainly from occlusion, variability in appearance, and identity switching across camera views. This research presents an agenda that tackles these issues through a series of increasingly capable deep learning architectures, culminating in an occlusion-aware Vision Transformer framework.

At the heart of this work is Deep SORT with Multiple Inputs (Deep SORT-MI), a real-time Re-ID system featuring a dual-metric association strategy that combines Mahalanobis distance for motion-based tracking with cosine similarity for appearance-based re-identification. This method significantly decreases identity switching compared to the baseline SORT algorithm on the MOT-16 benchmark, establishing a robust foundation for metric learning in the subsequent work.

Expanding on this foundation, a pose-estimation framework integrates 2D skeletal keypoint features, extracted via OpenPose, directly into the association pipeline. By capturing the spatial relationships among body joints alongside appearance features, the system becomes more robust to posture variations and partial occlusion, achieving substantial reductions in false positives and identity switches compared to earlier methods.

Furthermore, a Diverse Detector Integration (DDI) study assessed the influence of detector choice (YOLO v4, Faster R-CNN, and MobileNet SSD v2, each paired with the Deep SORT tracker) on metric learning-based tracking. The results show that YOLO v4 consistently delivers the best tracking accuracy on both the MOT-16 and MOT-17 benchmarks.

In conclusion, this research advances occlusion-aware person Re-ID by demonstrating a clear progression from metric learning to pose-guided feature extraction and, ultimately, to transformer-based global attention modeling. The findings indicate that lightweight, carefully parameterized Vision Transformers can generalize well for occlusion detection even under constrained data, opening prospects for integrated detection, localization, and re-identification in real-world surveillance systems.


Sai Katari

Android Malware Detection System

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Arvin Agah
Prasad Kulkarni


Abstract

Android malware remains a significant threat to mobile security, requiring efficient and scalable detection methods. This project presents an Android Malware Detection System that uses machine learning to classify applications as benign or malicious based on static permission-based analysis. The system is trained on the TUANDROMD dataset of 4,464 applications using four models (Logistic Regression, XGBoost, Random Forest, and Naive Bayes) with a 75/25 train/test split and 5-fold cross-validation on the training set for evaluation. To improve reliability, the system incorporates a hybrid decision approach that combines machine learning confidence scores with a rule-based static analysis engine, using a three-zone confidence routing mechanism to capture threats that ML alone may miss. The solution is deployed as a Flask web application with both a manual detection interface and an APK file scanner, providing predictions, confidence scores, and risk insights, ultimately supporting more informed and secure decision-making.
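A three-zone confidence routing scheme of the kind described above might look like the following sketch. The thresholds, zone boundaries, and toy permission rules are illustrative assumptions, not the project's actual values:

```python
# Hedged sketch of three-zone confidence routing: the ML verdict is trusted
# only when its confidence is decisive; gray-zone cases are routed through a
# rule-based static analysis engine. Thresholds and rules are illustrative.
def rule_engine(permissions: set) -> bool:
    """Toy static rules: flag known-dangerous permission combinations."""
    risky = {"SEND_SMS", "READ_CONTACTS", "RECEIVE_BOOT_COMPLETED"}
    return len(permissions & risky) >= 2

def classify(ml_prob_malicious: float, permissions: set) -> str:
    if ml_prob_malicious >= 0.85:     # high-confidence zone: trust the model
        return "malicious"
    if ml_prob_malicious <= 0.15:     # low-risk zone: trust the model
        return "benign"
    # gray zone: defer to static rules to catch threats ML alone may miss
    return "malicious" if rule_engine(permissions) else "benign"

print(classify(0.95, set()))                              # malicious
print(classify(0.05, {"SEND_SMS"}))                       # benign
print(classify(0.50, {"SEND_SMS", "READ_CONTACTS"}))      # malicious
```

The design point is that the rule engine only arbitrates where the model is genuinely uncertain, so rule false positives cannot override a confident benign verdict.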


Ertewaa Saud Alsahayan

Toward Reliable LLM-Assisted Design Space Exploration under Performance, Cost, and Dependability Constraints

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Tamzidul Hoque, Chair
Prasad Kulkarni
Sumaiya Shomaji
Hongyang Sun
Huijeong Kim

Abstract

Architectural design space exploration (DSE) requires navigating large configuration spaces while satisfying multiple conflicting objectives, including performance, cost, and system dependability. Large language models (LLMs) have shown promise in assisting DSE by proposing candidate designs and interpreting simulation feedback. However, extending LLM-based DSE to realistic multi-objective settings introduces structural challenges. A naive multi-objective extension of prior LLM-based DSE approaches, which we term Co-Pilot2, exhibits reasoning instability, candidate degeneration, feasibility violations, and lack of progressive improvement. These limitations arise not from insufficient model capacity, but from the absence of structured control, verification, and decision integrity within the exploration process. 

To address these challenges, this research introduces REMODEL, a structured LLM-controlled DSE framework that transforms free-form reasoning into a constrained, verifiable, and iterative optimization process. REMODEL incorporates candidate pooling across parallel reasoning instances, strict state isolation via history snapshotting, deterministic feasibility verification, canonical design representation and deduplication, explicit decision stages, and structured reasoning to enforce complete parameter coverage and consistent trend analysis. These mechanisms enable reliable and stable exploration under complex multi-objective constraints. 

To support dependability-aware evaluation, the framework is integrated with cycle-accurate simulation using gem5 and its reliability-focused extension GemV, enabling detailed analysis of performance, power, and fault tolerance through vulnerability metrics. This integration allows the system to reason not only about performance–cost trade-offs, but also about reliability-aware design decisions under realistic execution conditions. 

Experimental evaluation demonstrates that REMODEL identifies near-optimal designs within a small number of simulations, achieving significantly higher solution quality per simulation compared to baseline methods such as random search and genetic algorithms, while maintaining low computational overhead. 

This work establishes a foundation for dependable LLM-assisted DSE by incorporating reliability constraints into the exploration loop. As a future direction, this framework will be extended to incorporate security-aware design considerations, enabling unified reasoning over performance, cost, reliability, and system security. 


Bretton Scarbrough

Structured Light for Particle Manipulation: Hologram Generation and Optical Binding Simulation

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shima Fardad, Chair
Rongqing Hui
Alessandro Salandrino


Abstract

This thesis addresses two related problems in the optical manipulation of microscopic particles: the efficient generation of holograms for holographic optical tweezers and the simulation of multi-particle optical binding. Holographic optical tweezers use phase-only spatial light modulators to create programmable optical trapping fields, enabling dynamic control over the number, position, and relative strength of optical traps. Because the quality of the trapping field depends strongly on the computed hologram, the first part of this work focuses on improving hologram-generation methods used in these systems.

A new phase-induced compressive sensing algorithm is presented for holographic optical tweezers, along with weighted and unweighted variants. These methods are developed from the Gerchberg-Saxton framework and are designed to improve computational efficiency while preserving favorable trapping characteristics such as uniformity and optical efficiency. By combining compressive sensing with phase induction, the proposed algorithms reduce the computational burden associated with iterative hologram generation while maintaining strong performance across a variety of trapping arrangements. Comparative simulations are used to evaluate these methods against several established hologram-generation algorithms, and the results show that the proposed approaches offer meaningful improvements in convergence behavior and overall performance.
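For context, the baseline Gerchberg-Saxton iteration that the proposed compressive-sensing variants build on can be sketched as follows, with the FFT modeling propagation between the SLM and focal planes. The grid size, trap positions, and iteration count are arbitrary:

```python
# Plain Gerchberg-Saxton iteration for a phase-only hologram: alternate between
# enforcing unit amplitude in the SLM plane (keeping only phase) and enforcing
# the target amplitude in the focal plane.
import numpy as np

def gerchberg_saxton(target_amp: np.ndarray, iters: int = 50) -> np.ndarray:
    rng = np.random.default_rng(1)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)   # random initial phase
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))             # propagate SLM -> focal plane
        far = target_amp * np.exp(1j * np.angle(far))     # impose target amplitude
        near = np.fft.ifft2(far)                          # propagate back
        phase = np.angle(near)                            # SLM is phase-only
    return phase

# Target: two optical traps (bright spots) in the focal plane.
target = np.zeros((64, 64)); target[16, 16] = target[48, 48] = 1.0
holo = gerchberg_saxton(target)
field = np.abs(np.fft.fft2(np.exp(1j * holo))) ** 2
# Fraction of diffracted power landing on the two trap sites.
efficiency = float((field[16, 16] + field[48, 48]) / field.sum())
print(efficiency)
```

The thesis's weighted and compressive-sensing variants modify this loop to improve convergence speed and trap uniformity; the skeleton above is only the common starting point.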

The second part of this thesis examines optical binding, a phenomenon in which multiple particles interact through both the incident optical field and the fields scattered by neighboring particles. To study this process, a numerical simulation is developed that incorporates gradient forces, radiation pressure, and light-mediated particle-particle interactions in both two- and three-dimensional configurations. The simulation is used to investigate how particles evolve under different initial conditions and illumination states, and how collective effects influence the formation of stable or semi-stable arrangements. These results provide insight into the role of scattering-mediated forces in many-particle optical systems and highlight differences between two-dimensional and three-dimensional behavior.

Although hologram generation and optical binding are treated as separate problems in this work, they are connected by a common goal: understanding how structured optical fields can be designed and applied to control microscopic matter. Together, the results of this thesis contribute to the broader study of computational beam shaping and many-body optical interactions, with relevance to advanced optical trapping, particle organization, and dynamically reconfigurable light-driven systems.


Sai Rithvik Gundla

Beyond Regression Accuracy: Evaluating Runtime Prediction for Scheduling Input-Sensitive Workloads

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Arvin Agah
David Johnson


Abstract

Runtime estimation plays a structural role in reservation-based scheduling for High Performance Computing (HPC) systems, where predicted walltimes directly influence reservation timing, backfilling feasibility, and overall queue dynamics. This raises a fundamental question of whether improved runtime prediction accuracy necessarily translates into improved scheduling performance. In this work, we conduct an empirical study of runtime estimation under EASY Backfilling using an application-driven workload consisting of MRI-based brain segmentation jobs. Despite identical configurations and uniform metadata, runtimes exhibit substantial variability driven by intrinsic input structure. To capture this variability, we develop a feature-driven machine learning (ML) framework that extracts region-wise features from MRI volumes to predict job runtimes without relying on historical execution traces or scheduling metadata. We integrate these ML-derived predictions into an EASY Backfilling scheduler implemented in the Batsim simulation framework. Our results show that regression accuracy alone does not determine scheduling performance. Instead, scheduling performance depends strongly on estimation bias and its effect on reservation timing and runtime exceedances. In particular, mild multiplicative calibration of ML-based runtime estimates stabilizes scheduler behavior and yields consistently competitive performance across workload and system configurations. Comparable performance can also be observed with certain levels of uniform overestimation; however, calibrated ML predictions provide a systematic mechanism to control estimation bias without relying on arbitrary static inflation. In contrast, underestimation consistently leads to severe performance degradation and cascading job terminations. 

These findings highlight runtime estimation as a structural control input in backfilling-based HPC scheduling and demonstrate the importance of evaluating prediction models jointly with scheduling dynamics rather than through regression metrics alone.
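The effect of multiplicative calibration on runtime exceedances can be illustrated with synthetic numbers. The lognormal runtimes and noise model below are assumptions for demonstration, not the study's data:

```python
# Sketch of multiplicative calibration of runtime estimates: scaling an
# unbiased but noisy prediction by c > 1 trades a little reservation slack
# for far fewer runtime exceedances (jobs overrunning their walltime),
# which is what destabilizes backfilling. Numbers are synthetic.
import random
random.seed(42)

true_runtimes = [random.lognormvariate(4.0, 0.5) for _ in range(1000)]
# Unbiased multiplicative noise: roughly half of all jobs are underestimated.
predicted = [t * random.lognormvariate(0.0, 0.3) for t in true_runtimes]

def exceedance_rate(preds, c):
    """Fraction of jobs whose calibrated estimate is below the true runtime."""
    return sum(t > c * p for t, p in zip(true_runtimes, preds)) / len(preds)

for c in (1.0, 1.2, 1.5, 2.0):
    print(c, round(exceedance_rate(predicted, c), 3))
```

This is the mechanism behind the observation above: calibration controls estimation bias systematically, whereas uniform static inflation achieves a similar exceedance reduction only by an arbitrary choice of factor.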


Pavan Sai Reddy Pendry

BabyJay: A RAG-Based Chatbot for the University of Kansas

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The University of Kansas maintains hundreds of departmental and unit websites, leaving students without a unified way to find information. General-purpose chatbots hallucinate KU-specific facts, and static FAQ pages cannot hold a conversation. This work presents BabyJay, a Retrieval-Augmented Generation chatbot that answers student questions using content scraped from official KU sources, with inline citations on every response. The pipeline combines query preprocessing and decomposition, an intent classifier that routes most queries to fast JSON lookups, hybrid retrieval (BM25 and ChromaDB vector search merged via Reciprocal Rank Fusion), a cross-encoder re-ranker, and generation by Claude Sonnet 4.6 under a context-only system prompt. Evaluation on 46 question-answer pairs across five difficulty tiers and eight domains produced a composite score of 0.72, entity precision of 93%, and zero runtime errors. Retrieval, rather than generation, emerged as the primary bottleneck, motivating future work on multi-domain query handling.
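The Reciprocal Rank Fusion step used to merge the BM25 and vector rankings follows a standard formula, sketched below with hypothetical document names; k = 60 is the constant from the original RRF paper, and BabyJay's actual value is not stated in the abstract:

```python
# Reciprocal Rank Fusion: each document scores 1/(k + rank) in every ranking
# that returns it, and the scores are summed. Documents that appear near the
# top of both the lexical and semantic lists rise to the top of the fusion.
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical document IDs for illustration.
bm25 = ["housing_faq", "tuition_page", "advising_page"]    # lexical ranking
vector = ["advising_page", "housing_faq", "dining_page"]   # semantic ranking
fused = reciprocal_rank_fusion([bm25, vector])
print(fused)
```

Because RRF uses only ranks, it needs no score normalization between BM25 and the vector index, which is why it is a common choice for hybrid retrieval.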


Ye Wang

Toward Practical and Stealthy Sensor Exploitation: Physical, Contextual, and Control-Plane Attack Paradigms

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Rongqing Hui
Bo Luo
Haiyang Chao

Abstract

Modern intelligent systems increasingly rely on continuous sensor data streams for perception, decision-making, and control, making sensors a critical yet underexplored attack surface. While prior research has demonstrated the feasibility of sensor-based attacks, recent advances in mobile operating systems and machine learning-based defenses have significantly reduced their practicality, rendering them more detectable, resource-intensive, and constrained by evolving permission and context-aware security models.

This dissertation revisits sensor exploitation under these modern constraints and develops a unified, cross-layer perspective that improves both practicality and stealth of sensor-enabled attacks. We identify three fundamental challenges: (i) the difficulty of reliably manipulating physical sensor signals in noisy, real-world environments; (ii) the effectiveness of context-aware defenses in detecting anomalous sensor behavior on mobile devices; and (iii) the lack of lightweight coordination for practical sensor-based side- and covert-channels.

To address the first challenge, we propose a physical-domain attack framework that integrates signal modeling, simulation-guided attack synthesis, and real-time adaptive targeting, enabling robust adversarial perturbations with high attack success rates even under environmental uncertainty. As a case study, we demonstrate an infrared laser-based adversarial example attack against face recognition systems, which achieves consistently high success rates across diverse conditions with practical execution overhead.

To improve attack stealth against context-aware defenses, we introduce an auto-contextualization mechanism that synchronizes malicious sensor actuation with legitimate application activity. By aligning injected signals with both statistical patterns and semantic context of benign behavior, the approach renders attacks indistinguishable from normal system operations and benign sensor usage. We validate this design using three Android logic bombs, showing that auto-contextualized triggers can evade both rule-based and learning-based detection mechanisms.

Finally, we extend sensor exploitation beyond the traditional attack-channel plane by introducing a lightweight control-plane protocol embedded within sensor data streams. This protocol encodes control signals directly into sensor observations and leverages simple signal-processing primitives to coordinate multi-stage attacks without relying on privileged APIs or explicit inter-process communication. The resulting design enables low-overhead, stealthy coordination of cross-device side- and covert-channels.

Together, these contributions establish a new paradigm for sensor exploitation that spans physical, contextual, and control-plane dimensions. By bridging these layers, this dissertation demonstrates that sensor-based attacks remain not only feasible but also practical and stealthy in modern computer systems.


Jamison Bond

Mutual Coupling Array Calibration Utilizing Decomposition of Modeled Scattering Matrix

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Carl Leuschen


Abstract

Modern phased-array antenna calibration is essential for advanced radar systems to achieve precise beamforming, sidelobe control, and coherent processing. While mutual coupling-based calibration provides a valuable internal alternative to external far-field references by exploiting near-field element interactions, the problem is fundamentally ill-posed. Measured responses depend simultaneously on transmit coefficients, receive coefficients, and the coupling matrix, making it difficult to isolate true channel errors from array-model mismatch without additional structure.

This thesis presents a Bayesian Maximum A Posteriori (MAP) calibration framework that resolves this ambiguity by embedding physically motivated prior information into the estimation problem. The nominal coupling matrix is decomposed into Infinite, Symmetric, and Reciprocal components, which define low-dimensional parameterizations and prior covariance models. A Maximum Likelihood (ML) stage first generates a data-consistent transceiver initialization, followed by a MAP estimator that refines the solution by jointly addressing structured coupling deviations and measurement uncertainty.

Evaluations using Computational Electromagnetic (CEM) models and measured WaDES array data reveal that the physical array contains more higher-order structural content than the nominal CEM model. Across Monte Carlo trials, highly structured MAP estimators generally achieve lower aggregate error than unconstrained ML and Log Least Squares (LLS) methods. The overlapping-subspace M family offers an optimal balance of structural flexibility, zero-centered phase and magnitude behavior, and tuning robustness. Additionally, parametric sweeps highlight that prior covariance scaling is a critical design parameter: tight reciprocal priors prevent spurious structural absorption, whereas overly loose priors allow model mismatch to contaminate transceiver estimates.

Ultimately, this work demonstrates that internal mutual coupling calibration can achieve autonomy and robustness against model mismatch by parameterizing the nominal coupling matrix into structured components and integrating them as Bayesian priors.


Kevin Likcani

Use of Machine Learning to Predict Drug Court Success

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Substance use remains a major public health issue in the United States that significantly impacts individuals, families, and society. Many individuals who suffer from substance use disorder (SUD) face incarceration due to drug-related offenses. Drug courts have emerged as an alternative to imprisonment and offer individuals the opportunity to participate in a drug rehabilitation program instead. Drug courts mainly focus on those with non-violent drug-related offenses. One of the challenges of decision-making in drug courts is assessing the likelihood of participants graduating from the drug court and avoiding recidivism after graduation. This study investigates the use of machine learning models to predict success in drug courts using data from a substance use drug court in Missouri. Success is measured in terms of graduation from the program, and the model includes a wide range of potential predictors, including demographic characteristics, family and social factors, substance use history, legal involvement, physical and mental health history, and employment history, as well as drug court participation data. The results will be beneficial to drug court teams and presiding judges in predicting client success, evaluating risk factors during treatment for participants, informing person-centered treatment planning, and the development of after-care plans for high-risk participants to reduce the likelihood of recidivism.


Past Defense Notices

Dates

Sai Karthik Maddirala

Real-Estate Price Analysis and Prediction Using Ensemble Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Morteza Hashemi
Prasad Kulkarni


Abstract

Accurate real-estate price estimation is crucial for buyers, sellers, investors, lenders, and policymakers, yet traditional valuation practices often rely on subjective judgment, inconsistent expertise, and incomplete market information. With the increasing availability of digital property listings, large volumes of structured real-estate data can now be leveraged to build objective, data-driven valuation systems. This project develops a comprehensive analytical framework for predicting prices for different property types using real-world listing data collected from 99acres.com across major Indian cities. The workflow includes automated web scraping, extensive data cleaning, normalization of heterogeneous property attributes, and exploratory data analysis to identify important pricing patterns and structural trends within the dataset. A multi-stage learning pipeline is designed—consisting of feature preparation, hyperparameter tuning, cross-validation, and performance evaluation—to ensure that the final predictive system is both reliable and generalizable. In addition to the core prediction engine, the project proposes a future extension using Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) to provide transparent, context-aware explanations for each valuation. Overall, this work establishes the foundation for a scalable, interpretable, and data-centric real-estate valuation platform capable of supporting informed decision-making in diverse market contexts.


Ramya Harshitha Bolla

AI Academic Assistant for Summarization and Question Answering

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid expansion of academic literature has made efficient information extraction increasingly difficult for researchers, leading to substantial time spent manually summarizing documents and identifying key insights. This project presents an AI-powered Academic Assistant designed to streamline academic reading through multi-level summarization, contextual question answering, and source-grounded traceability. The system incorporates a robust preprocessing pipeline including text extraction, artifact removal, noise filtering, and section segmentation to prepare documents for accurate analysis. After assessing the limitations of traditional NLP and transformer-based summarization models, the project adopts a Large Language Model (LLM) approach using the Gemini API, enabling deeper semantic understanding, long-context processing, and flexible summarization. The assistant provides structured short, medium, and long summaries; contextual keyword extraction; and interactive question answering with transparent source highlighting. Limitations include handling complex visual content and occasional API constraints. Overall, this project demonstrates how modern LLMs, combined with tailored prompt engineering and structured preprocessing, can significantly enhance the academic document analysis workflow.


Keerthi Sudha Borra

Intellinotes – AI-POWERED DOCUMENT UNDERSTANDING PLATFORM

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

This project presents Intellinotes, an AI-powered platform that transforms educational documents into multiple learning formats to address information-overload challenges in modern education. The system leverages large language models (GPT-4o-mini) to automatically generate four complementary outputs from a single document upload: educational summaries, conversational podcast scripts, hierarchical mind maps, and interactive flashcards.

The platform employs a three-tier architecture built with Next.js, FastAPI, and MongoDB, supporting multiple document formats (PDF, DOCX, PPTX, TXT, images) through a robust parsing pipeline. Comprehensive evaluation on 30 research documents demonstrates exceptional system reliability with a 100% feature success rate across 150 tests (5 features × 30 documents), and strong semantic understanding with a semantic similarity score of 0.72.

While ROUGE scores (ROUGE-1: 0.40, ROUGE-2: 0.09, ROUGE-L: 0.17) indicate moderate lexical overlap typical of abstractive summarization, the high semantic similarity demonstrates that the system effectively captures and conveys the conceptual meaning of source documents—an essential requirement for educational content. This validation of meaning preservation over word matching represents an important contribution to evaluating educational AI systems.
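The contrast between lexical overlap and semantic similarity can be made concrete with a minimal ROUGE-1 (unigram F1) sketch; tokenization here is plain whitespace splitting, which is simpler than standard ROUGE implementations, and the sentences are illustrative.

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1: unigram overlap between a generated summary and a
    reference, reported as F1. A paraphrase that preserves meaning
    can still score modestly, which is why semantic similarity is
    reported alongside ROUGE for abstractive summaries."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Two sentences with the same gist share only half their words.
score = rouge1_f1("the model summarizes documents",
                  "the system summarizes papers")
```

Here the F1 is 0.5 despite near-identical meaning, illustrating why low ROUGE with high semantic similarity is expected for abstractive output.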

The system processes documents in approximately 65 seconds with perfect reliability, providing students with comprehensive multi-modal learning materials that cater to diverse learning styles. This work contributes to the growing field of AI-assisted education by demonstrating a practical application of large language models for automated educational content generation supported by validated quality metrics.


Sowmya Ambati

AI-Powered Question Paper Generator

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Designing a well-balanced exam requires instructors to review extensive course materials, determine key concepts, and design questions that reflect appropriate difficulty and cognitive depth. This project develops an AI-powered Question Paper Generator that automates much of this process while keeping instructors in full control. The system accepts PDFs, Word documents, PPT slides, and text files, extracts their content, and builds a FAISS-based retrieval index using sentence-transformer embeddings. A large language model then generates multiple question types—MCQs, short answers, and true/false—guided by user-selected difficulty levels and Bloom’s Taxonomy distributions to ensure meaningful coverage. Each question is evaluated with a grounding score that measures how closely it aligns with the source material, improving transparency and reducing hallucination. A React frontend enables instructors to monitor progress, review questions, toggle answers, and export to PDF or Word, while an ASP.NET Core backend manages processing and metrics. The system reduces exam preparation time and enhances consistency across assessments.
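One plausible way to compute a grounding score of the kind described, sketched here with toy vectors, is the best cosine similarity between a generated question's embedding and any source-chunk embedding; the system's actual formula is not specified in the abstract, so this is an assumption.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def grounding_score(question_vec, chunk_vecs):
    """Score a generated question by its best similarity to any
    source chunk; low scores flag possible hallucination."""
    return max(cosine(question_vec, c) for c in chunk_vecs)
```

A question whose embedding matches some chunk exactly scores 1.0, while one orthogonal to every chunk scores 0.0 and would be surfaced for instructor review.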


George Steven Muvva

Automated Fake Content Detection Using TF-IDF-Based Machine Learning and LSTM-Driven Deep Learning Models

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid spread of misinformation across online platforms has made automated fake news detection essential. This project develops and compares machine learning (SVM, Decision Tree) and deep learning (LSTM) models to classify news headlines from the GossipCop and PolitiFact datasets as real or fake. After extensive preprocessing, including text cleaning, lemmatization, TF-IDF vectorization, and sequence tokenization, the models are trained and evaluated using standard performance metrics. Results show that SVM provides a strong baseline, but the LSTM model achieves higher accuracy and F1-scores by capturing deeper semantic and contextual patterns in the text. The study highlights the challenges of domain variation and subtle linguistic cues, while demonstrating that context-aware deep learning methods offer superior capability for automated fake content detection.
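The TF-IDF vectorization step can be sketched with a minimal pure-Python implementation (smoothed IDF in the style of scikit-learn's default; the headlines are illustrative, not drawn from the datasets):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weights for a small corpus of tokenized
    headlines. Terms appearing in fewer documents get higher IDF,
    so distinctive words outweigh common ones."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))            # document frequency per term
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)              # raw term frequency
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors

docs = [["celebrity", "quits", "show"],
        ["celebrity", "denies", "rumor"],
        ["senate", "passes", "bill"]]
vecs = tfidf_vectors(docs)
```

In the first headline, "quits" (appearing in one document) gets a higher weight than "celebrity" (appearing in two), which is the signal a linear model like SVM exploits.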


Babak Badnava

Joint Communication and Computation for Emerging Applications in Next-Generation Wireless Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri

Abstract

Emerging applications in next-generation wireless networks, such as augmented and virtual reality (AR/VR) and autonomous vehicles, demand significant computational and communication resources at the network edge. This PhD research focuses on developing joint communication–computation solutions while incorporating various network-, application-, and user-imposed constraints. In the first thrust, we examine the problem of energy-constrained computation offloading to edge servers in a multi-user, multi-channel wireless network. To develop a decentralized offloading policy for each user, we model the problem as a partially observable Markov decision process (POMDP). Leveraging bandit learning methods, we introduce a decentralized task offloading solution in which edge users offload their computation tasks to nearby edge servers over selected communication channels. 
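A bandit-based offloading policy of the kind described can be sketched with a hypothetical epsilon-greedy learner over (server, channel) arms; this illustrates the general decentralized-learning idea, not the dissertation's specific algorithm, and the reward here is assumed to be negative task latency.

```python
import random

class EpsilonGreedyOffloader:
    """Each user keeps a running mean reward per (edge server,
    channel) arm, mostly exploiting the best arm and exploring
    with probability epsilon. No central coordinator is needed."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = {a: 0 for a in self.arms}
        self.means = {a: 0.0 for a in self.arms}
        self.rng = random.Random(seed)

    def select(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.arms)     # explore
        return max(self.arms, key=lambda a: self.means[a])  # exploit

    def update(self, arm, reward):
        """Incremental running-mean update after observing a reward
        (e.g. negative end-to-end latency for the offloaded task)."""
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

# One user learning over two (server, channel) pairs.
agent = EpsilonGreedyOffloader([("s1", "ch1"), ("s2", "ch2")],
                               epsilon=0.0)
agent.update(("s1", "ch1"), -2.0)
agent.update(("s1", "ch1"), -4.0)
```

Each user runs its own instance on local observations, which is what makes the policy decentralized in a multi-user, multi-channel network.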

The second thrust focuses on user-driven requirements for resource-intensive applications, specifically the Quality of Experience (QoE) in 2D and 3D video streaming. Given the unique characteristics of millimeter-wave (mmWave) networks, we develop a beam alignment and buffer-predictive multi-user scheduling algorithm for 2D video streaming applications. This algorithm balances the trade-off between beam alignment overhead and playback buffer levels for optimal resource allocation across multiple users. We then extend our investigation to develop a joint rate adaptation and computation distribution framework for 3D video streaming in mmWave-based VR systems. Numerical results using real-world mmWave traces and 3D video datasets demonstrate significant improvements in video quality, rebuffering time, and quality variations perceived by users.

Finally, we develop novel edge computing solutions for multi-layer immersive video processing systems. By exploring and exploiting the elastic nature of computation tasks in these systems, we propose a multi-agent reinforcement learning (MARL) framework that incorporates two learning-based methods: the centralized phasic policy gradient (CPPG) and the independent phasic policy gradient (IPPG). IPPG leverages shared information and model parameters to learn edge offloading policies; however, during execution, each user acts independently based only on its local state information. This decentralized execution reduces the communication and computation overhead of centralized decision-making and improves scalability. We leverage real-world 4G, 5G, and WiGig network traces, along with 3D video datasets, to investigate the performance trade-offs of CPPG and IPPG when applied to elastic task computing.


Sri Dakshayani Guntupalli

Customer Churn Prediction for Subscription-Based Businesses

When & Where:


LEEP2, Room 2420

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

Customer churn is a critical challenge for subscription-based businesses, as it directly impacts revenue, profitability, and long-term customer loyalty. Because retaining existing customers is more cost-effective than acquiring new ones, accurate churn prediction is essential for sustainable growth. This work presents a machine learning-based framework for predicting and analyzing customer churn, coupled with an interactive Streamlit web application that supports real-time decision making. Using historical customer data that includes demographic attributes, usage behavior, transaction history, and engagement patterns, the system applies extensive data preprocessing and feature engineering to construct a modeling-ready dataset. Multiple models (Logistic Regression, Random Forest, and XGBoost) are trained and evaluated using the Scikit-Learn framework. Model performance is assessed with metrics such as accuracy, precision, recall, F1-score, and ROC-AUC to identify the most effective predictor of churn. The top-performing models are serialized and deployed within a Streamlit interface that accepts individual customer inputs or batch data files to generate immediate churn predictions and summaries. Overall, this project demonstrates how machine learning can transform raw customer data into actionable business intelligence and provides a scalable approach to proactive customer retention management.
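ROC-AUC, one of the reported metrics, can be computed directly from its rank interpretation; a minimal sketch with illustrative labels and scores (in practice one would use scikit-learn's `roc_auc_score`):

```python
def roc_auc(labels, scores):
    """ROC-AUC as the probability that a randomly chosen positive
    (churner) receives a higher model score than a randomly chosen
    negative (retained customer); ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three of four churner/non-churner pairs are ranked correctly.
auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```

Because AUC depends only on ranking, it is robust to the class imbalance typical of churn data, which is why it is reported alongside accuracy and F1.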


QiTao Weng

Anytime Computer Vision for Autonomous Driving

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Heechul Yun, Chair
Drew Davidson
Shawn Keshmiri


Abstract

Latency–accuracy tradeoffs are fundamental in real-time applications of deep neural networks (DNNs) for cyber-physical systems. In autonomous driving, in particular, safety depends on both prediction quality and the end-to-end delay from sensing to actuation. We observe that (1) when latency is accounted for, the latency-optimal network configuration varies with scene context and compute availability; and (2) a single fixed-resolution model becomes suboptimal as conditions change.

We present a multi-resolution, end-to-end deep neural network for the CARLA urban driving challenge using monocular camera input. Our approach employs a convolutional neural network (CNN) that supports multiple input resolutions through per-resolution batch normalization, enabling runtime selection of an ideal input scale under a latency budget, as well as resolution retargeting, which allows multi-resolution training without access to the original training dataset.

We implement and evaluate our multi-resolution end-to-end CNN in CARLA to explore the latency–safety frontier. Results show consistent improvements in per-route safety metrics—lane invasions, red-light infractions, and collisions—relative to fixed-resolution baselines.


Sherwan Jalal Abdullah

A Versatile and Programmable UAV Platform for Integrated Terrestrial and Non-Terrestrial Network Measurements in Rural Areas

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Shawn Keshmiri


Abstract

Reliable cellular connectivity is essential for modern services such as telehealth, precision agriculture, and remote education; yet, measuring network performance in rural areas presents significant challenges. Traditional drive testing cannot access large geographic areas between roads, while crowdsourced data provides insufficient spatial resolution in low-population regions. To address these limitations, we develop an open-source UAV-based measurement platform that integrates an onboard computation unit, commercial cellular modem, and automated flight control to systematically capture Radio Access Network (RAN) signals and end-to-end network performance metrics at different altitudes. Our platform collects synchronized measurements of signal strength (RSRP, RSSI), signal quality (RSRQ, SINR), latency, and bidirectional throughput, with each measurement tagged with GPS coordinates and altitude. Experimental results from a semi-rural deployment reveal a fundamental altitude-dependent trade-off: received signal power improves at higher altitudes due to enhanced line-of-sight conditions, while signal quality degrades from increased interference with neighboring cells. Our analysis indicates that most of the measurement area maintains acceptable signal quality, along with adequate throughput performance, for both uplink and downlink communications. We further demonstrate that strong radio signal metrics for individual cells do not necessarily translate to spatial coverage dominance: the cell serving the majority of our test area exhibited only moderate performance, while cells with superior metrics contributed minimally to overall coverage. Next, we develop several machine learning (ML) models to improve the prediction accuracy of signal strength at unmeasured altitudes.
Finally, we extend our measurement platform by integrating non-terrestrial network (NTN) user terminals with the UAV components to investigate the performance of Low-Earth Orbit (LEO) satellite networks with UAV mobility. Our measurement results demonstrate that NTN offers a viable fallback option by achieving acceptable latency and throughput performance during flight operations. Overall, this work establishes a reproducible methodology for three-dimensional rural network characterization and provides practical insights for network operators, regulators, and researchers addressing connectivity challenges in underserved areas.


Satya Ashok Dowluri

Comparison of Copy-and-Patch and Meta-Tracing Compilation techniques in the context of Python

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hossein Saiedian


Abstract

Python's dynamic nature makes performance enhancement challenging. Recently, a JIT compiler using a novel copy-and-patch compilation approach was implemented in the reference Python implementation, CPython. Our goal in this work is to study and understand the performance properties of CPython's new JIT compiler. To facilitate this study, we compare the quality and performance of the code generated by this new JIT compiler with a more mature and traditional meta-tracing based JIT compiler implemented in PyPy (another Python implementation). Our thorough experimental evaluation reveals that, while it achieves the goal of fast compilation speed, CPython's JIT severely lags in code quality/performance compared with PyPy. While this observation is a known and intentional property of the copy-and-patch approach, it results in the new JIT compiler failing to elevate Python code performance beyond that achieved by the default interpreter, despite significant added code complexity. In this thesis, we report and explain our novel experiments, results, and observations.