Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.
Upcoming Defense Notices
David Felton
Optimization and Evaluation of Physical Complementary Radar Waveforms
When & Where:
Nichols Hall, Room 129 (Apollo Auditorium)
Committee Members:
Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata
Abstract
The RF spectrum is a precious, finite resource with ever-increasing demand. Consequently, the mandate to be a "good spectral neighbor" is in direct conflict with the requirements for high-performance sensing where correlation error is fundamentally limited. As such, matched-filter radar performance is often sidelobe-limited with estimation error being constrained by the time-bandwidth (TB) of the collective emission. The methods developed here seek to bridge this gap between idealized radar performance and practical utility via waveform design.
Estimation error becomes more complex when employing pulse agility, where range-sidelobe modulation (RSM) spreads energy across Doppler, rendering traditional methods ineffective. To address this, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining subsets within a pulse-agile emission. In contrast to the majority of complementary signals, which have been explored via phase-coding, these Comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM (though design distortion is never completely avoided). Although Comp-FM addressed practicality via hardware amenability, CSC was localized to zero-Doppler. This work expands the Comp-FM notion to a Doppler-generalized (DG) framework, extending the cancellation condition to an arbitrary span. The same framework can likewise be employed to jointly optimize an entire coherent processing interval (CPI) to minimize RSM within the radar point-spread-function (PSF), thereby generalizing the notion of complementarity and introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
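To make the CSC idea concrete, the sketch below uses a classic phase-coded Golay complementary pair, whose aperiodic autocorrelations cancel everywhere except zero lag when summed. This is only an illustration of the complementarity property; the Comp-FM waveforms above are FM rather than phase-coded and are produced by gradient-based optimization, not by this recursive construction.

```python
import numpy as np

# Length-8 Golay complementary pair via the standard recursive construction:
# a_{n+1} = [a_n, b_n], b_{n+1} = [a_n, -b_n].
a, b = np.array([1.0]), np.array([1.0])
for _ in range(3):
    a, b = np.concatenate([a, b]), np.concatenate([a, -b])

def autocorr(x):
    """Aperiodic autocorrelation at all lags."""
    return np.correlate(x, x, mode="full")

# Individually, each autocorrelation has sidelobes; their sum is an ideal
# 2N spike at zero lag -- complementary sidelobe cancellation.
print(autocorr(a) + autocorr(b))
```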
Sensing with a single emitter is limited by self-inflicted error alone (e.g., clutter, sidelobes), while MIMO systems must additionally contend with the cross-responses from emitters operating concurrently (e.g., simultaneously, spatially proximate, in a shared spectrum), further degrading radar sensitivity. Total correlation error is then dictated by the overlapping TB (i.e., how coincident the signals are) and the number of operating emitters, compounding estimation difficulty if left unaddressed. As such, the determination of "orthogonal waveforms" comprises a large portion of the MIMO literature, though it remains a phenomenological misnomer for pulsed emissions. Here, the notion of complementary-FM is applied to a multi-emitter context in which transmitter-amenable quasi-orthogonal subsets, occupying the same spectral band, are produced via a similar gradient-based approach. To further improve the practicality of these MIMO-Comp-FM waveform subsets, the same "DG" approach described above, which addresses the otherwise-default Doppler-induced degradation of complementary signals, is applied. In doing so, Doppler-independent separability and complementarity greatly improve estimation sensitivity for multi-emitter systems.
This MIMO-Comp-FM framework is developed for standard matched filter processing. Coupling this framework with a "DG" form of the previously explored MIMO-MiCRFt is also investigated, illustrating the added benefit of pairing optimized subsets with similarly calibrated processing.
Each of these methods is developed to address unique and increasingly complex sources of estimation error. All approaches are initially developed and evaluated via simulated analysis where ground-truth is known. Then, despite hardware-induced distortion being unavoidable, the MIMO-Comp-FM framework is confirmed via loopback measurements to preserve the majority of CSC that was observed in simulation. Finally, open-air demonstration of each approach validates practical utility on a radar system.
Hao Xuan
Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu
Abstract
Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.
These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.
First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
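As background for the indexing idea, the sketch below builds a plain single-view k-mer index and reports seed hits for a read. All names here are hypothetical stand-ins; VAT's asymmetric multi-view index, LCP-based seed-length adjustment, and flexible chaining go well beyond this minimal form.

```python
from collections import defaultdict

def build_kmer_index(reference: str, k: int) -> dict:
    """Map every k-mer in the reference to its start positions
    (a toy single-view index, not VAT's multi-view scheme)."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def seed_hits(read: str, index: dict, k: int):
    """Yield (read_offset, reference_position) seed pairs for later chaining."""
    for j in range(len(read) - k + 1):
        for pos in index.get(read[j:j + k], []):
            yield j, pos

ref = "ACGTACGTGACCA"
idx = build_kmer_index(ref, k=4)
print(sorted(seed_hits("CGTGACC", idx, k=4)))  # collinear seeds to chain
```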
Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.
Pramil Paudel
Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao
Abstract
Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference.
We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks.
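A minimal sketch of the forward model underlying this work, assuming the common approximation that a lensless measurement is the scene convolved with the PSF plus sensor noise, is shown below. The random PSF and the FFT-based circular convolution are illustrative simplifications; real systems use a calibrated PSF and a crop-and-convolve model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene and a random diffuser-like PSF (stand-ins for a calibrated PSF).
scene = np.zeros((64, 64)); scene[20:30, 35:45] = 1.0
psf = rng.random((64, 64)); psf /= psf.sum()

# Lensless forward model: measurement ~= scene convolved with the PSF,
# computed here as circular convolution via the frequency domain.
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
measurement += 0.01 * rng.standard_normal(measurement.shape)  # sensor noise

# The raw measurement is visually unintelligible, yet a classifier can be
# trained directly on it (or on its spectrum) with no reconstruction step.
features = np.abs(np.fft.fft2(measurement)).ravel()
print(measurement.shape, features.shape)
```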
Sharmila Raisa
Digital Coherent Optical System: Investigation and Monitoring
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Erik Perrins
Alessandro Salandrino
Jie Han
Abstract
Coherent wavelength-division multiplexed (WDM) optical fiber systems have become the primary transmission technology for high-capacity data networks, driven by the explosive bandwidth demand of cloud computing, streaming services, and large-scale artificial intelligence training infrastructure. This dissertation investigates two fundamental aspects of digital coherent fiber optic systems under the unifying theme of source and monitoring: the design of multi-wavelength optical sources compatible with high-order coherent detection, and the leveraging of fiber Kerr-effect nonlinearity at the coherent receiver to perform physical-layer link health monitoring and to assess inherent security vulnerabilities — both achieved through digital signal processing of the received complex optical field without dedicated hardware.
We begin by addressing the multi-wavelength transmitter challenge in WDM coherent systems. Existing quantum-dot, quantum-dash, and quantum-well based optical frequency comb (OFC) sources share a common limitation: individual comb line linewidths in the tens of MHz range caused by low output power levels of 1–20 mW, making them incompatible with high-order coherent detection. We demonstrate coherent system application of a single-section InGaAsP QW Fabry-Perot laser diode with greater than 120 mW optical power at the fiber pigtail and 36.14 GHz mode spacing. The high optical power per mode produces Lorentzian equivalent linewidths below 100 kHz — compatible with 16-QAM carrier phase recovery without optical phase locking. Experimental results obtained using a commercial Ciena WaveLogic-Ai coherent transceiver demonstrate 20-channel WDM transmission over 78.3 km of standard single-mode fiber with all channels below the HD-FEC threshold of 3.8 × 10⁻³ at 30 GBaud differential-coded 16-QAM, corresponding to an aggregate capacity of 2.15 Tb/s from a single laser device.
After investigating the QW Fabry-Perot laser as a multi-wavelength source for coherent WDM transmission, we leverage the coherent receiver DSP to exploit fiber Kerr-effect nonlinearity for longitudinal power profile estimation, enabling reconstruction of the signal power distribution P(z) along the full multi-span link without dedicated hardware or traffic interruption. We propose a modified enhanced regular perturbation (ERP) method that corrects two independent physical error sources of the standard RP1 least-squares baseline: the accumulated nonlinear phase rotation, and the dispersion-mediated phase-to-intensity conversion — a second bias source not addressed by prior methods. The RP1 method produces mean absolute error (MAE) that scales quadratically with span count, growing to 1.656 dB at 10 spans and 3 dBm. The modified ERP reduces this to 0.608 dB — an improvement that grows consistently with link length, confirming increasing advantage in the long-haul regime. Extension to WDM through an XPM-aware per-channel formulation achieves MAE of 0.113–0.419 dB across 150–500 km link lengths.
In addition to its role in enabling DSP-based longitudinal power profile estimation, the fiber Kerr-effect nonlinearity is shown to give rise to an inherent physical-layer security vulnerability in coherent WDM systems. We show that an eavesdropper co-tenanting a shared fiber — transmitting a continuous-wave probe at a wavelength adjacent to the legitimate signal — can capture the XPM-induced waveform at the fiber output and apply a bidirectional gated recurrent unit neural network, trained on split-step Fourier method simulation data, to reconstruct the transmitted symbol sequence without physical fiber access and without perturbing the legitimate signal. This eavesdropping mechanism is experimentally validated using a commercial Ciena WaveLogic-Ai coherent transceiver for ASK, BPSK, QPSK, and 16-QAM modulation formats at 4.26 GBaud and 8.56 GBaud over one- and two-span 75 km fiber systems, achieving zero symbol errors under high-OSNR conditions. Noise-aware training over OSNR from 20 to 60 dB maintains symbol error rate below 10⁻² for OSNR above 25–30 dB.
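The split-step Fourier method mentioned above, used to generate the eavesdropper's training data, alternates a frequency-domain dispersion step with a time-domain Kerr step. A minimal scalar sketch follows; the parameter values are illustrative standard-single-mode-fiber-like assumptions, not the dissertation's simulation settings.

```python
import numpy as np

def ssfm(a0, dt, length, nsteps, beta2=-21.7e-27, gamma=1.3e-3, alpha=0.0):
    """Minimal scalar split-step Fourier propagation of a field envelope.
    beta2 [s^2/m], gamma [1/(W m)], alpha [1/m], dt [s], length [m]."""
    dz = length / nsteps
    w = 2 * np.pi * np.fft.fftfreq(a0.size, dt)           # angular frequencies
    lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz)  # dispersion + loss
    a = a0.astype(complex)
    for _ in range(nsteps):
        a = np.fft.ifft(np.fft.fft(a) * lin)              # linear step
        a *= np.exp(1j * gamma * np.abs(a)**2 * dz)       # Kerr nonlinearity
    return a

t = np.arange(-512, 512) * 1e-12                          # 1 ps time grid
pulse = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (50e-12)**2))  # 1 mW peak Gaussian
out = ssfm(pulse, dt=1e-12, length=75e3, nsteps=200)      # one 75 km span
print(np.abs(out).max())
```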
Together, these three contributions demonstrate that the coherent fiber optic system is a versatile physical instrument extending well beyond its role as a data transmission medium. The coherent receiver infrastructure — deployed for high-order modulation and data recovery — simultaneously enables the high-power OFC laser to serve as a practical multi-wavelength transmitter source, and provides the complex field measurement capability through which fiber Kerr-effect nonlinearity can be exploited constructively for distributed link monitoring and, as a direct consequence, reveals an inherent physical-layer security exposure in shared fiber infrastructure. This unified perspective on the coherent system as both a transmission platform and a general-purpose measurement instrument has direct relevance to the design of spectrally efficient, self-monitoring, and physically secure optical interconnects for next-generation AI computing networks.
Past Defense Notices
PAUL KLINE
Remote Attestation Protocol Verification with a Privacy Emphasis
When & Where:
246 Nichols Hall
Committee Members:
Perry Alexander, Chair
Prasad Kulkarni
Garrett Morris
Abstract
Remote attestation is innately challenging and fraught with auxiliary challenges. Even determining what information to request can be a challenge. In cases where a presumptuous request is denied, mutual trust can be built incrementally to achieve the same result. All the while, we must 1) respect our own privacy policy, not revealing more than necessary; 2) respond to counter-attestation requests to build trust slowly; 3) avoid “Measurement Deadlock” situations by handling cycles. In addition to these guidelines, there are basic properties of a remote attestation procedure that should be verified. One such property is ensuring parties send and receive messages harmoniously. Using the theorem prover Coq, we explore designing, modeling, and verifying a mutual remote attestation procedure via an imperative protocol language that supports dynamically generating execution steps to perform a mutually agreeable attestation protocol from nothing other than a party’s initial privacy policy.
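As a toy illustration of the third guideline, the sketch below checks whether a chain of attestation requests revisits a party, the cycle condition behind “Measurement Deadlock.” It deliberately simplifies each party to a single outgoing request; the actual work models and verifies protocols in Coq, not Python.

```python
def has_request_cycle(requests: dict, start) -> bool:
    """Detect whether following attestation requests from `start` revisits a
    party (the "measurement deadlock" cycle the protocol must handle).
    `requests` maps each party to the one party it asks to attest (toy model)."""
    seen = set()
    node = start
    while node in requests:
        if node in seen:
            return True
        seen.add(node)
        node = requests[node]
    return False

# A asks B, B asks C, C asks A: naive mutual attestation would loop forever.
print(has_request_cycle({"A": "B", "B": "C", "C": "A"}, "A"))  # True
```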
SUMANT PATHAK
A Performance and Channel Spacing Analysis of LDPC Coded APSK
When & Where:
246 Nichols Hall
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Taejoon Kim
Abstract
Amplitude-Phase Shift Keying (APSK) is a linear modulation format suitable for use in aeronautical telemetry due to its low peak-to-average power ratio (PAPR). However, since the PAPR of APSK is not exactly unity (0 dB) in practice, it must be used with power amplifiers operating with backoff. To compensate for the loss in power efficiency, this work considers the pairing of Low-Density Parity Check (LDPC) codes with APSK. We consider the combinations of 16- and 32-APSK with rate 1/2, 2/3, 3/4, and 4/5 AR4JA LDPC codes with optimal and sub-optimal reduced-complexity decoding algorithms. The loss in power efficiency due to sub-optimal decoding is characterized, and the overall performance is compared to SOQPSK-TG to approximate the backoff capacity of a coded-APSK system. Another advantage of APSK-based telemetry systems is the improved bandwidth efficiency. The second part of this work considers the adjacent channel spacing of a system with multiple configurations using coded-APSK and SOQPSK-TG. We consider different combinations of 16- and 32-APSK and SOQPSK-TG and find the minimum spacing between the respective waveforms that does not distort system performance.
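A quick way to see why backoff is needed is to compute the symbol-level PAPR of a 16-APSK constellation. The ring ratio below is a DVB-S2-style assumed value, and the waveform PAPR after pulse shaping is higher; this is a sketch, not the thesis's configuration.

```python
import numpy as np

# 16-APSK: 4 points on an inner ring, 12 on an outer ring
# (DVB-S2-style radius ratio; the exact ratio depends on code rate).
gamma = 2.57
inner = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
outer = gamma * np.exp(1j * (np.pi / 12 + np.pi / 6 * np.arange(12)))
constellation = np.concatenate([inner, outer])

power = np.abs(constellation) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"constellation PAPR = {papr_db:.2f} dB")  # ~1 dB > 0 dB, hence backoff
```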
DAVID MENAGER
A Cognitive Systems Approach to Explainable Autonomy
When & Where:
2001B Eaton Hall
Committee Members:
Arvin Agah, Chair
Dongkyu Choi, Co-Chair
Michael Branicky
Andrew Williams
Abstract
Human-computer interaction (HCI) and artificial intelligence (AI) research have greatly progressed over the years. Work in HCI aims to create cyberphysical systems that facilitate good interactions with humans, while artificial intelligence work aims to understand the causes of intelligent behavior and reproduce them on a computer. To this point, HCI systems typically avoid the AI problem, and AI researchers typically have focused on building systems that work alone or with other AI systems, but de-emphasize human collaboration. In this thesis, we examine the role of episodic memory in constructing intelligent agents that can collaborate with and learn from humans. We present our work showing that agents with episodic memory capabilities can expose their internal decision-making process to users, and that an agent can learn relational planning operators from episodic traces.
KRISHNA TEJA KARIDI
Improvements to the CReSIS HF-VHF Sounder and UHF Accumulation Radar
When & Where:
317 Nichols Hall
Committee Members:
Carl Leuschen, Chair
Fernando Rodriguez-Morales, Co-Chair
Chris Allen
Abstract
This thesis documents the improvements made to a UHF radar system for snow accumulation measurements and the implementation of an airborne HF radar system for ice sounding. The HF sounder radar was designed to operate at two discrete frequency bands centered at 14.1 MHz and 31.5 MHz with a peak power level of 1 kW, representing an order-of-magnitude increase over earlier implementations. A custom transmit/receive module was developed with a set of lumped-element impedance matching networks suitable for integration on a Twin Otter aircraft. The system was integrated and deployed to Greenland in 2016, showing improved detection capabilities for the ice/bottom interface in some areas of Jakobshavn Glacier and the potential for cross-track aperture formation to mitigate surface clutter. The performance of the UHF radar (also known as the CReSIS Accumulation radar) was significantly improved by transitioning from a single-channel realization with 5-10 W peak transmit power to a multi-channel system with 1.6 kW. This was accomplished by developing custom transmit/receive modules capable of handling 400 W peak per channel with fast switching, incorporating a high-speed waveform generator and data acquisition system, and upgrading the baluns that feed the antenna elements. We demonstrated dramatically improved observation capabilities over the course of two different field seasons in Greenland onboard the NASA P-3.
SRAVYA ATHINARAPU
Model Order Estimation and Array Calibration for Synthetic Aperture Radar Tomography
When & Where:
317 Nichols Hall
Committee Members:
Jim Stiles, Chair
John Paden, Co-Chair
Shannon Blunt
Abstract
The performance of several methods for estimating the number of source signals impinging on a sensor array is compared using a traditional simulator, and their performance for synthetic aperture radar tomography is discussed, as such estimation is useful in the fields of radar and remote sensing when multichannel arrays are employed. All methods use the sum of the likelihood function and a penalty term. We consider two signal models for model selection and refer to these as suboptimal and optimal. The suboptimal model uses a simplified signal model, and model selection and direction-of-arrival estimation are done in separate steps. The optimal model uses the actual signal model, and model selection and direction-of-arrival estimation are done in the same step. In the literature, suboptimal model selection is used because of computational efficiency, but our radar post-processing is less time-constrained, so we implement the optimal model for the estimation and compare the performance results. Interestingly, we find several methods discussed in the literature do not work using optimal model selection, but can work if the optimal model selection is normalized. We also formulate a new penalty term, numerically tuned so that it gives optimal performance over a particular set of operating conditions, and compare this method as well. The primary contribution of this work is the development of an optimizer that finds a numerically tuned penalty term that outperforms current methods, and a discussion of the normalization techniques applied to optimal model selection. Simulation results show that the numerically tuned model selection criterion is optimal and that the typical methods do not do well for low snapshot counts, which are common in radar and remote sensing applications. We apply the algorithms to data collected by the CReSIS radar depth sounder and discuss the results.
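All of the compared criteria share the penalized-likelihood form described above: a maximized log-likelihood plus a penalty that grows with model order. The sketch below shows the classic AIC and BIC instances of that form as background; the thesis's contribution is a numerically tuned penalty rather than these fixed ones.

```python
import numpy as np

def select_model_order(log_likelihoods, num_params, n_snapshots):
    """Pick the source count minimizing a penalized log-likelihood criterion.
    log_likelihoods[k] is the maximized log-likelihood under k sources and
    num_params[k] the corresponding free-parameter count (AIC/BIC-style;
    illustrative, not the thesis's tuned penalty)."""
    ll = np.asarray(log_likelihoods, dtype=float)
    k = np.asarray(num_params, dtype=float)
    aic = -2 * ll + 2 * k                      # Akaike penalty
    bic = -2 * ll + k * np.log(n_snapshots)    # Bayesian penalty
    return int(np.argmin(aic)), int(np.argmin(bic))

# Toy values for orders 0..3: likelihood improves, penalty resists overfitting.
print(select_model_order([-120.0, -95.0, -93.5, -93.2], [1, 3, 5, 7], 20))
```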
In addition to model order estimation, array model errors should be estimated to improve direction-of-arrival estimation. A parametric-model implementation of array calibration is discussed that estimates the first- and second-order array model errors. Simulation results for the gain, phase, and location errors are discussed.
PRANJALI PARE
Development of a PCB with Amplifier and Discriminator for the Timing Detector in CMS-PPS
When & Where:
2001B Eaton Hall
Committee Members:
Chris Allen, Chair
Christophe Royon, Co-Chair
Ron Hui
Carl Leuschen
Abstract
The Compact Muon Solenoid - Precision Proton Spectrometer (CMS-PPS) detector at the Large Hadron Collider (LHC) operates at high luminosity and is designed to measure forward scattered protons resulting from proton-proton interactions involving photon and Pomeron exchange processes. The PPS uses tracking and timing detectors for these measurements. The timing detectors measure the arrival time of the protons on each side of the interaction, and their difference is used to reconstruct the vertex of the interaction. A good time precision (~10 ps) on the arrival time is desired to have a good precision (~2 mm) on the vertex position. The time precision is approximately equal to the ratio of the Root Mean Square (RMS) noise to the slew rate of the signal obtained from the detector.
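That noise-to-slew-rate ratio makes the ~10 ps target easy to sanity-check with illustrative numbers; the values below are assumptions for the example, not measurements from the thesis.

```python
# Time precision ~= RMS noise / slew rate.
rms_noise = 1e-3           # 1 mV RMS noise (assumed)
slew_rate = 100e-3 / 1e-9  # 100 mV/ns slew rate (assumed), in V/s
sigma_t = rms_noise / slew_rate
print(f"sigma_t = {sigma_t * 1e12:.1f} ps")  # -> 10.0 ps
```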
Components of the timing detector include Ultra-Fast Silicon Detector (UFSD) sensors that generate a current pulse, a transimpedance amplifier with shaping, and a discriminator. This thesis discusses the circuit schematic and simulations of an amplifier designed to achieve the desired time precision, and the choice and simulation of discriminators with Low Voltage Differential Signal (LVDS) outputs. Additionally, details of the Printed Circuit Board (PCB) design, including the arrangement of components, traces, and stackup, are discussed for a 6-layer PCB that houses these three components. The PCB has been manufactured, and tests were performed to assess its functionality.
AMIR MODARRESI
Network Resilience Architecture and Analysis for Smart Cities
When & Where:
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Fengjun Li
Bo Luo
Egemen Cetinkaya
Abstract
The Internet of Things (IoT) is expanding rapidly into every aspect of human life, including healthcare, homes, cities, and driverless vehicles, making humans more dependent on the Internet and related infrastructure. While many researchers have studied the resilience of the Internet structure as a whole, new studies are required to investigate the resilience of the edge networks in which people and “things” connect to the Internet. Since the range of service requirements varies at the edge of the network, a wide variety of protocols are needed. In this research proposal, we survey standard protocols and IoT models. Next, we propose an abstract model for smart homes and cities to illustrate the heterogeneity and complexity of network structure. Our initial results show that the heterogeneity of the protocols has a direct effect on IoT and smart city resilience. As the next step, we derive a graph model from the proposed model and perform graph-theoretic analysis to recognize the fundamental behavior of the network and improve its robustness. We perform this improvement by modifying the topology and adding extra nodes and links when necessary. Finally, we will conduct various simulation studies on the model to validate its resilience.
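A minimal version of that graph-theoretic step might look like the sketch below, which scores a toy smart-home topology with algebraic connectivity before and after adding a redundant link. The topology and the choice of metric are illustrative assumptions, not the proposal's actual abstract model.

```python
import networkx as nx

# Toy smart-home graph: devices hang off a hub, which reaches the Internet
# through a single gateway (an assumed topology for illustration).
g = nx.Graph([("gateway", "hub"), ("hub", "light"), ("hub", "lock"),
              ("hub", "thermostat"), ("gateway", "camera")])

# Algebraic connectivity is one common robustness indicator: higher values
# mean the graph is harder to disconnect.
print(nx.algebraic_connectivity(g))

g.add_edge("hub", "camera")          # add a redundant link around the gateway
print(nx.algebraic_connectivity(g))  # connectivity increases
```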
VENKAT VADDULA
Content Analysis in Microblogging Communities
When & Where:
2001B Eaton Hall
Committee Members:
Nicole Beckage, Chair
Jerzy Grzymala-Busse
Bo Luo
Abstract
People use online social networks like Twitter to communicate and discuss a variety of topics. This makes these social platforms an important source of information. In the case of Twitter, making sense of this source of information requires understanding the content of tweets: what is being discussed on these platforms and how the ideas and opinions of a group coalesce around certain themes. Although there are many algorithms to classify (identify) topics, the restricted length of tweets and the usage of jargon, abbreviations, and URLs make this hard to perform without immense expertise in natural language processing. To address the need for content analysis on Twitter that is easily implementable, we introduce two measures based on term frequency to identify topics in the Twitter microblogging environment. We apply these measures to tweets with hashtags related to the Pulse night club shooting in Orlando that happened on June 12, 2016. This event was branded as both a terrorist attack and a hate crime, and different people on Twitter tweeted about it differently, forming social network communities, making this a fitting domain to explore our algorithms' ability to detect the topics of community discussions on Twitter. Using community detection algorithms, we discover communities on Twitter. We then use frequency statistics and Monte Carlo simulation to determine the significance of certain hashtags. We show that this approach is capable of uncovering differences in community discussions and propose this method as a means to do community-based content detection.
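A minimal sketch of a frequency-plus-Monte-Carlo test of this kind is shown below: it asks whether a hashtag appears in a community's tweets more often than in random samples of the same size drawn from the full corpus. The function name and data are hypothetical, and the thesis's two measures are not necessarily this exact form.

```python
import numpy as np

def hashtag_significance(comm_tags, all_tags, tag, trials=10_000, seed=0):
    """Monte Carlo test: empirical p-value that `tag` is over-represented in
    a community relative to random same-size samples from all tweets."""
    rng = np.random.default_rng(seed)
    observed = comm_tags.count(tag) / len(comm_tags)
    pool = np.array(all_tags)
    draws = rng.choice(pool, size=(trials, len(comm_tags)))  # null samples
    null = (draws == tag).mean(axis=1)                       # null frequencies
    return (null >= observed).mean()

community = ["#pulse", "#orlando", "#pulse", "#loveislove"]
corpus = ["#pulse", "#news", "#orlando", "#sports", "#loveislove"] * 50
print(hashtag_significance(community, corpus, "#pulse"))  # small -> significant
```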
TEJASWINI JAGARLAMUDI
Community-based Content Analysis of the Pulse Night Club Shooting
When & Where:
2001B Eaton Hall
Committee Members:
Nicole Beckage, Chair
Prasad Kulkarni
Fengjun Li
Abstract
On June 12, 2016, 49 people were killed and another 58 wounded in an attack at Pulse Nightclub in Orlando, Florida. This incident was regarded both as a hate crime against LGBT people and as a terrorist attack. This project focuses on analyzing tweets from the week after the attack, specifically looking at how different communities within Twitter were discussing this event. To understand how Twitter users in different communities discussed this event, a set of hashtag frequency-based evaluation measures and simulations is proposed. The simulations are used to assess the specific hashtag content of a community. Using community detection algorithms and text analysis tools, significant topics that specific communities are discussing, and the topics that are being avoided by those communities, are discovered.
NIHARIKA GANDHARI
A Comparative Study on Strategies of Rule Induction for Incomplete Data
When & Where:
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Perry Alexander
Bo Luo
Abstract
Rule induction is one of the major applications of rough set theory. However, traditional rough set models cannot deal with incomplete data sets. Missing values can be handled by data pre-processing or by extensions of rough set models. Two data pre-processing methods and one extension of the rough set model are considered in this project: filling in missing values with the most common value, ignoring objects by deleting records, and the extended discernibility matrix. The objective is to compare these methods in terms of stability and effectiveness. All three methods use the same rule induction method and are analyzed based on test accuracy and the percentage of missing attribute values. To better understand the properties of these approaches, eight real-life data sets with varying levels of missing attribute values are used for testing. Based on the results, we discuss the relative merits of the three approaches in an attempt to decide upon an optimal approach. The final conclusion is that the best method is pre-processing by filling in missing values with the most common value.
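The winning strategy, most-common-value imputation, is simple to express. A minimal pandas sketch follows, on toy data rather than the study's data sets.

```python
import pandas as pd

# Fill each attribute's missing values with that attribute's most common
# value (the mode) -- the pre-processing strategy the comparison favored.
df = pd.DataFrame({"color": ["red", None, "red", "blue"],
                   "size":  [None, "small", "small", "large"]})
filled = df.fillna({col: df[col].mode()[0] for col in df.columns})
print(filled)
```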