Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which must run on resource-constrained devices, run a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments that suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
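
For readers unfamiliar with the underlying interface, Linux exposes per-mapping memory statistics through /proc/<pid>/smaps, including the proportional set size (Pss) that many general-purpose profilers ignore. The minimal Python sketch below, which simply sums Pss across mappings, illustrates that interface only; it is not the smaps-profiler tool introduced in the thesis.

    import os
    import re

    def total_pss_kb(pid):
        """Sum the Pss (proportional set size) over every mapping of a process.

        /proc/<pid>/smaps reports per-mapping fields such as Rss, Pss, and Swap
        in kilobytes; Pss splits shared pages among the processes sharing them.
        """
        total = 0
        with open(f"/proc/{pid}/smaps") as f:
            for line in f:
                match = re.match(r"Pss:\s+(\d+) kB", line)
                if match:
                    total += int(match.group(1))
        return total

    print(total_pss_kb(os.getpid()), "kB")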


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
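
As a concrete illustration of item (ii) above, npm packages may declare preinstall, install, and postinstall lifecycle scripts in package.json that run automatically during installation. The short Python sketch below flags such scripts; it is illustrative only and is not the detection or mitigation tooling developed in this research.

    import json
    from pathlib import Path

    # npm lifecycle hooks that execute automatically during "npm install"
    INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

    def install_scripts(package_dir):
        """Return any install-time scripts declared by a package."""
        manifest = json.loads(Path(package_dir, "package.json").read_text())
        scripts = manifest.get("scripts", {})
        return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

    # Example: scan every top-level package under node_modules
    for manifest_path in Path("node_modules").glob("*/package.json"):
        hooks = install_scripts(manifest_path.parent)
        if hooks:
            print(manifest_path.parent.name, hooks)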


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar sensing and communications simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), in which a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth from the communications signal.

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
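
As a schematic illustration of the phase-attached construction (a simplified form, not the exact model or optimization used in this work), the constant-envelope PARC waveform can be written as a radar phase function with the communications phase added to it, and the spectral-shaping step can be posed as matching the waveform's expected PSD to a template G(f):

    s(t) = \exp\{ j [ \psi_{\mathrm{radar}}(t) + \phi_{\mathrm{comm}}(t) ] \}, \qquad 0 \le t \le T_p

    \min_{\psi_{\mathrm{radar}}} \int \Big| \, \mathbb{E}\big[ |S(f)|^2 \big] - G(f) \, \Big|^2 \, df

where S(f) is the Fourier transform of s(t) and the expectation is taken over the random communications symbols.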


Qua Nguyen

Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications

When & Where:


Zoom defense; please email jgrisafe@ku.edu for the link.

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong

Abstract

This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.

In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that designed only the TTD values while keeping the PS values fixed and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time-delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
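
For readers unfamiliar with beam squint, the standard textbook distinction (not specific to this dissertation) is that a phase shifter applies a frequency-flat phase while a true-time delay applies a frequency-proportional phase, and only the latter matches the ideal wideband steering phase at every frequency:

    \text{PS on antenna } n: \; e^{-j\theta_n}
    \qquad
    \text{TTD on antenna } n: \; e^{-j 2\pi f \tau_n}
    \qquad
    \text{ideal steering phase: } 2\pi f \, \frac{n d}{c} \sin\varphi

A fixed phase θ_n can equal the ideal value only at the carrier frequency, so a purely phase-shifter-based beam drifts away from the intended angle across a wide band, which is the loss in array gain that the TTD/PS combination is meant to counteract.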

The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially-private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit powers to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We propose an approach that improves the learning utility of privacy-preserving FL while allowing clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
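
As a toy illustration of differentially private update perturbation (a generic Gaussian-mechanism sketch with an invented noise schedule, not the algorithms proposed in this work), each client could clip its model update and add round-dependent Gaussian noise before transmission:

    import numpy as np

    def dp_perturb(update, clip_norm, sigma_t, rng):
        """Clip a client's model update and add Gaussian noise (generic DP mechanism)."""
        norm = np.linalg.norm(update)
        clipped = update * min(1.0, clip_norm / (norm + 1e-12))
        noise = rng.normal(0.0, sigma_t * clip_norm, size=update.shape)
        return clipped + noise

    rng = np.random.default_rng(0)
    update = 0.01 * rng.standard_normal(1000)      # stand-in for a model-parameter delta
    for t in range(1, 4):
        sigma_t = 1.0 / np.sqrt(t)                 # illustrative time-varying noise schedule
        noisy_update = dp_perturb(update, clip_norm=1.0, sigma_t=sigma_t, rng=rng)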

This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in improving OSNR because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.
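
For reference, the standard small-signal (textbook) expression for the on-off Raman gain produced by a pump of input power P_0 over a span of length L is

    G_{\text{on-off}} = \exp\!\left( \frac{g_R \, P_0 \, L_{\text{eff}}}{A_{\text{eff}}} \right),
    \qquad
    L_{\text{eff}} = \frac{1 - e^{-\alpha_p L}}{\alpha_p},

where g_R is the Raman gain coefficient, A_eff is the effective mode area, and α_p is the fiber loss at the pump wavelength; forward and backward pumping differ mainly in where along the span this gain, and the associated noise, accumulates.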

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with relatively low impact of RIN transfer, our research is focused on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication systems. The system performance analysis also reveals that higher baud rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of the sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
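
As a generic illustration of Gaussian Process Regression (using scikit-learn on synthetic data; the dissertation's delay-Doppler channel-learning formulation is not reproduced here), a GPR fit returns both a smooth estimate and its uncertainty:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Toy problem: learn a smooth gain profile from noisy pilot-like observations
    rng = np.random.default_rng(0)
    x_train = rng.uniform(-1, 1, size=(30, 1))                   # e.g., normalized Doppler bins
    y_train = np.sinc(4 * x_train).ravel() + 0.05 * rng.standard_normal(30)

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-3))
    gpr.fit(x_train, y_train)

    x_test = np.linspace(-1, 1, 200).reshape(-1, 1)
    mean, std = gpr.predict(x_test, return_std=True)             # estimate plus uncertainty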


Past Defense Notices


Nyamtulla Shaik

AI Vision to Care: A QuadView of Deep Learning for Detecting Harmful Stimming in Autism

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Bo Luo
Dongjie Wang


Abstract

Stimming refers to repetitive actions or behaviors used to regulate sensory input or express feelings. Children with developmental disorders such as autism spectrum disorder (ASD) frequently perform stimming, including arm flapping, head banging, finger flicking, and spinning. These behaviors are exhibited by 80-90% of children with autism, which affects about 1 in 36 children in the US. Head banging is one of the self-stimulatory habits that can be harmful. If these behaviors are automatically identified and reported through live video monitoring, parents and other caregivers can better watch over and assist children with ASD.

Classifying these actions is important for recognizing harmful stimming, so this study focuses on developing a deep learning-based approach for stimming action recognition. We implemented and evaluated four models leveraging three deep learning architectures based on Convolutional Neural Networks (CNNs), Autoencoders, and Vision Transformers. For the first time in this area, we use skeletal joints extracted from video sequences; previous works relied solely on raw RGB videos, which are vulnerable to lighting and environmental changes. This research explores deep learning-based skeletal action recognition and data processing techniques for a small unstructured dataset of 89 home-recorded videos collected from publicly available sources such as YouTube. Our robust data cleaning and pre-processing techniques enabled the integration of skeletal data in stimming action recognition, which performed better than the state of the art with a classification accuracy of up to 87%.

In addition to using traditional deep learning models such as CNNs for action recognition, this study is among the first to apply data-hungry models such as Vision Transformers (ViTs) and Autoencoders to stimming action recognition on this dataset. The results show that using skeletal data reduces processing time and significantly improves action recognition, promising a real-time approach for video monitoring applications. This research advances the development of automated systems that can assist caregivers in more efficiently tracking stimming activities.
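
As a rough sketch of a skeletal pipeline in the spirit described above (the joint count, layer sizes, and class labels are illustrative assumptions, not the thesis architecture), per-frame joint coordinates can be treated as input channels of a small temporal CNN:

    import torch
    import torch.nn as nn

    NUM_JOINTS = 17      # assumed number of 2-D keypoints produced per frame by a pose estimator
    NUM_CLASSES = 3      # e.g., head banging, arm flapping, no harmful stimming (illustrative)

    class SkeletalCNN(nn.Module):
        """1-D CNN over a sequence of skeletal keypoints (channels = joints * 2, length = frames)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(NUM_JOINTS * 2, 64, kernel_size=5, padding=2),
                nn.BatchNorm1d(64), nn.ReLU(),
                nn.Conv1d(64, 128, kernel_size=5, padding=2),
                nn.BatchNorm1d(128), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(128, NUM_CLASSES)

        def forward(self, x):                      # x: (batch, joints * 2, frames)
            return self.classifier(self.features(x).squeeze(-1))

    clips = torch.randn(8, NUM_JOINTS * 2, 90)     # 8 clips of 90 frames each
    logits = SkeletalCNN()(clips)                  # shape (8, NUM_CLASSES)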


Alexander Rodolfo Lara

Creating a Faradaic Efficiency Graph Dataset Using Machine Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Sumaiya Shomaji
Kevin Leonard


Abstract

Just as the internet-of-things leverages machine learning over a vast amount of data produced by an innumerable number of sensors, the Internet of Catalysis program uses similar strategies with catalysis research. One application of the Internet of Catalysis strategy is treating research papers as datapoints, rich with text, figures, and tables. Prior research within the program focused on machine learning models applied strictly over text.

This project is the first step of the program in creating a machine learning model from the images of catalysis research papers. Specifically, this project creates a dataset of faradaic efficiency graphs using transfer learning from pretrained models. The project utilizes FasterRCNN_ResNet50_FPN, LayoutLMv3SequenceClassification, and computer vision techniques to recognize figures, extract all graphs, then classify the faradaic efficiency graphs.

Downstream of this project, researchers will create a graph reading model to integrate with large language models. This could potentially lead to a multimodal model capable of fully learning from images, tables, and texts of catalysis research papers. Such a model could then guide experimentation on reaction conditions, catalysts, and production.
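
A minimal sketch of the figure-detection step using torchvision's pretrained Faster R-CNN is shown below; the fine-tuning on page annotations, the LayoutLMv3-based classification stage, and the actual label set used in the project are omitted.

    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import convert_image_dtype

    # Pretrained detector; in practice it would be fine-tuned on figure annotations
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    page = convert_image_dtype(read_image("paper_page.png"), torch.float)   # hypothetical input
    with torch.no_grad():
        detections = model([page])[0]

    # Confident boxes are candidate figure regions; a separate classifier then
    # decides which cropped regions are faradaic efficiency graphs.
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score > 0.8:
            print(box.tolist(), float(score))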


Amin Shojaei

Scalable and Cooperative Multi-Agent Reinforcement Learning for Networked Cyber-Physical Systems: Applications in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alex Bardas
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri

Abstract

Significant advances in information and networking technologies have transformed Cyber-Physical Systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is smart grid networks, which include distributed energy resources (DERs), renewable generation, and the widespread adoption of Electric Vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant sources of demand and user behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of different uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.

As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. First, we examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state.

Second, we focus on the cooperative behavior of agents in distributed MARL frameworks, particularly under the central training with decentralized execution (CTDE) paradigm. We provide theoretical results and variance analysis for stochastic and deterministic cooperative MARL algorithms, including Multi-Agent Deep Deterministic Policy Gradient (MADDPG), Multi-Agent Proximal Policy Optimization (MAPPO), and Dueling MAPPO. These analyses highlight how coordinated learning can improve system-wide decision-making in uncertain and dynamic environments like EV networks.

Third, we address the scalability challenge in large-scale NCPS by introducing a hierarchical MARL framework based on a cluster-based architecture. This framework organizes agents into coordinated subgroups, improving scalability while preserving local coordination. We conduct a detailed variance analysis of this approach to demonstrate its effectiveness in reducing communication overhead and learning complexity. This analysis establishes a theoretical foundation for scalable and efficient control in large-scale smart grid applications.


Asrith Gudivada

Custom CNN for Object State Classification in Robotic Cooking

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

This project presents the development of a custom Convolutional Neural Network (CNN) designed to classify object states—such as sliced, diced, or peeled—in cooking environments. Recognizing fine-grained object states is essential for context-aware manipulation but remains challenging due to visual similarity between states and a limited dataset. To address these challenges, I built a lightweight CNN from scratch, deliberately avoiding pretrained models to maintain domain specificity and efficiency. The model was enhanced through data augmentation and optimized dropout layers, with additional experiments incorporating batch normalization, Inception modules, and residual connections. While these advanced techniques offered incremental improvements during experimentation, the final model—a combination of data augmentation, dropout, and batch normalization—achieved ~60% validation accuracy and demonstrated stable generalization. This work highlights the trade-offs between model complexity and performance in constrained environments and contributes toward real-time state recognition with potential applications in assistive technologies.
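
A compact sketch in the same spirit as the final model described above is given below; the augmentations, layer sizes, and number of object states are placeholders rather than the project's exact configuration.

    import torch.nn as nn
    from torchvision import transforms

    # Illustrative augmentation pipeline for training images of object states
    train_transform = transforms.Compose([
        transforms.RandomResizedCrop(128),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(0.2, 0.2, 0.2),
        transforms.ToTensor(),
    ])

    # Small from-scratch CNN combining batch normalization and dropout
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Dropout(0.5),
        nn.Linear(128, 7),     # e.g., 7 object states such as sliced, diced, peeled (assumed)
    )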


Tanvir Hossain

Gamified Learning of Computing Hardware Fundamentals Using FPGA-Based Platform

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Tamzidul Hoque, Chair
Esam El-Araby
Sumaiya Shomaji


Abstract

The growing dependence on electronic systems in consumer and mission critical domains requires engineers who understand the inner workings of digital hardware. Yet many students bypass hardware electives, viewing them as abstract, mathematics heavy, and less attractive than software courses. Escalating workforce shortages in the semiconductor industry and the recent global chip‑supply crisis highlight the urgent need for graduates who can bridge hardware knowledge gaps across engineering sectors. In this thesis, I have developed FPGA‑based games, embedded in inclusive curricular modules, which can make hardware concepts accessible while fostering interest, self‑efficacy, and positive outcome expectations in hardware engineering. A design‑based research methodology guided three implementation cycles: a pilot with seven diverse high‑school learners, a multiweek residential summer camp with high‑school students, and a fifteen‑week multidisciplinary elective enrolling early undergraduate engineering students. The learning experiences targeted binary arithmetic, combinational and sequential logic, state‑machine design, and hardware‑software co‑design. Learners also moved through the full digital‑design flow, HDL coding, functional simulation, synthesis, place‑and‑route, and on‑board verification. In addition, learners explored timing analysis, register‑transfer‑level abstractions, and simple processor datapaths to connect low‑level circuits with system‑level behavior. Mixed‑method evidence was gathered through pre‑ and post‑content quizzes, validated surveys of self‑efficacy and outcome expectations, focus groups, classroom observations, and gameplay analytics. Paired‑sample statistics showed reliable gains in hardware‑concept mastery, self‑efficacy, and outcome expectations. This work contributes a replicable framework for translating foundational hardware topics into modular, game‑based learning activities, empirical evidence of their effectiveness across secondary and early‑college contexts, and design principles for educators who seek to integrate equitable, hands‑on hardware experiences into existing curricula.


Hara Madhav Talasila

Radiometric Calibration of Radar Depth Sounder Data Products

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Patrick McCormick
James Stiles
Jilu Li
Leigh Stearns

Abstract

Although the Center for Remote Sensing of Ice Sheets (CReSIS) performs several radar calibration steps to produce Operation IceBridge (OIB) radar depth sounder data products, these datasets are not radiometrically calibrated and the swath array processing uses ideal (rather than measured [calibrated]) steering vectors. Any errors in the steering vectors, which describe the response of the radar as a function of arrival angle, will lead to errors in positioning and backscatter that subsequently affect estimates of basal conditions, ice thickness, and radar attenuation. Scientific applications that estimate physical characteristics of surface and subsurface targets from the backscatter are limited with the current data because it is not absolutely calibrated. Moreover, changes in instrument hardware and processing methods for OIB over the last decade affect the quality of inter-seasonal comparisons. Recent methods which interpret basal conditions and calculate radar attenuation using CReSIS OIB 2D radar depth sounder echograms are forced to use relative scattering power, rather than absolute methods.

As an active target calibration is not possible for past field seasons, a method that uses natural targets will be developed. Unsaturated natural target returns from smooth sea-ice leads or lakes are imaged in many datasets and have known scattering responses. The proposed method forms a system of linear equations with the recorded scattering signatures from these known targets, scattering signatures from crossing flight paths, and the radiometric correction terms. A least squares solution to optimize the radiometric correction terms is calculated, which minimizes the error function representing the mismatch in expected and measured scattering. The new correction terms will be used to correct the remaining mission data. The radar depth sounder data from all OIB campaigns can be reprocessed to produce absolutely calibrated echograms for the Arctic and Antarctic. A software simulator will be developed to study calibration errors and verify the calibration software. The software for processing natural targets and crossovers will be made available in CReSIS’s open-source polar radar software toolbox. The OIB data will be reprocessed with new calibration terms, providing to the data user community a complete set of radiometrically calibrated radar echograms for the CReSIS OIB radar depth sounder for the first time.
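
In the least-squares framing sketched in the abstract, the correction terms x are chosen to minimize the mismatch between expected and measured scattering. The toy example below, with a synthetic design matrix standing in for the natural-target and crossover constraints, shows only the numerical pattern.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy setup: 4 unknown radiometric correction terms and 20 linear constraints
    # assembled from known-target returns and crossover comparisons (synthetic here).
    n_terms, n_obs = 4, 20
    A = rng.normal(size=(n_obs, n_terms))              # which terms each observation involves
    x_true = np.array([1.5, -0.7, 0.3, 2.0])           # "true" corrections, unknown in practice
    b = A @ x_true + 0.05 * rng.normal(size=n_obs)     # expected-minus-measured scattering (dB)

    # Least-squares estimate of the correction terms
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.round(x_hat, 3))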


Christopher Ord

A Hardware-Agnostic Simultaneous Transmit And Receive (STAR) Architecture for the Transmission of Non-Repeating FMCW Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rachel Jarvis, Chair
Shannon Blunt
Patrick McCormick


Abstract

With the increasing congestion of the usable RF spectrum, it is increasingly necessary for communication and radar systems to share the same frequencies without disturbing one another. To accomplish this, research has focused on designing a class of non-repeating radar waveforms that appear as noise at the receiver of uncooperative systems, but the peak power from high-power pulsed systems can still overwhelm nearby in-band systems. Therefore, to minimize peak power while maximizing the total energy on target, radar systems must transition to operating at a 100% duty cycle, which inherently requires Simultaneous Transmit and Receive (STAR) operation.

One inherent difficulty when operating monostatic STAR systems is the direct path coupling interference that can saturate a number of components in the radar’s receive chain, which makes digital processing methods that remove this interference ineffective. This thesis proposes a method to reduce the self-interference between the radar’s transmitter and receiver, prior to the receiver’s sensitive components, to increase the power at which the radar can transmit. By using a combination of tests that manipulate the timing, phase, and magnitude of a secondary waveform that is injected into the radar just before the receiver, upwards of 35.0 dB of self-interference cancellation is achieved for radar waveforms with bandwidths of up to 100 MHz at both S-band and X-band in both simulation and open-air testing.
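
A highly simplified baseband illustration of the cancellation idea follows; it ignores hardware effects such as amplifier nonlinearity and uses a brute-force grid over timing, magnitude, and phase rather than the test procedure developed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096
    tx = np.exp(1j * 2 * np.pi * np.cumsum(rng.uniform(-0.1, 0.1, n)))   # noise-like FM waveform

    # Direct-path coupling at the receiver: a delayed, scaled, phase-rotated copy of tx
    leak = 0.8 * np.exp(1j * 1.1) * np.roll(tx, 3)

    def residual_power(delay, amp, phase):
        cancel = amp * np.exp(1j * phase) * np.roll(tx, delay)           # injected secondary waveform
        return np.mean(np.abs(leak - cancel) ** 2)

    # Coarse search over the injected waveform's timing, magnitude, and phase
    best = min(
        ((d, a, p) for d in range(8)
                   for a in np.linspace(0.5, 1.0, 11)
                   for p in np.linspace(0.0, 2 * np.pi, 64, endpoint=False)),
        key=lambda params: residual_power(*params),
    )
    cancellation_db = 10 * np.log10(np.mean(np.abs(leak) ** 2) / residual_power(*best))
    print(best, round(cancellation_db, 1), "dB of self-interference cancellation")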


Fatima Al-Shaikhli

Optical Fiber Measurements: Leveraging Coherent FMCW Techniques

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical fiber technology have proven to be invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical fiber sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceiver systems to develop novel measurement techniques for characterizing optical fiber properties. Specifically, our goal is to leverage a digitally chirped frequency-modulated continuous wave (FMCW) to extract detailed information about optical fiber characteristics, as well as target range. Through this approach, we aim to enable more accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on intensity-modulated electrostriction response, (2) self-homodyne coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (OFDR) system.

Electrostriction in an optical fiber is introduced by interaction between the forward propagated optical signal and the acoustic standing waves in the radial direction resonating between the center of the core and the cladding circumference of the fiber. The response of electrostriction is dependent on fiber parameters, especially the mode field radius. We demonstrated a novel technique of identifying fiber types through the measurement of intensity modulation induced electrostriction response. As the spectral envelope of electrostriction induced propagation loss is anti-symmetrical, the signal to noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.         

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Multi-target detection is demonstrated experimentally, and while only amplitude modulation is required in the LiDAR transmitter, the phase-diversity coherent receiver enables simultaneous detection of both range and velocity for each target, along with the sign of the target’s velocity.
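
For reference, in a linear-chirp FMCW system with chirp rate γ = B/T (a standard relation, not a result of this work), de-chirping the echo against the local chirp produces a beat frequency proportional to range, offset by the Doppler shift:

    f_b = \gamma \, \frac{2R}{c} + f_D, \qquad \gamma = \frac{B}{T},

which is why the de-chirped receiver bandwidth only needs to cover the beat frequencies of the targets of interest rather than the full chirping bandwidth B.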

In addition, we demonstrate a polarization-sensitive OFDR system utilizing a commercially available digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing it with an identically chirped local oscillator. With fine spatial resolution, a wide chirping bandwidth, and a short measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we can measure birefringence vectors along the fiber, providing not only the magnitude of birefringence but also the direction of any external pressure applied to the fiber.


Landen Doty

Assessing the Effects of Source Language on Binary Similarity Tools

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Alex Bardas
Drew Davidson

Abstract

Binary similarity is a fundamental technique that enables software analysis practitioners to compare machine-level code at scale and with fine granularity. With application in software reverse engineering, vulnerability research, malware attribution and more, state-of-the-art binary similarity tools have undergone thorough research and development to account for variations in compilers, optimizations, machine architectures, and even obfuscations. And, although these tools aim to compare and detect binary-level code segments generated from similar or identical source code, no preexisting work has investigated the effects of source languages other than C and C++. This thesis addresses this research gap by presenting a thorough investigation of SOTA binary similarity tools when applied to modern compiled languages, Rust and Golang.

To adequately evaluate the capabilities of the available binary similarity approaches, this work includes three distinct tools - BSim, a new component of the Ghidra Software Reverse Engineering Framework, which utilizes a clustering-based similarity mechanism; BinDiff, an industry-recognized tool using graph-based comparisons; and jTrans, a BERT-based model fine-tuned to the binary similarity task. First, to enable this work, we introduce a new dataset of Rust and Golang binaries compiled from leading open-source projects in the Homebrew and Arch Linux repositories. Comprising 800 binaries and over 1 million functions, this dataset was built to represent a broad range of implementation styles, application diversity, and source language features. Next, the main investigation of this thesis is presented, wherein we assess each approach's ability to accurately report semantically equivalent functions compiled from the same source code. Results across the three tools reveal a systematic degradation of precision when comparing binaries produced by Rust and Go rather than those produced by C and C++. Finally, we provide a technical demonstration which highlights the implications of these results and discuss near- and long-term solutions to more adequately equip binary analysis practitioners.
 


Liangqin Ren

Understanding and Mitigating Security Risks towards Trustworthy Deep Learning Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Bo Luo
Zijun Yao
Xinmai Yang

Abstract

Deep learning is widely used in healthcare, finance, and other critical domains, raising concerns about system trustworthiness. However, deep learning models and data still face three types of critical attacks: model theft, identity impersonation, and abuse of AI-generated content (AIGC). To address model theft, homomorphic encryption has been explored for privacy-preserving inference, but it remains highly inefficient. To counter identity impersonation, prior work focuses on detection, disruption, and tracing—yet fails to protect source and target images simultaneously. To prevent AIGC abuse, methods like evaluation, watermarking, and machine unlearning exist, but text-driven image editing remains largely unprotected.

This report addresses the above challenges through three key designs. First, to enable privacy-preserving inference while accelerating homomorphic encryption, we propose PrivDNN, which selectively encrypts the most critical model parameters, significantly reducing encrypted operations. We design a selection score to evaluate neuron importance and use a greedy algorithm to iteratively secure the most impactful neurons. Across four models and datasets, PrivDNN reduces encrypted operations by 85%–98%, and cuts inference time and memory usage by over 97% while preserving accuracy and privacy. Second, to counter identity impersonation in deepfake face-swapping, where both the source and target can be exploited, we introduce PhantomSeal, which embeds invisible perturbations to encode a hidden “cloak” identity. When used as a target, the resulting content displays visible artifacts; when used as a source, the generated deepfake is altered to resemble the cloak identity. Evaluations across two generations of deepfake face-swapping show that PhantomSeal reduces attack success from 97% to 0.8%, with 95% of outputs recognized as the cloak identity, providing robust protection against manipulation. Third, to prevent AIGC abuse, we construct a comprehensive dataset, perform large-scale human evaluation, and establish a benchmark for detecting AI-generated artwork to better understand abuse risks in AI-generated content. Building on this direction, we propose Protecting Copyright against Image Editing (PCIE) to address copyright infringement in text-driven image editing. PCIE embeds an invisible copyright mark into the original image, which transforms into a visible watermark after text-driven editing to automatically reveal ownership upon unauthorized modification.
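
To illustrate only the greedy selection pattern mentioned for PrivDNN (the importance score below is a placeholder based on outgoing weight magnitude, not the selection score designed in this work), one could rank neurons and encrypt the top few:

    import numpy as np

    def greedy_select(scores, budget):
        """Pick neurons in descending order of importance until the encryption budget is met."""
        ranked = sorted(scores, key=scores.get, reverse=True)
        return set(ranked[:budget])

    # Placeholder importance score: mean absolute outgoing weight per neuron of one layer
    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 128))                    # weights from 64 neurons to 128 outputs
    scores = {("layer1", i): float(np.abs(W[i]).mean()) for i in range(W.shape[0])}

    to_encrypt = greedy_select(scores, budget=8)      # only these neurons get encrypted inference
    print(sorted(to_encrypt)[:3])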