Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
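
As a concrete illustration of facet (ii), the following minimal Python sketch (not the dissertation's tooling; the package name is a placeholder) queries the public npm registry for lifecycle install scripts declared by a package's latest version:

    import requests

    def install_scripts(package):
        """Return any install-time lifecycle scripts declared by the latest
        version of an npm package, using the public registry API."""
        meta = requests.get(f"https://registry.npmjs.org/{package}", timeout=10).json()
        latest = meta["dist-tags"]["latest"]
        scripts = meta["versions"][latest].get("scripts", {})
        return {k: v for k, v in scripts.items()
                if k in ("preinstall", "install", "postinstall")}

    # Hypothetical usage; a non-empty result means code runs automatically at install time.
    print(install_scripts("some-package"))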


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase-attached radar communications (PARC), in which a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, mitigating the radar performance degradation caused by spectral growth due to the communications signal.
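
For intuition, the spectral-template error at the heart of such a design can be sketched as a simple mean-squared difference between a waveform's normalized PSD and a desired template (a toy Python illustration with assumed parameters, not the optimization used in the thesis):

    import numpy as np

    def spectral_template_error(s, template_psd):
        """Mean-squared error between a waveform's normalized PSD and a desired
        template; a simplified stand-in for the spectral-template cost."""
        psd = np.abs(np.fft.fft(s))**2
        psd /= psd.sum()
        t = template_psd / template_psd.sum()
        return np.mean((psd - t)**2)

    # Toy usage: compare an LFM-like phase code's PSD to a flat template.
    N = 256
    n = np.arange(N)
    lfm = np.exp(1j * np.pi * (n**2) / N)   # unit-modulus chirp-like sequence
    template = np.ones(N)                   # hypothetical flat spectral template
    print(spectral_template_error(lfm, template))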

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path. It offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement: the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.
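
For intuition only, the signal-power evolution under first-order forward Raman pumping can be sketched with the standard coupled pump-signal equations (a generic textbook model with assumed parameter values, not the dual-order model analyzed in this work):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative first-order forward-pumped Raman model; parameter values are assumptions.
    alpha_s = 0.2 / 4.343 / 1e3    # signal loss, 0.2 dB/km converted to 1/m
    alpha_p = 0.25 / 4.343 / 1e3   # pump loss, 0.25 dB/km converted to 1/m
    gR = 0.4e-13                   # Raman gain coefficient, m/W (assumed)
    Aeff = 80e-12                  # effective area, m^2 (assumed)
    lam_s, lam_p = 1550e-9, 1450e-9

    def rhs(z, P):
        Ps, Pp = P
        g = gR / Aeff
        dPs = -alpha_s * Ps + g * Pp * Ps
        dPp = -alpha_p * Pp - (lam_s / lam_p) * g * Pp * Ps   # omega_p/omega_s = lam_s/lam_p
        return [dPs, dPp]

    L = 80e3   # 80 km span
    sol = solve_ivp(rhs, [0, L], [1e-3, 0.3])   # 0 dBm signal, 300 mW forward pump
    Ps_out, Pp_out = sol.y[:, -1]
    print(10*np.log10(Ps_out/1e-3), "dBm signal at span end")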

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with relatively low impact of RIN transfer, our research is focused on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump.

Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much greater impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher baud rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of the sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
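
For reference, the core OTFS mapping from the delay-Doppler grid to a transmitted frame (ISFFT followed by a rectangular-pulse Heisenberg transform) can be sketched in a few lines of Python; this is the generic textbook form, not the AMT-optimized framework or the superimposed-pilot scheme described above:

    import numpy as np

    def otfs_modulate(X_dd):
        """Map an N x M delay-Doppler symbol grid to a time-domain OTFS frame
        (rectangular pulse shaping). Rows index Doppler, columns index delay."""
        N, M = X_dd.shape
        # ISFFT: delay-Doppler -> time-frequency
        X_tf = np.sqrt(N / M) * np.fft.ifft(np.fft.fft(X_dd, axis=1), axis=0)
        # Heisenberg transform (per-slot OFDM modulator with rectangular pulse)
        s = np.sqrt(M) * np.fft.ifft(X_tf, axis=1)
        return s.reshape(-1)

    # Toy usage: N = 16 Doppler bins, M = 64 delay bins of QPSK symbols.
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, (2, 16, 64))
    qpsk = ((2*bits[0] - 1) + 1j*(2*bits[1] - 1)) / np.sqrt(2)
    frame = otfs_modulate(qpsk)
    print(frame.shape)   # (1024,)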


Mohammad Ful Hossain Seikh

AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino


Abstract

This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
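
For illustration, the kind of Touchstone ingestion and S-parameter plotting that AAFIYA automates can be approximated with the open-source scikit-rf package (an assumed stand-in, not AAFIYA itself; 'antenna.s1p' is a hypothetical file):

    import skrf as rf
    import matplotlib.pyplot as plt

    # Parse a hypothetical one-port Touchstone measurement and inspect matching.
    ant = rf.Network('antenna.s1p')
    z_in = ant.z[:, 0, 0]                       # complex input impedance vs. frequency
    ant.plot_s_db(m=0, n=0, label='S11 (dB)')   # return loss
    plt.axhline(-10, color='gray', ls='--')     # common -10 dB matching criterion
    plt.show()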

Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11- and S21-related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratios, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.

AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. Automated content moderation has therefore emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle.  The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.
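
A minimal baseline for this kind of multi-label setup can be sketched with scikit-learn as follows (hyperparameters are assumptions, and the column names follow the public Kaggle dataset file 'train.csv'; the study's actual models and tuning differ):

    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    LABELS = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
    df = pd.read_csv('train.csv')
    X_train, X_val, y_train, y_val = train_test_split(
        df['comment_text'], df[LABELS], test_size=0.2, random_state=42)

    # TF-IDF features with one-vs-rest logistic regression per label.
    vec = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
    clf = OneVsRestClassifier(
        LogisticRegression(max_iter=1000, C=4.0, class_weight='balanced'))
    clf.fit(vec.fit_transform(X_train), y_train)
    pred = clf.predict(vec.transform(X_val))
    print('macro-F1:', f1_score(y_val, pred, average='macro'))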

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: the transformer model achieves superior contextual understanding at the cost of computational complexity, while the LSTM-based deep learning models offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
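
The threshold-adjustment idea can be sketched as a simple per-label search over validation probabilities (an illustrative helper, not the study's exact procedure):

    import numpy as np
    from sklearn.metrics import f1_score

    def tune_thresholds(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
        """Pick a per-label decision threshold that maximizes F1 on validation data.
        y_true, y_prob: (n_samples, n_labels) arrays."""
        thresholds = []
        for j in range(y_true.shape[1]):
            scores = [f1_score(y_true[:, j], y_prob[:, j] >= t, zero_division=0)
                      for t in grid]
            thresholds.append(grid[int(np.argmax(scores))])
        return np.array(thresholds)

    # Usage with a classifier like the earlier sketch (probabilities, not hard labels):
    # probs = clf.predict_proba(vec.transform(X_val))
    # th = tune_thresholds(y_val.to_numpy(), probs)
    # tuned_pred = (probs >= th).astype(int)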


Past Defense Notices

Xiangyu Chen

Toward Efficient Deep Learning for Computer Vision Applications

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Fengjun Li
Hongguo Xu

Abstract

Deep learning leads the performance in many areas of computer vision. However, after a decade of research, it tends to require larger datasets and more complex models, leading to heightened resource consumption across all fronts. Regrettably, meeting these requirements proves challenging in many real-life scenarios. First, both data collection and labeling processes entail substantial labor and time investments. This challenge becomes especially pronounced in domains such as medicine, where identifying rare diseases demands meticulous data curation. Secondly, the large size of state-of-the-art models, such as ViT, Stable Diffusion, and ConvNext, hinders their deployment on resource-constrained platforms like mobile devices. Research indicates pervasive redundancies within current neural network structures, exacerbating the issue. Lastly, even with ample datasets and optimized models, the time required for training and inference remains prohibitive in certain contexts. Consequently, there is a burgeoning interest among researchers in exploring avenues for efficient artificial intelligence.

This study delves into various facets of efficiency within computer vision, including data efficiency, model efficiency, and training and inference efficiency. Data efficiency is improved by increasing the information carried by given image inputs and reducing the redundancy of the RGB image format. To achieve this, we propose integrating both spatial and frequency representations to finetune the classifier. Additionally, we propose explicitly increasing the input information density in the frequency domain by deleting unimportant frequency channels. For model efficiency, we scrutinize the redundancies present in widely used vision transformers. Our investigation reveals that the sheer amount of trivial attention in their attention modules drowns out useful non-trivial attention, and we propose mitigating the impact of these accumulated trivial attention weights. To increase training efficiency, we propose SuperLoRA, a generalization of the LoRA adapter, to fine-tune pretrained models in few iterations with extremely few trainable parameters. Finally, a model simplification pipeline is proposed to further reduce inference time on mobile devices. By addressing these challenges, we aim to advance the practicality and performance of computer vision systems in real-world applications.
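
As background for the adapter-based fine-tuning mentioned above, a plain LoRA layer can be sketched in PyTorch as follows (a generic LoRA adapter offered for context only; SuperLoRA's specific generalization is not reproduced here):

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a low-rank update: W x + (alpha/r) * B A x."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False            # only the adapter is trained
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    layer = LoRALinear(nn.Linear(768, 768))
    # Trainable parameter count is only the adapter's, not the frozen base layer's.
    print(sum(p.numel() for p in layer.parameters() if p.requires_grad))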


Krushi Patel

Image Classification & Segmentation based on Enhanced CNN and Transformer Networks

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for defense link.

Committee Members:

Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Xinmai Yang

Abstract

Convolutional Neural Networks (CNNs) have significantly enhanced performance across various computer vision tasks such as image recognition and segmentation, owing to their robust representation capabilities. To further boost CNN performance, a self-attention module is integrated after each network layer. Transformer-based models, which leverage a multi-head self-attention module as their core component, have recently demonstrated outstanding performance. However, several challenges persist, including the limitation to class-specific channels in CNNs, the constrained receptive field in local transformers, and the incorporation of redundant features and the absence of multi-scale features in U-Net type segmentation architectures.

In our study, we propose new strategies to tackle these challenges. (1) We propose a novel channel-based self-attention module to diversify the focus more on the discriminative and significant channels, and the module can be embedded at the end of any backbone network for image classification. (2) To mitigate noise introduced by shallow encoder layers in U-Net architectures, we substitute skip connections with an Adaptive Global Context Module (AGCM). Additionally, we introduce the Semantic Feature Enhancement Module (SFEM) to enhance multi-scale features in polyp segmentation. (3) We introduce a Multi-scaled Overlapped Attention (MOA) mechanism within local transformer-based networks for image classification, facilitating the establishment of long-range dependencies and initiation of neighborhood window communication. (4) We propose a pioneering Fuzzy Attention Module designed to prioritize challenging pixels, thereby augmenting polyp segmentation performance. (5) We develop a novel dense attention gate module that aggregates features from all preceding layers to compute attention scores, refining global features in polyp segmentation tasks. Moreover, we design a new multi-layer horizontally extended decoder architecture to enhance local feature refinement in polyp segmentation.
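
For context, a squeeze-and-excitation style channel attention block, shown below in PyTorch, conveys the general flavor of channel re-weighting; the proposed channel-based self-attention module differs in its details:

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Generic channel re-weighting: global pooling, a small bottleneck MLP,
        and sigmoid gating of each feature channel."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                     # x: (B, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))       # global average pool -> channel weights
            return x * w[:, :, None, None]        # re-weight feature channels

    feat = torch.randn(2, 256, 14, 14)
    print(ChannelAttention(256)(feat).shape)      # torch.Size([2, 256, 14, 14])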


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions into separate, specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capability modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers throughout a volume of interest. As a strategic converse, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar systems that seek to illuminate all space on transmit while forming separate but simultaneous directive beams on receive.

Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation.  In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally-continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error minimization techniques. First, a gradient-based optimization of Space-Frequency Template Error (SFTE) is implemented on a high-fidelity model for a wideband array’s far-field emission. Second, a more efficient optimization is considered, based on SFTE for narrowband arrays. Finally, optimization via alternating projections is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars using pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived, and experimentally validated, that manages to suppress both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. Several modifications to the demonstrated algorithms are proposed to refine implementation, enhance performance, and reflect real-world application to the degree that numerical simulations can.
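
To give a flavor of the alternating-projections idea, the following toy Python sketch synthesizes a constant-modulus transmit weight vector for a narrowband half-wavelength ULA by alternating between a desired pattern-magnitude constraint and the constant-modulus constraint (assumed parameters; not the wideband SFTE optimization developed in this work):

    import numpy as np

    N, M = 16, 181                                  # elements, angle samples
    theta = np.linspace(-np.pi/2, np.pi/2, M)
    A = np.exp(1j * np.pi * np.outer(np.sin(theta), np.arange(N)))   # ULA steering matrix
    desired = np.where(np.abs(np.degrees(theta)) < 10, 1.0, 0.05)    # assumed beam template

    w = np.exp(1j * 2*np.pi*np.random.rand(N))      # random constant-modulus start
    for _ in range(200):
        y = A @ w
        y = desired * np.exp(1j * np.angle(y))      # project onto desired pattern magnitude
        w, *_ = np.linalg.lstsq(A, y, rcond=None)   # back to element space (least squares)
        w = np.exp(1j * np.angle(w))                # project onto constant-modulus constraint

    pattern_db = 20*np.log10(np.abs(A @ w) / np.max(np.abs(A @ w)))
    print(pattern_db.min(), pattern_db.max())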


Anna Fritz

A Formally Verified Infrastructure for Negotiating Remote Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Fengjun Li
Emily Witt

Abstract

Semantic remote attestation is the process of gathering and appraising evidence to establish trust in a remote system. Remote attestation occurs at the request of an appraiser or relying party and proceeds with a target system executing an attestation protocol that invokes attestation services in a specific order to generate and bundle evidence. An appraiser may then evaluate the generated evidence to establish trust in the target's state. In the current framework, requested measurement operations must be provisioned by a knowledgeable system user, who may fail to consider situational demands that potentially impact the desired measurement operation. To solve this problem, we introduce Attestation Protocol Negotiation, the process of establishing a mutually agreed-upon protocol that satisfies the relying party's desire for comprehensive information and the target's desire for constrained disclosure.

This research explores the formal modeling and verification of negotiation, introducing refinement and selection procedures that enable communicating peers to achieve their goals. First, we explore the formalization of refinement, the process by which a target generates executable protocols. Here we focus on the definition of system specifications through manifests, protocol sufficiency and soundness, policy representation, and the negotiation structure. By using our formal models to represent and verify negotiation's properties, we can statically determine that a provably sound, sufficient, and executable protocol is produced. Next, we present a formalized model for protocol selection, introducing and proving a preorder over Copland remote attestation protocols to facilitate selection of the most adversary-constrained protocol. With this modeling, we prove that selected protocols increase the difficulty for an active adversary. By addressing the target's capability to generate provably executable protocols and the ability to order these protocols, this methodology has the potential to revolutionize the attestation protocol provisioning process.


Arjun Dhage Ramachandra

Implementing Object Detection for Real-World Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Cuncong Zhong


Abstract

The advent of deep learning has enabled the development of powerful AI models that are being used in fields such as medicine, surveillance monitoring, optimizing manufacturing processes, robot navigation, chatbots, and much more. These applications are only made possible because of the enormous research in the fields of neural networks and deep learning. In this paper, I'll discuss a branch of neural networks called Convolutional Neural Networks (CNNs) and how they are used for object detection tasks, that is, detecting and classifying objects in an image. I'll also discuss a popular object detection framework called the Single Shot MultiBox Detector (SSD) and implement it in my web application project, which allows users to detect objects in images and search for images based on the presence of objects. The main aim of the project was to allow easy access to perform detections with a few clicks.
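
As an illustration of how such a detection backend can be invoked, the following Python sketch uses torchvision's pretrained SSD300 model on a hypothetical image file (the project's own web application code is not reproduced here):

    import torch
    from torchvision.transforms.functional import to_tensor
    from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights
    from PIL import Image

    # Load a pretrained SSD300 detector and run it on a hypothetical image.
    weights = SSD300_VGG16_Weights.DEFAULT
    model = ssd300_vgg16(weights=weights).eval()
    img = to_tensor(Image.open('photo.jpg').convert('RGB'))

    with torch.no_grad():
        out = model([img])[0]                    # dict of boxes, labels, scores

    keep = out['scores'] > 0.5                   # simple confidence threshold
    labels = [weights.meta['categories'][int(i)] for i in out['labels'][keep]]
    print(list(zip(labels, out['scores'][keep].tolist())))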


Kaidong Li

Accurate and Robust Object Detection and Classification Based on Deep Neural Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Haiyang Chao

Abstract

Recent years have seen tremendous developments in the field of computer vision and its extensive applications. The fundamental task, image classification, benefiting from the extraordinary ability of deep convolutional neural networks (CNNs) to extract deep semantic information from input data, has become the backbone for many other computer vision tasks, like object detection and segmentation. A modern detector usually performs bounding-box regression and class prediction with a pre-trained classification model as the backbone. The architecture is proven to produce good results; however, improvements can be made upon closer inspection. A detector takes a pre-trained CNN from the classification task and selects the final bounding boxes from multiple proposed regional candidates by a process called non-maximum suppression (NMS), which picks the best candidates by ranking their classification confidence scores. Localization evaluation is absent from the entire process. Another issue is that the classification uses one-hot encoding to label the ground truth, resulting in an equal penalty for misclassifications between any two classes without considering the inherent relations between the classes. Ultimately, the realms of 2D image classification and 3D point cloud classification represent distinct avenues of research, each relying on significantly different architectures. Given the unique characteristics of these data types, it is not feasible to employ models interchangeably between them.

My research aims to address the following issues. (1) We proposed the first location-aware detection framework for single-shot detectors, which can be integrated into any single-shot detector. It boosts detection performance by calibrating the ranking process in NMS with localization scores. (2) To more effectively back-propagate gradients, we designed a super-class guided architecture that consists of a superclass branch (SCB) and a finer class branch (FCB). To further increase effectiveness, features from the SCB with high-level information are fed to the FCB to guide finer class predictions. (3) Recent works have shown that 3D point cloud models are extremely vulnerable to adversarial attacks, which poses a serious threat to many critical applications like autonomous driving and robotic control. To bridge the domain difference between 3D and 2D classification and to increase the robustness of models on 3D point clouds, we propose a family of robust structured declarative classifiers for point cloud classification. We experimented with various 3D-to-2D mapping algorithms, bridging the gap between 2D and 3D classification. Furthermore, we empirically validate that the internal constrained optimization mechanism effectively defends against adversarial attacks through implicit gradients.
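
For reference, standard score-ranked NMS, the baseline step that the first contribution calibrates with localization scores, can be sketched as follows (a generic NumPy illustration, not the proposed framework itself):

    import numpy as np

    def nms(boxes, scores, iou_thresh=0.5):
        """Standard NMS: keep the highest-scoring box, drop overlapping boxes.
        boxes: (N, 4) array of [x1, y1, x2, y2]; the proposed method would adjust
        `scores` with a localization-quality estimate before this ranking step."""
        order = np.argsort(scores)[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
            yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
            xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
            yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
            inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
            area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
            area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                     (boxes[order[1:], 3] - boxes[order[1:], 1])
            iou = inter / (area_i + area_o - inter)
            order = order[1:][iou < iou_thresh]
        return keep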


Andrew Mertz

Multiple Input Single Output (MISO) Receive Processing Techniques for Linear Frequency Modulated Continuous Wave Frequency Diverse Array (LFMCW-FDA) Transmit Structures

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Patrick McCormick, Chair
Chris Allen
Shannon Blunt
James Stiles

Abstract

This thesis focuses on the multiple processing techniques that can be applied to a single receive element co-located with a Frequency Diverse Array (FDA) transmission structure that illuminates a large volume to estimate the scattering characteristics of objects within the illuminated space in the range, Doppler, and spatial dimensions. FDA transmissions consist of a number of evenly spaced transmitting elements all of which are radiating a linear frequency modulated (LFM) waveform. The elements are configured into a Uniform Linear Array (ULA) and the waveform of each element is separated by a frequency spacing across the elements where the time duration of the chirp is inversely proportional to an integer multiple of the frequency spacing between elements. The complex transmission structure created by this arrangement of multiple transmitting elements can be received and processed by a single receive element. Furthermore, multiple receive processing techniques, each with their own advantages and disadvantages, can be applied to the data received from the single receive element to estimate the range, velocity, and spatial direction of targets in the illuminated volume relative to the co-located transmit array and receive element. Three different receive processing techniques that can be applied to FDA transmissions are explored. Two of these techniques are novel to this thesis, including the spatial matched filter processing technique for FDA transmission structures, and stretch processing using virtual array processing for FDA transmissions. Additionally, this thesis introduces a new type of FDA transmission structure referred to as "slow-time" FDA.
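
A baseband toy model of such an LFMCW-FDA emission can be sketched in a few lines of Python (parameter values are assumptions; the thesis's receive processing techniques are not reproduced here):

    import numpy as np

    # Each ULA element radiates the same LFM chirp offset in frequency by n*df.
    N_el = 8                      # transmit elements
    T = 1e-3                      # chirp duration (s)
    B = 1e6                       # chirp bandwidth (Hz)
    df = 1 / T                    # element frequency spacing tied to 1/T, per the abstract
    fs = 16e6
    t = np.arange(0, T, 1/fs)

    chirp = np.exp(1j * np.pi * (B / T) * t**2)                 # common LFM term
    elements = np.stack([chirp * np.exp(1j * 2*np.pi * n * df * t) for n in range(N_el)])

    # Superposition observed in a far-field direction theta (half-wavelength spacing assumed).
    theta = np.deg2rad(20)
    steer = np.exp(1j * np.pi * np.arange(N_el) * np.sin(theta))
    received = steer @ elements
    print(received.shape)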


Ragib Shakil Rafi

Nonlinearity Assisted Mie Scattering from Nanoparticles

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Alessandro Salandrino , Chair
Shima Fardad
Morteza Hashemi
Rongqing Hui
Judy Z Wu

Abstract

Scattering by nanoparticles is an exciting branch of physics for controlling and manipulating light. More specifically, there have been fascinating developments regarding light scattering by sub-wavelength particles, including high-index dielectric and metal particles, for applications in optical resonance phenomena, detecting the fluorescence of molecules, enhancing Raman scattering, transferring energy to higher-order modes, sensing, and photodetector technologies. This research area has recently gained renewed attention with the study of near-field effects at the nanoscale in advanced regimes of operation, including nonlinear effects and the time-varying parametric modulation of local material properties. When the particle size is comparable to or slightly larger than the incident wavelength, Mie solutions to Maxwell's equations describe these electromagnetic scattering problems. The addition and excitation of nonlinear effects in these high-index sub-wavelength dielectric and plasmonic particles holds promise to improve the existing performance of such systems or to provide additional features directed toward novel applications. This dissertation explores Mie scattering from dielectric and plasmonic particles in the presence of nonlinear effects, more specifically second- and third-order nonlinear effects. For numerical analysis, an in-house Rigorous Coupled-Wave Analysis (RCWA) method has been developed in a MATLAB environment and validated by designing metasurfaces and comparing the results with established ones. For dielectrics, this dissertation presents a numerical study of the linear and nonlinear diffraction and focusing properties of dielectric metasurfaces consisting of silicon microcylinder arrays resting on a silicon substrate. Upon diffraction, such structures lead to the formation of near-field intensity profiles reminiscent of photonic nanojets that propagate similarly. The results indicate that the Kerr nonlinearity (i.e., the third-order nonlinear effect) enhances light concentration throughout the generated photonic jet, with an increase in intensity of about 20% compared to the linear regime for the power levels considered in this work. The transverse beamwidth remains subwavelength in all cases, and the nonlinear effect reduces the full width.

On the other hand, plasmonic structures give rise to localized surface plasmons, excitations of the conduction electrons within metallic nanostructures. These modes are not propagating but are instead confined to the vicinity of the nanostructure, interacting with the electromagnetic field. They emerge from the scattering between small conductive nanoparticles and an oscillating electromagnetic field. This dissertation introduces a novel mechanism to transfer energy from an excited dipolar mode to such higher-order subradiant localized modes. Recent advancements in time-varying structures, which help relax photon energy conservation constraints, and a newly proposed plasmonic parametric resonance pave the way for this work. With the help of the second-order nonlinear wave-mixing process and parametric modulation of the dielectric permittivity in a medium surrounding the metal particles, we have introduced a way to accomplish the otherwise nearly impossible task of selectively coupling energy into specific high-order modes of a nanostructure. This work further shows that the oscillating mode amplitude reaches a steady state, and the steady state establishes the ideal modulation conditions that enhance the amplitude of the high-order mode.
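
As a purely linear reference point, classical Mie efficiencies for a high-index dielectric sphere can be computed with the third-party miepython package (an assumption for illustration; the dissertation's nonlinear RCWA analysis is not reproduced here):

    import numpy as np
    import miepython

    # Linear Mie efficiencies for an assumed silicon-like, lossless sphere.
    m = 3.5 + 0.0j                 # assumed refractive index
    radius = 100e-9                # sphere radius (m)
    wavelengths = np.linspace(400e-9, 1200e-9, 200)
    x = 2 * np.pi * radius / wavelengths          # size parameter
    qext, qsca, qback, g = miepython.mie(m, x)
    print(wavelengths[np.argmax(qsca)])           # wavelength of the strongest scattering peak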


Ben Liu

Computational Microbiome Analysis: Method Development, Integration and Clinical Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Cuncong Zhong, Chair
Esam El-Araby
Bo Luo
Zijun Yao
Mizuki Azuma

Abstract

Metagenomics is the study of microbial genomes from one common environment. Metagenomic data is directly derived from all microorganisms present in the environmental samples, including those inaccessible through conventional methods like laboratory cultures. Thus, it offers an unbiased view of microbial communities, enabling researchers to explore not only the taxonomic composition (identifying which microorganisms are present) but also the community’s metabolic functions.

Metagenomic data consists of a huge number of fragmented DNA sequences from diverse microorganisms of differing abundance. These characteristics pose challenges to analysis and impede practical applications. Firstly, the development of an efficient detection tool for a specific target from metagenomic data is confronted by the challenge of daunting data size. Secondly, the accuracy of the detection tool is also challenged by the incompleteness of metagenomic data. Thirdly, because numerous analysis tools are designed for individual detection targets and many detection targets are contained within the data, there is a need for comprehensive and scalable integration of existing resources.

In this dissertation, we conducted computational microbiome analysis at different levels: (1) We first developed an assembly graph-based ncRNA searching tool, named DRAGoM, to improve detection quality in metagenomic data. (2) We then developed an automatic detection model, named SNAIL, to automatically detect names of bioinformatic resources in the biomedical literature for comprehensive and scalable organization of resources. We also developed a method to automatically annotate sentences for training SNAIL, which not only benefits the performance of SNAIL but also allows it to be trained on both manually and machine-annotated data, thus minimizing the need for extensive manual data labeling. (3) We applied different analysis tools to metagenomic datasets from a series of clinical studies and developed models to predict therapeutic benefit from immunotherapy in non-small-cell lung cancer patients using human gut microbiome signatures.


Amin Shojaei

Exploring Cooperative and Robust Multi-Agent Reinforcement Learning in Networked Cyber-Physical Systems: Applications in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alex Bardas
Taejoon Kim
Prasad Kulkarni
Shawn Keshmiri

Abstract

Significant advances in information and networking technologies have transformed cyber-physical systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is the smart grid network, which includes distributed energy resources (DERs), renewable generation, and the widespread adoption of electric vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant sources of demand and user-behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of different uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.

As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. Within this context, we first examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state. Secondly, we investigate the challenges associated with distributed MARL techniques, with a special focus on centralized training with decentralized execution (CTDE) methods. Throughout this research, we highlight the significance of cooperation in MARL for achieving autonomous control in smart grid systems and other cyber-physical domains. Thirdly, we propose a novel robust MARL framework using a hierarchical structure. We perform an extensive analysis and evaluation of our proposed hierarchical MARL model for large-scale EV networks, thereby addressing the scalability and robustness challenges as the number of agents within an NCPS increases.