Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Elizabeth Wyss
A New Frontier for Software Security: Diving Deep into npm
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker
Abstract
Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week.
However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.
This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
Alfred Fontes
Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen
Abstract
Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.
A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.
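To make the phase-attachment idea concrete, the following toy numpy sketch forms a constant-envelope pulse whose phase is the sum of a radar phase code and embedded communication symbols, estimates its PSD, and compares it against a notional Gaussian spectral template; the phase codes, template shape, and mismatch measure here are placeholders, not the optimized PARC design from the thesis.

```python
import numpy as np

# Toy illustration of a phase-attached, constant-envelope pulse and its spectrum.
N = 256                                                   # samples per pulse
rng = np.random.default_rng(1)
radar_phase = np.cumsum(rng.uniform(-0.3, 0.3, N))        # slowly varying radar phase code (placeholder)
comm_phase = rng.choice([0, np.pi/2, np.pi, 3*np.pi/2], N)  # embedded communication symbols (placeholder)
pulse = np.exp(1j * (radar_phase + comm_phase))           # phase attachment keeps a constant envelope

# Estimate the pulse PSD and compare it to a notional Gaussian spectral template.
freqs = np.fft.fftshift(np.fft.fftfreq(4 * N))
psd = np.abs(np.fft.fftshift(np.fft.fft(pulse, 4 * N)))**2
psd /= psd.max()
template = np.exp(-0.5 * (freqs / 0.1)**2)

mismatch = np.mean((10*np.log10(psd + 1e-12) - 10*np.log10(template + 1e-12))**2)
print(f"Mean-square template mismatch (dB^2): {mismatch:.1f}")
```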
The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
Arin Dutta
Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao
Abstract
As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To sustain higher data rates while maximizing the spectral efficiency of multi-level modulated signals, a higher optical signal-to-noise ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.
Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. Also, DRA provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement: the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effects on the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.
The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.
As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research is focused on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st and 2nd order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st order incoherent pump together with a 2nd order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd order pump has a much higher impact than that of the 1st order pump, a more stringent requirement should be placed on the RIN of the 2nd order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher baud rate systems, like those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.
Audrey Mockenhaupt
Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jon Owen
Abstract
Pending.
Rich Simeon
Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin
Abstract
The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.
A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.
Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of this sparse and stationary channel, learning the channel parameters to compensate for the effects of fractional Doppler, which simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
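For context, the delay-Doppler and time-frequency grids are related by the symplectic finite Fourier transform pair. The sketch below shows one common numpy convention for the forward and inverse mappings; grid sizes, normalization, and pulse shaping in the actual AMT-optimized framework are not specified here and may differ.

```python
import numpy as np

M, N = 16, 8  # toy grid: M delay bins by N Doppler bins

def isfft(x_dd):
    """Delay-Doppler grid -> time-frequency grid (one common convention)."""
    return np.fft.fft(np.fft.ifft(x_dd, axis=1), axis=0)

def sfft(x_tf):
    """Time-frequency grid -> delay-Doppler grid (inverse of the above)."""
    return np.fft.fft(np.fft.ifft(x_tf, axis=0), axis=1)

# Symbols placed on the delay-Doppler grid round-trip through the transform pair.
x_dd = np.random.randn(M, N) + 1j * np.random.randn(M, N)
x_tf = isfft(x_dd)
print(np.allclose(sfft(x_tf), x_dd))  # True
```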
Mohammad Ful Hossain Seikh
AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino
Abstract
This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
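AAFIYA's own API is not reproduced in this notice; as an illustration of the kind of measurement-data workflow it automates, the sketch below uses the scikit-rf library to ingest a (hypothetical) one-port Touchstone file and extract S11 and input impedance versus frequency.

```python
import skrf as rf
import matplotlib.pyplot as plt

# Load a measured one-port Touchstone file (hypothetical filename).
ant = rf.Network("bvpol_antenna.s1p")

freq_mhz = ant.f / 1e6        # frequency axis in MHz
s11_db = ant.s_db[:, 0, 0]    # S11 magnitude in dB
z_in = ant.z[:, 0, 0]         # complex input impedance vs. frequency

plt.plot(freq_mhz, s11_db)
plt.xlabel("Frequency (MHz)")
plt.ylabel("S11 (dB)")
plt.title("Broadband impedance matching")
plt.show()
```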
Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11 and S21 related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.
AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.
Soumya Baddham
Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Hongyang Sun
Abstract
With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.
This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.
The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures, with transformer models achieving superior contextual understanding at the cost of computational complexity, while deep learning approaches (LSTM models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
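As an illustration of the threshold adjustment idea mentioned above, the following sketch tunes one decision threshold per label on validation data to maximize F1; the exact strategy evaluated in the project may differ.

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
    """Pick, per label, the decision threshold that maximizes validation F1."""
    n_labels = y_true.shape[1]
    thresholds = np.full(n_labels, 0.5)
    for j in range(n_labels):
        scores = [f1_score(y_true[:, j], y_prob[:, j] >= t, zero_division=0) for t in grid]
        thresholds[j] = grid[int(np.argmax(scores))]
    return thresholds

# Toy validation data: 2 samples x 3 labels of ground truth and predicted probabilities.
y_true = np.array([[1, 0, 1], [0, 0, 1]])
y_prob = np.array([[0.7, 0.2, 0.4], [0.3, 0.1, 0.6]])
print(tune_thresholds(y_true, y_prob))
```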
Manu Chaudhary
Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan
Abstract
Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations, including low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.
In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
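For background on the finite-difference starting point shared by both variants, the sketch below solves a 1D Poisson problem classically; the resulting linear system is the kind of operator a C2Q-style encoding would load onto a quantum state (this is an illustration, not the proposed quantum algorithm).

```python
import numpy as np

# Solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 via second-order central differences.
n = 64                                 # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)       # chosen so the exact solution is sin(pi * x)

# Tridiagonal FDM Laplacian (the matrix a quantum solver would need to encode).
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # small O(h^2) discretization error
```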
Past Defense Notices
Michael Neises
VERIAL: Verification-Enabled Runtime Integrity Attestation of Linux
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Perry Alexander, Chair
Drew Davidson
Cuncong Zhong
Matthew Moore
Michael Murray
Abstract
Runtime attestation is a way to gain confidence in the current state of a remote target, and layered attestation is a way of extending that confidence from one component to another. Introspective solutions for layered attestation require strict isolation, and seL4 is uniquely well-suited to offer kernel properties sufficient to achieve such isolation. I design, implement, and evaluate introspective measurements and the layered runtime attestation of a Linux kernel hosted by seL4. VERIAL can detect Diamorphine-style rootkits with a performance cost comparable to previous work.
Ibikunle Oluwanisola
Towards Generalizable Deep Learning Algorithms for Echogram Layer Tracking
When & Where:
Nichols Hall, Room 317 (Richard K. Moore Conference Room)
Committee Members:
Shannon Blunt, Chair
Carl Leuschen
James Stiles
Christopher Depcik
Abstract
The accelerated melting of ice sheets in Greenland and Antarctica, driven by climate warming, is significantly contributing to global sea level rise. To better understand this phenomenon, airborne radars have been deployed to create echogram images that map snow accumulation patterns in these regions. Advanced radar systems developed by the Center for Remote Sensing and Integrated Systems (CReSIS) have collected around 1.5 petabytes of climate data. However, extracting ice-related information, such as accumulation rates, remains limited due to the largely manual and time-consuming process of tracking internal layers in radar echograms. This highlights the need for automated solutions.
Machine learning and deep learning algorithms are well-suited for this task, given their near-human performance on optical images. The overlap between classical radar signal processing and machine learning techniques suggests that combining concepts from both fields could lead to optimized solutions.
In this work, we developed custom deep learning algorithms for automatic layer tracking (both supervised and self-supervised) to address the challenge of limited annotated data and achieve accurate tracking of radiostratigraphic layers in echograms. We introduce an iterative multi-class classification algorithm, termed “Row Block,” which sequentially tracks internal layers from the top to the bottom of an echogram based on the surface location. This approach was used in an active learning framework to expand the labeled dataset. We also developed deep learning segmentation algorithms by framing the echogram layer tracking problem as a binary segmentation task, followed by post-processing to generate vector-layer annotations using a connected-component 1-D layer-contour extractor.
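A simplified sketch of what a connected-component 1-D layer-contour extractor could look like is shown below; the thesis implementation and its post-processing details may differ.

```python
import numpy as np
from scipy import ndimage

def extract_layer_contours(binary_mask):
    """Turn a binary layer-segmentation mask into per-layer 1-D contours.

    For each connected component, record the mean row (depth bin) in every
    column where that component appears; columns without the layer get NaN.
    """
    labels, num_layers = ndimage.label(binary_mask)
    _, cols = binary_mask.shape
    contours = np.full((num_layers, cols), np.nan)
    for layer in range(1, num_layers + 1):
        ys, xs = np.where(labels == layer)
        for col in np.unique(xs):
            contours[layer - 1, col] = ys[xs == col].mean()
    return contours

# Toy echogram mask containing two thin horizontal layers.
mask = np.zeros((10, 6), dtype=int)
mask[2, :] = 1
mask[7, 1:5] = 1
print(extract_layer_contours(mask))
```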
Additionally, we aimed to provide the deep learning and scientific communities with a large, fully annotated dataset. This was achieved by synchronizing radar data with outputs from a regional climate model, creating what are currently the two largest machine-learning-ready Snow Radar datasets available, with 10,000 and 50,000 echograms, respectively.
Durga Venkata Suraj Tedla
AI DIETICIAN
When & Where:
Zoom defense; please email jgrisafe@ku.edu for the defense link.
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Jennifer Lohoefener
Abstract
The AI Dietician web application is an innovative piece of technology that makes use of artificial intelligence to offer individualized nutritional guidance and assistance. This web application uses advanced machine learning algorithms and natural language processing to provide users with individualized nutritional advice and assistance in meal planning. Users who are interested in improving their eating habits can benefit from this bot. The system collects relevant data about users' dietary choices, as well as information about calories, and provides insights into body mass index (BMI) and basal metabolic rate (BMR) through interactive conversations, resulting in tailored recommendations. To enhance its capacity for prediction, a number of classification methods, including naive Bayes, neural networks, random forests, and support vector machines, were applied and evaluated. Following an exhaustive analysis, the most effective model, random forest, was selected for incorporation into the AI Dietician web application. The purpose of this study is to emphasize the significance of the AI Dietician web application as a versatile and intelligent tool that encourages the adoption of healthy eating habits and empowers users to make informed decisions regarding their dietary requirements.
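For reference, the BMI and BMR figures mentioned above can be computed as follows; the BMR formula shown is the Mifflin-St Jeor estimate, chosen here for illustration since the application's exact equations are not given in the abstract.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def bmr_mifflin_st_jeor(weight_kg, height_cm, age_years, sex):
    """Basal metabolic rate estimate (kcal/day) via the Mifflin-St Jeor equation."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + (5 if sex == "male" else -161)

print(round(bmi(70, 1.75), 1))                   # 22.9
print(bmr_mifflin_st_jeor(70, 175, 30, "male"))  # 1648.75 kcal/day
```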
Mohammed Atif Siddiqui
Understanding Soccer Through Data Science
When & Where:
Learned Hall, Room 2133
Committee Members:
Zijun Yao, Chair
Tamzidul Hoque
Hongyang Sun
Abstract
Data science is revolutionizing the world of sports by uncovering hidden patterns and providing profound insights that enhance performance, strategy, and decision-making. This project, "Understanding Soccer Through Data Science," exemplifies the transformative power of data analytics in sports. By leveraging Graph Neural Networks (GNNs), this project delves deep into the intricate passing dynamics within soccer teams.
A key innovation of this project is the development of a novel metric called PassNetScore, which aims to contextualize and provide meaningful insights into passing networks, a popular application of graph network theory in soccer. Utilizing the StatsBomb event data, which captures every event during a soccer match, including passes, shots, fouls, and substitutions, this project constructs detailed passing network graphs. Each player is represented as a node and each pass as an edge, creating a comprehensive representation of team interactions on the pitch. The project harnesses the power of Spektral, a Python library for graph deep learning, to build and analyze these graphs. Key node features include players' average positions, total passes, and expected threat of passes, while edges encapsulate the passing interactions and pass counts.
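A minimal sketch of how such a passing-network graph can be assembled from pass events is shown below, using hypothetical pass tuples in place of real StatsBomb event records; the adjacency and node-feature matrices are the inputs a Spektral GNN layer would consume.

```python
import numpy as np

# Hypothetical pass events: (passer_id, receiver_id, x, y). Real data would come from StatsBomb event files.
passes = [(0, 1, 30.0, 40.0), (1, 2, 55.0, 35.0), (2, 0, 60.0, 20.0), (0, 1, 35.0, 42.0)]
num_players = 3

# Weighted adjacency: edge weight = number of passes from one player to another.
A = np.zeros((num_players, num_players))
positions = {p: [] for p in range(num_players)}
for passer, receiver, x, y in passes:
    A[passer, receiver] += 1
    positions[passer].append((x, y))

# Node features: average on-pitch position and total passes made.
X = np.array([
    [*np.mean(positions[p], axis=0), A[p].sum()] if positions[p] else [0.0, 0.0, 0.0]
    for p in range(num_players)
])

print(A)  # adjacency matrix fed to a GNN layer (e.g., spektral.layers.GCNConv)
print(X)  # node feature matrix
```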
The project explores two distinct models to calculate PassNetScore through predicting match outcomes. The first model is a basic GNN that employs a binary adjacency matrix to represent the presence or absence of passes between players. This model captures the fundamental structure of passing networks, highlighting key players and connections within the team. There are three variations of this model, each building on the binary model by adding new features to nodes or edges. The second model integrates a GNN with Long Short-Term Memory (LSTM) networks to account for temporal dependencies in passing sequences. This advanced model provides deeper insights into how passing patterns evolve over time and how these dynamics impact match outcomes. To evaluate the effectiveness of these models, a suite of graph theory metrics is employed. These metrics illuminate the dynamics of team play and the influence of individual players, offering a comprehensive assessment of the PassNetScore metric.
Through this innovative approach, the project demonstrates the powerful application of GNNs in sports analytics and offers a novel metric for evaluating passing networks based on match outcomes. This project paves the way for new strategies and insights that could revolutionize how teams analyze and improve their gameplay, showcasing the profound impact of data science in sports.
Amalu George
Enhancing the Robustness of Bloom Filters by Introducing Dynamicity
When & Where:
Zoom defense; please email jgrisafe@ku.edu for the defense link.
Committee Members:
Sumaiya Shomaji, Chair
Hongyang Sun
Han Wang
Abstract
A Bloom Filter (BF) is a compact and space-efficient data structure that efficiently handles membership queries on infinite streams with numerous unique items. BFs are probabilistic data structures that allow false positives in exchange for this compactness. When querying for an item's membership, a true response means the item might or might not be present in the stream, but a false response guarantees the item's absence. Bloom filters are widely used in real-world applications such as networking, databases, web applications, email spam filtering, biometric systems, security, cloud computing, and distributed systems due to their space-efficient and time-efficient properties. Bloom filters offer several advantages, particularly in storage compression and time-efficient data lookup. Additionally, the use of hashing ensures data security: if the BF is accessed by an unauthorized entity, no enrolled data can be reversed or traced back to the original content. In summary, BFs are powerful structures for storing data in a storage-efficient manner with low time complexity and high security. However, a disadvantage of traditional Bloom filters is that they do not support dynamic operations, such as adding or deleting elements. Therefore, in this project, a Dynamic Bloom Filter is demonstrated that offers dynamicity, allowing the addition and deletion of items. By integrating dynamic capabilities into standard Bloom filters, their functionality and robustness are enhanced, making them more suitable for several applications. For example, in a perpetual inventory system, inventory records are constantly updated after every inventory-related transaction, such as sales, purchases, or returns. In banking, dynamic data changes throughout the course of transactions. In the healthcare domain, hospitals can dynamically update and delete patients' medical histories.
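As a concrete illustration of supporting deletions, the sketch below implements a counting Bloom filter, one common way to add removals to the standard structure; the project's Dynamic Bloom Filter design may differ in its details.

```python
import hashlib

class CountingBloomFilter:
    """Bloom filter variant that keeps a small counter per slot so items can be removed."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indexes(self, item):
        # Derive num_hashes slot indexes from double hashing of the item.
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.size for i in range(self.num_hashes)]

    def add(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def remove(self, item):
        # Only decrement if the item appears to be present (avoids negative counters).
        if item in self:
            for idx in self._indexes(item):
                self.counters[idx] -= 1

    def __contains__(self, item):
        return all(self.counters[idx] > 0 for idx in self._indexes(item))

bf = CountingBloomFilter()
bf.add("patient-123")
print("patient-123" in bf)   # True (false positives are possible in general)
bf.remove("patient-123")
print("patient-123" in bf)   # False
```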
Asadullah Khan
A Triad of Approaches for PCB Component Segmentation and Classification using U-Net, SAM, and Detectron2
When & Where:
Zoom defense; please email jgrisafe@ku.edu for the defense link.
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun
Abstract
The segmentation and classification of Printed Circuit Board (PCB) components offer multifaceted applications- primarily design validation, assembly verification, quality control optimization, and enhanced recycling processes. However, this field of study presents numerous challenges, mainly stemming from the heterogeneity of PCB component morphology and dimensionality, variations in packaging methodologies for functionally equivalent components, and limitations in the availability of image data.
This study proposes a triad of approaches consisting of two segmentation-based architectures and one classification-based architecture for PCB component detection. The first segmentation approach introduces an enhanced U-Net architecture with a custom loss function for improved multi-scale classification and segmentation accuracy. The second segmentation method leverages transfer learning, utilizing the Segment Anything Model (SAM) developed by Meta's FAIR lab for both segmentation and classification. Lastly, Detectron2 with a ResNeXt-101 backbone, enhanced by a Feature Pyramid Network (FPN), Region Proposal Network (RPN), and Region of Interest (ROI) Align, is proposed for multi-scale detection. The proposed methods are implemented on the FPIC dataset to detect the most commonly appearing components (resistor, capacitor, integrated circuit, LED, and button) on PCBs. The first method outperforms existing state-of-the-art networks without pre-training, achieving a DICE score of 94.05%, an IoU score of 91.17%, and an accuracy of 94.90%. The second surpasses both the previous state-of-the-art network and U-Net in segmentation, attaining a DICE score of 97.08%, an IoU score of 93.95%, and an accuracy of 96.34%. Finally, the third, being the first transfer learning-based approach to perform individual component classification on PCBs, achieves an average precision of 89.88%. Thus, the proposed triad of approaches will play a promising role in enhancing the robustness and accuracy of PCB quality assurance techniques.
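For reference, the DICE and IoU scores reported above follow the standard overlap definitions; a minimal sketch of computing them from binary masks is shown below (illustration only, not the project's evaluation code).

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, target), iou_score(pred, target))  # ~0.667, ~0.5
```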
Zeyan Liu
On the Security of Modern AI: Backdoors, Robustness, and Detectability
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Bo Luo, Chair
Alex Bardas
Fengjun Li
Zijun Yao
John Symons
Abstract
The rapid development of AI has significantly impacted security and privacy, introducing both new cyber-attacks targeting AI models and challenges related to responsible use. As AI models become more widely adopted in real-world applications, attackers exploit adversarially altered samples to manipulate their behaviors and decisions. Simultaneously, the use of generative AI, like ChatGPT, has sparked debates about the integrity of AI-generated content.
In this dissertation, we investigate the security of modern AI systems and the detectability of AI-related threats, focusing on stealthy AI attacks and responsible AI use in academia. First, we reevaluate the stealthiness of 20 state-of-the-art attacks on six benchmark datasets, using 24 image quality metrics and over 30,000 user annotations. Our findings reveal that most attacks introduce noticeable perturbations, failing to remain stealthy. Motivated by this, we propose a novel model-poisoning neural Trojan, LoneNeuron, which minimally modifies the host neural network by adding a single neuron after the first convolution layer. LoneNeuron responds to feature-domain patterns that transform into invisible, sample-specific, and polymorphic pixel-domain watermarks, achieving a 100% attack success rate without compromising main task performance and enhancing stealth and detection resistance. Additionally, we examine the detectability of ChatGPT-generated content in academic writing. Presenting GPABench2, a dataset of over 2.8 million abstracts across various disciplines, we assess existing detection tools and challenges faced by over 240 evaluators. We also develop CheckGPT, a detection framework consisting of an attentive Bi-LSTM and a representation module, to capture subtle semantic and linguistic patterns in ChatGPT-generated text. Extensive experiments validate CheckGPT’s high applicability, transferability, and robustness.
Abhishek Doodgaon
Photorealistic Synthetic Data Generation for Deep Learning-based Structural Health Monitoring of Concrete Dams
When & Where:
LEEP2, Room 1415A
Committee Members:
Zijun Yao, Chair
Caroline Bennett
Prasad Kulkarni
Remy Lequesne
Abstract
Regular inspections are crucial for identifying and assessing damage in concrete dams, including a wide range of damage states. Manual inspections of dams are often constrained by cost, time, safety, and inaccessibility. Automating dam inspections using artificial intelligence has the potential to improve the efficiency and accuracy of data analysis. Computer vision and deep learning models have proven effective in detecting a variety of damage features using images, but their success relies on the availability of high-quality and diverse training data. This is because supervised learning, a common machine-learning approach for classification problems, uses labeled examples, in which each training data point includes features (damage images) and a corresponding label (pixel annotation). Unfortunately, public datasets of annotated images of concrete dam surfaces are scarce and inconsistent in quality, quantity, and representation.
To address this challenge, we present a novel approach that involves synthesizing a realistic environment using a 3D model of a dam. By overlaying this model with synthetically created photorealistic damage textures, we can render images to generate large and realistic datasets with high-fidelity annotations. Our pipeline uses NX and Blender for 3D model generation and assembly, Substance 3D Designer and Substance Automation Toolkit for texture synthesis and automation, and Unreal Engine 5 for creating a realistic environment and rendering images. This generated synthetic data is then used to train deep learning models in the subsequent steps. The proposed approach offers several advantages. First, it allows the generation of large quantities of data that are essential for training accurate deep learning models. Second, the texture synthesis ensures the generation of high-fidelity ground truths (annotations) that are crucial for making accurate detections. Lastly, the automation capabilities of the software applications used in this process provide the flexibility to generate data with varied texture elements, colors, lighting conditions, and image quality, overcoming time constraints. Thus, the proposed approach can improve the automation of dam inspection by improving the quality and quantity of training data.
Sana Awan
Towards Robust and Privacy-preserving Federated Learning
When & Where:
Zoom defense; please email jgrisafe@ku.edu for the defense link.
Committee Members:
Fengjun Li, Chair
Alex Bardas
Cuncong Zhong
Mei Liu
Haiyang Chao
Abstract
Machine Learning (ML) has revolutionized various fields, from disease prediction to credit risk evaluation, by harnessing abundant data scattered across diverse sources. However, transporting data to a trusted server for centralized ML model training is not only costly but also raises privacy concerns, particularly with legislative standards like HIPAA in place. In response to these challenges, Federated Learning (FL) has emerged as a promising solution. FL involves training a collaborative model across a network of clients, each retaining its own private data. By conducting training locally on the participating clients, this approach eliminates the need to transfer entire training datasets while harnessing their computation capabilities. However, FL introduces unique privacy risks, security concerns, and robustness challenges. Firstly, FL is susceptible to malicious actors who may tamper with local data, manipulate the local training process, or intercept the shared model or gradients to implant backdoors that affect the robustness of the joint model. Secondly, due to the statistical and system heterogeneity within FL, substantial differences exist between the distribution of each local dataset and the global distribution, causing clients’ local objectives to deviate greatly from the global optima, resulting in a drift in local updates. Addressing such vulnerabilities and challenges is crucial before deploying FL systems in critical infrastructures.
In this dissertation, we present a multi-pronged approach to address the privacy, security, and robustness challenges in FL. This involves designing innovative privacy protection mechanisms and robust aggregation schemes to counter attacks during the training process. To address the privacy risk due to model or gradient interception, we present the design of a reliable and accountable blockchain-enabled privacy-preserving federated learning (PPFL) framework which leverages homomorphic encryption to protect individual client updates. The blockchain is adopted to support provenance of model updates during training so that malformed or malicious updates can be identified and traced back to the source.
We studied the challenges in FL due to heterogeneous data distributions and found that existing FL algorithms often suffer from slow and unstable convergence and are vulnerable to poisoning attacks, particularly in extreme non-independent and identically distributed (non-IID) settings. We propose a robust aggregation scheme, named CONTRA, to mitigate data poisoning attacks and ensure an accuracy guarantee even under attack. This defense strategy identifies malicious clients by evaluating the cosine similarity of their gradient contributions and subsequently removes them from FL training. Finally, we introduce FL-GMM, an algorithm designed to tackle data heterogeneity while prioritizing privacy. It iteratively constructs a personalized classifier for each client while aligning local-global feature representations. By aligning local distributions with global semantic information, FL-GMM minimizes the impact of data diversity. Moreover, FL-GMM enhances security by transmitting derived model parameters via secure multiparty computation, thereby avoiding vulnerabilities to reconstruction attacks observed in other approaches.
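As a simplified illustration of the cosine-similarity screening idea behind CONTRA, the sketch below flags clients whose flattened updates are near-duplicates of other clients' updates; CONTRA's actual reputation and adaptive-weighting mechanism is more involved.

```python
import numpy as np

def screen_clients(updates, sim_threshold=0.9):
    """Drop clients whose gradient updates are suspiciously aligned with another client's.

    updates: (num_clients, num_params) array of flattened local model updates.
    Returns the indexes of clients kept for aggregation.
    """
    norms = np.linalg.norm(updates, axis=1, keepdims=True)
    unit = updates / np.clip(norms, 1e-12, None)
    cos = unit @ unit.T                 # pairwise cosine similarity
    np.fill_diagonal(cos, 0.0)
    max_sim = cos.max(axis=1)           # colluding poisoners submitting near-identical updates score high
    return np.where(max_sim < sim_threshold)[0]

rng = np.random.default_rng(0)
benign = rng.normal(size=(8, 100))
poisoned = np.tile(rng.normal(size=(1, 100)), (3, 1))  # three colluding clients send identical updates
kept = screen_clients(np.vstack([benign, poisoned]))
print(kept)  # the colluding clients (indexes 8-10) are dropped
```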
Arin Dutta
Performance Analysis of Distributed Raman Amplification with Dual-Order Forward Pumping
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Rongqing Hui, Chair
Christopher Allen
Morteza Hashemi
Alessandro Salandrino
Hui Zhao
Abstract
As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To sustain higher data rates while maximizing the spectral efficiency of multi-level modulated signals, a higher optical signal-to-noise ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path and offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. Additionally, DRA provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. Forward (FW) Raman pumping in DRA can be adopted to further improve the DRA performance as it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. Dual-order FW pumping helps to reduce the non-linear effect of the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the Kerr-effect-induced non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of the system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood with a relatively low impact of RIN transfer, our study is focused on the FW pumping scheme. Our research is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st and the 2nd order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN.
The results indicate that the efficiency of the 2nd order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st order pump. The performance of the dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st order incoherent pump together with a 2nd order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd order pump has a much higher impact than that of the 1st order pump, a more stringent requirement should be placed on the RIN of the 2nd order pump laser when a dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication. The system performance analysis also reveals that higher baud rate systems, like those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.