Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Ye Wang
Deceptive Signals: Unveiling and Countering Sensor Spoofing Attacks on Cyber Systems
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Fengjun Li, Chair
Drew Davidson
Rongqing Hui
Bo Luo
Haiyang Chao
Abstract
In modern computer systems, sensors play a critical role in enabling a wide range of functionalities, from navigation in autonomous vehicles to environmental monitoring in smart homes. Acting as an interface between physical and digital worlds, sensors collect data to drive automated functionalities and decision-making. However, this reliance on sensor data introduces significant potential vulnerabilities, leading to various physical, sensor-enabled attacks such as spoofing, tampering, and signal injection. Sensor spoofing attacks, where adversaries manipulate sensor input or inject false data into target systems, pose serious risks to system security and privacy.
In this work, we have developed two novel sensor spoofing attack methods that significantly enhance both efficacy and practicality. The first method employs physical signals that are imperceptible to humans but detectable by sensors. Specifically, we target deep learning based facial recognition systems using infrared lasers. By leveraging advanced laser modeling, simulation-guided targeting, and real-time physical adjustments, our infrared laser-based physical adversarial attack achieves high success rates with practical real-time guarantees, surpassing the limitations of prior physical perturbation attacks. The second method embeds physical signals, which are inherently present in the system, into legitimate patterns. In particular, we integrate trigger signals into standard operational patterns of actuators on mobile devices to construct remote logic bombs, which are shown to be able to evade all existing detection mechanisms. Achieving a zero false-trigger rate with high success rates, this novel sensor bomb is highly effective and stealthy.
Our study on emerging sensor-based threats highlights the urgent need for comprehensive defenses against sensor spoofing. Along this direction, we design and investigate two defense strategies to mitigate these threats. The first strategy involves filtering out physical signals identified as potential attack vectors. The second strategy is to leverage beneficial physical signals to obfuscate malicious patterns and reinforce data integrity. For example, side channels targeting the same sensor can be used to introduce cover signals that prevent information leakage, while environment-based physical signals serve as signatures to authenticate data. Together, these strategies form a comprehensive defense framework that filters harmful sensor signals and utilizes beneficial ones, significantly enhancing the overall security of cyber systems.
Sravan Reddy Chintareddy
Combating Spectrum Crunch with Efficient Machine-Learning Based Spectrum Access and Harnessing High-frequency Bands for Next-G Wireless Networks
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Victor Frost
Erik Perrins
Dongjie Wang
Shawn Keshmiri
Abstract
The number of wireless devices worldwide, already over 14 billion, is expected to grow to 40 billion by 2030. In addition, we are witnessing an unprecedented proliferation of applications and technologies with wireless connectivity requirements, such as unmanned aerial vehicles, connected health, and radars for autonomous vehicles. The advent of new wireless technologies and devices will only worsen the spectrum crunch that service providers and wireless operators are already experiencing. In this PhD study, we address these challenges through two research thrusts, built around emerging applications, that aim to advance spectrum efficiency and high-frequency connectivity solutions.
First, we focus on effectively utilizing the existing spectrum resources for emerging applications such as networked UAVs operating within the Unmanned Traffic Management (UTM) system. In this thrust, we develop a coexistence framework for UAVs to share spectrum with traditional cellular networks by using machine learning (ML) techniques so that networked UAVs act as secondary users without interfering with primary users. We propose federated learning (FL) and reinforcement learning (RL) solutions to establish a collaborative spectrum sensing and dynamic spectrum allocation framework for networked UAVs. In the second part, we explore the potential of millimeter-wave (mmWave) and terahertz (THz) frequency bands for high-speed data transmission in urban settings. Specifically, we investigate THz-based midhaul links for 5G networks, where a network's central units (CUs) connect to distributed units (DUs). Through numerical analysis, we assess the feasibility of using 140 GHz links and demonstrate the merits of high-frequency bands to support high data rates in midhaul networks for future urban communications infrastructure. Overall, this research is aimed at establishing frameworks and methodologies that contribute toward the sustainable growth and evolution of wireless connectivity.
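As a rough illustration of why 140 GHz links are attractive yet challenging, free-space path loss grows with the square of the carrier frequency, so a THz-band hop pays a large fixed penalty relative to mid-band spectrum. A minimal sketch using the Friis free-space formula (the 100 m hop distance and the 3.5 GHz comparison band are illustrative assumptions, not figures from this work):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss (Friis formula) in dB."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Path loss over a hypothetical 100 m urban midhaul hop:
loss_140 = fspl_db(100, 140e9)  # 140 GHz link (~115 dB)
loss_35 = fspl_db(100, 3.5e9)   # mid-band cellular reference (~83 dB)
```

The ~32 dB gap must be recovered through high-gain directional antennas, which is part of what makes 140 GHz midhaul feasibility a numerical question rather than a given.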
Agraj Magotra
Data-Driven Insights into Sustainability: An Artificial Intelligence (AI) Powered Analysis of ESG Practices in the Textile and Apparel Industry
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Prasad Kulkarni
Zijun Yao
Abstract
The global textile and apparel (T&A) industry is under growing scrutiny for its substantial environmental and social impact, producing 92 million tons of waste annually and contributing to 20% of global water pollution. In Bangladesh, one of the world's largest apparel exporters, the integration of Environmental, Social, and Governance (ESG) practices is critical to meet international sustainability standards and maintain global competitiveness. This master's study leverages Artificial Intelligence (AI) and Machine Learning (ML) methodologies to comprehensively analyze unstructured corporate data related to ESG practices among LEED-certified Bangladeshi T&A factories.
Our study employs advanced techniques, including Web Scraping, Natural Language Processing (NLP), and Topic Modeling, to extract and analyze sustainability-related information from factory websites. We develop a robust ML framework that utilizes Non-Negative Matrix Factorization (NMF) for topic extraction and a Random Forest classifier for ESG category prediction, achieving an 86% classification accuracy. The study uncovers four key ESG themes: Environmental Sustainability; Social: Workplace Safety and Compliance; Social: Education and Community Programs; and Governance. The analysis reveals that 46% of factories prioritize environmental initiatives, such as energy conservation and waste management, while 44% emphasize social aspects, including workplace safety and education. Governance practices are significantly underrepresented, with only 10% of companies addressing ethical governance, healthcare provisions, and employee welfare.
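The NMF step factorizes a document-term matrix V into nonnegative factors W (document-topic) and H (topic-term); the top-weighted terms in each row of H characterize a topic. A pure-Python sketch of the standard multiplicative-update rule, with a toy matrix and vocabulary that are purely illustrative (the study's actual corpus and Random Forest classifier are not reproduced here):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def nmf(V, k, iters=300, seed=0):
    """Factor V (m x n) into W (m x k) and H (k x n) via multiplicative updates."""
    rnd = random.Random(seed)
    m, n, eps = len(V), len(V[0]), 1e-9
    W = [[rnd.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rnd.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WT = [list(r) for r in zip(*W)]
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)] for i in range(k)]
        HT = [list(r) for r in zip(*H)]
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)] for i in range(m)]
    return W, H

# Toy term counts: docs 0-1 are "environmental", docs 2-3 "governance" (hypothetical)
terms = ["energy", "waste", "water", "audit", "board", "ethics"]
V = [[3, 2, 1, 0, 0, 0],
     [6, 4, 2, 0, 0, 0],
     [0, 0, 0, 2, 3, 1],
     [0, 0, 0, 4, 6, 2]]
W, H = nmf(V, k=2)
err = sum((V[i][j] - sum(W[i][t] * H[t][j] for t in range(2))) ** 2
          for i in range(4) for j in range(6))
```

On this exactly rank-2 toy matrix the factorization recovers the two blocks, and the top term of each topic row ("energy" and "board") labels the topic.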
To deepen our understanding of the ESG themes, we conducted a Centrality Analysis to identify the most influential keywords within each category, using measures such as degree, closeness, and eigenvector centrality. Furthermore, our analysis reveals that higher certification levels, like Platinum, are associated with a more balanced emphasis on environmental, social, and governance practices, while lower levels focus primarily on environmental efforts. These insights highlight key areas where the industry can improve and inform targeted strategies for enhancing ESG practices. Overall, this ML framework provides a data-driven, scalable approach for analyzing unstructured corporate data and promoting sustainability in Bangladesh’s T&A sector, offering actionable recommendations for industry stakeholders, policymakers, and global brands committed to responsible sourcing.
Shalmoli Ghosh
High-Power Fabry-Perot Quantum-Well Laser Diodes for Application in Multi-Channel Coherent Optical Communication Systems
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Shannon Blunt
Jim Stiles
Abstract
Wavelength Division Multiplexing (WDM) is essential for managing rapid network traffic growth in fiber optic systems. Each WDM channel demands a narrow-linewidth, frequency-stabilized laser diode, leading to complexity and increased energy consumption. Multi-wavelength laser sources, generating optical frequency combs (OFC), offer an attractive solution, enabling a single laser diode to provide numerous equally spaced spectral lines for enhanced bandwidth efficiency.
Quantum-dot and quantum-dash OFCs provide phase-synchronized lines with low relative intensity noise (RIN), while Quantum Well (QW) OFCs offer higher power efficiency but suffer from higher RIN at low frequencies, up to about 2 GHz. In both quantum-dot/dash and QW-based OFCs, however, the individual spectral lines exhibit high phase noise, limiting coherent detection. Output power levels of these OFCs range between 1 and 20 mW, with the power of each spectral line typically below -5 dBm. Consequently, these OFCs require excessive optical amplification, and each spectral line has a relatively broad linewidth, owing to the inverse relationship between optical power and linewidth given by the Schawlow-Townes formula. This constraint hampers their applicability in coherent detection systems, highlighting a challenge for achieving high-performance optical communication.
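The inverse power-linewidth relationship invoked above can be written, up to constant prefactors that vary between derivations, as

```latex
\Delta\nu_{\text{laser}} \;\propto\; \frac{h\,\nu_0\,(\Delta\nu_c)^2}{P_{\text{out}}}
```

where $h$ is Planck's constant, $\nu_0$ the optical frequency, $\Delta\nu_c$ the cold-cavity linewidth, and $P_{\text{out}}$ the output power; halving the per-line power thus roughly doubles the Schawlow-Townes linewidth, which is why low-power comb lines are problematic for coherent detection.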
In this work, coherent system application of a single-section Quantum-Well Fabry-Perot (FP) laser diode is demonstrated. This laser delivers over 120 mW optical power at the fiber pigtail with a mode spacing of 36.14 GHz. In an experimental setup, 20 spectral lines from a single laser transmitter carry 30 GBaud 16-QAM signals over 78.3 km single-mode fiber, achieving significant data transmission rates. With the potential to support a transmission capacity of 2.15 Tb/s (4.3 Tb/s for dual polarization) per transmitter, including Forward Error Correction (FEC) and maintenance overhead, it offers a promising solution for meeting the escalating demands of modern network traffic efficiently.
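The quoted capacity is consistent with a back-of-the-envelope check; the ~10% overhead figure below is inferred from the stated numbers rather than given explicitly:

```latex
R_{\text{raw}} = 20~\text{lines} \times 30\,\text{GBaud} \times \log_2(16)\ \tfrac{\text{bits}}{\text{symbol}}
              = 2.4\ \text{Tb/s},
\qquad
1 - \frac{2.15}{2.4} \approx 10\%\ \text{(FEC + maintenance overhead)}
```

and dual polarization doubles the 2.15 Tb/s net rate to 4.3 Tb/s per transmitter.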
Anissa Khan
Privacy Preserving Biometric Matching
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Perry Alexander, Chair
Prasad Kulkarni
Fengjun Li
Abstract
Biometric matching is a process by which distinct features are used to identify an individual. Doing so privately is important because biometric data, such as fingerprints or facial features, is not something that can be easily changed or updated if put at risk. In this study, we perform a piece of the biometric matching process in a privacy preserving manner by using secure multiparty computation (SMPC). Using SMPC allows the identifying biological data, called a template, to remain stored by the data owner during the matching process. This provides security guarantees to the biological data while it is in use and therefore reduces the chances the data is stolen. In this study, we find that performing biometric matching using SMPC is just as accurate as performing the same match in plaintext.
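As an illustration of the principle (not the specific protocol used in this work), additive secret sharing with Beaver multiplication triples lets two servers jointly compute a matching score, here an inner product, while neither server ever holds a complete template. A minimal two-party sketch with a trusted dealer, on toy integer vectors:

```python
import random

P = 2**61 - 1  # prime modulus for additive secret sharing

def share(x, rnd):
    """Split x into two additive shares mod P; either share alone reveals nothing."""
    s0 = rnd.randrange(P)
    return s0, (x - s0) % P

def beaver_triple(rnd):
    """Trusted dealer: random a, b and c = a*b, each secret-shared."""
    a, b = rnd.randrange(P), rnd.randrange(P)
    return share(a, rnd), share(b, rnd), share(a * b % P, rnd)

def secure_inner_product(template, query, rnd):
    """Inner product of two secret-shared vectors; only the final score is opened."""
    z0 = z1 = 0
    for x, y in zip(template, query):
        (a0, a1), (b0, b1), (c0, c1) = beaver_triple(rnd)
        x0, x1 = share(x, rnd)
        y0, y1 = share(y, rnd)
        # Parties jointly open d = x - a and e = y - b (these leak nothing about x, y)
        d = (x0 - a0 + x1 - a1) % P
        e = (y0 - b0 + y1 - b1) % P
        # Local share computation; only party 0 adds the public d*e term
        z0 = (z0 + c0 + d * b0 + e * a0 + d * e) % P
        z1 = (z1 + c1 + d * b1 + e * a1) % P
    return (z0 + z1) % P  # reconstruction reveals only the matching score

rnd = random.Random(42)
template = [3, 1, 4, 1, 5]  # hypothetical enrolled feature vector
query = [2, 7, 1, 8, 2]     # hypothetical probe vector
score = secure_inner_product(template, query, rnd)
```

The identity c + d·b + e·a + d·e = xy (with d = x - a, e = y - b) is what lets each party compute a share of the product from purely local values plus the two public openings.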
Bryan Richlinski
Prioritize Program Diversity: Enumerative Synthesis with Entropy Ordering
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Sankha Guria, Chair
Perry Alexander
Drew Davidson
Jennifer Lohoefener
Abstract
Program synthesis is a popular way to create a correct-by-construction program from a user-provided specification. Term enumeration is a leading technique to systematically explore the space of programs by generating terms from a formal grammar. These terms are treated as candidate programs which are tested/verified against the specification for correctness. In order to prioritize candidates more likely to satisfy the specification, enumeration is often ordered by program size or other domain-specific heuristics. However, domain-specific heuristics require expert knowledge, and enumeration by size often leads to terms comprised of frequently repeating symbols that are less likely to satisfy a specification. In this thesis, we build a heuristic that prioritizes term enumeration based on the variability of individual symbols in the program, i.e., the information entropy of the program. We use this heuristic to order programs in both top-down and bottom-up enumeration. We evaluated our work on a subset of the PBE-String track of the 2017 SyGuS competition benchmarks and compared against size-based enumeration. In top-down enumeration, our entropy heuristic shortens runtime in ~56% of cases and tests fewer programs in ~80% of cases before finding a valid solution. For bottom-up enumeration, our entropy heuristic reduces the number of enumerated programs in ~30% of cases before finding a valid solution, without improving the runtime. Our findings suggest that using entropy to prioritize program enumeration is a promising step toward faster program synthesis.
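Such an entropy ordering can be sketched in a few lines: treat a candidate term as a sequence of grammar symbols, compute the Shannon entropy of its symbol distribution, and enumerate high-entropy (more varied) candidates first. The candidate terms below are hypothetical, not drawn from the SyGuS benchmarks:

```python
import math
from collections import Counter

def symbol_entropy(term):
    """Shannon entropy (bits) of the symbol distribution in a candidate term."""
    counts = Counter(term)
    n = len(term)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical candidate programs, flattened to symbol sequences
candidates = [
    ["concat", "x", "x", "x"],            # repetitive symbols, low entropy
    ["concat", "substr", "x", "0", "1"],  # varied symbols, high entropy
    ["x", "x", "x", "x"],                 # degenerate, zero entropy
]
ordered = sorted(candidates, key=symbol_entropy, reverse=True)
```

Under this ordering, the varied-symbol candidate is tested first and the degenerate all-`x` term last, which is exactly the bias away from "frequently repeating symbols" described above.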
Elizabeth Wyss
A New Frontier for Software Security: Diving Deep into npm
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker
Abstract
Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week.
However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.
This research provides a deep dive into the npm-centric software supply chain, exploring various facets and phenomena that impact its security. Such factors include (i) hidden code clones--which obscure provenance and can stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, and (v) package compromise via malicious updates. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
Jagadeesh Sai Dokku
Intelligent Chat Bot for KU Website: Automated Query Response and Resource Navigation
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Hongyang Sun
Abstract
This project introduces an intelligent chatbot designed to improve user experience on our university website by providing instant, automated responses to common inquiries. Navigating a university website can be challenging for students, applicants, and visitors who seek quick information about admissions, campus services, events, and more. To address this challenge, we developed a chatbot that simulates human conversation using Natural Language Processing (NLP), allowing users to find information more efficiently. The chatbot is powered by a Bidirectional Long Short-Term Memory (BiLSTM) model, an architecture well-suited for understanding complex sentence structures. This model captures contextual information from both directions in a sentence, enabling it to identify user intent with high accuracy. We trained the chatbot on a dataset of intent-labeled queries, enabling it to recognize specific intentions such as asking about campus facilities, academic programs, or event schedules. The NLP pipeline includes steps like tokenization, lemmatization, and vectorization. Tokenization and lemmatization prepare the text by breaking it into manageable units and standardizing word forms, making it easier for the model to recognize similar word patterns. The vectorization process then translates this processed text into numerical data that the model can interpret. Flask is used to manage the backend, allowing seamless communication between the user interface and the BiLSTM model. When a user submits a query, Flask routes the input to the model, processes the prediction, and delivers the appropriate response back to the user interface. This chatbot demonstrates a successful application of NLP in creating interactive, efficient, and user-friendly solutions. By automating responses, it reduces reliance on manual support and ensures users can access relevant information at any time. 
This project highlights how intelligent chatbots can transform the way users interact with university websites, offering a faster and more engaging experience.
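The tokenization, lemmatization, and vectorization steps described above can be sketched as follows; the crude suffix-stripping "lemmatizer" and the intent vocabulary are simplified stand-ins for the real components (a production pipeline would use a proper lemmatizer and a learned vocabulary):

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def lemmatize(token):
    # Crude suffix stripping as a stand-in for a real lemmatizer
    for suffix in ("ing", "ies", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            stem = token[: -len(suffix)]
            return stem + "y" if suffix == "ies" else stem
    return token

def vectorize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(lemmatize(t) for t in tokenize(text))
    return [counts[w] for w in vocab]

# Hypothetical intent vocabulary for a campus-information chatbot
vocab = ["event", "facility", "admission", "library"]
vec = vectorize("Are the libraries and dining facilities open during admissions events?", vocab)
```

The resulting count vector is the numerical input a classifier such as the BiLSTM's embedding layer (or a simpler intent model) would consume.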
Anahita Memar
Optimizing Protein Particle Classification: A Study on Smoothing Techniques and Model Performance
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Prasad Kulkarni, Chair
Hossein Saiedian
Prajna Dhar
Abstract
This thesis investigates the impact of smoothing techniques on enhancing classification accuracy in protein particle datasets, focusing on both binary and multi-class configurations across three datasets. By applying methods including Averaging-Based Smoothing, Moving Average, Exponential Smoothing, Savitzky-Golay, and Kalman Smoothing, we sought to improve performance in Random Forest, Decision Tree, and Neural Network models. Initial baseline accuracies revealed the complexity of multi-class separability, while clustering analyses provided valuable insights into class similarities and distinctions, guiding our interpretation of classification challenges.
These results indicate that Averaging-Based Smoothing and Moving Average techniques are particularly effective in enhancing classification accuracy, especially in configurations with marked differences in surfactant conditions. Feature importance analysis identified critical metrics, such as IntMean and IntMax, which played a significant role in distinguishing classes. Cross-validation validated the robustness of our models, with Random Forest and Neural Network consistently outperforming others in binary tasks and showing promising adaptability in multi-class classification. This study not only highlights the efficacy of smoothing techniques for improving classification in protein particle analysis but also offers a foundational approach for future research in biopharmaceutical data processing and analysis.
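Two of the simpler techniques evaluated, a trailing moving average and exponential smoothing, can be sketched as below; the window size, smoothing factor, and the toy series are illustrative choices, not the study's parameters:

```python
def moving_average(series, window=3):
    """Trailing moving average: each point is the mean of up to `window` values."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i - lo + 1))
    return out

def exponential_smoothing(series, alpha=0.5):
    """s_t = alpha * x_t + (1 - alpha) * s_{t-1}; recent points weighted by alpha."""
    s = series[0]
    out = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

noisy = [1.0, 9.0, 2.0, 8.0, 3.0]  # toy noisy intensity trace
smoothed = moving_average(noisy)
```

Both reduce high-frequency noise before feature extraction; the moving average weights the window uniformly, while exponential smoothing discounts older samples geometrically.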
Yousif Dafalla
Web-Armour: Mitigating Reconnaissance and Vulnerability Scanning with Injecting Scan-Impeding Delays in Web Deployments
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
ZJ Wang
Abstract
Scanning hosts on the internet for vulnerable devices and services is a key step in numerous cyberattacks. Previous work has shown that scanning is a widespread phenomenon on the internet and commonly targets web application/server deployments. Given that automated scanning is a crucial step in many cyberattacks, it would be beneficial to make it more difficult for adversaries to perform such activity.
In this work, we propose Web-Armour, a mitigation approach to adversarial reconnaissance and vulnerability scanning of web deployments. The proposed approach relies on injecting scan-impeding delays into infrequently or rarely used portions of a web deployment. Web-Armour has two goals: first, increase the cost for attackers to perform automated reconnaissance and vulnerability scanning; second, introduce minimal to negligible performance overhead for benign users of the deployment. We evaluate Web-Armour on live environments operated by real users and on different controlled (offline) scenarios. We show that Web-Armour can effectively thwart reconnaissance and internet-wide scanning.
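The underlying idea, making delays inversely related to how often legitimate traffic visits a path, can be illustrated with a toy request handler. This is only a sketch of the concept with invented parameters, not Web-Armour's actual delay-injection algorithm:

```python
import time
from collections import Counter

hits = Counter()  # per-path request counts (a real system would persist/age these)

def impeding_delay(path, max_delay=2.0, warm_up=100):
    """Delay shrinks toward zero as a path proves popular with real users."""
    hits[path] += 1
    popularity = min(hits[path], warm_up) / warm_up
    return max_delay * (1.0 - popularity)

def handle_request(path):
    # A scanner sweeping many rarely used paths accumulates large total delay,
    # while benign users on popular pages are barely affected.
    time.sleep(impeding_delay(path))
```

A scanner probing thousands of one-off paths pays close to `max_delay` per probe, which is exactly the asymmetry between attacker cost and benign-user overhead described above.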
Past Defense Notices
Amalu George
Enhancing the Robustness of Bloom Filters by Introducing Dynamicity
When & Where:
Zoom Defense, please email jgrisafe@ku.edu for defense link.
Committee Members:
Sumaiya Shomaji, Chair
Hongyang Sun
Han Wang
Abstract
A Bloom Filter (BF) is a compact and space-efficient data structure that efficiently handles membership queries on infinite streams with numerous unique items. BFs are probabilistic: they trade a small false-positive rate for compactness. When querying for an item's membership, a true response means the item might or might not be present in the stream, but a false response guarantees the item's absence. Bloom filters are widely used in real-world applications such as networking, databases, web applications, email spam filtering, biometric systems, security, cloud computing, and distributed systems due to their space- and time-efficient properties. Bloom filters offer several advantages, particularly in storage compression and time-efficient data lookup. Additionally, the use of hashing ensures data security: if the BF is accessed by an unauthorized entity, no enrolled data can be reversed or traced back to the original content. In summary, BFs are powerful structures for storing data in a storage-efficient manner with low time complexity and high security. A disadvantage of traditional Bloom filters, however, is that they do not support dynamic operations such as adding or deleting elements. Therefore, this project demonstrates a Dynamic Bloom Filter that supports the addition and deletion of items. By integrating dynamic capabilities into standard Bloom filters, their functionality and robustness are enhanced, making them suitable for a wider range of applications. For example, in a perpetual inventory system, inventory records are constantly updated after every inventory-related transaction, such as sales, purchases, or returns. In banking, data changes dynamically over the course of transactions. In the healthcare domain, hospitals can dynamically update and delete patients' medical histories.
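The membership guarantee described above, possible false positives but never false negatives, follows directly from the structure: k hash positions are set on insert, and a query answers true only if all k are set. A minimal sketch of a standard (static) Bloom filter, which is the baseline the project's dynamic variant extends; parameter choices are illustrative:

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)  # one byte per bit, for clarity over compactness

    def _positions(self, item):
        # Double hashing: h1 + i*h2 simulates k independent hash functions
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # odd, so coprime with m=2^10
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        # True may be a false positive; False guarantees absence
        return all(self.bits[p] for p in self._positions(item))
```

Note that `add` only ever sets bits; clearing a bit to delete one item could erase evidence of another, which is precisely the limitation motivating dynamic variants.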
Asadullah Khan
A Triad of Approaches for PCB Component Segmentation and Classification using U-Net, SAM, and Detectron2
When & Where:
Zoom Defense, please email jgrisafe@ku.edu for defense link.
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun
Abstract
The segmentation and classification of Printed Circuit Board (PCB) components offer multifaceted applications, primarily design validation, assembly verification, quality control optimization, and enhanced recycling processes. However, this field of study presents numerous challenges, mainly stemming from the heterogeneity of PCB component morphology and dimensionality, variations in packaging methodologies for functionally equivalent components, and limitations in the availability of image data.
This study proposes a triad of approaches consisting of two segmentation-based and one classification-based architecture for PCB component detection. The first segmentation approach introduces an enhanced U-Net architecture with a custom loss function for improved multi-scale classification and segmentation accuracy. The second segmentation method leverages transfer learning, utilizing the Segment Anything Model (SAM) developed by Meta's FAIR lab for both segmentation and classification. Lastly, Detectron2 with a ResNeXt-101 backbone, enhanced by a Feature Pyramid Network (FPN), Region Proposal Network (RPN), and Region of Interest (ROI) Align, has been proposed for multi-scale detection. The proposed methods are implemented on the FPIC dataset to detect the most commonly appearing components (resistors, capacitors, integrated circuits, LEDs, and buttons) on PCBs. The first method outperforms existing state-of-the-art networks without pre-training, achieving a DICE score of 94.05%, an IoU score of 91.17%, and an accuracy of 94.90%. The second surpasses both the previous state-of-the-art network and U-Net in segmentation, attaining a DICE score of 97.08%, an IoU score of 93.95%, and an accuracy of 96.34%. Finally, the third, being the first transfer-learning-based approach to perform individual component classification on PCBs, achieves an average precision of 89.88%. Thus, the proposed triad of approaches will play a promising role in enhancing the robustness and accuracy of PCB quality assurance techniques.
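The DICE and IoU scores reported above are set-overlap measures between predicted and ground-truth masks. For masks represented as pixel sets they can be computed as follows (the two toy masks are for illustration only):

```python
def dice_iou(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B| for pixel sets."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dice, iou

pred = {(0, 0), (0, 1), (1, 0)}   # predicted component pixels (toy)
truth = {(0, 1), (1, 0), (1, 1)}  # annotated ground-truth pixels (toy)
dice, iou = dice_iou(pred, truth)
```

The two metrics are monotonically related (IoU = Dice / (2 - Dice)), which is why a method's DICE and IoU scores, like those quoted above, always rank methods consistently.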
Zeyan Liu
On the Security of Modern AI: Backdoors, Robustness, and Detectability
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Bo Luo, Chair
Alex Bardas
Fengjun Li
Zijun Yao
John Symons
Abstract
The rapid development of AI has significantly impacted security and privacy, introducing both new cyber-attacks targeting AI models and challenges related to responsible use. As AI models become more widely adopted in real-world applications, attackers exploit adversarially altered samples to manipulate their behaviors and decisions. Simultaneously, the use of generative AI, like ChatGPT, has sparked debates about the integrity of AI-generated content.
In this dissertation, we investigate the security of modern AI systems and the detectability of AI-related threats, focusing on stealthy AI attacks and responsible AI use in academia. First, we reevaluate the stealthiness of 20 state-of-the-art attacks on six benchmark datasets, using 24 image quality metrics and over 30,000 user annotations. Our findings reveal that most attacks introduce noticeable perturbations, failing to remain stealthy. Motivated by this, we propose a novel model-poisoning neural Trojan, LoneNeuron, which minimally modifies the host neural network by adding a single neuron after the first convolution layer. LoneNeuron responds to feature-domain patterns that transform into invisible, sample-specific, and polymorphic pixel-domain watermarks, achieving a 100% attack success rate without compromising main task performance and enhancing stealth and detection resistance. Additionally, we examine the detectability of ChatGPT-generated content in academic writing. Presenting GPABench2, a dataset of over 2.8 million abstracts across various disciplines, we assess existing detection tools and challenges faced by over 240 evaluators. We also develop CheckGPT, a detection framework consisting of an attentive Bi-LSTM and a representation module, to capture subtle semantic and linguistic patterns in ChatGPT-generated text. Extensive experiments validate CheckGPT’s high applicability, transferability, and robustness.
Abhishek Doodgaon
Photorealistic Synthetic Data Generation for Deep Learning-based Structural Health Monitoring of Concrete Dams
When & Where:
LEEP2, Room 1415A
Committee Members:
Zijun Yao, Chair
Caroline Bennett
Prasad Kulkarni
Remy Lequesne
Abstract
Regular inspections are crucial for identifying and assessing damage in concrete dams, including a wide range of damage states. Manual inspections of dams are often constrained by cost, time, safety, and inaccessibility. Automating dam inspections using artificial intelligence has the potential to improve the efficiency and accuracy of data analysis. Computer vision and deep learning models have proven effective in detecting a variety of damage features using images, but their success relies on the availability of high-quality and diverse training data. This is because supervised learning, a common machine-learning approach for classification problems, uses labeled examples, in which each training data point includes features (damage images) and a corresponding label (pixel annotation). Unfortunately, public datasets of annotated images of concrete dam surfaces are scarce and inconsistent in quality, quantity, and representation.
To address this challenge, we present a novel approach that involves synthesizing a realistic environment using a 3D model of a dam. By overlaying this model with synthetically created photorealistic damage textures, we can render images to generate large and realistic datasets with high-fidelity annotations. Our pipeline uses NX and Blender for 3D model generation and assembly, Substance 3D Designer and Substance Automation Toolkit for texture synthesis and automation, and Unreal Engine 5 for creating a realistic environment and rendering images. The generated synthetic data is then used to train deep learning models in subsequent steps. The proposed approach offers several advantages. First, it allows generation of the large quantities of data essential for training accurate deep learning models. Second, the texture synthesis ensures high-fidelity ground truths (annotations), which are crucial for accurate detections. Lastly, the automation capabilities of the software used in this process provide the flexibility to generate data with varied texture elements, colors, lighting conditions, and image quality, overcoming time constraints. Thus, the proposed approach can improve the automation of dam inspection by improving both the quality and quantity of training data.
Sana Awan
Towards Robust and Privacy-preserving Federated Learning
When & Where:
Zoom Defense, please email jgrisafe@ku.edu for defense link.
Committee Members:
Fengjun Li, Chair
Alex Bardas
Cuncong Zhong
Mei Liu
Haiyang Chao
Abstract
Machine Learning (ML) has revolutionized various fields, from disease prediction to credit risk evaluation, by harnessing abundant data scattered across diverse sources. However, transporting data to a trusted server for centralized ML model training is not only costly but also raises privacy concerns, particularly with legislative standards like HIPAA in place. In response to these challenges, Federated Learning (FL) has emerged as a promising solution. FL involves training a collaborative model across a network of clients, each retaining its own private data. By conducting training locally on the participating clients, this approach eliminates the need to transfer entire training datasets while harnessing their computation capabilities. However, FL introduces unique privacy risks, security concerns, and robustness challenges. Firstly, FL is susceptible to malicious actors who may tamper with local data, manipulate the local training process, or intercept the shared model or gradients to implant backdoors that affect the robustness of the joint model. Secondly, due to the statistical and system heterogeneity within FL, substantial differences exist between the distribution of each local dataset and the global distribution, causing clients’ local objectives to deviate greatly from the global optima, resulting in a drift in local updates. Addressing such vulnerabilities and challenges is crucial before deploying FL systems in critical infrastructures.
In this dissertation, we present a multi-pronged approach to address the privacy, security, and robustness challenges in FL. This involves designing innovative privacy protection mechanisms and robust aggregation schemes to counter attacks during the training process. To address the privacy risk due to model or gradient interception, we present the design of a reliable and accountable blockchain-enabled privacy-preserving federated learning (PPFL) framework which leverages homomorphic encryption to protect individual client updates. The blockchain is adopted to support provenance of model updates during training so that malformed or malicious updates can be identified and traced back to the source.
We studied the challenges in FL due to heterogeneous data distributions and found that existing FL algorithms often suffer from slow and unstable convergence and are vulnerable to poisoning attacks, particularly in extreme non-independent and identically distributed (non-IID) settings. We propose a robust aggregation scheme, named CONTRA, to mitigate data poisoning attacks and provide an accuracy guarantee even under attack. This defense strategy identifies malicious clients by evaluating the cosine similarity of their gradient contributions and subsequently removes them from FL training. Finally, we introduce FL-GMM, an algorithm designed to tackle data heterogeneity while prioritizing privacy. It iteratively constructs a personalized classifier for each client while aligning local and global feature representations. By aligning local distributions with global semantic information, FL-GMM minimizes the impact of data diversity. Moreover, FL-GMM enhances security by transmitting derived model parameters via secure multiparty computation, thereby avoiding the vulnerability to reconstruction attacks observed in other approaches.
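As a simplified illustration of the cosine-similarity screening idea (not the full CONTRA algorithm), one can score each client by how strongly its gradient aligns with other clients' gradients, since colluding poisoners tend to submit near-identical updates; the top-k choice and threshold below are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def filter_updates(updates, k=2, threshold=0.9):
    """Score each client by the mean of its top-k cosine similarities to the
    other clients (near-identical poisoned updates score close to 1),
    drop the high scorers, and average the remaining updates."""
    n = len(updates)
    scores = []
    for i in range(n):
        sims = sorted(
            (cosine(updates[i], updates[j]) for j in range(n) if j != i),
            reverse=True,
        )
        scores.append(sum(sims[:k]) / k)
    kept = [u for u, s in zip(updates, scores) if s < threshold]
    dim = len(updates[0])
    return [sum(u[d] for u in kept) / len(kept) for d in range(dim)], scores
```

With three identical poisoned updates among six clients, the poisoners score 1.0 and are excluded, while diverse honest updates pass the threshold.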
Arin Dutta
Performance Analysis of Distributed Raman Amplification with Dual-Order Forward Pumping
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Rongqing Hui, Chair
Christopher Allen
Morteza Hashemi
Alessandro Salandrino
Hui Zhao
Abstract
As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To sustain higher data rates while maximizing the spectral efficiency of multi-level modulated signals, a higher optical signal-to-noise ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems. Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path and offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. Additionally, DRA provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. Forward (FW) Raman pumping can further improve DRA performance: it is more efficient in improving OSNR because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber.
Dual-order FW pumping helps to reduce the nonlinear effect on the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span. The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal because pump and signal propagate in the same direction. Another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the Kerr-effect-induced nonlinear phase shift of the signal. These factors, including RIN-transfer-induced noise and nonlinear noise, contribute to the degradation of system performance at the receiver in FW DRA systems. As the performance of DRA with backward pumping is well understood and suffers relatively little from RIN transfer, our study focuses on the FW pumping scheme. Our research is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st and the 2nd order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN transfer to signal intensity and phase noise are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd order Raman pump.
Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st order incoherent pump together with a 2nd order coherent pump. Although dual-order FW pumping incurs a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd order pump has a much higher impact than that of the 1st order pump, a more stringent RIN requirement should be placed on the 2nd order pump laser when the dual-order FW pumping scheme is used for DRA in fiber-optic communication systems. The system performance analysis also reveals that higher baud rate systems, like those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.
Babak Badnava
Joint Communication and Computation for Emerging Applications in Next Generation of Wireless Networks
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Victor Frost
Taejoon Kim
Prasad Kulkarni
Shawn Keshmiri
Abstract
Emerging applications in next-generation wireless networks are driving the need for innovative communication and computation systems. Notable examples include augmented and virtual reality (AR/VR), autonomous vehicles, and mobile edge computing, all of which demand significant computational and communication resources at the network edge. These demands place a strain on edge devices, which are often resource-constrained. To make the best use of available communication and computation resources while enhancing user experience, this PhD research is dedicated to developing joint communication and computation solutions for next-generation wireless applications that may operate at high frequencies such as millimeter-wave (mmWave) bands.
In the first thrust of this study, we examine the problem of energy-constrained computation offloading to edge servers in a multi-user multi-channel wireless network. To develop a decentralized offloading policy for each user, we model the problem as a partially observable Markov decision process (POMDP). Leveraging bandit learning methods, we introduce a decentralized task offloading solution, where edge users offload their computation tasks to a nearby edge server using a selected communication channel. The proposed framework aims to meet users' requirements, such as task completion deadlines and computation throughput (i.e., the rate at which computational results are produced).
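As a rough illustration of the bandit-learning idea (a generic UCB1 selector, not the dissertation's algorithm), each (edge server, channel) pair can be treated as one arm, with a hypothetical reward in [0, 1] reflecting deadline satisfaction:

```python
import math, random

class UCBOffloader:
    """UCB1 bandit: each (edge server, channel) pair is one arm; the reward
    is an assumed score in [0, 1] for how well the offloaded task met its
    deadline / throughput target."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for arm in range(len(self.counts)):   # try every arm once first
            if self.counts[arm] == 0:
                return arm
        # Exploit high estimated reward while still exploring rarely used arms.
        return max(
            range(len(self.counts)),
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Each user can run such a learner independently, which is what makes the offloading policy decentralized: no coordination is needed beyond observing one's own task outcomes.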
The second thrust of the study emphasizes user-driven requirements for these resource-intensive applications, specifically the Quality of Experience (QoE) in 2D and 3D video streaming. Given the unique characteristics of mmWave networks, we develop a beam alignment and buffer-predictive multi-user scheduling algorithm for 2D video streaming applications. This scheduling algorithm balances the trade-off between beam alignment overhead and playback buffer levels for optimal resource allocation across users. Next, we extend our investigation and develop a joint rate adaptation and computation distribution algorithm for 3D video streaming in mmWave-based VR systems. Our proposed framework balances the trade-off between communication and computation resource allocation to enhance the users’ QoE. Our numerical results, based on real-world mmWave traces and a 3D video dataset, show promising improvements in video quality, rebuffering time, and the quality variation perceived by users.
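For intuition on the buffer/quality trade-off, a classic buffer-based rate-adaptation rule (in the spirit of buffer-based ABR, not the joint algorithm proposed here) maps playback-buffer occupancy to a rung of the bitrate ladder; the thresholds below are illustrative:

```python
def select_bitrate(buffer_s, bitrates_kbps, low=5.0, high=15.0):
    """Buffer-based rule: lowest rate when the buffer is below `low` seconds
    (rebuffering risk), highest above `high` seconds, linear in between."""
    if buffer_s <= low:
        return bitrates_kbps[0]
    if buffer_s >= high:
        return bitrates_kbps[-1]
    frac = (buffer_s - low) / (high - low)
    return bitrates_kbps[int(frac * (len(bitrates_kbps) - 1))]
```

A joint scheme additionally accounts for beam alignment overhead and, for 3D video, for where rendering computation is placed, which is what the dissertation's algorithms optimize.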
Arman Ghasemi
Task-Oriented Communication and Distributed Control in Smart Grids with Time-Series Forecasting
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Alexandru Bardas
Taejoon Kim
Prasad Kulkarni
Zsolt Talata
Abstract
Smart grids face challenges in maintaining the balance between generation and consumption at the residential and grid scales with the integration of renewable energy resources. Decentralized, dynamic, and distributed control algorithms are necessary for smart grids to function effectively. The inherent variability and uncertainty of renewables, especially wind and solar energy, complicate the deployment of distributed control algorithms in smart grids. In addition, smart grid systems must handle real-time data collected from interconnected devices and sensors while maintaining reliable and secure communication regardless of network failures. To address these challenges, our research models the integration of renewable energy resources into the smart grid and evaluates how predictive analytics can improve distributed control and energy management, while recognizing the limitations of communication channels and networks.
In the first thrust of this research, we develop a model of a smart grid with renewable energy integration and evaluate how forecasting affects distributed control and energy management. In particular, we investigate how contextual weather information and renewable energy time-series forecasting affect smart grid energy management. In addition to modeling the smart grid system and integrating renewable energy resources, we further explore the use of deep learning methods, such as the Long Short-Term Memory (LSTM) and Transformer models, for time-series forecasting. Time-series forecasting techniques are applied within Reinforcement Learning (RL) frameworks to enhance decision-making processes.
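Before any LSTM or Transformer forecaster can be trained, the energy time series must be framed as supervised (input window, target horizon) pairs; a minimal sketch of that framing, with illustrative lookback and horizon values:

```python
def make_windows(series, lookback, horizon):
    """Frame a univariate series as supervised pairs:
    X[i] = series[i : i+lookback] (model input),
    y[i] = series[i+lookback : i+lookback+horizon] (forecast target)."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i : i + lookback])
        y.append(series[i + lookback : i + lookback + horizon])
    return X, y
```

Contextual weather features would be appended to each input window in the same fashion, and the resulting forecasts can then feed the state of an RL-based energy-management agent.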
In the second thrust, we note that data collection and sharing across smart grids must account for the impact of network and communication channel limitations on our forecasting models. As renewable energy sources and advanced sensors are integrated into smart grids, wireless communication channels are flooded with data, requiring a shift from transmitting raw data to processing only useful information to maximize efficiency and reliability. To this end, we develop a task-oriented communication model that integrates data compression and the effects of data-packet queuing, while accounting for communication channel limitations, within a remote time-series forecasting framework. Furthermore, we jointly integrate a data compression technique with the age-of-information metric to enhance both the relevance and the timeliness of the data used in time-series forecasting.
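The age-of-information metric referenced above can be computed exactly from a delivery trace; a small sketch, under the simplifying assumption that age is zero at time zero:

```python
def average_aoi(deliveries, t_end):
    """Time-average age of information over [0, t_end].

    `deliveries` is a list of (generation_time, delivery_time) pairs sorted
    by delivery time; age grows linearly between deliveries and drops to
    (delivery_time - generation_time) when a fresher sample arrives."""
    area, aoi, t = 0.0, 0.0, 0.0
    for gen, dlv in deliveries:
        dt = dlv - t
        area += dt * (2 * aoi + dt) / 2   # trapezoid under the age curve
        aoi = dlv - gen                   # age of the freshest sample
        t = dlv
    dt = t_end - t
    area += dt * (2 * aoi + dt) / 2
    return area / t_end
```

Queuing and compression delays enter through the delivery times: a heavily compressed packet is cheaper to transmit but its contents are older on arrival, which is exactly the relevance-versus-timeliness trade-off the joint design targets.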
Neel Patel
Near-Memory Acceleration of Compressed Far Memory
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Mohammad Alian, Chair
David Johnson
Prasad Kulkarni
Abstract
DRAM constitutes over 50% of server cost and 75% of the embodied carbon footprint of a server. To mitigate DRAM cost, far memory architectures have emerged. They can be separated into two broad categories: software-defined far memory (SFM) and disaggregated far memory (DFM). In this work, we compare the cost of SFM and DFM in terms of their required capital investment, operational expense, and carbon footprint. We show that, for applications whose data sets are compressible and have predictable memory access patterns, it takes several years for a DFM to break even with an equivalent-capacity SFM in terms of cost and sustainability. We then introduce XFM, a near-memory accelerated SFM architecture, which exploits the coldness of data during SFM-initiated swap ins and outs. XFM leverages refresh cycles to seamlessly switch the access control of DRAM between the CPU and near-memory accelerator. XFM parallelizes near-memory accelerator accesses with row refreshes and removes the memory interference caused by SFM swap ins and outs. We modify an open-source far memory implementation to implement a full-stack, user-level XFM. Our experimental results use a combination of an FPGA implementation, simulation, and analytical modeling to show that XFM eliminates memory bandwidth utilization when performing compression and decompression operations with SFMs of capacities up to 1TB. The memory and cache utilization reductions translate to a 5–27% improvement in the combined performance of co-running applications.
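In software-defined far memory, a cold page is only worth swapping into the compressed pool if it actually shrinks; that decision can be illustrated with zlib (XFM performs the actual (de)compression on the near-memory accelerator, and the 0.7 ratio below is an illustrative threshold, not XFM's policy):

```python
import zlib

PAGE_SIZE = 4096

def compressible(page, max_ratio=0.7):
    """Return (worth_compressing, compressed_size): a page goes to the
    compressed pool only if it shrinks below `max_ratio` of its raw size."""
    out = zlib.compress(page, level=1)    # fast setting, as in swap paths
    return len(out) <= max_ratio * len(page), len(out)

# A zero-filled cold page compresses almost entirely away.
ok_cold, sz_cold = compressible(bytes(PAGE_SIZE))
```

Pages that fail the test (e.g., already-compressed or encrypted data) are better left uncompressed, since compressing them would burn bandwidth for no capacity gain; XFM's contribution is moving that bandwidth cost off the CPU's memory path entirely.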
Dang Qua Nguyen
Hybrid Precoding Optimization and Private Federated Learning for Future Wireless Systems
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Taejoon Kim, Chair
Morteza Hashemi
Erik Perrins
Zijun Yao
KC Kong
Abstract
This PhD research addresses two challenges in future wireless systems: hybrid precoder design for sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and private federated learning (FL) over wireless channels. The first part of the research introduces a novel hybrid precoding framework that combines true-time delay (TTD) and phase shifter (PS) precoders to counteract the beam squint effect, a significant challenge in sub-THz massive MIMO systems that leads to considerable loss in array gain. Our research presents a novel joint optimization framework for the TTD and PS precoder design, incorporating realistic time delay constraints for each TTD device. We first derive a lower bound on the achievable rate of the system and show that, in the asymptotic regime, the optimal analog precoder that fully compensates for the beam squint is equivalent to the one that maximizes this lower bound. Unlike previous methods, our framework does not rely on the unbounded time delay assumption and optimizes the TTD and PS values jointly to cope with the practical limitations. Furthermore, we determine the minimum number of TTD devices needed to reach a target array gain using our proposed approach. Simulations validate that the proposed approach enhances performance, preserves array gain, and is computationally efficient. In the second part, the research devises a differentially private FL algorithm that employs time-varying noise perturbation and optimizes transmit power to counteract privacy risks, particularly those stemming from engineering-inversion attacks. This method harnesses inherent wireless channel noise to strike a balance between privacy protection and learning utility. By strategically designing noise perturbation and power control, our approach not only safeguards user privacy but also upholds the quality of the learned FL model.
Additionally, the number of FL iterations is optimized by minimizing the upper bound on the learning error. We conduct simulations to showcase the effectiveness of our approach in terms of differential privacy guarantees and learning utility.
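For reference, the baseline clip-and-noise Gaussian mechanism that wireless DP-FL schemes build on can be sketched as follows (the dissertation instead harnesses inherent channel noise and transmit-power control; `clip` and `sigma` below are illustrative values):

```python
import math, random

def privatize_update(update, clip=1.0, sigma=1.2, rng=random):
    """Gaussian mechanism: clip the update to L2 norm `clip`, then add
    zero-mean Gaussian noise with standard deviation sigma * clip to each
    coordinate before transmission."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    return [x + rng.gauss(0.0, sigma * clip) for x in clipped]
```

Exploiting channel noise as part of the perturbation lets some of this artificial noise be removed, which is the privacy/utility balance the power-control design optimizes.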