Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Elizabeth Wyss
A New Frontier for Software Security: Diving Deep into npm
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker
Abstract
Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week.
However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.
This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
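As an illustration of the kind of tooling facet (ii) involves: npm automatically runs lifecycle scripts named preinstall, install, and postinstall when a package is installed. The following is a minimal sketch, not the dissertation's actual tooling, of how installed packages could be scanned for such install-time hooks; the node_modules path and output format are assumptions for illustration.

```python
import json
from pathlib import Path

# Lifecycle hooks that npm runs automatically when a package is installed.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def find_install_scripts(node_modules: Path):
    """Flag installed packages whose manifests declare install-time scripts."""
    flagged = []
    for manifest in node_modules.rglob("package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (ValueError, OSError):
            continue                      # skip unreadable or malformed manifests
        scripts = data.get("scripts")
        if not isinstance(scripts, dict):
            continue
        hooks = INSTALL_HOOKS & scripts.keys()
        if hooks:
            flagged.append((manifest.parent.name, {h: scripts[h] for h in hooks}))
    return flagged

if __name__ == "__main__":
    for name, hooks in find_install_scripts(Path("node_modules")):
        print(f"{name}: {hooks}")
```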
Audrey Mockenhaupt
Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Patrick McCormick, Chair
Shannon Blunt
Jon Owen
Abstract
Pending.
Rich Simeon
Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin
Abstract
The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.
A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.
Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of the sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
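To give a flavor of the Gaussian Process Regression component, the sketch below (illustrative only, not the proposed estimator) fits a scikit-learn GPR to a handful of noisy samples of a synthetic channel response over a normalized delay-Doppler grid; the kernel choice, grid, and noise level are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Learn a smooth (synthetic) channel response over a normalized
# delay-Doppler grid from a few noisy pilot observations.
rng = np.random.default_rng(0)
grid = rng.uniform(0, 1, size=(40, 2))    # (delay, Doppler) sample locations
true_gain = np.exp(-8 * ((grid[:, 0] - 0.3) ** 2 + (grid[:, 1] - 0.6) ** 2))
obs = true_gain + 0.05 * rng.standard_normal(40)   # noisy pilot measurements

# RBF kernel (length scale assumed) plus a white-noise term for pilot noise.
gpr = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-3))
gpr.fit(grid, obs)

query = np.array([[0.3, 0.6], [0.9, 0.1]])
mean, std = gpr.predict(query, return_std=True)    # estimates with uncertainty
print(mean.round(3), std.round(3))
```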
Soumya Baddham
Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Hongyang Sun
Abstract
With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.
This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.
The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while recurrent deep learning approaches (LSTM-based models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
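As a concrete illustration of the threshold-adjustment idea, the following hedged sketch (not the project's code) trains a one-vs-rest logistic regression on toy comments for the six labels of the Kaggle Toxic Comment Classification dataset and applies per-label decision thresholds; the toy data and threshold values are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# The six labels of the Kaggle Toxic Comment Classification dataset.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Toy stand-in comments and labels; the actual study trains on the full dataset.
texts = ["you are awful and stupid", "have a nice day",
         "I will hurt you, idiot", "great point, thanks"]
y = np.array([[1, 1, 1, 0, 1, 0],
              [0, 0, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 0, 0]])

vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)

# class_weight="balanced" is one common response to class imbalance.
clf = OneVsRestClassifier(
    LogisticRegression(class_weight="balanced", max_iter=1000))
clf.fit(X, y)

# Per-label thresholds (values assumed; in practice tuned on validation data)
# lower the decision boundary for rare labels to improve minority-class recall.
thresholds = np.array([0.5, 0.3, 0.5, 0.2, 0.4, 0.3])
probs = clf.predict_proba(vec.transform(["you are awful"]))
print(dict(zip(LABELS, (probs[0] >= thresholds).astype(int).tolist())))
```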
Manu Chaudhary
Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan
Abstract
Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow for processing a large amount of information simultaneously. This capability is significant in the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.
In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise-mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
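To illustrate the classical-to-quantum (C2Q) step in isolation, here is a minimal sketch (not the proposed algorithm) that amplitude-encodes a finite-difference sampling of a source function into a 3-qubit state using Qiskit; the function and grid size are arbitrary choices for illustration.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Toy classical-to-quantum (C2Q) step: amplitude-encode a finite-difference
# sampling of f(x) = sin(pi * x) on 8 grid points (function choice assumed).
n = 3                                  # 3 qubits give 2^3 = 8 amplitudes
x = np.linspace(0, 1, 2 ** n)
f = np.sin(np.pi * x)
state = f / np.linalg.norm(f)          # quantum amplitudes must be unit-norm

qc = QuantumCircuit(n)
qc.initialize(state, range(n))         # C2Q encoding of the discretized vector

# Check that the prepared state reproduces the normalized FDM samples.
print(np.allclose(Statevector(qc).data.real, state, atol=1e-8))
```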
Alex Manley
Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian
Abstract
The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.
The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.
The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.
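Since the DSDL specification itself is not reproduced here, the following is a purely hypothetical sketch of what a Python-embedded design-space declaration with constraints might look like; every name and value in it is invented for illustration and is not the actual DSDL syntax.

```python
# Hypothetical sketch of a Python-embedded design-space declaration;
# names and values below are invented, not the actual DSDL.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Param:
    name: str
    values: tuple

design_space = [
    Param("l1d_size_kb", (16, 32, 64)),
    Param("l2_size_kb", (256, 512)),
    Param("rob_entries", (64, 128, 192)),
]

def satisfies(point):
    # Example constraint: keep L2 at least 8x larger than L1D.
    return point["l2_size_kb"] >= 8 * point["l1d_size_kb"]

points = [dict(zip((p.name for p in design_space), combo))
          for combo in product(*(p.values for p in design_space))]
feasible = [pt for pt in points if satisfies(pt)]
print(f"{len(feasible)} of {len(points)} configurations satisfy constraints")
```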
Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.
Past Defense Notices
Matthew Heintzelman
Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata
Abstract
Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions into separate and specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capabilities modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers through a volume-of-interest. As a strategic converse, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar systems that seek to illuminate all space on transmit, while forming separate but simultaneous, directive beams on receive.
Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation. In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally-continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.
Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error-minimization techniques. First, a gradient-based optimization of Space-Frequency Template Error (SFTE) is implemented on a high-fidelity model of a wideband array’s far-field emission. Second, a more efficient optimization is considered, based on SFTE for narrowband arrays. Finally, optimization via alternating projections is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars using pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived, and experimentally validated, that suppresses both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. Several modifications to the demonstrated algorithms are proposed to refine implementation, enhance performance, and reflect real-world application to the degree that numerical simulations allow.
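As a generic illustration of the alternating-projections idea (not the dissertation's SFTE-based algorithms), the sketch below iterates between a desired spectral magnitude template and the constant-modulus constraint typical of radar emissions; the template shape and iteration count are assumptions.

```python
import numpy as np

# Generic alternating-projections sketch: alternate between a desired
# spectral magnitude template and the constant-modulus constraint
# required of a practical radar emission.
rng = np.random.default_rng(1)
N = 256
template = np.zeros(N)
template[96:160] = 1.0                            # desired passband (assumed)

s = np.exp(1j * 2 * np.pi * rng.uniform(size=N))  # random constant-modulus start
for _ in range(200):
    S = np.fft.fft(s)
    S = template * np.exp(1j * np.angle(S))       # impose template magnitude
    s = np.fft.ifft(S)
    s = np.exp(1j * np.angle(s))                  # restore constant modulus

S_mag = np.abs(np.fft.fft(s))
err = np.linalg.norm(S_mag / S_mag.max() - template) / np.sqrt(N)
print(f"normalized template error: {err:.3f}")
```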
Anna Fritz
A Formally Verified Infrastructure for Negotiating Remote Attestation Protocols
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Perry Alexander, Chair
Alex Bardas
Drew Davidson
Fengjun Li
Emily Witt
Abstract
Semantic remote attestation is the process of gathering and appraising evidence to establish trust in a remote system. Remote attestation occurs at the request of an appraiser or relying party and proceeds with a target system executing an attestation protocol that invokes attestation services in a specific order to generate and bundle evidence. An appraiser may then evaluate the generated evidence to establish trust in the target's state. In the current framework, requested measurement operations must be provisioned by a knowledgeable system user, who may fail to consider situational demands that potentially impact the desired measurement operation. To solve this problem, we introduce Attestation Protocol Negotiation: the process of establishing a mutually agreed-upon protocol that satisfies the relying party's desire for comprehensive information and the target's desire for constrained disclosure.
This research explores the formal modeling and verification of negotiation, introducing refinement and selection procedures that enable communicating peers to achieve their goals. First, we explore the formalization of refinement: the process by which a target generates executable protocols. Here we focus on a definition of system specifications through manifests, protocol sufficiency and soundness, policy representation, and the negotiation structure. By using our formal models to represent and verify negotiation's properties, we can statically determine that a provably sound, sufficient, and executable protocol is produced. Next, we present a formalized model for protocol selection, introducing and proving a preorder over Copland remote attestation protocols to facilitate selection of the most adversary-constrained protocol. With this modeling, we prove that selected protocols increase the difficulty of attack for an active adversary. By addressing the target's capability to generate provably executable protocols and the ability to order these protocols, this methodology has the potential to revolutionize the attestation protocol provisioning process.
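To make the selection idea concrete, here is a toy Python model (not the Copland formalization, which is mechanized in a proof assistant) of a preorder over protocols, here ordered by the measurements they perform, and the selection of maximal elements; the protocol names and measurement sets are invented.

```python
# Toy model of selecting protocols that are maximal under a preorder,
# where "p <= q" means q performs at least the measurements p does.
protocols = {
    "p_basic": {"measure_os"},
    "p_vm":    {"measure_os", "measure_vm"},
    "p_full":  {"measure_os", "measure_vm", "measure_app"},
}

def leq(a, b):
    """a <= b iff b performs every measurement a does (a preorder)."""
    return protocols[a] <= protocols[b]

def maximal(candidates):
    """Return protocols not strictly below any other candidate."""
    return [p for p in candidates
            if not any(leq(p, q) and not leq(q, p) for q in candidates)]

print(maximal(list(protocols)))   # -> ['p_full']
```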
Arjun Dhage Ramachandra
Implementing Object Detection for Real-World Applications
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Cuncong Zhong
Abstract
The advent of deep learning has enabled the development of powerful AI models that are being used in fields such as medicine, surveillance monitoring, manufacturing process optimization, robot navigation, chatbots, and much more. These applications are only made possible by the enormous research in the fields of neural networks and deep learning. In this paper, I will discuss a class of neural networks called Convolutional Neural Networks (CNNs) and how they are used for object detection tasks, i.e., detecting and classifying objects in an image. I will also discuss a popular object detection framework called Single Shot MultiBox Detector (SSD) and implement it in my web application project, which allows users to detect objects in images and search for images based on the presence of objects. The main aim of the project is to make object detection easily accessible with a few clicks.
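As a hedged sketch of the inference path such a web application might wrap (not the project's exact code), the snippet below runs a pretrained SSD300-VGG16 model from torchvision on a local image and prints confident detections; the image path and score threshold are assumptions.

```python
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights
from torchvision.io import read_image

# Minimal SSD inference sketch with a pretrained torchvision model;
# a web application would wrap logic like this behind a UI.
weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()

img = read_image("photo.jpg")                    # any local test image (assumed)
batch = [weights.transforms()(img)]              # normalize/resize for SSD

with torch.no_grad():
    detections = model(batch)[0]                 # dict of boxes, labels, scores

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.5:                              # keep confident detections
        print(weights.meta["categories"][label], box.tolist(), float(score))
```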
Kaidong Li
Accurate and Robust Object Detection and Classification Based on Deep Neural Networks
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Haiyang Chao
Abstract
Recent years have seen tremendous developments in the field of computer vision and its extensive applications. The fundamental task, image classification, benefiting from deep convolutional neural networks' (CNNs) extraordinary ability to extract deep semantic information from input data, has become the backbone for many other computer vision tasks, like object detection and segmentation. A modern detector usually performs bounding-box regression and class prediction with a pre-trained classification model as the backbone. This architecture is proven to produce good results; however, improvements can be made upon closer inspection. A detector takes a pre-trained CNN from the classification task and selects the final bounding boxes from multiple proposed regional candidates by a process called non-maximum suppression (NMS), which picks the best candidates by ranking their classification confidence scores. Localization evaluation is absent from the entire process. Another issue is that classification uses one-hot encoding to label the ground truth, resulting in an equal penalty for misclassifications between any two classes without considering the inherent relations between the classes. Finally, the realms of 2D image classification and 3D point cloud classification represent distinct avenues of research, each relying on significantly different architectures. Given the unique characteristics of these data types, it is not feasible to employ models interchangeably between them.
My research aims to address these issues. (1) We propose the first location-aware detection framework for single-shot detectors, which can be integrated into any single-shot detector. It boosts detection performance by calibrating the ranking process in NMS with localization scores. (2) To more effectively back-propagate gradients, we design a super-class guided architecture that consists of a superclass branch (SCB) and a finer class branch (FCB). To further increase effectiveness, the features from SCB, which carry high-level information, are fed to FCB to guide finer class predictions. (3) Recent works have shown that 3D point cloud models are extremely vulnerable to adversarial attacks, which poses a serious threat to many critical applications like autonomous driving and robotic controls. To bridge the domain difference between 3D and 2D classification and to increase the robustness of CNN models on 3D point cloud data, we propose a family of robust structured declarative classifiers for point cloud classification. We experiment with various 3D-to-2D mapping algorithms, bridging the gap between 2D and 3D classification. Furthermore, we empirically validate that the internal constrained optimization mechanism effectively defends against adversarial attacks through implicit gradients.
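To illustrate contribution (1), the following is a simplified NumPy sketch of NMS in which candidates are ranked by the product of classification and localization scores rather than classification confidence alone; this is a schematic rendering of the idea, not the published framework, and all numbers are assumed.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def location_aware_nms(boxes, cls_scores, loc_scores, thresh=0.5):
    """NMS that ranks candidates by classification AND localization quality."""
    order = np.argsort(cls_scores * loc_scores)[::-1]   # combined ranking
    keep = []
    while order.size:
        i, order = order[0], order[1:]
        keep.append(i)
        order = order[iou(boxes[i], boxes[order]) < thresh]
    return [int(i) for i in keep]

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
# The second box's better localization score lets it outrank the first.
print(location_aware_nms(boxes, np.array([0.9, 0.8, 0.7]),
                         np.array([0.6, 0.95, 0.9])))   # -> [1, 2]
```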
Andrew Mertz
Multiple Input Single Output (MISO) Receive Processing Techniques for Linear Frequency Modulated Continuous Wave Frequency Diverse Array (LFMCW-FDA) Transmit Structures
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Patrick McCormick, Chair
Chris Allen
Shannon Blunt
James Stiles
Abstract
This thesis focuses on multiple processing techniques that can be applied to a single receive element co-located with a Frequency Diverse Array (FDA) transmission structure that illuminates a large volume, in order to estimate the scattering characteristics of objects within the illuminated space in the range, Doppler, and spatial dimensions. FDA transmissions consist of a number of evenly spaced transmitting elements, all of which radiate a linear frequency modulated (LFM) waveform. The elements are configured into a Uniform Linear Array (ULA), and the waveform of each element is separated by a frequency spacing across the elements, where the time duration of the chirp is inversely proportional to an integer multiple of the frequency spacing between elements. The complex transmission structure created by this arrangement of multiple transmitting elements can be received and processed by a single receive element. Furthermore, multiple receive processing techniques, each with their own advantages and disadvantages, can be applied to the data received from the single receive element to estimate the range, velocity, and spatial direction of targets in the illuminated volume relative to the co-located transmit array and receive element. Three different receive processing techniques that can be applied to FDA transmissions are explored. Two of these techniques are novel to this thesis: the spatial matched filter processing technique for FDA transmission structures, and stretch processing using virtual array processing for FDA transmissions. Additionally, this thesis introduces a new type of FDA transmission structure referred to as "slow-time" FDA.
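The transmit structure described above can be sketched numerically. The snippet below (an illustrative toy with all parameter values assumed) synthesizes M element signals as a common LFM chirp offset in frequency per element and forms their coherent sum, as a single far-field receive element at broadside would observe.

```python
import numpy as np

# Sketch of the FDA transmit structure described above: M elements, each
# radiating the same LFM chirp offset by a per-element frequency step df,
# with chirp duration T tied to an integer multiple of df (here T = 1/df).
fs, T, df, M = 10e6, 100e-6, 10e3, 8          # sample rate, duration, step, elements
B = 1e6                                        # LFM swept bandwidth (assumed)
t = np.arange(0, T, 1 / fs)

chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # baseband LFM waveform
elements = np.stack([chirp * np.exp(1j * 2 * np.pi * m * df * t)
                     for m in range(M)])       # per-element frequency offsets

# The far-field signal at broadside is (to a phase constant) the coherent sum.
received = elements.sum(axis=0)
print(elements.shape, received.shape)          # (8, 1000) (1000,)
```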
Ragib Shakil Rafi
Nonlinearity Assisted Mie Scattering from Nanoparticles
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Alessandro Salandrino, Chair
Shima Fardad
Morteza Hashemi
Rongqing Hui
Judy Z Wu
Abstract
Scattering by nanoparticles is an exciting branch of physics for controlling and manipulating light. More specifically, there have been fascinating developments regarding light scattering by sub-wavelength particles, including high-index dielectric and metal particles, for their applications in optical resonance phenomena, detecting the fluorescence of molecules, enhancing Raman scattering, transferring energy to higher-order modes, sensing, and photodetector technologies. This research area has recently gained renewed attention with the study of near-field effects at the nanoscale in advanced regimes of operation, including nonlinear effects and the time-varying parametric modulation of local material properties. When the particle size is comparable to or slightly larger than the incident wavelength, Mie solutions to Maxwell's equations describe these electromagnetic scattering problems. The addition and excitation of nonlinear effects in these high-index sub-wavelength dielectric and plasmonic particles holds promise to improve the existing performance of such systems or provide additional features directed toward novel applications. This dissertation explores Mie scattering from dielectric and plasmonic particles in the presence of nonlinear effects, more specifically second- and third-order nonlinear effects. For numerical analysis, an in-house Rigorous Coupled-Wave Analysis (RCWA) method has been developed in a Matlab environment and validated by designing metasurfaces and comparing them with established results.
For dielectrics, this dissertation presents a numerical study of the linear and nonlinear diffraction and focusing properties of dielectric metasurfaces consisting of silicon microcylinder arrays resting on a silicon substrate. Upon diffraction, such structures lead to the formation of near-field intensity profiles reminiscent of photonic nanojets and propagate similarly. The results indicate that the Kerr nonlinear effect (i.e., the third-order nonlinear effect) enhances light concentration throughout the generated photonic jet, with an increase in intensity of about 20% compared to the linear regime for the power levels considered in this work. The transverse beamwidth remains subwavelength in all cases, and the nonlinear effect reduces the full width.
On the other hand, plasmonic structures give rise to localized surface plasmons, excitations of the conduction electrons within metallic nanostructures. These modes are not propagating but are instead confined to the vicinity of the nanostructure, interacting with the electromagnetic field. They emerge from the scattering between small conductive nanoparticles and an oscillating electromagnetic field. This dissertation introduces a novel mechanism to transfer energy from an excited dipolar mode to such higher-order subradiant localized modes. Recent advancements in time-varying structures, which help relax photon energy conservation constraints, and a newly proposed plasmonic parametric resonance pave the way for this work. With the help of the second-order nonlinear wave-mixing process and parametric modulation of the dielectric permittivity in a medium surrounding the metal particles, we introduce a way to accomplish the otherwise nearly impossible task of selectively coupling energy into specific high-order modes of a nanostructure. This work further shows that the oscillating mode amplitude reaches a steady state, and the steady state establishes the ideal modulation conditions that enhance the amplitude of the high-order mode.
Ben Liu
Computational Microbiome Analysis: Method Development, Integration and Clinical Applications
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Cuncong Zhong, Chair
Esam El-Araby
Bo Luo
Zijun Yao
Mizuki Azuma
Abstract
Metagenomics is the study of microbial genomes from one common environment. Metagenomic data is derived directly from all microorganisms present in the environmental samples, including those inaccessible through conventional methods like laboratory cultures. Thus, it offers an unbiased view of microbial communities, enabling researchers to explore not only the taxonomic composition (identifying which microorganisms are present) but also the community’s metabolic functions.
Metagenomic data consists of a huge number of fragmented DNA sequences from diverse microorganisms with differing abundances. These characteristics pose challenges to analysis and impede practical applications. Firstly, the development of an efficient detection tool for a specific target from metagenomic data is confronted by the challenge of daunting data size. Secondly, the accuracy of the detection tool is also challenged by the incompleteness of metagenomic data. Thirdly, because numerous analysis tools are designed for individual detection targets and many detection targets are contained within the data, there is a need for comprehensive and scalable integration of existing resources.
In this dissertation, we conducted computational microbiome analysis at different levels: (1) We first developed an assembly graph-based ncRNA searching tool, named DRAGoM, to improve detection quality in metagenomic data. (2) We then developed an automatic detection model, named SNAIL, to automatically detect names of bioinformatic resources from biomedical literature for comprehensive and scalable organization of resources. We also developed a method to automatically annotate sentences for training SNAIL, which not only benefits the performance of SNAIL but also allows it to be trained on both manual and machine-annotated data, thus minimizing the need for extensive manual data labeling efforts. (3) We applied different analysis tools to metagenomic datasets from a series of clinical studies and developed models to predict therapeutic benefit from immunotherapy in non-small-cell lung cancer patients using human gut microbiome signatures.
Amin Shojaei
Exploring Cooperative and Robust Multi-Agent Reinforcement Learning in Networked Cyber-Physical Systems: Applications in Smart Grids
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Alex Bardas
Taejoon Kim
Prasad Kulkarni
Shawn Keshmiri
Abstract
Significant advances in information and networking technologies have transformed Cyber-Physical Systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is smart grid networks, which include distributed energy resources (DERs), renewable generation, and the widespread adoption of Electric Vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant sources of demand and user-behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of different uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.
As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. Within this context, we first examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state. Secondly, we investigate the challenges associated with distributed MARL techniques, with a special focus on centralized training with decentralized execution (CTDE) methods. Throughout this research, we highlight the significance of cooperation in MARL for achieving autonomous control in smart grid systems and other cyber-physical domains. Thirdly, we propose a novel robust MARL framework using a hierarchical structure. We perform an extensive analysis and evaluation of our proposed hierarchical MARL model for large-scale EV networks, thereby addressing the scalability and robustness challenges that arise as the number of agents within an NCPS increases.
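For readers unfamiliar with CTDE, the following structural sketch in PyTorch (illustrative only, not the proposed hierarchical framework) shows the key asymmetry: each actor acts on its local observation, while a centralized critic, used only during training, consumes the joint observations and actions; all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

# Structural sketch of centralized training with decentralized execution
# (CTDE): actors see only local observations; the critic, used during
# training only, sees the joint observation-action vector.
N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 2

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):                    # local observation only
        return self.net(obs)

class CentralCritic(nn.Module):
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, all_obs, all_acts):      # joint information at training
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

obs = torch.randn(N_AGENTS, OBS_DIM)
acts = torch.stack([a(o) for a, o in zip(actors, obs)])   # decentralized acting
q = critic(obs.flatten(), acts.flatten())                 # centralized value
print(acts.shape, q.shape)
```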
Ahmet Soyyigit
Anytime Computing Techniques for Lidar-Based Perception in Cyber-Physical Systems
When & Where:
Nichols Hall, Room 317 (Richard K. Moore Conference Room)
Committee Members:
Heechul Yun, Chair
Michael Branicky
Prasad Kulkarni
Hongyang Sun
Shawn Keshmiri
Abstract
The pursuit of autonomy in cyber-physical systems (CPS) presents a challenging task of real-time interaction with the physical world, prompting extensive research in this domain. Recent advancements in artificial intelligence (AI), particularly the introduction of deep neural networks (DNNs), have significantly enhanced CPS autonomy, notably boosting perception capabilities.
CPS perception aims to discern, classify, and track objects of interest in the operational environment, a task that is considerably challenging for computers in three-dimensional (3D) space. For this object detection task, leveraging lidar sensors and processing their readings with DNNs has become popular due to its excellent performance.
However, in systems like self-driving cars and drones, object detection must be both accurate and timely, posing a challenge due to the high computational demand of lidar object detection DNNs. Furthermore, lidar object detection DNNs lack the capability to dynamically reduce their execution time by compromising accuracy (i.e., anytime computing). This adaptability is crucial since deadline constraints can change based on the operational environment and the internal status of the system.
Prior research on anytime computing for object detection DNNs has targeted camera images and is not applicable to lidar-based detection due to architectural differences. Addressing this challenge, this thesis focuses on proposing novel techniques, namely Anytime-Lidar and VALO (Versatile Anytime Lidar Object Detection), which enable lidar-based object detection DNNs to make effective tradeoffs between latency and accuracy. Finally, the thesis aims to integrate the proposed anytime object detection techniques into unmanned aerial vehicles and to introduce a system-level scheduler capable of managing multiple anytime-capable tasks.
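The latency-accuracy tradeoff can be made concrete with a toy deadline-aware selection rule (not the VALO implementation): among detector configurations with profiled latencies, pick the most accurate one that fits the remaining time budget. All numbers below are assumed for illustration.

```python
# Toy deadline-aware scheduler in the spirit of anytime detection: pick the
# most accurate detector configuration whose profiled latency fits the
# time remaining before the deadline.
CONFIGS = [  # (name, profiled latency in ms, relative accuracy) - assumed values
    ("full",    95.0, 1.00),
    ("reduced", 60.0, 0.93),
    ("minimal", 30.0, 0.81),
]

def choose_config(budget_ms: float):
    feasible = [c for c in CONFIGS if c[1] <= budget_ms]
    if not feasible:
        return None                      # no configuration can meet the deadline
    return max(feasible, key=lambda c: c[2])

for budget in (100.0, 65.0, 20.0):
    print(budget, "->", choose_config(budget))
```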