Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Sai Narendra Koganti

Real-time Object Detection for Safer Driving Experience in Urban Environment: Leveraging YOLO Algorithm

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Sumaiya Shomaji, Chair
David Johnson
Prasad Kulkarni


Abstract

This project offers a hands-on investigation of object detection using the YOLO algorithm, Python, and OpenCV. It begins by explaining the YOLO architecture, focusing on the single-stage detection process for bounding box prediction and class probability calculation. The setup phase covers library installation and model configuration, resulting in a smooth implementation procedure. Using OpenCV, the project performs the preprocessing required for object detection in images. The YOLO model is then integrated into the OpenCV framework to perform detection. Post-processing techniques, such as non-maximum suppression, refine the detection results and improve accuracy. Visualizations, such as bounding boxes and labels, help interpret the detected objects. The project concludes by investigating potential extensions and optimizations, such as custom dataset training and deployment on edge devices, opening up new paths for further investigation and development. This project provides developers with the tools and knowledge they need to build effective object detection systems for a wide range of applications, from surveillance and security to autonomous vehicles and augmented reality.
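
As a concrete illustration of the pipeline the abstract describes, here is a minimal sketch using OpenCV's DNN module; the model files, image name, and thresholds are placeholders, not artifacts of this project.

```python
# Minimal YOLO-in-OpenCV sketch; file names and thresholds are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

img = cv2.imread("street.jpg")
h, w = img.shape[:2]

# Preprocess: scale pixels to [0, 1], resize to the network input, swap BGR -> RGB.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(layer_names)

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression drops overlapping duplicate detections.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    x, y, bw, bh = boxes[i]
    cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
cv2.imwrite("street_annotated.jpg", img)
```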


Ruturaj Vaidya

Exploring Binary Analysis Techniques for Security

When & Where:


Zoom Defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Prasad Kulkarni, Chair
Alex Bardas
Drew Davidson
Esam El-Araby
Michael Vitevitch

Abstract

In this dissertation, our goal is to evaluate how the loss of information at the binary level affects the performance of existing compiler-level techniques in terms of both efficiency and effectiveness. Binary analysis is difficult, as most of the semantic and syntactic information available at the source level is lost during the compilation process. If the binary is stripped and/or optimized, the efficacy of binary analysis frameworks suffers further. Moreover, handwritten assembly, obfuscation, excessive indirect calls or jumps, etc. further degrade the accuracy of binary analysis. Challenges to precise binary analysis have implications for the effectiveness, accuracy, and performance of security and program hardening techniques implemented at the binary level. While these challenges are well known, their respective impacts on the effectiveness and performance of program hardening techniques are less well studied.

In this dissertation, we employ classes of defense mechanisms that protect software from the most common software attacks, such as buffer overflows and control-flow attacks, to determine how this loss of program information at the binary level affects the effectiveness and performance of defense mechanisms. Additionally, we tackle the important problem of type recovery from binary executables, which in turn helps bolster software protection mechanisms.


Wai Ming Chan

A Time-Series Generative Adversarial Network Approach for Improved Soil Inorganic Nitrogen Prediction in Agriculture

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Taejoon Kim, Chair
Zijun Yao
Cuncong Zhong


Abstract

Accurate inference from collected agricultural (AG) data is crucial for optimizing crop production. However, existing methods for soil inorganic nitrogen (IN) level approximation fall short in providing accurate estimations when applied to different production sites. To overcome this challenge, we propose a novel Generative Adversarial Network (GAN) model leveraging a Gated Recurrent Unit (GRU)-based deep learning model, called Agricultural-Predictive GAN (A-PGAN), to predict soil IN from sparse time-series AG data. Our A-PGAN outperforms conventional GAN models, e.g., the Wasserstein GAN (WGAN), by augmenting the existing sequences with synthesized data sequences, particularly enhancing generalization performance for out-of-domain data. Additionally, our model demonstrates the flexibility to adapt to varying time intervals and lengths of agronomic features. Simulation results highlight significant improvements in prediction accuracy on both offline simulation data and real AG data. Our proposed model creates new opportunities for the agricultural community to leverage generative deep learning models in synthesizing realistic and out-of-domain data, thereby addressing the challenge of limited AG data and reducing the cost associated with precision agriculture.
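
For readers unfamiliar with the building blocks, the sketch below shows a generic GRU-based generator/discriminator pair of the kind the abstract describes, in PyTorch; the layer sizes and shapes are illustrative assumptions, not the A-PGAN architecture itself.

```python
# Schematic GRU-based sequence GAN components (dimensions are illustrative).
import torch
import torch.nn as nn

class SeqGenerator(nn.Module):
    """Maps a noise sequence to a synthetic time series of agronomic features."""
    def __init__(self, noise_dim=16, hidden_dim=64, feature_dim=8):
        super().__init__()
        self.gru = nn.GRU(noise_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, feature_dim)

    def forward(self, z):                  # z: (batch, seq_len, noise_dim)
        h, _ = self.gru(z)
        return self.proj(h)                # (batch, seq_len, feature_dim)

class SeqDiscriminator(nn.Module):
    """Scores whether a sequence looks real; trained adversarially against the generator."""
    def __init__(self, feature_dim=8, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):                  # x: (batch, seq_len, feature_dim)
        _, h = self.gru(x)                 # h: (1, batch, hidden_dim)
        return self.out(h.squeeze(0))      # one realness logit per sequence

fake = SeqGenerator()(torch.randn(4, 12, 16))   # 4 synthetic 12-step sequences
score = SeqDiscriminator()(fake)
```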


Jianpeng Li

BlackLitNetwork: Advancing Black Literature Discovery Through Modern Web Technologies

When & Where:


LEEP2, Room 1420

Committee Members:

Drew Davidson, Chair
Sumaiya Shomaji
Han Wang


Abstract

Advancements in web technologies have significantly expanded access to diverse cultural narratives, yet black literature remains underrepresented in digital domains. The BlackLitNetwork addresses this oversight by harnessing Elasticsearch, MongoDB, React, Python, CSS, HTML, and Node.js to enhance the discoverability of and engagement with black novels. A major component of the platform is a novel generator built with Elasticsearch, which provides powerful full-text search capabilities, essential for letting users navigate an extensive literary database effectively.
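
The full-text search component could look like the following minimal sketch using the Elasticsearch 8.x Python client; the index name, fields, and query are hypothetical stand-ins for the platform's actual schema.

```python
# Minimal full-text search sketch (index and field names are hypothetical).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="novels",
    query={
        "multi_match": {
            "query": "Harlem Renaissance",
            "fields": ["title^2", "author", "summary"],  # boost title matches
        }
    },
    size=10,
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])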

MongoDB supports the archives platform with a flexible data schema for managing varied literary content efficiently, while Python facilitates robust data cleaning and preprocessing to ensure data integrity and usability. The user interface, created using React, transforms Figma designs from our design team into a dynamic web presence, integrating HTML and CSS to ensure both aesthetic appeal and accessibility.

To further enhance security and manageability, we've implemented a Node.js backend. This layer acts as a middleware, managing and processing requests between our frontend and Elasticsearch. This not only secures our data interactions but also allows for request handling before querying Elasticsearch. This architecture ensures that BlackLitNetwork remains scalable and maintainable.

BlackLitNetwork also features specialized pages for podcasts, briefs, and interactive data visualizations, each designed to highlight historical and contextual elements of black literature. These components foster a deeper understanding, establishing BlackLitNetwork as a tool for scholars. This project not only enriches the field of humanities but also promotes a broader understanding of black literary heritage, making it a resource for researchers, educators, and readers keen on exploring the richness of black literature.


Ethan Grantz

Swarm: A Backend-Agnostic Language for Simple Distributed Programming

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Drew Davidson, Chair
Perry Alexander
Prasad Kulkarni


Abstract

Writing algorithms for a parallel or distributed environment has always been plagued by a variety of challenges, from supervising synchronous reads and writes to managing job queues and avoiding deadlock. While many languages have libraries or language constructs to mitigate these obstacles, very few attempt to remove those challenges entirely, and even fewer do so while divorcing the means of handling those problems from the means of parallelization or distribution. This project introduces a language called Swarm, which attempts to do just that.

Swarm is a first-class parallel/distributed programming language with modular, swappable parallel drivers. It is intended for everything from multi-threaded local computation on a single machine to large scientific computations split across many nodes in a cluster.

Swarm contains next to no explicit syntax for typical parallel logic, containing only keywords for declaring which variables should reside in shared memory and for describing what code should be parallelized. The remainder of the logic (such as waiting for the results of distributed jobs or locking shared accesses) is added in when compiling to a custom bytecode called Swarm Virtual Instructions (SVI). SVI is then executed by a virtual machine whose parallelization logic is abstracted away, such that the same SVI bytecode can be executed in any parallel/distributed environment.


Johnson Umeike

Optimizing gem5 Simulator Performance: Profiling Insights and Userspace Networking Enhancements

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Mohammad Alian, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Full-system simulation of computer systems is critical for capturing the complex interplay between various hardware and software components in future systems. Modeling the network subsystem is indispensable for the fidelity of full-system simulations due to the increasing importance of scale-out systems. Over the last decade, the network software stack has undergone major changes, with userspace networking stacks and data-plane networks rapidly replacing the conventional kernel network stack. Nevertheless, the current state-of-the-art architectural simulator, gem5, still employs kernel networking, which precludes realistic network application scenarios.

First, we perform a comprehensive profiling study to identify and propose architectural optimizations to accelerate a state-of-the-art architectural simulator. We choose gem5 as the representative architectural simulator, run several simulations with various configurations, perform a detailed architectural analysis of the gem5 source code on different server platforms, tune both system and architectural settings for running simulations, and discuss future opportunities for accelerating gem5 as an important application. Our detailed profiling of gem5 reveals that its performance is extremely sensitive to the size of the L1 cache: a RISC-V core with 32KB data and instruction caches improves gem5's simulation speed by 31%∼61% compared with a baseline core with 8KB L1 caches. Second, this work extends gem5's networking capabilities by integrating kernel-bypass/userspace networking based on the DPDK framework, significantly enhancing network throughput and reducing latency. By enabling userspace networking, the simulator achieves a substantial 6.3× improvement in network bandwidth compared to the traditional Linux software stack. Our hardware packet generator model (EtherLoadGen) provides up to a 2.1× speedup in simulation time. Additionally, we develop a suite of networking micro-benchmarks for stress testing the host network stack, allowing for efficient evaluation of gem5's performance. Through detailed experimental analysis, we characterize the performance differences when running the DPDK network stack on both real systems and gem5, highlighting the sensitivity of DPDK performance to various system and microarchitecture parameters.
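
For orientation, the cache-size knob highlighted above is the kind of parameter a gem5 configuration script exposes; the sketch below (Python, gem5's native configuration language) is a generic syscall-emulation example in the style of recent gem5 standard-library releases, not the authors' experimental setup, and API details vary slightly across gem5 versions.

```python
# Generic gem5 standard-library config (syscall-emulation mode). The 32 kB L1 sizes
# mirror the knob discussed above; "riscv-hello" is a stock gem5 test resource.
from gem5.components.boards.simple_board import SimpleBoard
from gem5.components.cachehierarchies.classic.private_l1_cache_hierarchy import (
    PrivateL1CacheHierarchy,
)
from gem5.components.memory import SingleChannelDDR3_1600
from gem5.components.processors.cpu_types import CPUTypes
from gem5.components.processors.simple_processor import SimpleProcessor
from gem5.isas import ISA
from gem5.resources.resource import obtain_resource
from gem5.simulate.simulator import Simulator

board = SimpleBoard(
    clk_freq="2GHz",
    processor=SimpleProcessor(cpu_type=CPUTypes.TIMING, isa=ISA.RISCV, num_cores=1),
    memory=SingleChannelDDR3_1600(size="512MB"),
    cache_hierarchy=PrivateL1CacheHierarchy(l1d_size="32kB", l1i_size="32kB"),
)
board.set_se_binary_workload(obtain_resource("riscv-hello"))
Simulator(board=board).run()
```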


Adam Sarhage

Design of Multi-Section Coupled Line Coupler

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Chris Allen
Glenn Prescott


Abstract

Coupled line couplers are used as directional couplers to enable measurement of forward and reverse power in RF transmitters. These measurements provide valuable feedback to the control loops regulating transmitter power output levels. This project seeks to synthesize, simulate, build, and test a broadband, five-stage coupled line coupler with a 20 dB coupling factor. The coupler synthesis is evaluated against ideal coupler components in Keysight ADS. Fabrication of coupled line couplers is typically accomplished with a stripline topology, but a microstrip topology is also evaluated. Measurements from the fabricated coupled line couplers are then compared to the Keysight ADS EM simulations, and some explanations for the differences are provided. Additionally, measurements from a commercially available broadband directional coupler are provided to show what can be accomplished with the right budget.
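
For a sense of the numbers involved: the standard single-section coupled-line relations (e.g., Pozar, Microwave Engineering) convert a 20 dB coupling factor in a 50 Ω system into even- and odd-mode impedances. A five-section design distributes coupling across its sections, but each section obeys relations of the same form:

```latex
c = 10^{-C_{\mathrm{dB}}/20} = 10^{-20/20} = 0.1, \qquad
Z_{0e} = Z_0\sqrt{\frac{1+c}{1-c}} = 50\sqrt{\frac{1.1}{0.9}} \approx 55.3\,\Omega, \qquad
Z_{0o} = Z_0\sqrt{\frac{1-c}{1+c}} \approx 45.2\,\Omega
```

with the consistency check $Z_0 = \sqrt{Z_{0e} Z_{0o}} = 50\,\Omega$.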


Mohsen Nayebi Kerdabadi

Contrastive Learning of Temporal Distinctiveness for Survival Analysis in Electronic Health Records

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Zijun Yao, Chair
Fengjun Li
Cuncong Zhong


Abstract

Survival analysis plays a crucial role in many healthcare decisions, where risk prediction for the events of interest can support an informative outlook on a patient's medical journey. Given the existence of data censoring, an effective approach to survival analysis is to enforce pairwise temporal concordance between censored and observed data, aiming to utilize the time interval before censoring as a partially observed time-to-event label for supervised learning. Although existing studies have mostly employed ranking methods to pursue an ordering objective, contrastive methods, which learn a discriminative embedding by having data contrast against each other, have not been explored thoroughly for survival analysis. Therefore, we propose a novel Ontology-aware Temporality-based Contrastive Survival (OTCSurv) analysis framework that utilizes survival durations from both censored and observed data to define temporal distinctiveness and construct negative sample pairs with adjustable hardness for contrastive learning. Specifically, we first use an ontological encoder and a sequential self-attention encoder to represent the longitudinal EHR data with rich contexts. Second, we design a temporal contrastive loss to capture varying survival durations in a supervised setting through a hardness-aware negative sampling mechanism. Last, we incorporate the contrastive task into the time-to-event predictive task with multiple loss components. We conduct extensive experiments using a large EHR dataset to forecast the risk of hospitalized patients who are in danger of developing acute kidney injury (AKI), a critical and urgent medical condition. The effectiveness and explainability of the proposed model are validated through comprehensive quantitative and qualitative studies.
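
As a schematic of the general idea (not the exact OTCSurv loss), a duration-aware contrastive objective can weight negatives by how far apart two patients' survival durations are:

```python
# Schematic duration-aware contrastive loss (illustrative only, not the OTCSurv loss).
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(emb, durations, temperature=0.1):
    """emb: (N, d) patient embeddings; durations: (N,) observed or censored times."""
    z = F.normalize(emb, dim=1)
    sim = z @ z.t() / temperature                      # pairwise similarities
    gap = (durations[:, None] - durations[None, :]).abs()
    hardness = gap / (gap.max() + 1e-8)                # larger duration gap => harder negative
    off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = (torch.exp(sim) * (1 - hardness))[off_diag].view(len(z), -1)
    neg = (torch.exp(sim) * hardness)[off_diag].view(len(z), -1)
    # Pull together patients with similar durations; push apart dissimilar ones.
    return -torch.log(pos.sum(1) / (pos.sum(1) + neg.sum(1) + 1e-8)).mean()

loss = temporal_contrastive_loss(torch.randn(32, 64), torch.rand(32) * 365)
```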


Jarrett Zeliff

An Analysis of Bluetooth Mesh Security Features in the Context of Secure Communications

When & Where:


Eaton Hall, Room 1

Committee Members:

Alexandru Bardas, Chair
Drew Davidson
Fengjun Li


Abstract

Significant developments in communication methods to help support at-risk populations have emerged over the last 10 years. We view at-risk populations as groups of people present in environments where the use of infrastructure or electricity, including telecommunications, is censored and/or dangerous. Security features that accompany these communication mechanisms are essential to protect the confidentiality of their user base and the integrity and availability of the communication network.

In this work, we look at the feasibility of using Bluetooth Mesh as a communication network and analyze the security features that are inherent to the protocol. Through this analysis we determine the strengths and weaknesses of Bluetooth Mesh security features when used as a messaging medium for at-risk populations and propose improvements to current shortcomings. Our analysis covers the Bluetooth Mesh networking security fundamentals as described by the Bluetooth SIG: Encryption and Authentication, Separation of Concerns, Area Isolation, Key Refresh, Message Obfuscation, Replay Attack Protection, Trashcan Attack Protection, and Secure Device Provisioning. We look at how each security feature is implemented and determine whether these implementations are sufficient to protect users from various attack vectors. For example, we examined the Blue Mirror attack, a reflection attack during the provisioning process that leads to the compromise of network keys, while also assessing the under-researched key refresh mechanism. We propose a mechanism to address Blue-Mirror-oriented attacks with the goal of creating a more secure provisioning process. To analyze the key refresh mechanism, we built our own full-fledged Bluetooth Mesh network and implemented a key refresh mechanism within it. Through this we assess the throughput, range, and impacts of a key refresh in both lab and field environments, demonstrating the suitability of our solution as a secure communication method.


Daniel Johnson

Probability-Aware Selective Protection for Sparse Iterative Solvers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Hongyang Sun, Chair
Perry Alexander
Zijun Yao


Abstract

With the increasing scale of high-performance computing (HPC) systems, transient bit-flip errors are now more likely than ever, posing a threat to long-running scientific applications. A substantial portion of these applications involve the simulation of partial differential equations (PDEs) modeling physical processes over discretized spatial and temporal domains, with some requiring the solving of sparse linear systems. While these applications are often paired with system-level application-agnostic resilience techniques such as checkpointing and replication, the utilization of these techniques imposes significant overhead. In this work, we present a probability-aware framework that produces low-overhead selective protection schemes for the widely used Preconditioned Conjugate Gradient (PCG) method, whose performance can heavily degrade due to error propagation through the sparse matrix-vector multiplication (SpMV) operation. Through the use of a straightforward mathematical model and an optimized machine learning model, our selective protection schemes incorporate error probability to protect only certain crucial operations. An experimental evaluation using 15 matrices from the SuiteSparse Matrix Collection demonstrates that our protection schemes effectively reduce resilience overheads, often outperforming or matching both baseline and established protection schemes across all error probabilities.
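
For intuition about what "protecting" an SpMV can mean, the sketch below shows a common checksum-style (ABFT) guard in NumPy/SciPy; it illustrates the flavor of a low-overhead check, not the probability-aware scheme proposed in this work.

```python
# Illustrative checksum guard for SpMV (a common ABFT-style check, shown for intuition):
# verify ones^T (A x) against a column-sum vector precomputed once per matrix.
import numpy as np
import scipy.sparse as sp

def protected_spmv(A, x, colsum=None, tol=1e-8):
    """colsum = ones^T A, precomputed once and reused across iterations."""
    if colsum is None:
        colsum = np.asarray(A.sum(axis=0)).ravel()
    y = A @ x
    # A single corrupted entry of y perturbs its sum and trips this test.
    if abs(y.sum() - colsum @ x) > tol * (np.abs(colsum) @ np.abs(x) + 1.0):
        raise RuntimeError("SpMV checksum mismatch: recompute or roll back")
    return y

A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
x = np.random.default_rng(0).standard_normal(1000)
y = protected_spmv(A, x)
```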


Javaria Ahmad

Discovering Privacy Compliance Issues in IoT Apps and Alexa Skills Using AI and Presenting a Mechanism for Enforcing Privacy Compliance

When & Where:


LEEP2, Room 2425

Committee Members:

Bo Luo, Chair
Alex Bardas
Tamzidul Hoque
Fengjun Li
Michael Zhuo Wang

Abstract

The growth of IoT and voice assistant (VA) apps poses increasing concerns about sensitive data leaks. While privacy policies are required to describe how these apps use private user data (i.e., their data practices), problems such as missing, inaccurate, and inconsistent policies have been repeatedly reported. Therefore, it is important to assess the actual data practices of apps and identify the potential gaps between the actual and declared data usage. We find that app stores fall short in regulating compliance between app practices and their declarations, so we use AI to discover the compliance issues in these apps to assist regulators and developers. For VA apps, we also develop a mechanism to enforce compliance using AI. In this work, we conduct a measurement study using our framework, IoTPrivComp, which applies automated analysis of IoT apps' code and privacy policies to identify compliance gaps. We collect 1,489 IoT apps with English privacy policies from the Play Store. IoTPrivComp detects 532 apps with sensitive external data flows, among which 408 (76.7%) apps have undisclosed data leaks. Moreover, 63.4% of the data flows that involve health and wellness data are inconsistent with the practices disclosed in the apps' privacy policies. Next, we focus on the compliance issues in skills. VAs, such as Amazon Alexa, are integrated with numerous devices in homes and cars to process user requests using apps called skills. With their growing popularity, VAs also pose serious privacy concerns. Sensitive user data captured by VAs may be transmitted to third-party skills without users' consent or knowledge of how their data is processed. Privacy policies are the standard medium for informing users of the data practices performed by skills. However, privacy policy compliance verification for such skills is challenging, since the source code is controlled by the skill developers, who can make arbitrary changes to the behavior of a skill without being audited; hence, conventional defense mechanisms using static/dynamic code analysis can be easily escaped. We present Eunomia, the first real-time privacy compliance firewall for Alexa skills. As the skills interact with the users, Eunomia monitors their actions by hijacking and examining the communications from the skills to the users, and validates them against the published privacy policies, which are parsed using a BERT-based policy analysis module. When non-compliant skill behaviors are detected, Eunomia stops the interaction and warns the user. We evaluate Eunomia with 55,898 skills from the Amazon skills store to demonstrate its effectiveness and to provide a privacy compliance landscape of Alexa skills.


Xiangyu Chen

Toward Efficient Deep Learning for Computer Vision Applications

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Fengjun Li
Hongguo Xu

Abstract

Deep learning leads the performance in many areas of computer vision. However, after a decade of research, it tends to require larger datasets and more complex models, leading to heightened resource consumption across all fronts. Regrettably, meeting these requirements proves challenging in many real-life scenarios. First, both data collection and labeling processes entail substantial labor and time investments. This challenge becomes especially pronounced in domains such as medicine, where identifying rare diseases demands meticulous data curation. Secondly, the large size of state-of-the-art models, such as ViT, Stable Diffusion, and ConvNext, hinders their deployment on resource-constrained platforms like mobile devices. Research indicates pervasive redundancies within current neural network structures, exacerbating the issue. Lastly, even with ample datasets and optimized models, the time required for training and inference remains prohibitive in certain contexts. Consequently, there is a burgeoning interest among researchers in exploring avenues for efficient artificial intelligence.

This study delves into various facets of efficiency within computer vision, including data efficiency, model efficiency, and training and inference efficiency. Data efficiency is improved by increasing the information carried by a given image input and reducing the redundancy of the RGB image format. To achieve this, we propose integrating both spatial and frequency representations to finetune the classifier. Additionally, we propose explicitly increasing the input information density in the frequency domain by deleting unimportant frequency channels. For model efficiency, we scrutinize the redundancies present in widely used vision transformers. Our investigation reveals that the large amount of trivial attention in their attention modules drowns out useful non-trivial attention. We propose mitigating the impact of accumulated trivial attention weights. To increase training efficiency, we propose SuperLoRA, a generalization of the LoRA adapter, to fine-tune pretrained models in few iterations with extremely few parameters. Finally, a model simplification pipeline is proposed to further reduce inference time on mobile devices. By addressing these challenges, we aim to advance the practicality and performance of computer vision systems in real-world applications.
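
As background for the training-efficiency thread: plain LoRA freezes the pretrained weight and trains only a low-rank update, and SuperLoRA generalizes this idea. A minimal PyTorch sketch of the plain-LoRA baseline, with illustrative dimensions:

```python
# Minimal LoRA-style adapter (plain LoRA for illustration; not the SuperLoRA variant).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=4, alpha=8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze the pretrained weight
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no-op at start
        self.scale = alpha / rank

    def forward(self, x):
        # W x + scale * B A x : only A and B (a tiny fraction of params) are trained.
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(768, 768), rank=4)
out = layer(torch.randn(2, 768))
```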


Past Defense Notices


Krushi Patel

Image Classification & Segmentation based on Enhanced CNN and Transformer Networks

When & Where:


Zoom Defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Xinmai Yang

Abstract

Convolutional Neural Networks (CNNs) have significantly enhanced performance across various computer vision tasks such as image recognition and segmentation, owing to their robust representation capabilities. To further boost CNN performance, a self-attention module is integrated after each network layer. Transformer-based models, which leverage a multi-head self-attention module as their core component, have recently demonstrated outstanding performance. However, several challenges persist, including the limitation to class-specific channels in CNNs, the constrained receptive field in local transformers, and the incorporation of redundant features and the absence of multi-scale features in U-Net type segmentation architectures.

In our study, we propose new strategies to tackle these challenges. (1) We propose a novel channel-based self-attention module that shifts the focus onto the discriminative and significant channels; the module can be embedded at the end of any backbone network for image classification. (2) To mitigate noise introduced by shallow encoder layers in U-Net architectures, we substitute skip connections with an Adaptive Global Context Module (AGCM). Additionally, we introduce the Semantic Feature Enhancement Module (SFEM) to enhance multi-scale features in polyp segmentation. (3) We introduce a Multi-scaled Overlapped Attention (MOA) mechanism within local transformer-based networks for image classification, facilitating the establishment of long-range dependencies and initiating neighborhood window communication. (4) We propose a pioneering Fuzzy Attention Module designed to prioritize challenging pixels, thereby augmenting polyp segmentation performance. (5) We develop a novel dense attention gate module that aggregates features from all preceding layers to compute attention scores, refining global features in polyp segmentation tasks. Moreover, we design a new multi-layer horizontally extended decoder architecture to enhance local feature refinement in polyp segmentation.
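
For context, channel-reweighting modules of this general kind follow a squeeze-and-excite pattern; the sketch below is a generic SE-style block for illustration, not the dissertation's channel self-attention module.

```python
# Generic SE-style channel attention block (illustrative analogue of channel reweighting).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze: global average pool -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # excite: per-channel reweighting

att = ChannelAttention(256)
y = att(torch.randn(2, 256, 14, 14))
```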


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions into separate, specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capability modeled by Moore's law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers through a volume of interest. As a strategic converse, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar systems that seek to illuminate all space on transmit while forming separate but simultaneous directive beams on receive.

Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation.  In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally-continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error-minimization techniques. First, a gradient-based optimization of Space-Frequency Template Error (SFTE) is implemented on a high-fidelity model of a wideband array's far-field emission. Second, a more efficient optimization is considered, based on SFTE for narrowband arrays. Finally, optimization via alternating projections is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars using pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived, and experimentally validated, that suppresses both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. Several modifications to the demonstrated algorithms are proposed to refine implementation, enhance performance, and reflect real-world application to the degree that numerical simulations can.
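
As a toy illustration of the alternating-projections idea mentioned above (not the dissertation's SFTE formulation), one can alternate between a desired far-field magnitude template and the constant-modulus constraint on the element weights:

```python
# Toy alternating-projections beampattern design (illustrative of the general method).
# M-element half-wavelength ULA, angle grid of size K; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 256
theta = np.linspace(-np.pi / 2, np.pi / 2, K)
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(theta)))  # (M, K) steering matrix

desired = (np.abs(theta) < 0.2).astype(float)     # desired far-field magnitude template
w = np.exp(1j * 2 * np.pi * rng.random(M))        # constant-modulus starting weights
for _ in range(200):
    g = A.conj().T @ w                            # current far-field pattern, (K,)
    g = desired * np.exp(1j * np.angle(g))        # project pattern onto template magnitude
    w = np.linalg.lstsq(A.conj().T, g, rcond=None)[0]  # least-squares weights for that pattern
    w = np.exp(1j * np.angle(w))                  # project weights onto constant modulus
```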


Anna Fritz

A Formally Verified Infrastructure for Negotiating Remote Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Fengjun Li
Emily Witt

Abstract

Semantic remote attestation is the process of gathering and appraising evidence to establish trust in a remote system. Remote attestation occurs at the request of an appraiser or relying party and proceeds with a target system executing an attestation protocol that invokes attestation services in a specific order to generate and bundle evidence. An appraiser may then evaluate the generated evidence to establish trust in the target's state. In the current framework, requested measurement operations must be provisioned by a knowledgeable system user, who may fail to consider situational demands that potentially impact the desired measurement operation. To solve this problem, we introduce Attestation Protocol Negotiation, the process of establishing a mutually agreed-upon protocol that satisfies the relying party's desire for comprehensive information and the target's desire for constrained disclosure.

This research explores the formal modeling and verification of negotiation, introducing refinement and selection procedures that enable communicating peers to achieve their goals. First, we explore the formalization of refinement, the process by which a target generates executable protocols. Here we focus on the definition of system specifications through manifests, protocol sufficiency and soundness, policy representation, and the negotiation structure. By using our formal models to represent and verify negotiation's properties, we can statically determine that a provably sound, sufficient, and executable protocol is produced. Next, we present a formalized model for protocol selection, introducing and proving a preorder over Copland remote attestation protocols to facilitate selection of the most adversary-constrained protocol. With this modeling, we prove that selected protocols increase the difficulty faced by an active adversary. By addressing the target's capability to generate provably executable protocols and the ability to order these protocols, this methodology has the potential to revolutionize the attestation protocol provisioning process.
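
The preorder at the heart of the selection procedure can be rendered schematically as follows (a Lean sketch for intuition only; the dissertation's actual development concerns Copland protocols in a proof assistant):

```lean
-- Schematic rendering of a preorder on attestation protocols: "le p q" reads
-- "q constrains an active adversary at least as much as p", so selection
-- picks a maximal element among the executable candidates.
structure Protocol where
  term : String            -- stand-in for a Copland phrase

class ProtocolPreorder (α : Type) where
  le : α → α → Prop
  le_refl : ∀ a : α, le a a
  le_trans : ∀ a b c : α, le a b → le b c → le a c
```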


Arjun Dhage Ramachandra

Implementing Object Detection for Real-World Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Cuncong Zhong


Abstract

The advent of deep learning has enabled the development of powerful AI models that are being used in fields such as medicine, surveillance monitoring, optimizing manufacturing processes, allowing robots to navigate their environment, chatbots, and much more. These applications are only made possible by the enormous research in the fields of neural networks and deep learning. In this paper, I discuss a branch of neural networks called Convolutional Neural Networks (CNNs) and how they are used for object detection tasks, detecting and classifying objects in an image. I also discuss a popular object detection framework called the Single Shot MultiBox Detector (SSD) and implement it in my web application project, which allows users to detect objects in images and search for images based on the presence of objects. The main aim of the project was to make detection easily accessible with a few clicks.
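
A minimal inference sketch of the kind such a web application wraps, using torchvision's stock pretrained SSD (an assumption for illustration; the project's own model and serving code may differ):

```python
# Minimal SSD inference sketch with a stock pretrained torchvision model.
import torch
from torchvision.io import read_image
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()

img = read_image("upload.jpg")                       # (C, H, W) uint8 tensor
batch = [weights.transforms()(img)]                  # preset preprocessing for this model
with torch.no_grad():
    preds = model(batch)[0]                          # dict: boxes, labels, scores

names = weights.meta["categories"]
for box, label, score in zip(preds["boxes"], preds["labels"], preds["scores"]):
    if score > 0.5:
        print(names[label], [round(v) for v in box.tolist()], round(float(score), 2))
```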


Kaidong Li

Accurate and Robust Object Detection and Classification Based on Deep Neural Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Haiyang Chao

Abstract

Recent years have seen tremendous developments in the field of computer vision and its extensive applications. The fundamental task, image classification, benefiting from deep convolutional neural networks' (CNNs') extraordinary ability to extract deep semantic information from input data, has become the backbone for many other computer vision tasks, like object detection and segmentation. A modern detector usually performs bounding-box regression and class prediction with a pre-trained classification model as the backbone. The architecture is proven to produce good results; however, improvements can be made upon closer inspection. A detector takes a CNN pre-trained on the classification task and selects the final bounding boxes from multiple proposed regional candidates by a process called non-maximum suppression (NMS), which picks the best candidates by ranking their classification confidence scores. Localization evaluation is absent from the entire process. Another issue is that classification uses one-hot encoding to label the ground truth, resulting in an equal penalty for misclassification between any two classes, without considering the inherent relations between the classes. Finally, the realms of 2D image classification and 3D point cloud classification represent distinct avenues of research, each relying on significantly different architectures. Given the unique characteristics of these data types, it is not feasible to employ models interchangeably between them.

My research aims to address these issues. (1) We propose the first location-aware detection framework for single-shot detectors; it can be integrated into any single-shot detector and boosts detection performance by calibrating the ranking process in NMS with localization scores. (2) To back-propagate gradients more effectively, we design a super-class guided architecture that consists of a superclass branch (SCB) and a finer class branch (FCB). To further increase the effectiveness, the features from the SCB, with their high-level information, are fed to the FCB to guide finer class predictions. (3) Recent works have shown that 3D point cloud models are extremely vulnerable to adversarial attacks, which poses a serious threat to many critical applications like autonomous driving and robotic control. To bridge the domain difference between 3D and 2D classification and to increase the robustness of CNN models on 3D point clouds, we propose a family of robust structured declarative classifiers for point cloud classification. We experiment with various 3D-to-2D mapping algorithms, bridging the gap between 2D and 3D classification. Furthermore, we empirically validate that the internal constrained-optimization mechanism effectively defends against adversarial attacks through implicit gradients.
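
Item (1) can be pictured with the following toy NumPy sketch: standard NMS ranks candidates by classification confidence alone, while a location-aware variant folds a localization score into the ranking. This is schematic, not the dissertation's exact formulation.

```python
# Toy location-aware NMS: localization quality enters the candidate ranking.
import numpy as np

def iou(a, b):
    x1, y1 = np.maximum(a[:2], b[:2])
    x2, y2 = np.minimum(a[2:], b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def location_aware_nms(boxes, cls_conf, loc_score, iou_thr=0.5):
    order = list(np.argsort(-(cls_conf * loc_score)))  # ranking uses both scores
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thr]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
print(location_aware_nms(boxes, np.array([0.9, 0.8, 0.7]), np.array([0.5, 0.9, 0.8])))
```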


Andrew Mertz

Multiple Input Single Output (MISO) Receive Processing Techniques for Linear Frequency Modulated Continuous Wave Frequency Diverse Array (LFMCW-FDA) Transmit Structures

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Patrick McCormick, Chair
Chris Allen
Shannon Blunt
James Stiles

Abstract

This thesis focuses on multiple processing techniques that can be applied to a single receive element co-located with a Frequency Diverse Array (FDA) transmission structure that illuminates a large volume, in order to estimate the scattering characteristics of objects within the illuminated space in the range, Doppler, and spatial dimensions. An FDA transmission consists of a number of evenly spaced transmitting elements, all of which radiate a linear frequency modulated (LFM) waveform. The elements are configured as a Uniform Linear Array (ULA), and the waveforms are separated by a fixed frequency spacing across the elements, where the time duration of the chirp is inversely proportional to an integer multiple of the frequency spacing between elements. The complex transmission structure created by this arrangement of multiple transmitting elements can be received and processed by a single receive element. Furthermore, multiple receive processing techniques, each with its own advantages and disadvantages, can be applied to the data received from the single receive element to estimate the range, velocity, and spatial direction of targets in the illuminated volume relative to the co-located transmit array and receive element. Three different receive processing techniques that can be applied to FDA transmissions are explored. Two of these techniques are novel to this thesis: the spatial matched filter processing technique for FDA transmission structures, and stretch processing using virtual array processing for FDA transmissions. Additionally, this thesis introduces a new type of FDA transmission structure referred to as "slow-time" FDA.
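
The FDA transmit structure described above is straightforward to write down; the NumPy sketch below generates the M element waveforms at complex baseband, with illustrative parameter values:

```python
# Complex-baseband sketch of the FDA emission described above: element m radiates an
# LFM chirp offset by m*df, with chirp duration T = 1/(n*df) for integer n.
# All parameter values are illustrative.
import numpy as np

M, df, n = 8, 1e6, 1            # elements, frequency spacing (Hz), integer multiple
T = 1 / (n * df)                # chirp duration tied to the element frequency spacing
bw = 50e6                       # chirp swept bandwidth (Hz)
k = bw / T                      # LFM chirp rate (Hz/s)
fs = 4 * (M * df + bw)          # sample rate comfortably above the occupied band
t = np.arange(0, T, 1 / fs)

# s[m, :] is element m's waveform; their superposition in space yields the FDA pattern.
s = np.stack([
    np.exp(1j * 2 * np.pi * ((m * df) * t + 0.5 * k * t**2))
    for m in range(M)
])
```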


Ragib Shakil Rafi

Nonlinearity Assisted Mie Scattering from Nanoparticles

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Alessandro Salandrino, Chair
Shima Fardad
Morteza Hashemi
Rongqing Hui
Judy Z Wu

Abstract

Scattering by nanoparticles is an exciting branch of physics for controlling and manipulating light. More specifically, there have been fascinating developments regarding light scattering by sub-wavelength particles, including high-index dielectric and metal particles, for their applications in optical resonance phenomena, detecting the fluorescence of molecules, enhancing Raman scattering, transferring energy to higher-order modes, sensing, and photodetector technologies. This research area has recently gained renewed attention with the study of near-field effects at the nanoscale in advanced regimes of operation, including nonlinear effects and the time-varying parametric modulation of local material properties. When the particle size is comparable to or slightly larger than the incident wavelength, Mie solutions to Maxwell's equations describe these electromagnetic scattering problems. The addition and excitation of nonlinear effects in these high-index sub-wavelength dielectric and plasmonic particles holds promise to improve the existing performance of such systems or to provide additional features directed toward novel applications. This dissertation explores Mie scattering from dielectric and plasmonic particles in the presence of nonlinear effects, more specifically second- and third-order nonlinear effects. For numerical analysis, an in-house Rigorous Coupled-Wave Analysis (RCWA) method has been developed in a MATLAB environment and validated by designing metasurfaces and comparing them with established results. For dielectrics, this dissertation presents a numerical study of the linear and nonlinear diffraction and focusing properties of dielectric metasurfaces consisting of silicon microcylinder arrays resting on a silicon substrate. Upon diffraction, such structures lead to the formation of near-field intensity profiles reminiscent of photonic nanojets, and they propagate similarly. The results indicate that the Kerr nonlinear effect, i.e., the third-order nonlinear effect, enhances light concentration throughout the generated photonic jet, with an increase in intensity of about 20% compared to the linear regime for the power levels considered in this work. The transverse beamwidth remains subwavelength in all cases, and the nonlinear effect reduces the full width. Plasmonic structures, on the other hand, give rise to localized surface plasmons, excitations of the conduction electrons within metallic nanostructures. These are not propagating modes but are instead confined to the vicinity of the nanostructure, interacting with the electromagnetic field. They emerge from the scattering between small conductive nanoparticles and an oscillating electromagnetic field. This dissertation introduces a novel mechanism to transfer energy from an excited dipolar mode to such higher-order subradiant localized modes. Recent advancements in time-varying structures, which help relax photon energy conservation constraints, and a newly proposed plasmonic parametric resonance pave the way for this work. With the help of a second-order nonlinear wave-mixing process and parametric modulation of the dielectric permittivity in a medium surrounding the metal particles, we introduce a way to accomplish the otherwise nearly impossible task of selectively coupling energy into specific high-order modes of a nanostructure. This work further shows that the oscillating mode amplitude reaches a steady state, and that the steady state establishes the ideal modulation conditions for enhancing the amplitude of the high-order mode.


Ben Liu

Computational Microbiome Analysis: Method Development, Integration and Clinical Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Cuncong Zhong, Chair
Esam El-Araby
Bo Luo
Zijun Yao
Mizuki Azuma

Abstract

Metagenomics is the study of microbial genomes from one common environment. Metagenomic data is derived directly from all microorganisms present in the environmental samples, including those inaccessible through conventional methods like laboratory culture. It thus offers an unbiased view of microbial communities, enabling researchers to explore not only the taxonomic composition (identifying which microorganisms are present) but also the community's metabolic functions.

Metagenomic data consists of a huge number of fragmented DNA sequences from diverse microorganisms of differing abundances. These characteristics pose challenges to analysis and impede practical applications. Firstly, the development of an efficient detection tool for a specific target in metagenomic data is confronted by daunting data sizes. Secondly, the accuracy of such a tool is also challenged by the incompleteness of metagenomic data. Thirdly, because numerous analysis tools are designed for individual detection targets and many detection targets are contained within the data, there is a need for comprehensive and scalable integration of existing resources.

In this dissertation, we conducted computational microbiome analysis at different levels: (1) We first developed an assembly-graph-based ncRNA search tool, named DRAGoM, to improve detection quality in metagenomic data. (2) We then developed an automatic detection model, named SNAIL, to automatically detect names of bioinformatic resources in the biomedical literature, enabling comprehensive and scalable organization of resources. We also developed a method to automatically annotate sentences for training SNAIL, which not only benefits the performance of SNAIL but also allows it to be trained on both manually and machine-annotated data, thus minimizing the need for extensive manual data labeling. (3) We applied different analysis tools to metagenomic datasets from a series of clinical studies and developed models to predict therapeutic benefit from immunotherapy in non-small-cell lung cancer patients using human gut microbiome signatures.


Amin Shojaei

Exploring Cooperative and Robust Multi-Agent Reinforcement Learning in Networked Cyber-Physical Systems: Applications in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alex Bardas
Taejoon Kim
Prasad Kulkarni
Shawn Keshmiri

Abstract

Significant advances in information and networking technologies have transformed Cyber-Physical Systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is smart grid networks, which include distributed energy resources (DERs), renewable generation, and the widespread adoption of Electric Vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant demand and user-behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of various uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.

As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. Within this context, we first examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state. Secondly, we investigate the challenges associated with distributed MARL techniques, with a special focus on centralized-training distributed-execution (CTDE) methods. Throughout this research, we highlight the significance of cooperation in MARL for achieving autonomous control in smart grid systems and other cyber-physical domains. Thirdly, we propose a novel robust MARL framework using a hierarchical structure. We perform an extensive analysis and evaluation of our proposed hierarchical MARL model for large-scale EV networks, thereby addressing the scalability and robustness challenges that arise as the number of agents within an NCPS increases.


Ahmet Soyyigit

Anytime Computing Techniques for Lidar-Based Perception in Cyber-Physical Systems

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Heechul Yun, Chair
Michael Branicky
Prasad Kulkarni
Hongyang Sun
Shawn Keshmiri

Abstract

The pursuit of autonomy in cyber-physical systems (CPS) presents a challenging task of real-time interaction with the physical world, prompting extensive research in this domain. Recent advancements in artificial intelligence (AI), particularly the introduction of deep neural networks (DNNs), have significantly enhanced CPS autonomy, notably boosting perception capabilities. 

CPS perception aims to discern, classify, and track the objects of interest in the operational environment, a task that is considerably challenging for computers in three-dimensional (3D) space. For this object detection task, leveraging lidar sensors and processing their readings with deep neural networks (DNNs) has become popular due to their excellent performance.

However, in systems like self-driving cars and drones, object detection must be both accurate and timely, which is challenging due to the high computational demand of lidar object detection DNNs. Furthermore, lidar object detection DNNs lack the capability to dynamically reduce their execution time by compromising accuracy (i.e., anytime computing). This adaptability is crucial, since deadline constraints can change based on the operational environment and the internal status of the system.

Prior research aimed at anytime computing for object detection DNNs that use camera images is not applicable to lidar-based detection due to architectural differences. Addressing this challenge, this thesis proposes novel techniques, Anytime-Lidar and VALO (Versatile Anytime Lidar Object Detection), that enable lidar-based object detection DNNs to make effective tradeoffs between latency and accuracy. Finally, the thesis aims to integrate the proposed anytime object detection techniques into unmanned aerial vehicles and introduce a system-level scheduler capable of managing multiple anytime-capable tasks.