Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Masoud Ghazikor

Distributed Optimization and Control Algorithms for UAV Networks in Unlicensed Spectrum Bands

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni


Abstract

UAVs have emerged as a transformative technology for various applications, including emergency services, delivery, and video streaming. Among these, video streaming services in areas with limited physical infrastructure, such as disaster-affected areas, play a crucial role in public safety. UAVs can be rapidly deployed in search and rescue operations to efficiently cover large areas and provide live video feeds, enabling quick decision-making and resource allocation strategies. However, ensuring reliable and robust UAV communication in such scenarios is challenging, particularly in unlicensed spectrum bands, where interference from other nodes is a significant concern. To address this issue, developing distributed transmission control and video streaming mechanisms is essential to maintaining a high quality of service, especially for UAV networks that rely on delay-sensitive data.

In this MSc thesis, we study the problem of distributed transmission control and video streaming optimization for UAVs operating in unlicensed spectrum bands. We develop a cross-layer framework that jointly considers three inter-dependent factors: (i) in-band interference introduced by ground-aerial nodes at the physical layer, (ii) limited-size queues with delay-constrained packet arrival at the MAC layer, and (iii) video encoding rate at the application layer. This framework is designed to optimize the average throughput and PSNR by adjusting fading thresholds and video encoding rates for an integrated aerial-ground network in unlicensed spectrum bands. Using a consensus-based distributed algorithm and coordinate descent optimization, we develop two algorithms: (i) Distributed Transmission Control (DTC) that dynamically adjusts fading thresholds to maximize the average throughput by mitigating trade-offs between low-SINR transmission errors and queue packet losses, and (ii) Joint Distributed Video Transmission and Encoder Control (JDVT-EC) that optimally balances packet loss probabilities and video distortions by jointly adjusting fading thresholds and video encoding rates. Through extensive numerical analysis, we demonstrate the efficacy of the proposed algorithms under various scenarios.


Ganesh Nurukurti

Customer Behavior Analytics and Recommendation System for E-Commerce

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

In the era of digital commerce, personalized recommendations are pivotal for enhancing user experience and boosting engagement. This project presents a comprehensive recommendation system integrated into an e-commerce web application, designed using Flask and powered by collaborative filtering via Singular Value Decomposition (SVD). The system intelligently predicts and personalizes product suggestions for users based on implicit feedback such as purchases, cart additions, and search behavior.


The foundation of the recommendation engine is built on user-item interaction data, derived from the Brazilian e-commerce Olist dataset. Ratings are simulated using weighted scores for purchases and cart additions, reflecting varying degrees of user intent. These interactions are transformed into a user-product matrix and decomposed using SVD, yielding latent user and product features. The model leverages these latent factors to predict user interest in unseen products, enabling precise and scalable recommendation generation.
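
As a rough illustration of the matrix-factorization step described above, the sketch below builds a small user-product matrix from weighted interaction events and factorizes it with truncated SVD. The event weights, column names, and the choice of scipy's svds are assumptions made for illustration, not the project's actual values.

import numpy as np
import pandas as pd
from scipy.sparse.linalg import svds

# One row per interaction; the purchase/cart weights are illustrative assumptions.
events = pd.DataFrame({
    "user_id":    ["u1", "u1", "u2", "u2", "u3"],
    "product_id": ["p1", "p2", "p2", "p3", "p1"],
    "event_type": ["purchase", "cart", "purchase", "cart", "purchase"],
})
weights = {"purchase": 5.0, "cart": 3.0}
events["rating"] = events["event_type"].map(weights)

# Pivot into a user-product matrix and factorize it into latent user/product features.
matrix = events.pivot_table(index="user_id", columns="product_id",
                            values="rating", fill_value=0.0)
U, sigma, Vt = svds(matrix.values, k=2)              # k latent factors
scores = U @ np.diag(sigma) @ Vt                     # predicted interest, incl. unseen products

recs = pd.DataFrame(scores, index=matrix.index, columns=matrix.columns)
print(recs.loc["u3"].sort_values(ascending=False))   # ranked suggestions for user u3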


To further enhance personalization, the system incorporates real-time user activity. Recent search history is stored in an SQLite database and used to prioritize recommendations that align with the user’s current interests. A diversity constraint is also applied to avoid redundancy, limiting the number of recommended products per category.


The web application supports robust user authentication, product exploration by category, cart management, and checkout simulations. It features a visually driven interface with dynamic visualizations for product insights and user interactions. The home page adapts to individual preferences, showing tailored product recommendations and enabling users to explore categories and details.


In summary, this project demonstrates the practical implementation of a hybrid recommendation strategy combining matrix factorization with contextual user behavior. It showcases the importance of latent factor modeling, data preprocessing, and user-centric design in delivering an intelligent retail experience.


Srijanya Chetikaneni

Plant Disease Prediction Using Transfer Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

Timely detection of plant diseases is critical to safeguarding crop yields and ensuring global food security. This project presents a deep learning-based image classification system to identify plant diseases using the publicly available PlantVillage dataset. The core objective was to evaluate and compare the performance of a custom-built Convolutional Neural Network (CNN) with two widely used transfer learning models—EfficientNetB0 and MobileNetV3Small. 

All models were trained on augmented image data resized to 224×224 pixels, with preprocessing tailored to each architecture. The custom CNN used simple normalization, whereas EfficientNetB0 and MobileNetV3Small used their respective preprocessing methods to standardize inputs to their pretrained ImageNet domains. To improve robustness, the training pipeline included data augmentation, class weighting, and early stopping.

Training was conducted using the Adam optimizer and categorical cross-entropy loss over 30 epochs, with performance assessed using accuracy, loss, and training time metrics. The results revealed that transfer learning models significantly outperformed the custom CNN. EfficientNetB0 achieved the highest accuracy, making it ideal for high-precision applications, while MobileNetV3Small offered a favorable balance between speed and accuracy, making it suitable for lightweight, real-time inference on edge devices.
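
A condensed Keras sketch of the training setup described above is shown below (EfficientNetB0 with a new classification head, augmentation, early stopping, Adam, and categorical cross-entropy over 30 epochs). The dataset path, batch size, validation split, and augmentation parameters are assumptions, and the class-weight computation is omitted for brevity.

import tensorflow as tf
from tensorflow.keras import layers, callbacks
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.applications.efficientnet import preprocess_input

train_ds = tf.keras.utils.image_dataset_from_directory(
    "plantvillage", validation_split=0.2, subset="training", seed=42,
    image_size=(224, 224), batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "plantvillage", validation_split=0.2, subset="validation", seed=42,
    image_size=(224, 224), batch_size=32, label_mode="categorical")
num_classes = len(train_ds.class_names)

augment = tf.keras.Sequential([layers.RandomFlip("horizontal"),
                               layers.RandomRotation(0.1),
                               layers.RandomZoom(0.1)])

base = EfficientNetB0(include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                       # train only the new classification head

inputs = layers.Input(shape=(224, 224, 3))
x = preprocess_input(augment(inputs))        # model-specific preprocessing
x = base(x, training=False)
outputs = layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=30,   # class_weight=... added in the project
          callbacks=[callbacks.EarlyStopping(patience=5, restore_best_weights=True)])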

This study validates the effectiveness of transfer learning for plant disease detection tasks and emphasizes the importance of model-specific preprocessing and training strategies. It provides a foundation for deploying intelligent plant health monitoring systems in practical agricultural environments.


Ahmet Soyyigit

Anytime Computing Techniques for LiDAR-based Perception In Cyber-Physical Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Michael Branicky
Prasad Kulkarni
Hongyang Sun
Shawn Keshmiri

Abstract

The pursuit of autonomy in cyber-physical systems (CPS) presents a challenging task of real-time interaction with the physical world, prompting extensive research in this domain. Recent advances in artificial intelligence (AI), particularly the introduction of deep neural networks (DNN), have significantly improved the autonomy of CPS, notably by boosting perception capabilities.

CPS perception aims to discern, classify, and track objects of interest in the operational environment, a task that is considerably challenging for computers in a three-dimensional (3D) space. For this task, the use of LiDAR sensors and processing their readings with DNNs has become popular because of their excellent performance. However, in CPS such as self-driving cars and drones, object detection must be not only accurate but also timely, posing a challenge due to the high computational demand of LiDAR object detection DNNs. Satisfying this demand is particularly challenging for on-board computational platforms due to size, weight, and power constraints. Therefore, a trade-off between accuracy and latency must be made to ensure that both requirements are satisfied. Importantly, the required trade-off depends on the operational environment and should be weighted more toward accuracy or latency dynamically at runtime. However, LiDAR object detection DNNs cannot dynamically reduce their execution time by compromising accuracy (i.e., anytime computing). Prior research aimed at anytime computing for object detection DNNs using camera images is not applicable to LiDAR-based detection due to architectural differences. This thesis addresses these challenges by proposing three novel techniques: Anytime-LiDAR, which enables early termination with reasonable accuracy; VALO (Versatile Anytime LiDAR Object Detection), which implements deadline-aware input data scheduling; and MURAL (Multi-Resolution Anytime Framework for LiDAR Object Detection), which introduces dynamic resolution scaling. Together, these innovations enable LiDAR-based object detection DNNs to make effective trade-offs between latency and accuracy under varying operational conditions, advancing the practical deployment of LiDAR object detection DNNs.


Rahul Purswani

Finetuning Llama on custom data for QA tasks

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Drew Davidson
Prasad Kulkarni


Abstract

Fine-tuning large language models (LLMs) for domain-specific use cases, such as question answering, offers valuable insights into how their performance can be tailored to specialized information needs. In this project, we focused on the University of Kansas (KU) as our target domain. We began by scraping structured and unstructured content from official KU webpages, covering a wide array of student-facing topics including campus resources, academic policies, and support services. From this content, we generated a diverse set of question-answer pairs to form a high-quality training dataset. LLaMA 3.2 was then fine-tuned on this dataset to improve its ability to answer KU-specific queries with greater relevance and accuracy. Our evaluation revealed mixed results—while the fine-tuned model outperformed the base model on most domain-specific questions, the original model still had an edge in handling ambiguous or out-of-scope prompts. These findings highlight the strengths and limitations of domain-specific fine-tuning, and provide practical takeaways for customizing LLMs for real-world QA applications.
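
A minimal sketch of the supervised fine-tuning step is given below, assuming a Hugging Face workflow. The model checkpoint, prompt template, hyperparameters, and the use of LoRA adapters are assumptions made to keep the example small; the abstract only states that LLaMA 3.2 was fine-tuned on the KU question-answer pairs.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

# QA pairs generated from the scraped KU pages (contents are placeholders here).
pairs = [{"question": "Where is the KU Writing Center located?",
          "answer": "(answer text drawn from the scraped pages)"}]

model_id = "meta-llama/Llama-3.2-1B"            # assumed checkpoint size
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token

def to_features(ex):
    text = f"Question: {ex['question']}\nAnswer: {ex['answer']}{tok.eos_token}"
    return tok(text, truncation=True, max_length=512)

ds = Dataset.from_list(pairs).map(to_features, remove_columns=["question", "answer"])

model = AutoModelForCausalLM.from_pretrained(model_id)
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
                                         lora_dropout=0.05,
                                         target_modules=["q_proj", "v_proj"]))

trainer = Trainer(
    model=model, train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    args=TrainingArguments("llama-ku-qa", per_device_train_batch_size=2,
                           num_train_epochs=3, learning_rate=2e-4))
trainer.train()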


Rithvij Pasupuleti

A Machine Learning Framework for Identifying Bioinformatics Tools and Database Names in Scientific Literature

When & Where:


LEEP2, Room 2133

Committee Members:

Cuncong Zhong, Chair
Dongjie Wang
Han Wang
Zijun Yao

Abstract

The absence of a single, comprehensive database or repository cataloging all bioinformatics databases and software creates a significant barrier for researchers aiming to construct computational workflows. These workflows, which often integrate 10–15 specialized tools for tasks such as sequence alignment, variant calling, functional annotation, and data visualization, require researchers to explore diverse scientific literature to identify relevant resources. This process demands substantial expertise to evaluate the suitability of each tool for specific biological analyses, alongside considerable time to understand their applicability, compatibility, and implementation within a cohesive pipeline. The lack of a central, updated source leads to inefficiencies and the risk of using outdated tools, which can affect research quality and reproducibility. Consequently, there is a critical need for an automated, accurate tool to identify bioinformatics databases and software mentions directly from scientific texts, streamlining workflow development and enhancing research productivity. 


The bioNerDS system, a prior effort to address this challenge, uses a rule-based named entity recognition (NER) approach, achieving an F1 score of 63% on an evaluation set of 25 articles from BMC Bioinformatics and PLoS Computational Biology. By integrating the same set of features such as context patterns, word characteristics and dictionary matches into a machine learning model, we developed an approach using an XGBoost classifier. This model, carefully tuned to address the extreme class imbalance inherent in NER tasks through synthetic oversampling and refined via systematic hyperparameter optimization to balance precision and recall, excels at capturing complex linguistic patterns and non-linear relationships, ensuring robust generalization. It achieves an F1 score of 82% on the same evaluation set, significantly surpassing the baseline. By combining rule-based precision with machine learning adaptability, this approach enhances accuracy, reduces ambiguities, and provides a robust tool for large-scale bioinformatics resource identification, facilitating efficient workflow construction. Furthermore, this methodology holds potential for extension to other technological domains, enabling similar resource identification in fields like data science, artificial intelligence, or computational engineering.
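
The pipeline shape described above can be sketched roughly as follows, with random placeholder features standing in for the context-pattern, word-characteristic, and dictionary-match features; the oversampler, grid values, and imbalance ratio are assumptions for illustration.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Placeholder token-level features; in the real system each row encodes context
# patterns, word characteristics, and dictionary matches for one token.
rng = np.random.default_rng(0)
X = rng.random((5000, 40))
y = (rng.random(5000) < 0.02).astype(int)          # ~2% positives, mimicking NER imbalance

# Synthetic oversampling of the rare "tool/database mention" class.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

# Systematic hyperparameter search, scored by F1 to balance precision and recall.
search = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid={"max_depth": [4, 6], "n_estimators": [200, 400],
                "learning_rate": [0.05, 0.1]},
    scoring="f1", cv=3)
search.fit(X_res, y_res)
print("best params:", search.best_params_)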


Vishnu Chowdary Madhavarapu

Automated Weather Classification Using Transfer Learning

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

This project presents an automated weather classification system utilizing transfer learning with pre-trained convolutional neural networks (CNNs) such as VGG19, InceptionV3, and ResNet50. Designed to classify weather conditions—sunny, cloudy, rainy, and sunrise—from images, the system addresses the challenge of limited labeled data by applying data augmentation techniques like zoom, shear, and flip to expand the image dataset. By fine-tuning the final layers of pre-trained models, the solution achieves high accuracy while significantly reducing training time. VGG19 was selected as the baseline model for its simplicity, strong feature extraction capabilities, and widespread applicability in transfer learning scenarios. The system was trained using the Adam optimizer and evaluated on key performance metrics including accuracy, precision, recall, and F1 score. To enhance user accessibility, a Flask-based web interface was developed, allowing real-time image uploads and instant weather classification. The results demonstrate that transfer learning, combined with robust data preprocessing and fine-tuning, can produce a lightweight and accurate weather classification tool. This project contributes toward scalable, real-time weather recognition systems that can integrate into IoT applications, smart agriculture, and environmental monitoring.
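
As one possible shape for the Flask interface mentioned above, the sketch below exposes a single upload-and-classify endpoint. The model filename, route, label order, and VGG19 preprocessing choice are assumptions for illustration.

import numpy as np
from flask import Flask, request, jsonify
from PIL import Image
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.vgg19 import preprocess_input

app = Flask(__name__)
model = load_model("weather_vgg19.h5")                    # fine-tuned model saved from training
labels = ["cloudy", "rainy", "sunny", "sunrise"]          # assumed label order

@app.route("/predict", methods=["POST"])
def predict():
    img_file = request.files["image"]                     # image uploaded by the user
    img = Image.open(img_file.stream).convert("RGB").resize((224, 224))
    x = preprocess_input(np.expand_dims(np.asarray(img, dtype="float32"), axis=0))
    probs = model.predict(x)[0]
    return jsonify({"label": labels[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(debug=True)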


RokunuzJahan Rudro

Using Machine Learning to Classify Driver Behavior from Psychological Features: An Exploratory Study

When & Where:


Eaton Hall, Room 1A

Committee Members:

Sumaiya Shomaji, Chair
David Johnson
Zijun Yao
Alexandra Kondyli

Abstract

Driver inattention and human error are the primary causes of traffic crashes. However, little is known about the relationship between driver aggressiveness and safety. Although several studies that group drivers into different classes based on their driving performance have been conducted, little has been done to explore how psychological traits are linked to driving behavior. This study aims to link different driver profiles, assessed through psychological evaluations, with their likelihood of engaging in risky driving behaviors, as measured in a driving simulation experiment. By incorporating psychological factors into machine learning algorithms, our models were able to successfully relate self-reported decision-making and personality characteristics with actual driving actions. Our results hold promise toward refining existing models of driver behavior by understanding the psychological and behavioral characteristics that influence the risk of crashes.


Md Mashfiq Rizvee

Energy Optimization in Multitask Neural Networks through Layer Sharing

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Han Wang


Abstract

Artificial Intelligence (AI) is being widely used in diverse domains such as industrial automation, traffic control, precision agriculture, and smart cities for major heavy lifting in terms of data analysis and decision making. However, the AI life cycle is a major source of greenhouse gas (GHG) emissions, leading to devastating environmental impact. This is due to expensive neural architecture searches, the training of countless models per day across the world, in-field AI processing of data in billions of edge devices, and advanced security measures across the AI life cycle. Modern applications often involve multitasking, which involves performing a variety of analyses on the same dataset. These tasks are usually executed on resource-limited edge devices, necessitating AI models that exhibit efficiency across various measures such as power consumption, frame rate, and model size. To address these challenges, we introduce a novel neural network architecture that incorporates a layer sharing principle to optimize power usage. We propose Layer Shared Neural Networks, which merge multiple similar AI/NN tasks (with shared layers) into a single AI/NN model with reduced energy requirements and carbon footprint. The experimental findings reveal competitive accuracy and reduced power consumption: the layer shared model significantly reduces power consumption by 50% during training and 59.10% during inference, yielding as much as an 84.64% and 87.10% decrease in CO2 emissions, respectively.
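
The layer-sharing idea can be illustrated with a small PyTorch sketch: two tasks reuse one convolutional trunk (executed once per input) and keep only lightweight task-specific heads. Layer sizes, task count, and class counts here are assumptions, not the architecture evaluated in the thesis.

import torch
import torch.nn as nn

class LayerSharedNet(nn.Module):
    def __init__(self, num_classes_task_a=10, num_classes_task_b=5):
        super().__init__()
        # Shared trunk: computed once per input regardless of how many tasks run.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Lightweight task-specific heads.
        self.head_a = nn.Linear(64, num_classes_task_a)
        self.head_b = nn.Linear(64, num_classes_task_b)

    def forward(self, x):
        features = self.shared(x)            # shared features reused by both tasks
        return self.head_a(features), self.head_b(features)

model = LayerSharedNet()
out_a, out_b = model(torch.randn(8, 3, 64, 64))
loss = nn.functional.cross_entropy(out_a, torch.randint(0, 10, (8,))) + \
       nn.functional.cross_entropy(out_b, torch.randint(0, 5, (8,)))
loss.backward()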



Fairuz Shadmani Shishir

Parameter-Efficient Computational Drug Discovery using Deep Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

The accurate prediction of small molecule binding affinity and toxicity remains a central challenge in drug discovery, with significant implications for reducing development costs, improving candidate prioritization, and enhancing safety profiles. Traditional computational approaches, such as molecular docking and quantitative structure-activity relationship (QSAR) models, often rely on handcrafted features and require extensive domain knowledge, which can limit scalability and generalization to novel chemical scaffolds. Recent advances in language models (LMs), particularly those adapted to chemical representations such as SMILES (Simplified Molecular Input Line Entry System), have opened new avenues for learning data-driven molecular representations that capture complex structural and functional properties. However, achieving both high binding affinity and low toxicity through a resource-efficient computational pipeline is inherently difficult due to the multi-objective nature of the task. This study presents a novel dual-paradigm approach to critical challenges in drug discovery: predicting small molecules with high binding affinity and low cardiotoxicity profiles. For binding affinity prediction, we implement a specialized graph neural network (GNN) architecture that operates directly on molecular structures represented as graphs, where atoms serve as nodes and bonds as edges. This topology-aware approach enables the model to capture complex spatial arrangements and electronic interactions critical for protein-ligand binding. For toxicity prediction, we leverage chemical language models (CLMs) fine-tuned with Low-Rank Adaptation (LoRA), allowing efficient adaptation of large pre-trained models to specialized toxicological endpoints while maintaining the generalized chemical knowledge embedded in the base model. Our hybrid methodology demonstrates significant improvements over existing computational approaches, with the GNN component achieving an average area under the ROC curve (AUROC) of 0.92 on three protein targets and the LoRA-adapted CLM reaching an AUROC of 0.90 with a 60% reduction in parameter usage when predicting cardiotoxicity. This work establishes a powerful computational framework that accelerates drug discovery by enabling the identification of compounds with both high binding affinity and low toxicity, with optimized efficacy and safety profiles.
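
A minimal sketch of the LoRA-adapted toxicity classifier is shown below. The base checkpoint (ChemBERTa is used only as a stand-in for a SMILES language model), the target modules, rank, and label count are assumptions; only the idea of adapting a pretrained chemical language model with low-rank updates comes from the abstract.

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "seyonec/ChemBERTa-zinc-base-v1"          # example SMILES language model
tok = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)

lora = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.1,
                  target_modules=["query", "value"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()                  # only the low-rank adapters train

inputs = tok(["CCO", "c1ccccc1N"], return_tensors="pt", padding=True)
logits = model(**inputs).logits                     # cardiotoxic vs. non-cardiotoxic scores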


Soma Pal

Truths about compiler optimization for state-of-the-art (SOTA) C/C++ compilers

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Esam El-Araby
Drew Davidson
Tamzidul Hoque
Jiang Yunfeng

Abstract

Compiler optimizations are critical for performance and have been extensively studied, especially for C/C++ language compilers. Our overall goal in this thesis is to investigate and compare the properties and behavior of optimization passes across multiple contemporary, state-of-the-art (SOTA) C/C++ compilers to understand if they adopt similar optimization implementation and orchestration strategies. Given the maturity of pre-existing knowledge in the field, it seems conceivable that different compiler teams will adopt consistent optimization passes, pipelines, and application techniques. However, our preliminary results indicate that such an expectation may be misguided. If so, then we will attempt to understand the differences and study and quantify their impact on the performance of generated code.

In our first work, we study and compare the behavior of profile-guided optimizations (PGO) in two popular SOTA C/C++ compilers, GCC and Clang. This study reveals many interesting, and several counter-intuitive, properties about PGOs in C/C++ compilers. The behavior and benefits of PGOs also vary significantly across our selected compilers. We present our observations, along with plans to further explore these inconsistencies in this report. Likewise, we have also measured noticeable differences in the performance delivered by optimizations across our compilers. We propose to explore and understand these differences in this work. We present further details regarding our proposed directions and planned experiments in this report. We hope that this work will show and suggest opportunities for compilers to learn from each other and motivate researchers to find mechanisms to combine the benefits of multiple compilers to deliver higher overall program performance.


Nyamtulla Shaik

AI Vision to Care: A QuadView of Deep Learning for Detecting Harmful Stimming in Autism

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Bo Luo
Dongjie Wang


Abstract

Stimming refers to repetitive actions or behaviors used to regulate sensory input or express feelings. Children with developmental disorders like autism (ASD) frequently perform stimming, including arm flapping, head banging, finger flicking, and spinning. Stimming is exhibited by 80-90% of children with autism, which affects 1 in 36 children in the US. Head banging is one of these self-stimulatory habits that can be harmful. If these behaviors are automatically identified and reported using live video monitoring, parents and other caregivers can better watch over and assist children with ASD.
Classifying these actions is important for recognizing harmful stimming, so this study focuses on developing a deep learning-based approach for stimming action recognition. We implemented and evaluated four models leveraging three deep learning architectures based on Convolutional Neural Networks (CNNs), Autoencoders, and Vision Transformers. For the first time in this area, we use skeletal joints extracted from video sequences. Previous works relied solely on raw RGB videos, which are vulnerable to lighting and environmental changes. This research explores deep learning-based skeletal action recognition and data processing techniques for a small, unstructured dataset that consists of 89 home-recorded videos collected from publicly available sources like YouTube. Our robust data cleaning and pre-processing techniques supported the integration of skeletal data in stimming action recognition, which performed better than the state of the art with a classification accuracy of up to 87%.
In addition to using traditional deep learning models like CNNs for action recognition, this study is among the first to apply data-hungry models like Vision Transformers (ViTs) and Autoencoders for stimming action recognition on the dataset. The results prove that using skeletal data reduces the processing time and significantly improves action recognition, promising a real-time approach for video monitoring applications. This research advances the development of automated systems that can assist caregivers in more efficiently tracking stimming activities.
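
As an illustration of the skeletal-joint extraction step described above, the sketch below pulls per-frame joint coordinates from a video clip. The abstract does not name a pose estimator, so MediaPipe Pose appears here purely as an example, and the output layout (33 joints x 3 coordinates per frame) follows that library rather than the thesis.

import cv2
import numpy as np
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("clip.mp4")                   # one of the home-recorded videos

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        joints = [(lm.x, lm.y, lm.z) for lm in result.pose_landmarks.landmark]
        frames.append(np.array(joints))              # 33 joints x 3 coordinates per frame

cap.release()
sequence = np.stack(frames) if frames else np.empty((0, 33, 3))
print("skeleton sequence shape:", sequence.shape)    # input to the action-recognition model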


Alexander Rodolfo Lara

Creating a Faradaic Efficiency Graph Dataset Using Machine Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Sumaiya Shomaji
Kevin Leonard


Abstract

Just as the Internet of Things leverages machine learning over the vast amounts of data produced by innumerable sensors, the Internet of Catalysis program uses similar strategies with catalysis research. One application of the Internet of Catalysis strategy is treating research papers as datapoints, rich with text, figures, and tables. Prior research within the program focused on machine learning models applied strictly over text.

This project is the first step of the program in creating a machine learning model from the images of catalysis research papers. Specifically, this project creates a dataset of faradaic efficiency graphs using transfer learning from pretrained models. The project utilizes FasterRCNN_ResNet50_FPN, LayoutLMv3SequenceClassification, and computer vision techniques to recognize figures, extract all graphs, then classify the faradaic efficiency graphs.
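
Roughly, the two-stage pipeline reads as follows: a Faster R-CNN detector proposes figure regions on a page image, and a LayoutLMv3 sequence classifier labels each confident crop. The checkpoints shown (COCO detection weights and the base LayoutLMv3 model), the score threshold, and the two-label mapping are assumptions; in the project both stages are adapted to figures and faradaic efficiency graphs.

import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from transformers import LayoutLMv3Processor, LayoutLMv3ForSequenceClassification

page = Image.open("paper_page.png").convert("RGB")

# Stage 1: detect candidate figure regions on the page.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
with torch.no_grad():
    detections = detector([to_tensor(page)])[0]

# Stage 2: classify each confident crop (faradaic efficiency graph vs. other figure).
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")
classifier = LayoutLMv3ForSequenceClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=2).eval()

for box, score in zip(detections["boxes"], detections["scores"]):
    if score < 0.8:
        continue
    crop = page.crop(tuple(int(v) for v in box.tolist()))
    inputs = processor(crop, return_tensors="pt")     # OCR words + layout features from the crop
    pred = classifier(**inputs).logits.argmax(-1).item()
    print(box.tolist(), "faradaic efficiency graph" if pred == 1 else "other figure")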

Downstream of this project, researchers will create a graph reading model to integrate with large language models. This could potentially lead to a multimodal model capable of fully learning from images, tables, and texts of catalysis research papers. Such a model could then guide experimentation on reaction conditions, catalysts, and production.


Amin Shojaei

Scalable and Cooperative Multi-Agent Reinforcement Learning for Networked Cyber-Physical Systems: Applications in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alex Bardas
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri

Abstract

Significant advances in information and networking technologies have transformed Cyber-Physical Systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is smart grid networks, which include distributed energy resources (DERs), renewable generation, and the widespread adoption of Electric Vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant sources of demand and user behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of different uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.

As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. First, we examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state.

Second, we focus on the cooperative behavior of agents in distributed MARL frameworks, particularly under the central training with decentralized execution (CTDE) paradigm. We provide theoretical results and variance analysis for stochastic and deterministic cooperative MARL algorithms, including Multi-Agent Deep Deterministic Policy Gradient (MADDPG), Multi-Agent Proximal Policy Optimization (MAPPO), and Dueling MAPPO. These analyses highlight how coordinated learning can improve system-wide decision-making in uncertain and dynamic environments like EV networks.

Third, we address the scalability challenge in large-scale NCPS by introducing a hierarchical MARL framework based on a cluster-based architecture. This framework organizes agents into coordinated subgroups, improving scalability while preserving local coordination. We conduct a detailed variance analysis of this approach to demonstrate its effectiveness in reducing communication overhead and learning complexity. This analysis establishes a theoretical foundation for scalable and efficient control in large-scale smart grid applications.


Asrith Gudivada

Custom CNN for Object State Classification in Robotic Cooking

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

This project presents the development of a custom Convolutional Neural Network (CNN) designed to classify object states—such as sliced, diced, or peeled—in robotic cooking environments. Recognizing fine-grained object states is critical for context-aware manipulation yet remains a challenging task due to the visual similarity between states and the limited availability of cooking-specific datasets. To address these challenges, we built a lightweight, non-pretrained CNN trained on a curated dataset of 11 object states. Starting with a baseline architecture, we progressively enhanced the model using data augmentation, optimized dropout, batch normalization, Inception modules, and residual connections. These improvements led to a performance increase from ~45% to ~52% test accuracy. The final model demonstrates improved generalization and training stability, showcasing the effectiveness of combining classical and advanced deep learning techniques. This work contributes toward real-time state recognition for autonomous robotic cooking systems, with implications for assistive technologies in domestic and elder care settings.
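
A rough Keras sketch of the kind of enhanced block the abstract describes is shown below: a small inception-style module with a projected residual connection, batch normalization, dropout, and an 11-way softmax over the object states. The input size, filter counts, and layer ordering are assumptions rather than the exact architecture.

from tensorflow.keras import layers, models

def inception_residual_block(x, filters=32):
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    merged = layers.Concatenate()([b1, b3, b5])
    merged = layers.BatchNormalization()(merged)
    # Project the input so the residual addition has matching channel counts.
    shortcut = layers.Conv2D(filters * 3, 1, padding="same")(x)
    return layers.ReLU()(layers.Add()([merged, shortcut]))

inputs = layers.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = inception_residual_block(x)
x = layers.MaxPooling2D()(x)
x = inception_residual_block(x, filters=64)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.4)(x)
outputs = layers.Dense(11, activation="softmax")(x)      # 11 object states

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()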


Tanvir Hossain

Gamified Learning of Computing Hardware Fundamentals Using FPGA-Based Platform

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Tamzidul Hoque, Chair
Esam El-Araby
Sumaiya Shomaji


Abstract

The growing dependence on electronic systems in consumer and mission critical domains requires engineers who understand the inner workings of digital hardware. Yet many students bypass hardware electives, viewing them as abstract, mathematics heavy, and less attractive than software courses. Escalating workforce shortages in the semiconductor industry and the recent global chip‑supply crisis highlight the urgent need for graduates who can bridge hardware knowledge gaps across engineering sectors. In this thesis, I have developed FPGA‑based games, embedded in inclusive curricular modules, which can make hardware concepts accessible while fostering interest, self‑efficacy, and positive outcome expectations in hardware engineering. A design‑based research methodology guided three implementation cycles: a pilot with seven diverse high‑school learners, a multiweek residential summer camp with high‑school students, and a fifteen‑week multidisciplinary elective enrolling early undergraduate engineering students. The learning experiences targeted binary arithmetic, combinational and sequential logic, state‑machine design, and hardware‑software co‑design. Learners also moved through the full digital‑design flow, HDL coding, functional simulation, synthesis, place‑and‑route, and on‑board verification. In addition, learners explored timing analysis, register‑transfer‑level abstractions, and simple processor datapaths to connect low‑level circuits with system‑level behavior. Mixed‑method evidence was gathered through pre‑ and post‑content quizzes, validated surveys of self‑efficacy and outcome expectations, focus groups, classroom observations, and gameplay analytics. Paired‑sample statistics showed reliable gains in hardware‑concept mastery, self‑efficacy, and outcome expectations. This work contributes a replicable framework for translating foundational hardware topics into modular, game‑based learning activities, empirical evidence of their effectiveness across secondary and early‑college contexts, and design principles for educators who seek to integrate equitable, hands‑on hardware experiences into existing curricula.


Hara Madhav Talasila

Radiometric Calibration of Radar Depth Sounder Data Products

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Patrick McCormick
James Stiles
Jilu Li
Leigh Stearns

Abstract

Although the Center for Remote Sensing of Ice Sheets (CReSIS) performs several radar calibration steps to produce Operation IceBridge (OIB) radar depth sounder data products, these datasets are not radiometrically calibrated and the swath array processing uses ideal (rather than measured [calibrated]) steering vectors. Any errors in the steering vectors, which describe the response of the radar as a function of arrival angle, will lead to errors in positioning and backscatter that subsequently affect estimates of basal conditions, ice thickness, and radar attenuation. Scientific applications that estimate physical characteristics of surface and subsurface targets from the backscatter are limited with the current data because it is not absolutely calibrated. Moreover, changes in instrument hardware and processing methods for OIB over the last decade affect the quality of inter-seasonal comparisons. Recent methods which interpret basal conditions and calculate radar attenuation using CReSIS OIB 2D radar depth sounder echograms are forced to use relative scattering power, rather than absolute methods.

As an active target calibration is not possible for past field seasons, a method that uses natural targets will be developed. Unsaturated natural target returns from smooth sea-ice leads or lakes are imaged in many datasets and have known scattering responses. The proposed method forms a system of linear equations with the recorded scattering signatures from these known targets, scattering signatures from crossing flight paths, and the radiometric correction terms. A least squares solution to optimize the radiometric correction terms is calculated, which minimizes the error function representing the mismatch in expected and measured scattering. The new correction terms will be used to correct the remaining mission data. The radar depth sounder data from all OIB campaigns can be reprocessed to produce absolutely calibrated echograms for the Arctic and Antarctic. A software simulator will be developed to study calibration errors and verify the calibration software. The software for processing natural targets and crossovers will be made available in CReSIS’s open-source polar radar software toolbox. The OIB data will be reprocessed with new calibration terms, providing to the data user community a complete set of radiometrically calibrated radar echograms for the CReSIS OIB radar depth sounder for the first time.
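
Stated compactly (with notation chosen here for illustration, not taken from the thesis): if the unknown radiometric correction terms are stacked in a vector x, and each known-target or crossover observation contributes one row of A and one entry of b encoding the mismatch between expected and measured scattering, the least-squares estimate is

\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}} \left\lVert \mathbf{A}\mathbf{x} - \mathbf{b} \right\rVert_2^{2} \;=\; \left(\mathbf{A}^{\mathsf{T}}\mathbf{A}\right)^{-1}\mathbf{A}^{\mathsf{T}}\mathbf{b},

which is the solution minimizing the error function described above.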


Christopher Ord

A Hardware-Agnostic Simultaneous Transmit And Receive (STAR) Architecture for the Transmission of Non-Repeating FMCW Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rachel Jarvis, Chair
Shannon Blunt
Patrick McCormick


Abstract

With the increasing congestion of the usable RF spectrum, it is increasingly necessary for communication and radar systems to share the same frequencies without disturbing one another. To accomplish this, research has focused on designing a class of non-repeating radar waveforms that appear as noise at the receiver of uncooperative systems, but the peak power from high-power pulsed systems can still overwhelm nearby in-band systems. Therefore, to minimize peak power while maximizing the total energy on target, radar systems must transition to operating at a 100% duty cycle, which inherently requires Simultaneous Transmit and Receive (STAR) operation.

One inherent difficulty when operating monostatic STAR systems is the direct path coupling interference that can saturate a number of components in the radar’s receive chain, which makes digital processing methods that remove this interference ineffective. This thesis proposes a method to reduce the self-interference between the radar’s transmitter and receiver prior to the receiver’s sensitive components, increasing the power at which the radar can transmit. By using a combination of tests that manipulate the timing, phase, and magnitude of a secondary waveform that is injected into the radar just before the receiver, upwards of 35.0 dB of self-interference cancellation is achieved for radar waveforms with bandwidths of up to 100 MHz at both S-band and X-band in both simulation and open-air testing.


Fatima Al-Shaikhli

Optical Fiber Measurements: Leveraging Coherent FMCW Techniques

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical fiber technology have proven to be invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical fiber sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceiver systems to develop novel measurement techniques for characterizing optical fiber properties. Specifically, our goal is to leverage a digitally chirped frequency-modulated continuous wave (FMCW) to extract detailed information about optical fiber characteristics, as well as target range. Through this approach, we aim to enable more accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on intensity-modulated electrostriction response, (2) self-homodyne coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (OFDR) system.

Electrostriction in an optical fiber is introduced by interaction between the forward-propagated optical signal and the acoustic standing waves in the radial direction resonating between the center of the core and the cladding circumference of the fiber. The response of electrostriction is dependent on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through the measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of electrostriction-induced propagation loss is anti-symmetrical, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Multi-target detection is demonstrated experimentally, and while only amplitude modulation is required in the LiDAR transmitter, the phase-diversity coherent receiver enables simultaneous detection of both range and velocity for each target, along with the sign of the target’s velocity.
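
For reference, the standard FMCW de-chirping relation (with symbols defined here, not drawn from the dissertation) shows why the required receiver bandwidth is far below the chirping bandwidth: a target at range R with radial velocity v, illuminated by a chirp of bandwidth B over duration T, yields a beat frequency of approximately

f_b \;\approx\; \frac{2R}{c}\,\frac{B}{T} \;\pm\; \frac{2v}{\lambda},

so the photodiode output scales with range and the chirp slope B/T rather than with B itself, while the Doppler term 2v/\lambda, whose sign the phase-diversity receiver resolves, carries the velocity.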

In addition, we demonstrate a polarization-sensitive OFDR system utilizing a commercially available digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately , a chirping bandwidth, and a measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we can measure birefringence vectors along the fiber, providing not only the magnitude of birefringence but also the direction of any external pressure applied to the fiber.


Landen Doty

Assessing the Effects of Source Language on Binary Similarity Tools

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Alex Bardas
Drew Davidson

Abstract

Binary similarity is a fundamental technique that enables software analysis practitioners to compare machine-level code at scale and with fine granularity. With application in software reverse engineering, vulnerability research, malware attribution, and more, state-of-the-art binary similarity tools have undergone thorough research and development to account for variations in compilers, optimizations, machine architectures, and even obfuscations. Yet although these tools aim to compare and detect binary-level code segments generated from similar or identical source code, no preexisting work has investigated the effects of source languages other than C and C++. This thesis addresses this research gap by presenting a thorough investigation of SOTA binary similarity tools when applied to modern compiled languages, Rust and Golang.

To adequately evaluate the capabilities of the available binary similarity approaches, this work includes three distinct tools - BSim, a new component of the Ghidra Software Reverse Engineering Framework, which utilizes a clustering-based similarity mechanism; BinDiff, an industry-recognized tool using graph-based comparisons; and jTrans, a BERT-based model fine-tuned to the binary similarity task. First, to enable this work, we introduce a new dataset of Rust and Golang binaries compiled from leading open-source projects in the Homebrew and Arch Linux repositories. Comprised of 800 binaries and over 1 million functions, this dataset was built to represent a broad range of implementation styles, application diversity, and source language features. Next, the main investigation of this thesis is presented wherein we assess each approach's ability to accurately report semantically equivalent functions compiled from the same source code. Results across the three tools reveal a systematic degradation of precision when comparing binaries produced by Rust and Go rather than those produced by C and C++. Finally, we provide a technical demonstration which highlights the implications of these results and discuss near- and long-term solutions to more adequately equip binary analysis practitioners.


Past Defense Notices


Jarrett Zeliff

An Analysis of Bluetooth Mesh Security Features in the Context of Secure Communications

When & Where:


Eaton Hall, Room 1

Committee Members:

Alexandru Bardas, Chair
Drew Davidson
Fengjun Li


Abstract

Developments in communication methods to help support at-risk populations have increased significantly over the last 10 years. We view at-risk populations as a group of people present in environments where the use of infrastructure or electricity, including telecommunications, is censored and/or dangerous. Security features that accompany these communication mechanisms are essential to protect the confidentiality of its user base and the integrity and availability of the communication network.

In this work, we look at the feasibility of using Bluetooth Mesh as a communication network and analyze the security features that are inherent to the protocol. Through this analysis, we determine the strengths and weaknesses of Bluetooth Mesh security features when used as a messaging medium for at-risk populations and provide improvements to current shortcomings. Our analysis includes looking at the Bluetooth Mesh Networking Security Fundamentals as described by the Bluetooth SIG: Encryption and Authentication, Separation of Concerns, Area Isolation, Key Refresh, Message Obfuscation, Replay Attack Protection, Trashcan Attack Protection, and Secure Device Provisioning. We look at how each security feature is implemented and determine if these implementations are sufficient to protect users from various attack vectors. For example, we examined the Blue Mirror attack, a reflection attack during the provisioning process which leads to the compromise of network keys, while also assessing the under-researched key refresh mechanism. We propose a mechanism to address Blue-Mirror-oriented attacks with the goal of creating a more secure provisioning process. To analyze the key refresh mechanism, we implemented our own full-fledged Bluetooth Mesh network along with a key refresh mechanism. Through this, we assess the throughput, range, and impacts of a key refresh in both lab and field environments, demonstrating the suitability of our solution as a secure communication method.


Daniel Johnson

Probability-Aware Selective Protection for Sparse Iterative Solvers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Hongyang Sun, Chair
Perry Alexander
Zijun Yao


Abstract

With the increasing scale of high-performance computing (HPC) systems, transient bit-flip errors are now more likely than ever, posing a threat to long-running scientific applications. A substantial portion of these applications involve the simulation of partial differential equations (PDEs) modeling physical processes over discretized spatial and temporal domains, with some requiring the solving of sparse linear systems. While these applications are often paired with system-level application-agnostic resilience techniques such as checkpointing and replication, the utilization of these techniques imposes significant overhead. In this work, we present a probability-aware framework that produces low-overhead selective protection schemes for the widely used Preconditioned Conjugate Gradient (PCG) method, whose performance can heavily degrade due to error propagation through the sparse matrix-vector multiplication (SpMV) operation. Through the use of a straightforward mathematical model and an optimized machine learning model, our selective protection schemes incorporate error probability to protect only certain crucial operations. An experimental evaluation using 15 matrices from the SuiteSparse Matrix Collection demonstrates that our protection schemes effectively reduce resilience overheads, often outperforming or matching both baseline and established protection schemes across all error probabilities.


Javaria Ahmad

Discovering Privacy Compliance Issues in IoT Apps and Alexa Skills Using AI and Presenting a Mechanism for Enforcing Privacy Compliance

When & Where:


LEEP2, Room 2425

Committee Members:

Bo Luo, Chair
Alex Bardas
Tamzidul Hoque
Fengjun Li
Michael Zhuo Wang

Abstract

The growth of IoT and voice assistant (VA) apps poses increasing concerns about sensitive data leaks. While privacy policies are required to describe how these apps use private user data (i.e., data practice), problems such as missing, inaccurate, and inconsistent policies have been repeatedly reported. Therefore, it is important to assess the actual data practice in apps and identify the potential gaps between the actual and declared data usage. We find that app stores lack in regulating the compliance between the app practices and their declaration, so we use AI to discover the compliance issues in these apps to assist the regulators and developers. For VA apps, we also develop a mechanism to enforce the compliance using AI. In this work, we conduct a measurement study using our framework called IoTPrivComp, which applies an automated analysis of IoT apps’ code and privacy policies to identify compliance gaps. We collect 1,489 IoT apps with English privacy policies from the Play Store. IoTPrivComp detects 532 apps with sensitive external data flows, among which 408 (76.7%) apps have undisclosed data leaks. Moreover, 63.4% of the data flows that involve health and wellness data are inconsistent with the practices disclosed in the apps’ privacy policies. Next, we focus on the compliance issues in skills. VAs, such as Amazon Alexa, are integrated with numerous devices in homes and cars to process user requests using apps called skills. With their growing popularity, VAs also pose serious privacy concerns. Sensitive user data captured by VAs may be transmitted to third-party skills without users’ consent or knowledge about how their data is processed. Privacy policies are a standard medium to inform the users of the data practices performed by the skills. However, privacy policy compliance verification of such skills is challenging, since the source code is controlled by the skill developers, who can make arbitrary changes to the behaviors of the skill without being audited; hence, conventional defense mechanisms using static/dynamic code analysis can be easily escaped. We present Eunomia, the first real-time privacy compliance firewall for Alexa Skills. As the skills interact with the users, Eunomia monitors their actions by hijacking and examining the communications from the skills to the users, and validates them against the published privacy policies that are parsed using a BERT-based policy analysis module. When non-compliant skill behaviors are detected, Eunomia stops the interaction and warns the user. We evaluate Eunomia with 55,898 skills on Amazon skills store to demonstrate its effectiveness and to provide a privacy compliance landscape of Alexa skills.


Xiangyu Chen

Toward Efficient Deep Learning for Computer Vision Applications

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Fengjun Li
Hongguo Xu

Abstract

Deep learning leads the performance in many areas of computer vision. However, after a decade of research, it tends to require larger datasets and more complex models, leading to heightened resource consumption across all fronts. Regrettably, meeting these requirements proves challenging in many real-life scenarios. First, both data collection and labeling processes entail substantial labor and time investments. This challenge becomes especially pronounced in domains such as medicine, where identifying rare diseases demands meticulous data curation. Secondly, the large size of state-of-the-art models, such as ViT, Stable Diffusion, and ConvNext, hinders their deployment on resource-constrained platforms like mobile devices. Research indicates pervasive redundancies within current neural network structures, exacerbating the issue. Lastly, even with ample datasets and optimized models, the time required for training and inference remains prohibitive in certain contexts. Consequently, there is a burgeoning interest among researchers in exploring avenues for efficient artificial intelligence.

This study delves into various facets of efficiency within computer vision, including data efficiency, model efficiency, and training and inference efficiency. Data efficiency is improved by increasing the information carried by given image inputs and reducing the redundancy of RGB image formats. To achieve this, we propose integrating both spatial and frequency representations to finetune the classifier. Additionally, we propose explicitly increasing the input information density in the frequency domain by deleting unimportant frequency channels. For model efficiency, we scrutinize the redundancies present in widely used vision transformers. Our investigation reveals that trivial attention in their attention modules drowns out useful non-trivial attention because of its sheer volume. We propose mitigating the impact of accumulated trivial attention weights. To increase training efficiency, we propose SuperLoRA, a generalization of LoRA adapters, to fine-tune pretrained models in few iterations with extremely few parameters. Finally, a model simplification pipeline is proposed to further reduce inference time on mobile devices. By addressing these challenges, we aim to advance the practicality and performance of computer vision systems in real-world applications.


Krushi Patel

Image Classification & Segmentation based on Enhanced CNN and Transformer Networks

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for defense link.

Committee Members:

Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Xinmai Yang

Abstract

Convolutional Neural Networks (CNNs) have significantly enhanced performance across various computer vision tasks such as image recognition and segmentation, owing to their robust representation capabilities. To further boost CNN performance, a self-attention module is integrated after each network layer. Transformer-based models, which leverage a multi-head self-attention module as their core component, have recently demonstrated outstanding performance. However, several challenges persist, including the limitation to class-specific channels in CNNs, the constrained receptive field in local transformers, and the incorporation of redundant features and the absence of multi-scale features in U-Net type segmentation architectures.

In our study, we propose new strategies to tackle these challenges. (1) We propose a novel channel-based self-attention module to diversify the focus more on the discriminative and significant channels, and the module can be embedded at the end of any backbone network for image classification. (2) To mitigate noise introduced by shallow encoder layers in U-Net architectures, we substitute skip connections with an Adaptive Global Context Module (AGCM). Additionally, we introduce the Semantic Feature Enhancement Module (SFEM) to enhance multi-scale features in polyp segmentation. (3) We introduce a Multi-scaled Overlapped Attention (MOA) mechanism within local transformer-based networks for image classification, facilitating the establishment of long-range dependencies and initiation of neighborhood window communication. (4) We propose a pioneering Fuzzy Attention Module designed to prioritize challenging pixels, thereby augmenting polyp segmentation performance. (5) We develop a novel dense attention gate module that aggregates features from all preceding layers to compute attention scores, refining global features in polyp segmentation tasks. Moreover, we design a new multi-layer horizontally extended decoder architecture to enhance local feature refinement in polyp segmentation.


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform 3 basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions to separate and specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capabilities modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers through a volume-of-interest. As a strategical converse, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar systems that seek to illuminate all space on transmit, while forming separate but simultaneous, directive beams on receive.

Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation.  In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally-continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency according to classical error-minimization techniques. First, a gradient-based optimization of the Space-Frequency Template Error (SFTE) is implemented on a high-fidelity model of a wideband array's far-field emission. Second, a more efficient optimization, based on the SFTE for narrowband arrays, is considered. Finally, optimization via alternating projections is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars using pulse-agile, quasi-orthogonal waveforms, a pulse-compression model is derived and experimentally validated that suppresses both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. Several modifications to the demonstrated algorithms are proposed to refine implementation, enhance performance, and reflect real-world application to the degree that numerical simulations allow.
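For readers unfamiliar with pulse compression, the baseline matched-filter step that the improved model builds on can be sketched as follows; the chirp parameters, echo delay, and noise level are arbitrary, and this does not implement the thesis's optimized filters that additionally suppress cross-correlation between transmitters.

import numpy as np

# Illustrative LFM (chirp) pulse; parameters are arbitrary for this sketch.
fs, T, B = 20e6, 10e-6, 5e6                  # sample rate, pulse width, bandwidth
t = np.arange(int(fs * T)) / fs
tx = np.exp(1j * np.pi * (B / T) * t**2)     # baseband LFM waveform

# Simulated receive signal: a delayed, attenuated echo plus noise.
delay = 150                                  # echo delay in samples
rx = np.zeros(1024, dtype=complex)
rx[delay:delay + tx.size] += 0.5 * tx
rx += 0.01 * (np.random.randn(rx.size) + 1j * np.random.randn(rx.size))

# Matched filter: convolve with the conjugated, time-reversed waveform.
mf = np.convolve(rx, np.conj(tx[::-1]), mode="valid")
print("compressed peak at sample", int(np.argmax(np.abs(mf))))   # ~ delay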


Anna Fritz

A Formally Verified Infrastructure for Negotiating Remote Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Fengjun Li
Emily Witt

Abstract

Semantic remote attestation is the process of gathering and appraising evidence to establish trust in a remote system. Remote attestation occurs at the request of an appraiser or relying party and proceeds with a target system executing an attestation protocol that invokes attestation services in a specific order to generate and bundle evidence. An appraiser may then evaluate the generated evidence to establish trust in the target's state. In the current framework, requested measurement operations must be provisioned by a knowledgeable system user, who may fail to consider situational demands that potentially impact the desired measurement operation. To solve this problem, we introduce Attestation Protocol Negotiation: the process of establishing a mutually agreed upon protocol that satisfies the relying party's desire for comprehensive information and the target's desire for constrained disclosure.

This research explores the formal modeling and verification of negotiation, introducing refinement and selection procedures that enable communicating peers to achieve their goals. First, we explore the formalization of refinement, the process by which a target generates executable protocols. Here we focus on defining system specifications through manifests, protocol sufficiency and soundness, policy representation, and the negotiation structure. By using our formal models to represent and verify negotiation's properties, we can statically determine that a provably sound, sufficient, and executable protocol is produced. Next, we present a formalized model of protocol selection, introducing and proving a preorder over Copland remote attestation protocols to facilitate selection of the most adversary-constrained protocol. With this modeling, we prove that selected protocols increase the difficulty faced by an active adversary. By addressing the target's capability to generate provably executable protocols and the ability to order these protocols, this methodology has the potential to revolutionize the attestation protocol provisioning process.
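As a toy illustration only, the selection step (choosing the most adversary-constrained protocol that still satisfies the relying party's request) might be approximated as below; the Protocol record, the sufficiency test, and the integer "exposure" ordering are hypothetical simplifications of the Copland-based preorder proved in this work.

from dataclasses import dataclass

@dataclass(frozen=True)
class Protocol:
    """Hypothetical stand-in for a Copland attestation protocol."""
    name: str
    evidence: frozenset      # measurement targets the protocol produces
    exposure: int            # lower = more constrained disclosure, harder for an adversary

def sufficient(p: Protocol, requested: frozenset) -> bool:
    # A protocol is sufficient if it covers everything the relying party asked for.
    return requested <= p.evidence

def negotiate(candidates, requested):
    """Among sufficient protocols, select the most adversary-constrained one."""
    ok = [p for p in candidates if sufficient(p, requested)]
    return min(ok, key=lambda p: p.exposure) if ok else None

protocols = [
    Protocol("full_dump", frozenset({"kernel", "vm", "app"}), exposure=3),
    Protocol("vm_only",   frozenset({"kernel", "vm"}),        exposure=1),
]
print(negotiate(protocols, frozenset({"kernel", "vm"})))      # selects vm_only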


Arjun Dhage Ramachandra

Implementing Object Detection for Real-World Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Cuncong Zhong


Abstract

The advent of deep learning has enabled powerful AI models that are used in fields such as medicine, surveillance, manufacturing process optimization, robot navigation, chatbots, and much more. These applications are made possible by the extensive research on neural networks and deep learning. In this paper, I discuss a branch of neural networks called Convolutional Neural Networks (CNNs) and how they are used for object detection, that is, detecting and classifying objects in an image. I also discuss a popular object detection framework, the Single Shot MultiBox Detector (SSD), and implement it in my web application project, which allows users to detect objects in images and to search for images based on the objects they contain. The main aim of the project was to make running detections easy, requiring only a few clicks.
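For context on how such detections are typically run, here is a minimal sketch using torchvision's pretrained SSD300 model; the image filename and score threshold are placeholders, and this is not the project's actual application code.

import torch
from PIL import Image
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights
from torchvision.transforms.functional import to_tensor

weights = SSD300_VGG16_Weights.DEFAULT
model = ssd300_vgg16(weights=weights).eval()

img = to_tensor(Image.open("example.jpg").convert("RGB"))    # hypothetical input image
with torch.no_grad():
    pred = model([img])[0]                                   # dict of boxes, labels, scores

categories = weights.meta["categories"]
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:                                          # keep confident detections only
        print(categories[int(label)], [round(v, 1) for v in box.tolist()], float(score))

In a web application, the detected labels would then be stored alongside each image so that users can search by object presence.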


Kaidong Li

Accurate and Robust Object Detection and Classification Based on Deep Neural Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Haiyang Chao

Abstract

Recent years have seen tremendous developments in the field of computer vision and its extensive applications. The fundamental task, image classification, benefiting from deep convolutional neural networks' (CNNs) extraordinary ability to extract deep semantic information from input data, has become the backbone for many other computer vision tasks, such as object detection and segmentation. A modern detector usually performs bounding-box regression and class prediction with a pre-trained classification model as the backbone. This architecture is proven to produce good results; however, closer inspection reveals room for improvement. A detector takes a CNN pre-trained on the classification task and selects the final bounding boxes from multiple proposed regional candidates through non-maximum suppression (NMS), which picks the best candidates by ranking their classification confidence scores; localization quality is never evaluated in this process. Another issue is that classification uses one-hot encoding to label the ground truth, resulting in an equal penalty for misclassification between any two classes without considering the inherent relations between classes. Finally, 2D image classification and 3D point cloud classification remain distinct avenues of research, each relying on significantly different architectures; given the unique characteristics of these data types, models cannot be employed interchangeably between them.

My research aims to address these issues. (1) We propose the first location-aware detection framework for single-shot detectors, which can be integrated into any single-shot detector. It boosts detection performance by calibrating the ranking process in NMS with localization scores. (2) To back-propagate gradients more effectively, we design a super-class guided architecture consisting of a superclass branch (SCB) and a finer-class branch (FCB). To further increase effectiveness, features from the SCB carrying high-level information are fed to the FCB to guide finer class predictions. (3) Recent works have shown that 3D point cloud models are extremely vulnerable to adversarial attacks, posing a serious threat to critical applications such as autonomous driving and robotic control. To bridge the domain gap between 3D and 2D classification and to increase the robustness of models on 3D point clouds, we propose a family of robust structured declarative classifiers for point cloud classification. We experiment with various 3D-to-2D mapping algorithms, bridging the gap between 2D and 3D classification. Furthermore, we empirically validate that the internal constrained-optimization mechanism effectively defends against adversarial attacks through implicit gradients.
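A simplified sketch of the ranking idea in item (1) is given below: NMS candidates are sorted by a blend of classification confidence and a predicted localization score rather than confidence alone. The weighting factor and IoU threshold here are arbitrary and do not reproduce the exact formulation in the dissertation.

import numpy as np

def iou(a, b):
    """IoU between one box a and an array of boxes b; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(a[0], b[:, 0]); y1 = np.maximum(a[1], b[:, 1])
    x2 = np.minimum(a[2], b[:, 2]); y2 = np.minimum(a[3], b[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def location_aware_nms(boxes, cls_scores, loc_scores, iou_thr=0.5, alpha=0.5):
    """Rank candidates by a blend of classification and localization quality."""
    rank = alpha * cls_scores + (1 - alpha) * loc_scores
    order, keep = np.argsort(-rank), []
    while order.size:
        i, order = order[0], order[1:]
        keep.append(i)
        if order.size:
            order = order[iou(boxes[i], boxes[order]) < iou_thr]
    return keep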


Andrew Mertz

Multiple Input Single Output (MISO) Receive Processing Techniques for Linear Frequency Modulated Continuous Wave Frequency Diverse Array (LFMCW-FDA) Transmit Structures

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Patrick McCormick, Chair
Chris Allen
Shannon Blunt
James Stiles

Abstract

This thesis focuses on multiple processing techniques that can be applied to a single receive element co-located with a Frequency Diverse Array (FDA) transmit structure, which illuminates a large volume in order to estimate the scattering characteristics of objects within the illuminated space in the range, Doppler, and spatial dimensions. An FDA transmission consists of a number of evenly spaced transmitting elements, each radiating a linear frequency modulated (LFM) waveform. The elements are configured as a Uniform Linear Array (ULA), and the waveforms are separated by a fixed frequency spacing across the elements, where the chirp duration is inversely proportional to an integer multiple of that frequency spacing. The complex transmission structure created by this arrangement of multiple transmitting elements can be received and processed by a single receive element. Furthermore, multiple receive processing techniques, each with its own advantages and disadvantages, can be applied to the data from that single receive element to estimate the range, velocity, and spatial direction of targets in the illuminated volume relative to the co-located transmit array and receive element. Three different receive processing techniques applicable to FDA transmissions are explored. Two of these are novel to this thesis: a spatial matched filter processing technique for FDA transmission structures, and stretch processing using virtual array processing for FDA transmissions. Additionally, this thesis introduces a new type of FDA transmission structure referred to as "slow-time" FDA.
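As a rough signal-model illustration (in notation assumed here, not taken from the thesis), the LFMCW-FDA emission described above amounts to each of the M elements radiating the same chirp, offset in frequency by a multiple of the element spacing \Delta f:

s_m(t) = \exp\!\left( j 2\pi \left[ (f_c + m\,\Delta f)\, t + \tfrac{\gamma}{2}\, t^{2} \right] \right), \qquad m = 0, 1, \dots, M-1, \quad 0 \le t \le T,

where f_c is the carrier frequency, \gamma is the chirp rate, and, per the description above, the chirp duration satisfies T \propto 1/(k\,\Delta f) for an integer k. The single receive element then observes a superposition of delayed and scaled copies of these M chirps, which the receive processing techniques separate in range, Doppler, and angle.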