Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Masoud Ghazikor

Distributed Optimization and Control Algorithms for UAV Networks in Unlicensed Spectrum Bands

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni


Abstract

UAVs have emerged as a transformative technology for various applications, including emergency services, delivery, and video streaming. Among these, video streaming services in areas with limited physical infrastructure, such as disaster-affected areas, play a crucial role in public safety. UAVs can be rapidly deployed in search and rescue operations to efficiently cover large areas and provide live video feeds, enabling quick decision-making and resource allocation strategies. However, ensuring reliable and robust UAV communication in such scenarios is challenging, particularly in unlicensed spectrum bands, where interference from other nodes is a significant concern. To address this issue, developing distributed transmission control and video streaming mechanisms is essential to maintaining a high quality of service, especially for UAV networks that rely on delay-sensitive data.

In this MSc thesis, we study the problem of distributed transmission control and video streaming optimization for UAVs operating in unlicensed spectrum bands. We develop a cross-layer framework that jointly considers three inter-dependent factors: (i) in-band interference introduced by ground-aerial nodes at the physical layer, (ii) limited-size queues with delay-constrained packet arrival at the MAC layer, and (iii) video encoding rate at the application layer. This framework is designed to optimize the average throughput and PSNR by adjusting fading thresholds and video encoding rates for an integrated aerial-ground network in unlicensed spectrum bands. Using a consensus-based distributed algorithm and coordinate descent optimization, we develop two algorithms: (i) Distributed Transmission Control (DTC), which dynamically adjusts fading thresholds to maximize the average throughput by balancing the trade-off between low-SINR transmission errors and queue packet losses, and (ii) Joint Distributed Video Transmission and Encoder Control (JDVT-EC), which optimally balances packet loss probabilities and video distortions by jointly adjusting fading thresholds and video encoding rates. Through extensive numerical analysis, we demonstrate the efficacy of the proposed algorithms under various scenarios.
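
To give a flavor of the coordinate-descent machinery behind DTC, the sketch below has each node optimize its own fading threshold in turn against a toy throughput model. The objective, interference coupling, and search grid here are invented stand-ins, not the thesis's actual cross-layer formulation.

    import numpy as np

    # Toy stand-in model: each node i holds a fading threshold t[i]. Raising
    # it reduces low-SINR transmission errors but increases queue packet
    # losses; the real model in the thesis is far more detailed.
    def node_throughput(threshold, interference):
        p_tx_error = np.exp(-threshold)
        p_queue_loss = 1.0 - np.exp(-0.5 * threshold * (1.0 + interference))
        return (1.0 - p_tx_error) * (1.0 - p_queue_loss)

    def dtc_coordinate_descent(n_nodes=4, iters=50):
        t = np.ones(n_nodes)                    # initial fading thresholds
        grid = np.linspace(0.01, 5.0, 200)      # candidate values per coordinate
        for _ in range(iters):
            for i in range(n_nodes):            # optimize one node at a time
                interference = t.sum() - t[i]   # crude coupling between nodes
                gains = [node_throughput(g, interference) for g in grid]
                t[i] = grid[int(np.argmax(gains))]
        return t

    print(dtc_coordinate_descent())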


Srijanya Chetikaneni

Plant Disease Prediction Using Transfer Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

Timely detection of plant diseases is critical to safeguarding crop yields and ensuring global food security. This project presents a deep learning-based image classification system to identify plant diseases using the publicly available PlantVillage dataset. The core objective was to evaluate and compare the performance of a custom-built Convolutional Neural Network (CNN) with two widely used transfer learning models—EfficientNetB0 and MobileNetV3Small. 

All models were trained on augmented image data resized to 224×224 pixels, with preprocessing tailored to each architecture. The custom CNN used simple normalization, whereas EfficientNetB0 and MobileNetV3Small applied their respective preprocessing functions to match the input distributions of their ImageNet-pretrained weights. To improve robustness, the training pipeline included data augmentation, class weighting, and early stopping.

Training was conducted using the Adam optimizer and categorical cross-entropy loss over 30 epochs, with performance assessed using accuracy, loss, and training time metrics. The results revealed that transfer learning models significantly outperformed the custom CNN. EfficientNetB0 achieved the highest accuracy, making it ideal for high-precision applications, while MobileNetV3Small offered a favorable balance between speed and accuracy, making it suitable for lightweight, real-time inference on edge devices.
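
A minimal sketch of this kind of pipeline using the standard Keras transfer-learning APIs; the class count, hyperparameters, and dataset objects below are illustrative assumptions rather than the project's exact configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models
    from tensorflow.keras.applications.efficientnet import EfficientNetB0, preprocess_input

    # Frozen ImageNet backbone with a fresh classification head.
    base = EfficientNetB0(include_top=False, weights="imagenet",
                          input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False

    inputs = layers.Input(shape=(224, 224, 3))
    x = preprocess_input(inputs)                   # EfficientNet's own preprocessing
    x = base(x, training=False)
    outputs = layers.Dense(38, activation="softmax")(x)   # e.g., 38 PlantVillage classes
    model = models.Model(inputs, outputs)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])

    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True)

    # train_ds / val_ds: tf.data pipelines of augmented 224x224 images;
    # class_weight counters class imbalance as described in the text.
    # model.fit(train_ds, validation_data=val_ds, epochs=30,
    #           class_weight=class_weights, callbacks=[early_stop])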

This study validates the effectiveness of transfer learning for plant disease detection tasks and emphasizes the importance of model-specific preprocessing and training strategies. It provides a foundation for deploying intelligent plant health monitoring systems in practical agricultural environments.


Ahmet Soyyigit

Anytime Computing Techniques for LiDAR-based Perception In Cyber-Physical Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Michael Branicky
Prasad Kulkarni
Hongyang Sun
Shawn Keshmiri

Abstract

The pursuit of autonomy in cyber-physical systems (CPS) presents a challenging task of real-time interaction with the physical world, prompting extensive research in this domain. Recent advances in artificial intelligence (AI), particularly the introduction of deep neural networks (DNN), have significantly improved the autonomy of CPS, notably by boosting perception capabilities.

CPS perception aims to discern, classify, and track objects of interest in the operational environment, a task that is considerably challenging for computers in a three-dimensional (3D) space. For this task, the use of LiDAR sensors and the processing of their readings with DNNs has become popular because of their excellent performance. However, in CPS such as self-driving cars and drones, object detection must be not only accurate but also timely, posing a challenge due to the high computational demand of LiDAR object detection DNNs. Satisfying this demand is particularly challenging for on-board computational platforms due to size, weight, and power constraints. Therefore, a trade-off between accuracy and latency must be made to ensure that both requirements are satisfied. Importantly, the required trade-off depends on the operational environment and should be shifted toward accuracy or latency dynamically at runtime. However, LiDAR object detection DNNs cannot dynamically reduce their execution time by compromising accuracy (i.e., anytime computing). Prior research aimed at anytime computing for object detection DNNs using camera images is not applicable to LiDAR-based detection due to architectural differences. This thesis addresses these challenges by proposing three novel techniques: Anytime-LiDAR, which enables early termination with reasonable accuracy; VALO (Versatile Anytime LiDAR Object Detection), which implements deadline-aware input data scheduling; and MURAL (Multi-Resolution Anytime Framework for LiDAR Object Detection), which introduces dynamic resolution scaling. Together, these innovations enable LiDAR-based object detection DNNs to make effective trade-offs between latency and accuracy under varying operational conditions, advancing the practical deployment of LiDAR object detection DNNs.
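
The anytime principle shared by these techniques can be pictured as a deadline-aware control loop that picks the most accurate operating point still fitting the remaining time budget. The sketch below is purely illustrative: the configuration labels, latencies, and detector stub are invented, and the actual techniques operate inside the DNN pipeline rather than around it.

    import time

    # Hypothetical operating points, most accurate first: lower resolution or
    # early exit runs faster at some accuracy cost, in the spirit of
    # Anytime-LiDAR, VALO, and MURAL.
    CONFIGS = [("full_resolution", 0.100),
               ("half_resolution", 0.055),
               ("early_exit", 0.030)]

    def detect(point_cloud, label, latency):
        time.sleep(latency)                  # stand-in for running the DNN
        return f"detections@{label}"

    def anytime_detect(point_cloud, deadline_s):
        start = time.monotonic()
        for label, est_latency in CONFIGS:
            remaining = deadline_s - (time.monotonic() - start)
            if est_latency <= remaining:     # best config that fits the budget
                return detect(point_cloud, label, est_latency)
        return None                          # nothing fits; skip this frame

    print(anytime_detect(point_cloud=None, deadline_s=0.06))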


Rahul Purswani

Finetuning Llama on custom data for QA tasks

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Drew Davidson
Prasad Kulkarni


Abstract

Fine-tuning large language models (LLMs) for domain-specific use cases, such as question answering, offers valuable insights into how their performance can be tailored to specialized information needs. In this project, we focused on the University of Kansas (KU) as our target domain. We began by scraping structured and unstructured content from official KU webpages, covering a wide array of student-facing topics including campus resources, academic policies, and support services. From this content, we generated a diverse set of question-answer pairs to form a high-quality training dataset. LLaMA 3.2 was then fine-tuned on this dataset to improve its ability to answer KU-specific queries with greater relevance and accuracy. Our evaluation revealed mixed results—while the fine-tuned model outperformed the base model on most domain-specific questions, the original model still had an edge in handling ambiguous or out-of-scope prompts. These findings highlight the strengths and limitations of domain-specific fine-tuning, and provide practical takeaways for customizing LLMs for real-world QA applications.
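
For context, a common recipe for this kind of fine-tuning uses Hugging Face transformers with parameter-efficient LoRA adapters. Everything below (the model identifier, hyperparameters, and the toy QA formatting) is an assumption for illustration; the project's actual training setup may differ, and the gated meta-llama weights require access approval.

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)
    from peft import LoraConfig, get_peft_model

    model_id = "meta-llama/Llama-3.2-1B"        # assumed variant
    tok = AutoTokenizer.from_pretrained(model_id)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Low-rank adapters instead of full fine-tuning.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM"))

    # Scraped KU pages distilled into QA pairs, formatted as plain prompts.
    pairs = [{"text": "Q: Where can students find academic policies?\n"
                      "A: On the official KU policy pages."}]  # illustrative
    def tokenize(ex):
        out = tok(ex["text"], truncation=True, max_length=512)
        out["labels"] = out["input_ids"].copy()   # causal-LM objective
        return out
    ds = Dataset.from_list(pairs).map(tokenize, remove_columns=["text"])

    Trainer(model=model,
            args=TrainingArguments("ku-llama", per_device_train_batch_size=1,
                                   num_train_epochs=3),
            train_dataset=ds).train()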


Rithvij Pasupuleti

A Machine Learning Framework for Identifying Bioinformatics Tools and Database Names in Scientific Literature

When & Where:


LEEP2, Room 2133

Committee Members:

Cuncong Zhong, Chair
Dongjie Wang
Han Wang
Zijun Yao

Abstract

The absence of a single, comprehensive database or repository cataloging all bioinformatics databases and software creates a significant barrier for researchers aiming to construct computational workflows. These workflows, which often integrate 10–15 specialized tools for tasks such as sequence alignment, variant calling, functional annotation, and data visualization, require researchers to explore diverse scientific literature to identify relevant resources. This process demands substantial expertise to evaluate the suitability of each tool for specific biological analyses, alongside considerable time to understand their applicability, compatibility, and implementation within a cohesive pipeline. The lack of a central, updated source leads to inefficiencies and the risk of using outdated tools, which can affect research quality and reproducibility. Consequently, there is a critical need for an automated, accurate tool to identify bioinformatics databases and software mentions directly from scientific texts, streamlining workflow development and enhancing research productivity. 


The bioNerDS system, a prior effort to address this challenge, uses a rule-based named entity recognition (NER) approach, achieving an F1 score of 63% on an evaluation set of 25 articles from BMC Bioinformatics and PLoS Computational Biology. By integrating the same set of features, such as context patterns, word characteristics, and dictionary matches, into a machine learning model, we developed an approach using an XGBoost classifier. This model, carefully tuned to address the extreme class imbalance inherent in NER tasks through synthetic oversampling and refined via systematic hyperparameter optimization to balance precision and recall, excels at capturing complex linguistic patterns and non-linear relationships, ensuring robust generalization. It achieves an F1 score of 82% on the same evaluation set, significantly surpassing the baseline. By combining rule-based precision with machine learning adaptability, this approach enhances accuracy, reduces ambiguities, and provides a robust tool for large-scale bioinformatics resource identification, facilitating efficient workflow construction. Furthermore, this methodology holds potential for extension to other technological domains, enabling similar resource identification in fields like data science, artificial intelligence, or computational engineering.
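
The modeling recipe condenses to a few lines with standard libraries. In the sketch below, the per-token feature extraction is stubbed out with random data, and the oversampling and grid-search settings are illustrative assumptions rather than the tuned configuration reported above.

    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.model_selection import GridSearchCV
    from xgboost import XGBClassifier

    # X: per-token feature vectors (context patterns, word shape, dictionary
    # hits, etc.); y: 1 if the token is a tool/database mention, else 0.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))
    y = (rng.random(2000) < 0.03).astype(int)        # extreme class imbalance

    # Synthetic oversampling of the minority (mention) class.
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

    # Systematic hyperparameter search balancing precision and recall via F1.
    search = GridSearchCV(
        XGBClassifier(eval_metric="logloss"),
        param_grid={"max_depth": [4, 6, 8], "learning_rate": [0.05, 0.1]},
        scoring="f1", cv=3)
    search.fit(X_res, y_res)
    print(search.best_params_, search.best_score_)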


Vishnu Chowdary Madhavarapu

Automated Weather Classification Using Transfer Learning

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

This project presents an automated weather classification system utilizing transfer learning with pre-trained convolutional neural networks (CNNs) such as VGG19, InceptionV3, and ResNet50. Designed to classify weather conditions—sunny, cloudy, rainy, and sunrise—from images, the system addresses the challenge of limited labeled data by applying data augmentation techniques like zoom, shear, and flip, expanding the training dataset. By fine-tuning the final layers of pre-trained models, the solution achieves high accuracy while significantly reducing training time. VGG19 was selected as the baseline model for its simplicity, strong feature extraction capabilities, and widespread applicability in transfer learning scenarios. The system was trained using the Adam optimizer and evaluated on key performance metrics including accuracy, precision, recall, and F1 score. To enhance user accessibility, a Flask-based web interface was developed, allowing real-time image uploads and instant weather classification. The results demonstrate that transfer learning, combined with robust data preprocessing and fine-tuning, can produce a lightweight and accurate weather classification tool. This project contributes toward scalable, real-time weather recognition systems that can integrate into IoT applications, smart agriculture, and environmental monitoring.
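
The serving path described above can be sketched as a small Flask app wrapping a saved Keras model; the model filename, route, and label order below are hypothetical placeholders.

    import numpy as np
    from flask import Flask, request, jsonify
    from tensorflow.keras.models import load_model
    from tensorflow.keras.applications.vgg19 import preprocess_input
    from PIL import Image

    app = Flask(__name__)
    model = load_model("weather_vgg19.h5")           # hypothetical saved model
    LABELS = ["cloudy", "rainy", "sunny", "sunrise"]  # assumed label order

    @app.route("/classify", methods=["POST"])
    def classify():
        # Accept an uploaded image, resize to the network's input size,
        # apply VGG19 preprocessing, and return the top prediction.
        img = Image.open(request.files["image"].stream).convert("RGB")
        x = np.asarray(img.resize((224, 224)), dtype=np.float32)[None, ...]
        probs = model.predict(preprocess_input(x))[0]
        return jsonify({"label": LABELS[int(np.argmax(probs))],
                        "confidence": float(np.max(probs))})

    if __name__ == "__main__":
        app.run()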


RokunuzJahan Rudro

Using Machine Learning to Classify Driver Behavior from Psychological Features: An Exploratory Study

When & Where:


Eaton Hall, Room 1A

Committee Members:

Sumaiya Shomaji, Chair
David Johnson
Zijun Yao
Alexandra Kondyli

Abstract

Driver inattention and human error are the primary causes of traffic crashes. However, little is known about the relationship between driver aggressiveness and safety. Although several studies have grouped drivers into different classes based on their driving performance, little has been done to explore how behavioral traits are linked to driver behavior. This study aims to link different driver profiles, assessed through psychological evaluations, with their likelihood of engaging in risky driving behaviors, as measured in a driving simulation experiment. By incorporating psychological factors into machine learning algorithms, our models were able to successfully relate self-reported decision-making and personality characteristics with actual driving actions. Our results hold promise toward refining existing models of driver behavior by understanding the psychological and behavioral characteristics that influence the risk of crashes.


Md Mashfiq Rizvee

Energy Optimization in Multitask Neural Networks through Layer Sharing

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Han Wang


Abstract

Artificial Intelligence (AI) is being widely used in diverse domains such as industrial automation, traffic control, precision agriculture, and smart cities for major heavy lifting in terms of data analysis and decision making. However, the AI life cycle is a major source of greenhouse gas (GHG) emissions, leading to devastating environmental impact. This is due to expensive neural architecture searches, the training of countless models per day across the world, in-field AI processing of data in billions of edge devices, and advanced security measures across the AI life cycle. Modern applications often involve multitasking, i.e., performing a variety of analyses on the same dataset. These tasks are usually executed on resource-limited edge devices, necessitating AI models that are efficient across measures such as power consumption, frame rate, and model size. To address these challenges, we propose Layer Shared Neural Networks, a novel architecture that merges multiple similar AI/NN tasks (with shared layers) into a single model with reduced energy requirements and carbon footprint. The experimental findings reveal competitive accuracy and reduced power consumption. The layer-shared model reduces power consumption by 50% during training and 59.10% during inference, corresponding to decreases in CO2 emissions of as much as 84.64% and 87.10%, respectively.
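
To make the sharing idea concrete, the sketch below builds two task heads on one shared trunk with the Keras functional API; the input shape, layer sizes, and task definitions are invented for illustration.

    from tensorflow.keras import layers, models

    inputs = layers.Input(shape=(32, 32, 3))

    # Shared trunk: executed (and trained) once, reused by every task head.
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    shared = layers.GlobalAveragePooling2D()(x)

    # Task-specific heads stay small, so most computation is amortized
    # across tasks instead of duplicated per model.
    head_a = layers.Dense(10, activation="softmax", name="task_a")(shared)
    head_b = layers.Dense(5, activation="softmax", name="task_b")(shared)

    model = models.Model(inputs, [head_a, head_b])
    model.compile(optimizer="adam",
                  loss={"task_a": "categorical_crossentropy",
                        "task_b": "categorical_crossentropy"})
    model.summary()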



Fairuz Shadmani Shishir

Parameter-Efficient Computational Drug Discovery using Deep Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

The accurate prediction of small molecule binding affinity and toxicity remains a central challenge in drug discovery, with significant implications for reducing development costs, improving candidate prioritization, and enhancing safety profiles. Traditional computational approaches, such as molecular docking and quantitative structure-activity relationship (QSAR) models, often rely on handcrafted features and require extensive domain knowledge, which can limit scalability and generalization to novel chemical scaffolds. Recent advances in language models (LMs), particularly those adapted to chemical representations such as SMILES (Simplified Molecular Input Line Entry System), have opened new avenues for learning data-driven molecular representations that capture complex structural and functional properties. However, achieving both high binding affinity and low toxicity through a resource-efficient computational pipeline is inherently difficult due to the multi-objective nature of the task. This study presents a novel dual-paradigm approach to critical challenges in drug discovery: predicting small molecules with high binding affinity and low cardiotoxicity profiles. For binding affinity prediction, we implement a specialized graph neural network (GNN) architecture that operates directly on molecular structures represented as graphs, where atoms serve as nodes and bonds as edges. This topology-aware approach enables the model to capture complex spatial arrangements and electronic interactions critical for protein-ligand binding. For toxicity prediction, we leverage chemical language models (CLMs) fine-tuned with Low-Rank Adaptation (LoRA), allowing efficient adaptation of large pre-trained models to specialized toxicological endpoints while maintaining the generalized chemical knowledge embedded in the base model. Our hybrid methodology demonstrates significant improvements over existing computational approaches, with the GNN component achieving an average area under the ROC curve (AUROC) of 0.92 on three protein targets and the LoRA-adapted CLM reaching an AUROC of 0.90 with a 60% reduction in parameter usage when predicting cardiotoxicity. This work establishes a powerful computational framework that accelerates drug discovery by enabling the identification of compounds with both high binding affinity and low toxicity, with optimized efficacy and safety profiles.
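
To illustrate the atoms-as-nodes, bonds-as-edges idea, here is a toy message-passing layer in plain PyTorch. The feature sizes and bond pattern are invented, and the thesis's GNN is more specialized than this single aggregation step.

    import torch
    import torch.nn as nn

    class SimpleGraphConv(nn.Module):
        """One round of neighbor aggregation over a molecular graph."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, node_feats, adj):
            # adj: (N, N) adjacency from the bond structure; add self-loops.
            adj = adj + torch.eye(adj.size(0))
            deg = adj.sum(dim=1, keepdim=True)
            agg = (adj @ node_feats) / deg        # mean over bonded neighbors
            return torch.relu(self.lin(agg))

    # 5 atoms with 8-dim features; a small ring-like bond pattern.
    atoms = torch.randn(5, 8)
    bonds = torch.tensor([[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
                          [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]], dtype=torch.float)
    layer = SimpleGraphConv(8, 16)
    graph_repr = layer(atoms, bonds).mean(dim=0)   # pooled molecule embedding
    print(graph_repr.shape)                        # torch.Size([16])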


Soma Pal

Truths about compiler optimization for state-of-the-art (SOTA) C/C++ compilers

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Esam El-Araby
Drew Davidson
Tamzidul Hoque
Jiang Yunfeng

Abstract

Compiler optimizations are critical for performance and have been extensively studied, especially for C/C++ language compilers. Our overall goal in this thesis is to investigate and compare the properties and behavior of optimization passes across multiple contemporary, state-of-the-art (SOTA) C/C++ compilers to understand whether they adopt similar optimization implementation and orchestration strategies. Given the maturity of pre-existing knowledge in the field, it seems conceivable that different compiler teams would adopt consistent optimization passes, pipelines, and application techniques. However, our preliminary results indicate that such an expectation may be misguided. If so, then we will attempt to understand the differences, and study and quantify their impact on the performance of generated code.

In our first work, we study and compare the behavior of profile-guided optimizations (PGO) in two popular SOTA C/C++ compilers, GCC and Clang. This study reveals many interesting, and several counter-intuitive, properties of PGOs in C/C++ compilers. The behavior and benefits of PGOs also vary significantly across our selected compilers. We present our observations, along with plans to further explore these inconsistencies. Likewise, we have measured noticeable differences in the performance delivered by optimizations across our compilers, and we propose to explore and understand these differences as well. We present further details regarding our proposed directions and planned experiments in this report. We hope that this work will reveal opportunities for compilers to learn from each other and motivate researchers to find mechanisms to combine the benefits of multiple compilers to deliver higher overall program performance.
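
For reference, the two PGO workflows under comparison look roughly like the driver below. The compiler flags are the standard GCC and Clang PGO options; the source file and training command are placeholders.

    import subprocess

    def run(cmd):
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    SRC, TRAIN = ["bench.c"], ["./a.out", "train-input"]   # placeholders

    # GCC PGO: instrument, run the training workload, then recompile.
    run(["gcc", "-O2", "-fprofile-generate", *SRC])
    run(TRAIN)                                   # writes *.gcda profile data
    run(["gcc", "-O2", "-fprofile-use", *SRC])

    # Clang PGO: instrument, run, merge raw profiles, then recompile.
    run(["clang", "-O2", "-fprofile-instr-generate=prof.raw", *SRC])
    run(TRAIN)
    run(["llvm-profdata", "merge", "-output=prof.data", "prof.raw"])
    run(["clang", "-O2", "-fprofile-instr-use=prof.data", *SRC])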


Nyamtulla Shaik

AI Vision to Care: A QuadView of Deep Learning for Detecting Harmful Stimming in Autism

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Bo Luo
Dongjie Wang


Abstract

Stimming refers to repetitive actions or behaviors used to regulate sensory input or express feelings. Children with developmental disorders like autism spectrum disorder (ASD) frequently perform stimming, including arm flapping, head banging, finger flicking, and spinning. Such behaviors are exhibited by 80-90% of children with autism, a condition found in 1 in 36 children in the US. Head banging is one of these self-stimulatory habits that can be harmful. If these behaviors are automatically identified and reported through live video monitoring, parents and other caregivers can better watch over and assist children with ASD.

Classifying these actions is important for recognizing harmful stimming, so this study focuses on developing a deep learning-based approach for stimming action recognition. We implemented and evaluated four models leveraging three deep learning architectures based on Convolutional Neural Networks (CNNs), Autoencoders, and Vision Transformers. For the first time in this area, we use skeletal joints extracted from video sequences; previous works relied solely on raw RGB videos, which are vulnerable to lighting and environmental changes. This research explores deep learning-based skeletal action recognition and data processing techniques for a small, unstructured dataset of 89 home-recorded videos collected from publicly available sources like YouTube. Our robust data cleaning and pre-processing techniques enabled the integration of skeletal data into stimming action recognition, which performed better than the state of the art with a classification accuracy of up to 87%.

In addition to using traditional deep learning models like CNNs for action recognition, this study is among the first to apply data-hungry models like Vision Transformers (ViTs) and Autoencoders for stimming action recognition on this dataset. The results show that using skeletal data reduces processing time and significantly improves action recognition, promising a real-time approach for video monitoring applications. This research advances the development of automated systems that can assist caregivers in more efficiently tracking stimming activities.
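
The abstract does not name a joint extractor, but one plausible toolchain for the skeletal step is MediaPipe Pose; the sketch below (video file name hypothetical) turns a clip into a per-frame sequence of joint coordinates that an action-recognition model can consume.

    import cv2
    import mediapipe as mp

    # One plausible way to turn raw video into per-frame joint coordinates.
    pose = mp.solutions.pose.Pose(static_image_mode=False)

    def skeleton_sequence(video_path):
        cap = cv2.VideoCapture(video_path)
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                frames.append([(lm.x, lm.y, lm.z)
                               for lm in result.pose_landmarks.landmark])
        cap.release()
        return frames   # shape: (num_frames, 33 joints, 3 coordinates)

    seq = skeleton_sequence("stimming_clip.mp4")   # hypothetical file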


Alexander Rodolfo Lara

Creating a Faradaic Efficiency Graph Dataset Using Machine Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Sumaiya Shomaji
Kevin Leonard


Abstract

Just as the internet-of-things leverages machine learning over the vast amounts of data produced by countless sensors, the Internet of Catalysis program applies similar strategies to catalysis research. One application of the Internet of Catalysis strategy is treating research papers as datapoints, rich with text, figures, and tables. Prior research within the program focused on machine learning models applied strictly over text.

This project is the first step of the program in creating a machine learning model from the images of catalysis research papers. Specifically, this project creates a dataset of faradaic efficiency graphs using transfer learning from pretrained models. The project utilizes FasterRCNN_ResNet50_FPN, LayoutLMv3SequenceClassification, and computer vision techniques to recognize figures, extract all graphs, then classify the faradaic efficiency graphs.
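
A sketch of the figure-detection stage using the torchvision detector named above. The pretrained weights, confidence threshold, and page image are assumptions; in the project, the detector would be fine-tuned on figure annotations before the extracted crops are classified.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Pretrained detector; fine-tuning on figure/graph annotations from
    # catalysis papers is assumed but omitted here.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    page = Image.open("paper_page.png").convert("RGB")   # hypothetical page image
    with torch.no_grad():
        out = model([to_tensor(page)])[0]

    # Keep confident boxes; downstream, each crop would be classified
    # (e.g., with a LayoutLMv3-based classifier) as a faradaic efficiency
    # graph or not.
    boxes = out["boxes"][out["scores"] > 0.8]
    crops = [page.crop(tuple(int(v) for v in b.tolist())) for b in boxes]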

Downstream of this project, researchers will create a graph reading model to integrate with large language models. This could potentially lead to a multimodal model capable of fully learning from images, tables, and texts of catalysis research papers. Such a model could then guide experimentation on reaction conditions, catalysts, and production.


Amin Shojaei

Scalable and Cooperative Multi-Agent Reinforcement Learning for Networked Cyber-Physical Systems: Applications in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alex Bardas
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri

Abstract

Significant advances in information and networking technologies have transformed Cyber-Physical Systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is smart grid networks, which include distributed energy resources (DERs), renewable generation, and the widespread adoption of Electric Vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant sources of demand and user behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of different uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.

As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. First, we examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state.

Second, we focus on the cooperative behavior of agents in distributed MARL frameworks, particularly under the central training with decentralized execution (CTDE) paradigm. We provide theoretical results and variance analysis for stochastic and deterministic cooperative MARL algorithms, including Multi-Agent Deep Deterministic Policy Gradient (MADDPG), Multi-Agent Proximal Policy Optimization (MAPPO), and Dueling MAPPO. These analyses highlight how coordinated learning can improve system-wide decision-making in uncertain and dynamic environments like EV networks.
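
The CTDE paradigm can be illustrated schematically: each actor acts only on its own observation, while a centralized critic scores the joint observation-action during training. The plain-PyTorch sketch below uses invented sizes and is not the thesis's implementation.

    import torch
    import torch.nn as nn

    N_AGENTS, OBS, ACT = 3, 8, 2

    # Decentralized execution: each actor maps its own observation to an action.
    actors = [nn.Sequential(nn.Linear(OBS, 32), nn.ReLU(), nn.Linear(32, ACT))
              for _ in range(N_AGENTS)]

    # Centralized training: the critic scores the joint state-action pair.
    critic = nn.Sequential(nn.Linear(N_AGENTS * (OBS + ACT), 64), nn.ReLU(),
                           nn.Linear(64, 1))

    obs = torch.randn(N_AGENTS, OBS)
    acts = torch.stack([actor(o) for actor, o in zip(actors, obs)])
    q_value = critic(torch.cat([obs.flatten(), acts.flatten()]))
    print(q_value)   # gradients flow to all actors through the joint critic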

Third, we address the scalability challenge in large-scale NCPS by introducing a hierarchical MARL framework based on a cluster-based architecture. This framework organizes agents into coordinated subgroups, improving scalability while preserving local coordination. We conduct a detailed variance analysis of this approach to demonstrate its effectiveness in reducing communication overhead and learning complexity. This analysis establishes a theoretical foundation for scalable and efficient control in large-scale smart grid applications.


Asrith Gudivada

Custom CNN for Object State Classification in Robotic Cooking

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

This project presents the development of a custom Convolutional Neural Network (CNN) designed to classify object states—such as sliced, diced, or peeled—in robotic cooking environments. Recognizing fine-grained object states is critical for context-aware manipulation yet remains a challenging task due to the visual similarity between states and the limited availability of cooking-specific datasets. To address these challenges, we built a lightweight, non-pretrained CNN trained on a curated dataset of 11 object states. Starting with a baseline architecture, we progressively enhanced the model using data augmentation, optimized dropout, batch normalization, Inception modules, and residual connections. These improvements led to a performance increase from ~45% to ~52% test accuracy. The final model demonstrates improved generalization and training stability, showcasing the effectiveness of combining classical and advanced deep learning techniques. This work contributes toward real-time state recognition for autonomous robotic cooking systems, with implications for assistive technologies in domestic and elder care settings.
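
As one example of the enhancements listed above, a residual block in the Keras functional API looks like the sketch below; the filter counts and layout are illustrative, with only the 11-state output taken from the text.

    from tensorflow.keras import layers

    def residual_block(x, filters):
        """Conv -> BN -> ReLU -> Conv -> BN, plus an identity shortcut."""
        shortcut = x
        y = layers.Conv2D(filters, 3, padding="same")(x)
        y = layers.BatchNormalization()(y)
        y = layers.ReLU()(y)
        y = layers.Conv2D(filters, 3, padding="same")(y)
        y = layers.BatchNormalization()(y)
        if shortcut.shape[-1] != filters:          # match channel counts
            shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
        return layers.ReLU()(layers.Add()([y, shortcut]))

    # Usage inside a model definition:
    inputs = layers.Input(shape=(128, 128, 3))
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = residual_block(x, 64)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(11, activation="softmax")(x)   # 11 object states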


Tanvir Hossain

Gamified Learning of Computing Hardware Fundamentals Using FPGA-Based Platform

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Tamzidul Hoque, Chair
Esam El-Araby
Sumaiya Shomaji


Abstract

The growing dependence on electronic systems in consumer and mission critical domains requires engineers who understand the inner workings of digital hardware. Yet many students bypass hardware electives, viewing them as abstract, mathematics-heavy, and less attractive than software courses. Escalating workforce shortages in the semiconductor industry and the recent global chip-supply crisis highlight the urgent need for graduates who can bridge hardware knowledge gaps across engineering sectors. In this thesis, I have developed FPGA-based games, embedded in inclusive curricular modules, which can make hardware concepts accessible while fostering interest, self-efficacy, and positive outcome expectations in hardware engineering. A design-based research methodology guided three implementation cycles: a pilot with seven diverse high-school learners, a multiweek residential summer camp with high-school students, and a fifteen-week multidisciplinary elective enrolling early undergraduate engineering students. The learning experiences targeted binary arithmetic, combinational and sequential logic, state-machine design, and hardware-software co-design. Learners also moved through the full digital-design flow: HDL coding, functional simulation, synthesis, place-and-route, and on-board verification. In addition, learners explored timing analysis, register-transfer-level abstractions, and simple processor datapaths to connect low-level circuits with system-level behavior. Mixed-method evidence was gathered through pre- and post-content quizzes, validated surveys of self-efficacy and outcome expectations, focus groups, classroom observations, and gameplay analytics. Paired-sample statistics showed reliable gains in hardware-concept mastery, self-efficacy, and outcome expectations. This work contributes a replicable framework for translating foundational hardware topics into modular, game-based learning activities, empirical evidence of their effectiveness across secondary and early-college contexts, and design principles for educators who seek to integrate equitable, hands-on hardware experiences into existing curricula.


Hara Madhav Talasila

Radiometric Calibration of Radar Depth Sounder Data Products

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Patrick McCormick
James Stiles
Jilu Li
Leigh Stearns

Abstract

Although the Center for Remote Sensing of Ice Sheets (CReSIS) performs several radar calibration steps to produce Operation IceBridge (OIB) radar depth sounder data products, these datasets are not radiometrically calibrated, and the swath array processing uses ideal (rather than measured [calibrated]) steering vectors. Any errors in the steering vectors, which describe the response of the radar as a function of arrival angle, will lead to errors in positioning and backscatter that subsequently affect estimates of basal conditions, ice thickness, and radar attenuation. Scientific applications that estimate physical characteristics of surface and subsurface targets from the backscatter are limited by the current data because they are not absolutely calibrated. Moreover, changes in instrument hardware and processing methods for OIB over the last decade affect the quality of inter-seasonal comparisons. Recent methods that interpret basal conditions and calculate radar attenuation using CReSIS OIB 2D radar depth sounder echograms are forced to use relative scattering power rather than absolute measurements.

As an active target calibration is not possible for past field seasons, a method that uses natural targets will be developed. Unsaturated natural-target returns from smooth sea-ice leads or lakes are imaged in many datasets and have known scattering responses. The proposed method forms a system of linear equations from the recorded scattering signatures of these known targets, the scattering signatures from crossing flight paths, and the radiometric correction terms. A least-squares solution for the radiometric correction terms is then calculated, minimizing an error function that represents the mismatch between expected and measured scattering. The new correction terms will be used to correct the remaining mission data. The radar depth sounder data from all OIB campaigns can be reprocessed to produce absolutely calibrated echograms for the Arctic and Antarctic. A software simulator will be developed to study calibration errors and verify the calibration software. The software for processing natural targets and crossovers will be made available in CReSIS’s open-source polar radar software toolbox. The OIB data will be reprocessed with the new calibration terms, providing the data user community with a complete set of radiometrically calibrated radar echograms for the CReSIS OIB radar depth sounder for the first time.
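
At its core, this calibration is an overdetermined linear system solved in the least-squares sense. The schematic below uses synthetic numbers and a drastically simplified model: rows are known-target and crossover constraints, and columns are per-segment correction terms in dB.

    import numpy as np

    # A @ c ≈ b, where c holds the radiometric correction terms and b is the
    # expected-minus-measured power mismatch for each constraint.
    A = np.array([[1, 0, 0],      # known natural target, segment 0
                  [0, 1, 0],      # known natural target, segment 1
                  [1, -1, 0],     # crossover: segments 0 and 1 must agree
                  [0, 1, -1]],    # crossover: segments 1 and 2 must agree
                 dtype=float)
    b = np.array([2.1, -0.7, 2.9, -1.2])   # synthetic mismatches (dB)

    corrections, residuals, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(corrections)   # correction terms applied to the remaining data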


Christopher Ord

A Hardware-Agnostic Simultaneous Transmit And Receive (STAR) Architecture for the Transmission of Non-Repeating FMCW Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rachel Jarvis, Chair
Shannon Blunt
Patrick McCormick


Abstract

With the increasing congestion of the usable RF spectrum, it has become necessary for communication and radar systems to share the same frequencies without disturbing one another. To accomplish this, research has focused on designing a class of non-repeating radar waveforms that appear as noise at the receiver of uncooperative systems, but the peak power from high-power pulsed systems can still overwhelm nearby in-band systems. Therefore, to minimize peak power while maximizing the total energy on target, radar systems must transition to operating at a 100% duty cycle, which inherently requires Simultaneous Transmit and Receive (STAR) operation.

One inherent difficulty when operating monostatic STAR systems is the direct-path coupling interference that can saturate a number of components in the radar’s receive chain, rendering digital processing methods that remove this interference ineffective. This thesis proposes a method to reduce the self-interference between the radar’s transmitter and receiver ahead of the receiver’s sensitive components, increasing the power at which the radar can transmit. By using a combination of tests that manipulate the timing, phase, and magnitude of a secondary waveform injected into the radar just before the receiver, upwards of 35.0 dB of self-interference cancellation is achieved for radar waveforms with bandwidths of up to 100 MHz at both S-band and X-band, in both simulation and open-air testing.
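
The timing/phase/magnitude tests can be pictured as choosing the delay and complex weight of the injected waveform that minimize the residual coupling. The toy baseband simulation below is illustrative only; the signal lengths, coupling parameters, and noise level are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=4096) + 1j * rng.normal(size=4096)    # transmit waveform
    noise = 0.01 * (rng.normal(size=4096) + 1j * rng.normal(size=4096))
    leak = 0.8 * np.exp(1j * 0.6) * np.roll(x, 3) + noise     # direct-path coupling

    best = None
    for delay in range(8):                         # timing test
        ref = np.roll(x, delay)
        # Least-squares complex weight (magnitude + phase) for this delay.
        w = np.vdot(ref, leak) / np.vdot(ref, ref)
        residual = leak - w * ref                  # injected cancellation signal
        sic_db = 10 * np.log10(np.mean(np.abs(leak) ** 2) /
                               np.mean(np.abs(residual) ** 2))
        if best is None or sic_db > best[0]:
            best = (sic_db, delay)

    print(f"{best[0]:.1f} dB cancellation at delay {best[1]}")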


Fatima Al-Shaikhli

Optical Fiber Measurements: Leveraging Coherent FMCW Techniques

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical fiber technology have proven to be invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical fiber sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceiver systems to develop novel measurement techniques for characterizing optical fiber properties. Specifically, our goal is to leverage a digitally chirped frequency-modulated continuous wave (FMCW) to extract detailed information about optical fiber characteristics, as well as target range. Through this approach, we aim to enable more accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on intensity-modulated electrostriction response, (2) self-homodyne coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (OFDR) system.

Electrostriction in an optical fiber is introduced by interaction between the forward propagated optical signal and the acoustic standing waves in the radial direction resonating between the center of the core and the cladding circumference of the fiber. The response of electrostriction is dependent on fiber parameters, especially the mode field radius. We demonstrated a novel technique of identifying fiber types through the measurement of intensity modulation induced electrostriction response. As the spectral envelope of electrostriction induced propagation loss is anti-symmetrical, the signal to noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.         

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Multi-target detection is demonstrated experimentally, and while only amplitude modulation is required in the LiDAR transmitter, the phase-diversity coherent receiver enables simultaneous detection of both range and velocity for each target, along with the sign of the target’s velocity.

In addition, we demonstrate a polarization-sensitive OFDR system utilizing a commercially available digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately , a chirping bandwidth, and a measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we can measure birefringence vectors along the fiber, providing not only the magnitude of birefringence but also the direction of any external pressure applied to the fiber.


Landen Doty

Assessing the Effects of Source Language on Binary Similarity Tools

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Alex Bardas
Drew Davidson

Abstract

Binary similarity is a fundamental technique that enables software analysis practitioners to compare machine-level code at scale and with fine granularity. With applications in software reverse engineering, vulnerability research, malware attribution, and more, state-of-the-art binary similarity tools have undergone thorough research and development to account for variations in compilers, optimizations, machine architectures, and even obfuscations. However, although these tools aim to compare and detect binary-level code segments generated from similar or identical source code, no preexisting work has investigated the effects of source languages other than C and C++. This thesis addresses this research gap by presenting a thorough investigation of SOTA binary similarity tools when applied to modern compiled languages, Rust and Golang.

To adequately evaluate the capabilities of the available binary similarity approaches, this work includes three distinct tools: BSim, a new component of the Ghidra Software Reverse Engineering Framework, which utilizes a clustering-based similarity mechanism; BinDiff, an industry-recognized tool using graph-based comparisons; and jTrans, a BERT-based model fine-tuned to the binary similarity task. First, to enable this work, we introduce a new dataset of Rust and Golang binaries compiled from leading open-source projects in the Homebrew and Arch Linux repositories. Comprising 800 binaries and over 1 million functions, this dataset was built to represent a broad range of implementation styles, application diversity, and source language features. Next, the main investigation of this thesis is presented, wherein we assess each approach's ability to accurately report semantically equivalent functions compiled from the same source code. Results across the three tools reveal a systematic degradation of precision when comparing binaries produced by Rust and Go rather than those produced by C and C++. Finally, we provide a technical demonstration that highlights the implications of these results and discuss near- and long-term solutions to more adequately equip binary analysis practitioners.


Past Defense Notices


Samyoga Bhattarai

‘Pro-ID: A Secure Face Recognition System using Locality Sensitive Hashing to Protect Human ID’

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

Face recognition systems are widely used in various applications, from mobile banking apps to personal smartphones. However, these systems often store biometric templates in raw form, posing significant security and privacy risks. Pro-ID addresses this vulnerability by incorporating SimHash, a Locality Sensitive Hashing (LSH) algorithm, to create secure and irreversible hash codes of facial feature vectors. Unlike traditional methods that leave raw data exposed to potential breaches, SimHash transforms the feature vectors into high-dimensional hash codes, safeguarding user identity while preserving system functionality.

The proposed system balances two competing aspects: security and performance. It is designed to resist common attacks, including brute force and template inversion, ensuring that even if the hashed templates are exposed, the original biometric data cannot be reconstructed.

A key challenge addressed in this project is minimizing the trade-off between security and performance. Extensive evaluations demonstrate that the proposed method maintains competitive accuracy rates comparable to traditional face recognition systems while significantly enhancing security metrics such as irreversibility, unlinkability, and revocability. This innovative approach contributes to advancing the reliability and trustworthiness of biometric systems, providing a secure framework for applications in face recognition systems. 
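
The SimHash step itself is compact. The self-contained sketch below hashes feature vectors with random hyperplanes and compares templates by Hamming distance; the dimensions and example vectors are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    PLANES = rng.normal(size=(256, 128))   # random hyperplanes, fixed per system

    def simhash(feature_vec):
        """Project onto random hyperplanes; keep only the sign bits."""
        return (PLANES @ feature_vec > 0).astype(np.uint8)   # 256-bit code

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    enrolled = rng.normal(size=128)                 # feature vector at enrollment
    probe = enrolled + 0.1 * rng.normal(size=128)   # same face, new capture
    stranger = rng.normal(size=128)

    print(hamming(simhash(enrolled), simhash(probe)))     # small distance: match
    print(hamming(simhash(enrolled), simhash(stranger)))  # ~half the bits differ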


Shalmoli Ghosh

High-Power Fabry-Perot Quantum-Well Laser Diodes for Application in Multi-Channel Coherent Optical Communication Systems

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui , Chair
Shannon Blunt
Jim Stiles


Abstract

Wavelength Division Multiplexing (WDM) is essential for managing rapid network traffic growth in fiber optic systems. Each WDM channel demands a narrow-linewidth, frequency-stabilized laser diode, leading to complexity and increased energy consumption. Multi-wavelength laser sources, generating optical frequency combs (OFC), offer an attractive solution, enabling a single laser diode to provide numerous equally spaced spectral lines for enhanced bandwidth efficiency.

Quantum-dot and quantum-dash OFCs provide phase-synchronized lines with low relative intensity noise (RIN), while Quantum Well (QW) OFCs offer higher power efficiency but exhibit higher RIN in the low-frequency region of up to 2 GHz. However, in both quantum-dot/dash and QW-based OFCs, individual spectral lines exhibit high phase noise, limiting coherent detection. Output power levels of these OFCs range between 1 and 20 mW, and the power of each spectral line is typically less than -5 dBm. As a result, these OFCs require excessive optical amplification, and each spectral line possesses a relatively broad linewidth, due to the inverse relationship between optical power and linewidth given by the Schawlow-Townes formula. This constraint hampers their applicability in coherent detection systems, highlighting a challenge for achieving high-performance optical communication.

In this work, coherent system application of a single-section Quantum-Well Fabry-Perot (FP) laser diode is demonstrated. This laser delivers over 120 mW optical power at the fiber pigtail with a mode spacing of 36.14 GHz. In an experimental setup, 20 spectral lines from a single laser transmitter carry 30 GBaud 16-QAM signals over 78.3 km single-mode fiber, achieving significant data transmission rates. With the potential to support a transmission capacity of 2.15 Tb/s (4.3 Tb/s for dual polarization) per transmitter, including Forward Error Correction (FEC) and maintenance overhead, it offers a promising solution for meeting the escalating demands of modern network traffic efficiently.


TJ Barclay

Proof-Producing Translation from Gallina to CakeML

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Users of theorem provers often desire to extract their verified code to a more efficient, compiled language. Coq's current extraction mechanism provides this facility but does not provide a formal guarantee that the extracted code has the same semantics as the logic it is extracted from. Providing such a guarantee requires a formal semantics for the target code. The CakeML project, implemented in HOL4, provides a formally defined syntax and semantics for a subset of SML and includes a proof-producing translator from higher-order logic to CakeML. We use the CakeML definition to develop a certifying extractor from Gallina to CakeML, using the translation and proof techniques of the HOL4 CakeML translator. We also address how differences between HOL4 (higher-order logic) and Coq (calculus of constructions) affect the implementation details of the Coq translator.


Anissa Khan

Privacy Preserving Biometric Matching

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Perry Alexander, Chair
Prasad Kulkarni
Fengjun Li


Abstract

Biometric matching is a process by which distinct features are used to identify an individual. Doing so privately is important because biometric data, such as fingerprints or facial features, is not something that can be easily changed or updated if put at risk. In this study, we perform a piece of the biometric matching process in a privacy preserving manner by using secure multiparty computation (SMPC). Using SMPC allows the identifying biological data, called a template, to remain stored by the data owner during the matching process. This provides security guarantees to the biological data while it is in use and therefore reduces the chances the data is stolen. In this study, we find that performing biometric matching using SMPC is just as accurate as performing the same match in plaintext.



Bryan Richlinski

Prioritize Program Diversity: Enumerative Synthesis with Entropy Ordering

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Sankha Guria, Chair
Perry Alexander
Drew Davidson
Jennifer Lohoefener

Abstract

Program synthesis is a popular way to create a correct-by-construction program from a user-provided specification. Term enumeration is a leading technique to systematically explore the space of programs by generating terms from a formal grammar. These terms are treated as candidate programs which are tested/verified against the specification for correctness. In order to prioritize candidates more likely to satisfy the specification, enumeration is often ordered by program size or other domain-specific heuristics. However, domain-specific heuristics require expert knowledge, and enumeration by size often leads to terms comprised of frequently repeating symbols that are less likely to satisfy a specification.

In this thesis, we build a heuristic that prioritizes term enumeration based on the variability of individual symbols in the program, i.e., the information entropy of the program. We use this heuristic to order programs in both top-down and bottom-up enumeration. We evaluated our work on a subset of the PBE-String track of the 2017 SyGuS competition benchmarks and compared against size-based enumeration. Top-down enumeration guided by entropy expands fewer partial expressions than the naive approach in 77% of benchmarks and tests fewer complete expressions in 54%, resulting in improved synthesis time in 40% of benchmarks. In bottom-up enumeration, entropy-guided search tests fewer expressions than naive enumeration in 71% of benchmarks, but without any improvement to the running time. We conclude that entropy is a promising direction for prioritizing candidates during program search in enumerative synthesis, and we propose future directions for improving the performance of our proposed techniques.
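
The entropy score that drives the ordering can be computed directly from symbol frequencies. In the sketch below, candidate terms are simplified to token lists; the grammar and enumeration loop are omitted.

    import math
    from collections import Counter

    def entropy(term_tokens):
        """Shannon entropy of the symbol distribution within a candidate term."""
        counts = Counter(term_tokens)
        n = len(term_tokens)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    candidates = [
        ["concat", "x", "x", "x"],            # repetitive symbols, low entropy
        ["concat", "substr", "x", "\"-\""],   # varied symbols, high entropy
    ]
    # Enumerate high-entropy (more varied) candidates first.
    for term in sorted(candidates, key=entropy, reverse=True):
        print(f"{entropy(term):.2f}  {term}")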


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 


However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.


This research provides a deep dive into the npm-centric software supply chain, exploring various facets and phenomena that impact the security of this software supply chain. Such factors include (i) hidden code clones--which obscure provenance and can stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, and (v) package compromise via malicious updates. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.


Jagadeesh Sai Dokku

Intelligent Chat Bot for KU Website: Automated Query Response and Resource Navigation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

This project introduces an intelligent chatbot designed to improve user experience on our university website by providing instant, automated responses to common inquiries. Navigating a university website can be challenging for students, applicants, and visitors who seek quick information about admissions, campus services, events, and more. To address this challenge, we developed a chatbot that simulates human conversation using Natural Language Processing (NLP), allowing users to find information more efficiently.

The chatbot is powered by a Bidirectional Long Short-Term Memory (BiLSTM) model, an architecture well-suited for understanding complex sentence structures. This model captures contextual information from both directions in a sentence, enabling it to identify user intent with high accuracy. We trained the chatbot on a dataset of intent-labeled queries, enabling it to recognize specific intentions such as asking about campus facilities, academic programs, or event schedules. The NLP pipeline includes steps like tokenization, lemmatization, and vectorization. Tokenization and lemmatization prepare the text by breaking it into manageable units and standardizing word forms, making it easier for the model to recognize similar word patterns. The vectorization process then translates this processed text into numerical data that the model can interpret.

Flask is used to manage the backend, allowing seamless communication between the user interface and the BiLSTM model. When a user submits a query, Flask routes the input to the model, processes the prediction, and delivers the appropriate response back to the user interface.

This chatbot demonstrates a successful application of NLP in creating interactive, efficient, and user-friendly solutions. By automating responses, it reduces reliance on manual support and ensures users can access relevant information at any time. This project highlights how intelligent chatbots can transform the way users interact with university websites, offering a faster and more engaging experience.
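
The intent classifier described above follows a standard Keras pattern; the vocabulary size, sequence length, and number of intents below are invented for illustration.

    from tensorflow.keras import layers, models

    VOCAB, MAXLEN, N_INTENTS = 5000, 20, 12   # illustrative sizes

    model = models.Sequential([
        layers.Input(shape=(MAXLEN,)),              # token ids produced by the
        layers.Embedding(VOCAB, 64),                # tokenize/lemmatize/vectorize steps
        layers.Bidirectional(layers.LSTM(64)),      # context from both directions
        layers.Dense(64, activation="relu"),
        layers.Dense(N_INTENTS, activation="softmax"),   # intent probabilities
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()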



Anahita Memar

Optimizing Protein Particle Classification: A Study on Smoothing Techniques and Model Performance

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Hossein Saiedian
Prajna Dhar


Abstract

This thesis investigates the impact of smoothing techniques on enhancing classification accuracy in protein particle datasets, focusing on both binary and multi-class configurations across three datasets. By applying methods including Averaging-Based Smoothing, Moving Average, Exponential Smoothing, Savitzky-Golay, and Kalman Smoothing, we sought to improve performance in Random Forest, Decision Tree, and Neural Network models. Initial baseline accuracies revealed the complexity of multi-class separability, while clustering analyses provided valuable insights into class similarities and distinctions, guiding our interpretation of classification challenges.

These results indicate that Averaging-Based Smoothing and Moving Average techniques are particularly effective in enhancing classification accuracy, especially in configurations with marked differences in surfactant conditions. Feature importance analysis identified critical metrics, such as IntMean and IntMax, which played a significant role in distinguishing classes. Cross-validation validated the robustness of our models, with Random Forest and Neural Network consistently outperforming others in binary tasks and showing promising adaptability in multi-class classification. This study not only highlights the efficacy of smoothing techniques for improving classification in protein particle analysis but also offers a foundational approach for future research in biopharmaceutical data processing and analysis.
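
For reference, three of the smoothing techniques compared in the study take only a few lines each with NumPy/SciPy; the window sizes and the synthetic signal below are illustrative, not the study's settings.

    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 6, 200)) + 0.3 * rng.normal(size=200)

    # Moving average: convolution with a uniform window.
    window = 9
    moving_avg = np.convolve(signal, np.ones(window) / window, mode="same")

    # Savitzky-Golay: local polynomial fit, better at preserving peak shape.
    savgol = savgol_filter(signal, window_length=11, polyorder=3)

    # Exponential smoothing: recursive weighting of past samples.
    alpha, exp_smooth = 0.2, np.empty_like(signal)
    exp_smooth[0] = signal[0]
    for i in range(1, len(signal)):
        exp_smooth[i] = alpha * signal[i] + (1 - alpha) * exp_smooth[i - 1]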


Yousif Dafalla

Web-Armour: Mitigating Reconnaissance and Vulnerability Scanning with Injecting Scan-Impeding Delays in Web Deployments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
ZJ Wang

Abstract

Scanning hosts on the internet for vulnerable devices and services is a key step in numerous cyberattacks. Previous work has shown that scanning is a widespread phenomenon on the internet and commonly targets web application/server deployments. Given that automated scanning is a crucial step in many cyberattacks, it would be beneficial to make it more difficult for adversaries to perform such activity.

In this work, we propose Web-Armour, a mitigation approach to adversarial reconnaissance and vulnerability scanning of web deployments. The proposed approach relies on injecting scan-impeding delays into infrequently or rarely used portions of a web deployment. Web-Armour has two goals: first, increase the cost for attackers to perform automated reconnaissance and vulnerability scanning; second, introduce minimal to negligible performance overhead for benign users of the deployment. We evaluate Web-Armour in live environments, operated by real users, and in different controlled (offline) scenarios. We show that Web-Armour can effectively thwart reconnaissance and internet-wide scanning.
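
The core idea can be pictured as web middleware that delays requests to rarely exercised paths while leaving hot paths untouched. The Flask-style sketch below illustrates the principle only and is not the Web-Armour implementation; the path list and delay value are invented.

    import time
    from flask import Flask, request

    app = Flask(__name__)

    # Paths that real users exercise often; everything else is treated as
    # infrequently used and becomes a candidate for scan-impeding delays.
    HOT_PATHS = {"/", "/login", "/search"}
    DELAY_SECONDS = 2.0   # illustrative; a real policy would be adaptive

    @app.before_request
    def impede_scanners():
        if request.path not in HOT_PATHS:
            time.sleep(DELAY_SECONDS)   # cheap for one user, costly at scan scale

    @app.route("/")
    def index():
        return "ok"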


Kabir Panahi

A Security Analysis of the Integration of Biometric Technology in the 2019 Afghan Presidential Election

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo

Abstract

Afghanistan deployed Biometric Voter Verification (BVV) technology nationally for the first time in the 2019 presidential election to address the systematic fraud of prior elections. Through semi-structured interviews with 18 key national and international stakeholders who had an active role in this election, this study investigates the gap between the intended outcomes of the BVV technology—focused on voter enfranchisement, fraud prevention, and public trust—and the reality on election day and beyond, within the unique socio-political and technical landscape of Afghanistan.

Our findings reveal that while BVV technology initially promised a secure and transparent election, various technical and implementation challenges emerged, including threats to voters, staff, and officials. We found that the BVVs both supported and violated electoral goals: while they helped reduce fraud, they inadvertently disenfranchised some voters and caused delays that affected public trust. Technical limitations, usability issues, and administrative misalignments contributed to these outcomes. This study offers critical lessons for future implementations of electoral technologies, emphasizing the importance of context-aware technological solutions and the need for robust administrative and technical frameworks to fully realize the potential benefits of election technology in fragile democracies.