Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.
Upcoming Defense Notices
Mahmudul Hasan
Assertion-Based Security Assessment of Hardware IP Protection Methods
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Tamzidul Hoque, Chair
Esam El-Araby
Sumaiya Shomaji
Abstract
Combinational and sequential locking methods are promising solutions for protecting hardware intellectual property (IP) from piracy, reverse engineering, and malicious modifications by locking the functionality of the IP based on a secret key. To improve their security, researchers are developing attack methods to extract the secret key.
While attacks on combinational locking are mostly inapplicable to sequential designs without access to the scan chain, the few applicable attacks are generally evaluated against only the basic random insertion of key gates. Attacks on sequential locking techniques, on the other hand, suffer from scalability issues and from evaluation on improperly locked designs. Finally, while most attacks provide an approximately correct key, they do not indicate which specific key bits remain undetermined. This thesis proposes an oracle-guided attack that applies to both combinational and sequential locking without scan-chain access. The attack applies lightweight design modifications that represent the oracle using a finite state machine and performs an assertion-based query of the unlocking key. We have analyzed the effectiveness of our attack against 46 sequential designs locked with various classes of combinational locking, including random, strong, logic cone-based, and anti-SAT-based insertion. We further evaluated the attack against a sequential locking technique using 46 designs with various key sequence lengths and widths. Finally, we expand our framework to identify undetermined key bits, enabling complementary attacks on the smaller remaining key space.
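To make the oracle-guided query loop concrete, here is a minimal toy sketch, not the thesis framework itself: a 4-bit XOR-locked function in which one key bit never reaches an output, an activated chip standing in as the oracle, and a pruning loop that also reports which key bits remain undetermined. The locked function, secret key, and query inputs are all illustrative assumptions.

```python
from itertools import product

# Toy stand-in for a key-gate-locked netlist: XOR key gates on a 4-bit
# function, except key bit 0 never reaches an output, so no oracle query
# can resolve it (an "undetermined" key bit).
SECRET = 0b1011

def locked(x: int, key: int) -> int:
    return (x ^ (key & 0b1110)) & 0b1111

def oracle(x: int) -> int:
    # Activated (unlocked) chip that the attacker may query
    return locked(x, SECRET)

# Oracle-guided pruning: keep only key candidates consistent with queries.
candidates = set(range(16))
for x in (0b0000, 0b0101):
    y = oracle(x)
    candidates = {k for k in candidates if locked(x, k) == y}

# Bits that still differ across surviving keys are undetermined.
undetermined = 0
for a, b in product(candidates, repeat=2):
    undetermined |= a ^ b
print(f"surviving keys: {sorted(candidates)}, undetermined bits: {undetermined:04b}")
```

In the actual attack, the oracle is represented as a finite state machine inside the modified design and the key queries are posed as assertions rather than direct simulation, as described above.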
Masoud Ghazikor
Distributed Optimization and Control Algorithms for UAV Networks in Unlicensed Spectrum Bands
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni
Abstract
UAVs have emerged as a transformative technology for various applications, including emergency services, delivery, and video streaming. Among these, video streaming services in areas with limited physical infrastructure, such as disaster-affected areas, play a crucial role in public safety. UAVs can be rapidly deployed in search and rescue operations to efficiently cover large areas and provide live video feeds, enabling quick decision-making and resource allocation strategies. However, ensuring reliable and robust UAV communication in such scenarios is challenging, particularly in unlicensed spectrum bands, where interference from other nodes is a significant concern. To address this issue, developing distributed transmission control and video streaming mechanisms is essential to maintaining a high quality of service, especially for UAV networks that rely on delay-sensitive data.
In this MSc thesis, we study the problem of distributed transmission control and video streaming optimization for UAVs operating in unlicensed spectrum bands. We develop a cross-layer framework that jointly considers three inter-dependent factors: (i) in-band interference introduced by ground-aerial nodes at the physical layer, (ii) limited-size queues with delay-constrained packet arrival at the MAC layer, and (iii) video encoding rate at the application layer. This framework is designed to optimize the average throughput and PSNR by adjusting fading thresholds and video encoding rates for an integrated aerial-ground network in unlicensed spectrum bands. Using a consensus-based distributed algorithm and coordinate descent optimization, we develop two algorithms: (i) Distributed Transmission Control (DTC), which dynamically adjusts fading thresholds to maximize the average throughput by mitigating trade-offs between low-SINR transmission errors and queue packet losses, and (ii) Joint Distributed Video Transmission and Encoder Control (JDVT-EC), which optimally balances packet loss probabilities and video distortions by jointly adjusting fading thresholds and video encoding rates. Through extensive numerical analysis, we demonstrate the efficacy of the proposed algorithms under various scenarios.
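As a rough illustration of the coordinate-descent layer of DTC, the sketch below tunes per-UAV fading thresholds against a toy throughput model that trades low-SINR transmission errors against queue packet losses. The objective function and constants are stand-ins, not the thesis model.

```python
import numpy as np

# Toy stand-in objective: raising a node's fading threshold reduces
# low-SINR transmission errors but increases queue packet losses.
def throughput(th):
    tx_success = 1.0 - np.exp(-th)      # fewer low-SINR errors as th grows
    queue_keep = np.exp(-0.5 * th)      # more queueing loss as th grows
    return float(np.sum(tx_success * queue_keep))

th = np.full(4, 0.1)                    # fading thresholds, one per UAV
grid = np.linspace(0.01, 5.0, 200)
for _ in range(20):                     # coordinate-descent sweeps
    for i in range(len(th)):
        th[i] = max(grid, key=lambda g: throughput(np.r_[th[:i], g, th[i+1:]]))
print("thresholds:", th.round(3), "throughput:", round(throughput(th), 4))
```

Because this toy objective is separable per node, each coordinate converges to the same optimum (near ln 3 ≈ 1.1) in one sweep; the real problem couples nodes through interference, which is what motivates the consensus-based distributed coordination.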
Ganesh Nurukurti
Customer Behavior Analytics and Recommendation System for E-Commerce
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Han Wang
Abstract
In the era of digital commerce, personalized recommendations are pivotal for enhancing user experience and boosting engagement. This project presents a comprehensive recommendation system integrated into an e-commerce web application, designed using Flask and powered by collaborative filtering via Singular Value Decomposition (SVD). The system intelligently predicts and personalizes product suggestions for users based on implicit feedback such as purchases, cart additions, and search behavior.
The foundation of the recommendation engine is built on user-item interaction data, derived from the Brazilian e-commerce Olist dataset. Ratings are simulated using weighted scores for purchases and cart additions, reflecting varying degrees of user intent. These interactions are transformed into a user-product matrix and decomposed using SVD, yielding latent user and product features. The model leverages these latent factors to predict user interest in unseen products, enabling precise and scalable recommendation generation.
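A minimal sketch of the SVD-based prediction step described above, using a tiny synthetic user-product matrix; the interaction weights and data are illustrative, not the Olist pipeline itself.

```python
import numpy as np

# Rows = users, cols = products, entries = weighted implicit ratings
# (e.g., purchase = 3.0, cart addition = 1.5, unobserved = 0).
R = np.array([
    [3.0, 0.0, 1.5, 0.0],
    [0.0, 3.0, 0.0, 1.5],
    [3.0, 1.5, 0.0, 0.0],
])

# Truncated SVD: keep k latent factors for users and products.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # predicted interest scores

# Recommend the highest-scoring product the user has not interacted with.
user = 0
unseen = np.where(R[user] == 0)[0]
best = unseen[np.argmax(R_hat[user, unseen])]
print(f"recommend product {best} (score {R_hat[user, best]:.2f})")
```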
To further enhance personalization, the system incorporates real-time user activity. Recent search history is stored in an SQLite database and used to prioritize recommendations that align with the user’s current interests. A diversity constraint is also applied to avoid redundancy, limiting the number of recommended products per category.
The web application supports robust user authentication, product exploration by category, cart management, and checkout simulations. It features a visually driven interface with dynamic visualizations for product insights and user interactions. The home page adapts to individual preferences, showing tailored product recommendations and enabling users to explore categories and details.
In summary, this project demonstrates the practical implementation of a hybrid recommendation strategy combining matrix factorization with contextual user behavior. It showcases the importance of latent factor modeling, data preprocessing, and user-centric design in delivering an intelligent retail experience.
Srijanya Chetikaneni
Plant Disease Prediction Using Transfer Learning
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Han Wang
Abstract
Timely detection of plant diseases is critical to safeguarding crop yields and ensuring global food security. This project presents a deep learning-based image classification system to identify plant diseases using the publicly available PlantVillage dataset. The core objective was to evaluate and compare the performance of a custom-built Convolutional Neural Network (CNN) with two widely used transfer learning models—EfficientNetB0 and MobileNetV3Small.
All models were trained on augmented image data resized to 224×224 pixels, with preprocessing tailored to each architecture. The custom CNN used simple normalization, whereas EfficientNetB0 and MobileNetV3Small used their respective preprocessing functions to match the ImageNet domain of their pretrained weights. To improve robustness, the training pipeline included data augmentation, class weighting, and early stopping.
Training was conducted using the Adam optimizer and categorical cross-entropy loss over 30 epochs, with performance assessed using accuracy, loss, and training time metrics. The results revealed that transfer learning models significantly outperformed the custom CNN. EfficientNetB0 achieved the highest accuracy, making it ideal for high-precision applications, while MobileNetV3Small offered a favorable balance between speed and accuracy, making it suitable for lightweight, real-time inference on edge devices.
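A condensed Keras sketch of this training recipe; the classification head, early-stopping patience, and class count are assumptions (PlantVillage is commonly distributed with 38 classes), and the dataset objects are placeholders.

```python
import tensorflow as tf

num_classes = 38  # assumed PlantVillage class count

# Frozen pretrained backbone with architecture-specific preprocessing.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.efficientnet.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
early = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=30,
#           class_weight=class_weights, callbacks=[early])
```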
This study validates the effectiveness of transfer learning for plant disease detection tasks and emphasizes the importance of model-specific preprocessing and training strategies. It provides a foundation for deploying intelligent plant health monitoring systems in practical agricultural environments.
Ahmet Soyyigit
Anytime Computing Techniques for LiDAR-based Perception in Cyber-Physical Systems
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Heechul Yun, Chair
Michael Branicky
Prasad Kulkarni
Hongyang Sun
Shawn Keshmiri
Abstract
The pursuit of autonomy in cyber-physical systems (CPS) presents a challenging task of real-time interaction with the physical world, prompting extensive research in this domain. Recent advances in artificial intelligence (AI), particularly the introduction of deep neural networks (DNN), have significantly improved the autonomy of CPS, notably by boosting perception capabilities.
CPS perception aims to discern, classify, and track objects of interest in the operational environment, a task that is considerably challenging for computers in a three-dimensional (3D) space. For this task, the use of LiDAR sensors and processing their readings with DNNs has become popular because of their excellent performance. However, in CPS such as self-driving cars and drones, object detection must be not only accurate but also timely, posing a challenge due to the high computational demand of LiDAR object detection DNNs. Satisfying this demand is particularly challenging for on-board computational platforms due to size, weight, and power constraints. Therefore, a trade-off between accuracy and latency must be made to ensure that both requirements are satisfied. Importantly, the required trade-off depends on the operational environment and should shift dynamically toward accuracy or latency at runtime. However, LiDAR object detection DNNs cannot dynamically reduce their execution time by compromising accuracy (i.e., anytime computing). Prior research aimed at anytime computing for object detection DNNs using camera images is not applicable to LiDAR-based detection due to architectural differences. This thesis addresses these challenges by proposing three novel techniques: Anytime-LiDAR, which enables early termination with reasonable accuracy; VALO (Versatile Anytime LiDAR Object Detection), which implements deadline-aware input data scheduling; and MURAL (Multi-Resolution Anytime Framework for LiDAR Object Detection), which introduces dynamic resolution scaling. Together, these innovations enable LiDAR-based object detection DNNs to make effective trade-offs between latency and accuracy under varying operational conditions, advancing the practical deployment of LiDAR object detection DNNs.
Rahul Purswani
Finetuning Llama on custom data for QA tasks
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Drew Davidson
Prasad Kulkarni
Abstract
Fine-tuning large language models (LLMs) for domain-specific use cases, such as question answering, offers valuable insights into how their performance can be tailored to specialized information needs. In this project, we focused on the University of Kansas (KU) as our target domain. We began by scraping structured and unstructured content from official KU webpages, covering a wide array of student-facing topics including campus resources, academic policies, and support services. From this content, we generated a diverse set of question-answer pairs to form a high-quality training dataset. LLaMA 3.2 was then fine-tuned on this dataset to improve its ability to answer KU-specific queries with greater relevance and accuracy. Our evaluation revealed mixed results—while the fine-tuned model outperformed the base model on most domain-specific questions, the original model still had an edge in handling ambiguous or out-of-scope prompts. These findings highlight the strengths and limitations of domain-specific fine-tuning, and provide practical takeaways for customizing LLMs for real-world QA applications.
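The abstract does not specify the exact fine-tuning setup; the sketch below shows one common recipe, parameter-efficient LoRA fine-tuning with Hugging Face transformers and peft on causal-LM-formatted QA pairs. The checkpoint name, hyperparameters, and toy example are all assumptions.

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "meta-llama/Llama-3.2-1B-Instruct"   # assumed checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=16,
                                         lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

pairs = [{"text": "Q: Where is the KU EECS office?\nA: Eaton Hall."}]  # toy
def tokenize(ex):
    out = tok(ex["text"], truncation=True, max_length=512)
    out["labels"] = out["input_ids"].copy()     # causal-LM objective
    return out
ds = Dataset.from_list(pairs).map(tokenize, remove_columns=["text"])

trainer = Trainer(model=model, train_dataset=ds,
                  args=TrainingArguments("out", num_train_epochs=3,
                                         per_device_train_batch_size=1))
trainer.train()
```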
Rithvij Pasupuleti
A Machine Learning Framework for Identifying Bioinformatics Tools and Database Names in Scientific Literature
When & Where:
LEEP2, Room 2133
Committee Members:
Cuncong Zhong, Chair
Dongjie Wang
Han Wang
Zijun Yao
Abstract
The absence of a single, comprehensive database or repository cataloging all bioinformatics databases and software creates a significant barrier for researchers aiming to construct computational workflows. These workflows, which often integrate 10–15 specialized tools for tasks such as sequence alignment, variant calling, functional annotation, and data visualization, require researchers to explore diverse scientific literature to identify relevant resources. This process demands substantial expertise to evaluate the suitability of each tool for specific biological analyses, alongside considerable time to understand their applicability, compatibility, and implementation within a cohesive pipeline. The lack of a central, updated source leads to inefficiencies and the risk of using outdated tools, which can affect research quality and reproducibility. Consequently, there is a critical need for an automated, accurate tool to identify bioinformatics databases and software mentions directly from scientific texts, streamlining workflow development and enhancing research productivity.
The bioNerDS system, a prior effort to address this challenge, uses a rule-based named entity recognition (NER) approach, achieving an F1 score of 63% on an evaluation set of 25 articles from BMC Bioinformatics and PLoS Computational Biology. By integrating the same set of features, such as context patterns, word characteristics, and dictionary matches, into a machine learning model, we developed an approach using an XGBoost classifier. The model is tuned to address the extreme class imbalance inherent in NER tasks through synthetic oversampling and refined via systematic hyperparameter optimization to balance precision and recall; it excels at capturing complex linguistic patterns and non-linear relationships, ensuring robust generalization. It achieves an F1 score of 82% on the same evaluation set, significantly surpassing the baseline. By combining rule-based precision with machine learning adaptability, this approach enhances accuracy, reduces ambiguities, and provides a robust tool for large-scale bioinformatics resource identification, facilitating efficient workflow construction. Furthermore, this methodology holds potential for extension to other technological domains, enabling similar resource identification in fields like data science, artificial intelligence, or computational engineering.
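A compact sketch of this pipeline on synthetic stand-in features; in the real system, the context patterns, word characteristics, and dictionary matches would be encoded into X, and SMOTE stands in here for the synthetic oversampling mentioned above.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in: 20 numeric features per token, ~3% positive tokens
# (entity mentions), mimicking NER-style class imbalance.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (rng.random(5000) < 0.03).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_bal, y_bal)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```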
Vishnu Chowdary Madhavarapu
Automated Weather Classification Using Transfer Learning
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Dongjie Wang
Abstract
This project presents an automated weather classification system utilizing transfer learning with pre-trained convolutional neural networks (CNNs) such as VGG19, InceptionV3, and ResNet50. Designed to classify weather conditions—sunny, cloudy, rainy, and sunrise—from images, the system addresses the challenge of limited labeled data by applying data augmentation techniques like zoom, shear, and flip, expanding the effective dataset size. By fine-tuning the final layers of pre-trained models, the solution achieves high accuracy while significantly reducing training time. VGG19 was selected as the baseline model for its simplicity, strong feature extraction capabilities, and widespread applicability in transfer learning scenarios. The system was trained using the Adam optimizer and evaluated on key performance metrics including accuracy, precision, recall, and F1 score. To enhance user accessibility, a Flask-based web interface was developed, allowing real-time image uploads and instant weather classification. The results demonstrate that transfer learning, combined with robust data preprocessing and fine-tuning, can produce a lightweight and accurate weather classification tool. This project contributes toward scalable, real-time weather recognition systems that can integrate into IoT applications, smart agriculture, and environmental monitoring.
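A minimal sketch of the Flask serving layer described above; the route name, label ordering, and model artifact path are assumptions.

```python
import numpy as np
from flask import Flask, request, jsonify
from PIL import Image
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("weather_vgg19.h5")   # assumed artifact path
LABELS = ["cloudy", "rainy", "sunny", "sunrise"]      # assumed class order

@app.route("/predict", methods=["POST"])
def predict():
    # Accept an uploaded image, apply VGG19 preprocessing, and classify.
    img = Image.open(request.files["image"].stream).convert("RGB")
    x = np.asarray(img.resize((224, 224)), dtype="float32")[None]
    x = keras.applications.vgg19.preprocess_input(x)
    probs = model.predict(x)[0]
    return jsonify({"label": LABELS[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(debug=True)
```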
RokunuzJahan Rudro
Using Machine Learning to Classify Driver Behavior from Psychological Features: An Exploratory Study
When & Where:
Eaton Hall, Room 1A
Committee Members:
Sumaiya Shomaji, Chair
David Johnson
Zijun Yao
Alexandra Kondyli
Abstract
Driver inattention and human error are the primary causes of traffic crashes. However, little is known about the relationship between driver aggressiveness and safety. Although several studies have grouped drivers into classes based on their driving performance, little has been done to explore how behavioral traits are linked to driving behavior. This study aims to link different driver profiles, assessed through psychological evaluations, with their likelihood of engaging in risky driving behaviors, as measured in a driving simulation experiment. By incorporating psychological factors into machine learning algorithms, our models were able to successfully relate self-reported decision-making and personality characteristics with actual driving actions. Our results hold promise for refining existing models of driver behavior by understanding the psychological and behavioral characteristics that influence crash risk.
Md Mashfiq Rizvee
Energy Optimization in Multitask Neural Networks through Layer Sharing
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
Han Wang
Abstract
Artificial Intelligence (AI) is widely used in diverse domains such as industrial automation, traffic control, precision agriculture, and smart cities for major heavy lifting in data analysis and decision making. However, the AI life cycle is a major source of greenhouse gas (GHG) emissions, leading to devastating environmental impact. This is due to expensive neural architecture searches, the training of countless models per day across the world, in-field AI processing of data on billions of edge devices, and advanced security measures across the AI life cycle. Modern applications often involve multitasking, i.e., performing a variety of analyses on the same dataset. These tasks are usually executed on resource-limited edge devices, necessitating AI models that are efficient across measures such as power consumption, frame rate, and model size. To address these challenges, we propose a novel neural architecture, Layer Shared Neural Networks, that merges multiple similar AI/NN tasks (with shared layers) into a single AI/NN model with reduced energy requirements and carbon footprint. The experimental findings reveal competitive accuracy and reduced power consumption: the layer-shared model reduces power consumption by 50% during training and 59.10% during inference, corresponding to decreases in CO2 emissions of as much as 84.64% and 87.10%, respectively.
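A minimal PyTorch sketch of the layer-sharing principle: two task heads reuse one shared trunk, so the shared computation and parameters are paid for once rather than once per model. Dimensions and layer choices are illustrative, not the thesis architecture.

```python
import torch
import torch.nn as nn

class LayerSharedNet(nn.Module):
    def __init__(self, num_classes_a: int, num_classes_b: int):
        super().__init__()
        self.shared = nn.Sequential(           # layers common to both tasks
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_a = nn.Linear(32, num_classes_a)   # task-specific heads
        self.head_b = nn.Linear(32, num_classes_b)

    def forward(self, x):
        z = self.shared(x)                     # one forward pass, reused
        return self.head_a(z), self.head_b(z)

net = LayerSharedNet(10, 5)
ya, yb = net(torch.randn(2, 3, 32, 32))
print(ya.shape, yb.shape)   # torch.Size([2, 10]) torch.Size([2, 5])
```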
Fairuz Shadmani Shishir
Parameter-Efficient Computational Drug Discovery using Deep Learning
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun
Abstract
The accurate prediction of small molecule binding affinity and toxicity remains a central challenge in drug discovery, with significant implications for reducing development costs, improving candidate prioritization, and enhancing safety profiles. Traditional computational approaches, such as molecular docking and quantitative structure-activity relationship (QSAR) models, often rely on handcrafted features and require extensive domain knowledge, which can limit scalability and generalization to novel chemical scaffolds. Recent advances in language models (LMs), particularly those adapted to chemical representations such as SMILES (Simplified Molecular Input Line Entry System), have opened new avenues for learning data-driven molecular representations that capture complex structural and functional properties. However, achieving both high binding affinity and low toxicity through a resource-efficient computational pipeline is inherently difficult due to the multi-objective nature of the task. This study presents a novel dual-paradigm approach to critical challenges in drug discovery: predicting small molecules with high binding affinity and low cardiotoxicity profiles. For binding affinity prediction, we implement a specialized graph neural network (GNN) architecture that operates directly on molecular structures represented as graphs, where atoms serve as nodes and bonds as edges. This topology-aware approach enables the model to capture complex spatial arrangements and electronic interactions critical for protein-ligand binding. For toxicity prediction, we leverage chemical language models (CLMs) fine-tuned with Low-Rank Adaptation (LoRA), allowing efficient adaptation of large pre-trained models to specialized toxicological endpoints while maintaining the generalized chemical knowledge embedded in the base model. Our hybrid methodology demonstrates significant improvements over existing computational approaches, with the GNN component achieving an average area under the ROC curve (AUROC) of 0.92 on three protein targets and the LoRA-adapted CLM reaching an AUROC of 0.90 with a 60% reduction in parameter usage in predicting cardiotoxicity. This work establishes a powerful computational framework that accelerates drug discovery by enabling the discovery of compounds with both high binding affinity and low toxicity, with optimized efficacy and safety profiles.
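A minimal sketch of the topology-aware message-passing idea behind the GNN component, in plain PyTorch; the feature sizes, mean aggregation, and readout are simplified assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

# Atoms are nodes, bonds are edges; each layer mixes a node's features
# with the mean of its bonded neighbors' features.
class SimpleGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # adj: (N, N) 0/1 bond matrix; mean-aggregate bonded neighbors
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ h) / deg
        return torch.relu(self.lin(torch.cat([h, neigh], dim=-1)))

# Toy molecule: 4 atoms in a chain, 8-dim atom features
h = torch.randn(4, 8)
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float)
layer = SimpleGNNLayer(8)
graph_embedding = layer(h, adj).mean(dim=0)   # readout feeding an affinity head
print(graph_embedding.shape)                  # torch.Size([8])
```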
Soma Pal
Truths about compiler optimization for state-of-the-art (SOTA) C/C++ compilers
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Prasad Kulkarni, Chair
Esam El-Araby
Drew Davidson
Tamzidul Hoque
Yunfeng Jiang
Abstract
Compiler optimizations are critical for performance and have been extensively studied, especially for C/C++ language compilers. Our overall goal in this thesis is to investigate and compare the properties and behavior of optimization passes across multiple contemporary, state-of-the-art (SOTA) C/C++ compilers to understand if they adopt similar optimization implementation and orchestration strategies. Given the maturity of pre-existing knowledge in the field, it seems conceivable that different compiler teams will adopt consistent optimization passes, pipelines, and application techniques. However, our preliminary results indicate that this expectation may be misguided. If so, we will attempt to understand the differences and to study and quantify their impact on the performance of generated code.
In our first work, we study and compare the behavior of profile-guided optimizations (PGO) in two popular SOTA C/C++ compilers, GCC and Clang. This study reveals many interesting, and several counter-intuitive, properties about PGOs in C/C++ compilers. The behavior and benefits of PGOs also vary significantly across our selected compilers. We present our observations, along with plans to further explore these inconsistencies, in this report. Likewise, we have also measured noticeable differences in the performance delivered by optimizations across our compilers. We propose to explore and understand these differences in this work. We present further details regarding our proposed directions and planned experiments in this report. We hope that this work will reveal opportunities for compilers to learn from each other and motivate researchers to find mechanisms that combine the benefits of multiple compilers to deliver higher overall program performance.
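For reference, the sketch below drives the standard PGO workflows being compared: GCC's -fprofile-generate/-fprofile-use and Clang's -fprofile-instr-generate with llvm-profdata merging. The benchmark source and file paths are placeholders.

```python
import os
import subprocess

def run(cmd, **kw):
    subprocess.run(cmd, check=True, **kw)

# GCC: instrument, train, then recompile using the recorded profile (.gcda).
run(["gcc", "-O2", "-fprofile-generate", "bench.c", "-o", "bench"])
run(["./bench"])                                   # training run
run(["gcc", "-O2", "-fprofile-use", "bench.c", "-o", "bench"])

# Clang: instrument, train, merge raw profiles, then recompile with them.
run(["clang", "-O2", "-fprofile-instr-generate", "bench.c", "-o", "bench"])
run(["./bench"], env={**os.environ, "LLVM_PROFILE_FILE": "bench.profraw"})
run(["llvm-profdata", "merge", "-output=bench.profdata", "bench.profraw"])
run(["clang", "-O2", "-fprofile-use=bench.profdata", "bench.c", "-o", "bench"])
```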
Nyamtulla Shaik
AI Vision to Care: A QuadView of Deep Learning for Detecting Harmful Stimming in Autism
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Bo Luo
Dongjie Wang
Abstract
Stimming refers to repetitive actions or behaviors used to regulate sensory input or express feelings. Children with developmental disorders such as autism spectrum disorder (ASD) frequently perform stimming, including arm flapping, head banging, finger flicking, and spinning. Stimming is exhibited by 80-90% of children with autism, a condition seen in 1 in 36 children in the US. Head banging is one of the self-stimulatory habits that can be harmful. If these behaviors are automatically identified and flagged through live video monitoring, parents and other caregivers can better watch over and assist children with ASD.
Classifying these actions is important for recognizing harmful stimming, so this study focuses on developing a deep learning-based approach for stimming action recognition. We implemented and evaluated four models leveraging three deep learning architectures based on Convolutional Neural Networks (CNNs), Autoencoders, and Vision Transformers. For the first time in this area, we use skeletal joints extracted from video sequences; previous works relied solely on raw RGB videos, which are vulnerable to lighting and environmental changes. This research explores deep learning-based skeletal action recognition and data processing techniques for a small, unstructured dataset consisting of 89 home-recorded videos collected from publicly available sources such as YouTube. Our robust data cleaning and preprocessing techniques enabled the integration of skeletal data into stimming action recognition, which outperformed the state of the art with a classification accuracy of up to 87%.
In addition to using traditional deep learning models like CNNs for action recognition, this study is among the first to apply data-hungry models like Vision Transformers (ViTs) and Autoencoders for stimming action recognition on the dataset. The results show that using skeletal data reduces processing time and significantly improves action recognition, promising a real-time approach for video monitoring applications. This research advances the development of automated systems that can assist caregivers in more efficiently tracking stimming activities.
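A minimal sketch of classifying fixed-length skeletal sequences with a small temporal CNN; the joint count, clip length, and architecture are illustrative, and upstream pose extraction from video is assumed.

```python
import torch
import torch.nn as nn

# Each clip: NUM_FRAMES frames x NUM_JOINTS joints x 2D coordinates,
# flattened so joints*2 becomes the channel axis of a 1D (temporal) CNN.
NUM_FRAMES, NUM_JOINTS, NUM_CLASSES = 64, 17, 3

model = nn.Sequential(
    nn.Conv1d(NUM_JOINTS * 2, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(128, NUM_CLASSES))

clip = torch.randn(1, NUM_JOINTS * 2, NUM_FRAMES)  # one toy clip
logits = model(clip)
print(logits.shape)   # torch.Size([1, 3])
```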
Alexander Rodolfo Lara
Creating a Faradaic Efficiency Graph Dataset Using Machine Learning
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Zijun Yao, Chair
Sumaiya Shomaji
Kevin Leonard
Abstract
Just as the Internet of Things leverages machine learning over the vast amount of data produced by innumerable sensors, the Internet of Catalysis program applies similar strategies to catalysis research. One application of the Internet of Catalysis strategy is treating research papers as datapoints, rich with text, figures, and tables. Prior research within the program focused on machine learning models applied strictly over text.
This project is the first step of the program in creating a machine learning model from the images of catalysis research papers. Specifically, this project creates a dataset of faradaic efficiency graphs using transfer learning from pretrained models. The project utilizes FasterRCNN_ResNet50_FPN, LayoutLMv3SequenceClassification, and computer vision techniques to recognize figures, extract all graphs, then classify the faradaic efficiency graphs.
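A hedged sketch of the detection stage only, using the torchvision model named above; fine-tuning on figure annotations and the downstream LayoutLMv3 classification are omitted, and the page-image path is a placeholder.

```python
import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # pretrained detector
model.eval()

page = Image.open("paper_page.png").convert("RGB")   # placeholder page image
with torch.no_grad():
    pred = model([to_tensor(page)])[0]

keep = pred["scores"] > 0.8                          # confidence threshold
for i, box in enumerate(pred["boxes"][keep]):
    x0, y0, x1, y1 = map(int, box.tolist())
    page.crop((x0, y0, x1, y1)).save(f"figure_{i}.png")  # candidate figure crop
```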
Downstream of this project, researchers will create a graph reading model to integrate with large language models. This could potentially lead to a multimodal model capable of fully learning from images, tables, and texts of catalysis research papers. Such a model could then guide experimentation on reaction conditions, catalysts, and production.
Amin Shojaei
Scalable and Cooperative Multi-Agent Reinforcement Learning for Networked Cyber-Physical Systems: Applications in Smart Grids
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Alex Bardas
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri
Abstract
Significant advances in information and networking technologies have transformed Cyber-Physical Systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is smart grid networks, which include distributed energy resources (DERs), renewable generation, and the widespread adoption of Electric Vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant sources of demand and user behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of different uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.
As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. First, we examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state.
Second, we focus on the cooperative behavior of agents in distributed MARL frameworks, particularly under the centralized training with decentralized execution (CTDE) paradigm. We provide theoretical results and variance analysis for stochastic and deterministic cooperative MARL algorithms, including Multi-Agent Deep Deterministic Policy Gradient (MADDPG), Multi-Agent Proximal Policy Optimization (MAPPO), and Dueling MAPPO. These analyses highlight how coordinated learning can improve system-wide decision-making in uncertain and dynamic environments like EV networks.
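A minimal sketch of the CTDE structure underlying MADDPG-style methods: decentralized actors act on local observations, while a centralized critic, used only during training, scores the joint observation-action vector. All dimensions and networks are toy assumptions.

```python
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 8, 2

# Decentralized actors: each maps its own observation to an action.
actors = nn.ModuleList(
    nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                  nn.Linear(64, ACT_DIM), nn.Tanh())
    for _ in range(N_AGENTS))

# Centralized critic: sees all observations and all actions (training only).
critic = nn.Sequential(
    nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 128), nn.ReLU(),
    nn.Linear(128, 1))

obs = torch.randn(N_AGENTS, OBS_DIM)
acts = torch.stack([actor(o) for actor, o in zip(actors, obs)])
q = critic(torch.cat([obs.flatten(), acts.flatten()]).unsqueeze(0))
print(q.shape)   # torch.Size([1, 1])
```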
Third, we address the scalability challenge in large-scale NCPS by introducing a hierarchical MARL framework based on a cluster-based architecture. This framework organizes agents into coordinated subgroups, improving scalability while preserving local coordination. We conduct a detailed variance analysis of this approach to demonstrate its effectiveness in reducing communication overhead and learning complexity. This analysis establishes a theoretical foundation for scalable and efficient control in large-scale smart grid applications.
Asrith Gudivada
Custom CNN for Object State Classification in Robotic Cooking
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Dongjie Wang
Abstract
This project presents the development of a custom Convolutional Neural Network (CNN) designed to classify object states—such as sliced, diced, or peeled—in robotic cooking environments. Recognizing fine-grained object states is critical for context-aware manipulation yet remains a challenging task due to the visual similarity between states and the limited availability of cooking-specific datasets. To address these challenges, we built a lightweight, non-pretrained CNN trained on a curated dataset of 11 object states. Starting with a baseline architecture, we progressively enhanced the model using data augmentation, optimized dropout, batch normalization, Inception modules, and residual connections. These improvements led to a performance increase from ~45% to ~52% test accuracy. The final model demonstrates improved generalization and training stability, showcasing the effectiveness of combining classical and advanced deep learning techniques. This work contributes toward real-time state recognition for autonomous robotic cooking systems, with implications for assistive technologies in domestic and elder care settings.
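A rough Keras sketch of the kind of block progression described above, combining inception-style parallel convolutions with a residual shortcut; the filter counts and wiring are assumptions, not the project's exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_residual_block(x, filters):
    # Inception-style parallel branches at multiple kernel sizes.
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    merged = layers.Concatenate()([b1, b3, b5])
    merged = layers.BatchNormalization()(merged)
    # Residual shortcut projected to the merged channel count.
    shortcut = layers.Conv2D(3 * filters, 1, padding="same")(x)
    return layers.ReLU()(layers.Add()([merged, shortcut]))

inputs = tf.keras.Input(shape=(128, 128, 3))
x = inception_residual_block(inputs, 32)
x = layers.MaxPooling2D()(x)
x = layers.Dropout(0.3)(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(11, activation="softmax")(x)   # 11 object states
model = tf.keras.Model(inputs, outputs)
model.summary()
```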
Tanvir Hossain
Gamified Learning of Computing Hardware Fundamentals Using FPGA-Based Platform
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Tamzidul Hoque, Chair
Esam El-Araby
Sumaiya Shomaji
Abstract
The growing dependence on electronic systems in consumer and mission critical domains requires engineers who understand the inner workings of digital hardware. Yet many students bypass hardware electives, viewing them as abstract, mathematics heavy, and less attractive than software courses. Escalating workforce shortages in the semiconductor industry and the recent global chip‑supply crisis highlight the urgent need for graduates who can bridge hardware knowledge gaps across engineering sectors. In this thesis, I have developed FPGA‑based games, embedded in inclusive curricular modules, which can make hardware concepts accessible while fostering interest, self‑efficacy, and positive outcome expectations in hardware engineering. A design‑based research methodology guided three implementation cycles: a pilot with seven diverse high‑school learners, a multiweek residential summer camp with high‑school students, and a fifteen‑week multidisciplinary elective enrolling early undergraduate engineering students. The learning experiences targeted binary arithmetic, combinational and sequential logic, state‑machine design, and hardware‑software co‑design. Learners also moved through the full digital‑design flow, HDL coding, functional simulation, synthesis, place‑and‑route, and on‑board verification. In addition, learners explored timing analysis, register‑transfer‑level abstractions, and simple processor datapaths to connect low‑level circuits with system‑level behavior. Mixed‑method evidence was gathered through pre‑ and post‑content quizzes, validated surveys of self‑efficacy and outcome expectations, focus groups, classroom observations, and gameplay analytics. Paired‑sample statistics showed reliable gains in hardware‑concept mastery, self‑efficacy, and outcome expectations. This work contributes a replicable framework for translating foundational hardware topics into modular, game‑based learning activities, empirical evidence of their effectiveness across secondary and early‑college contexts, and design principles for educators who seek to integrate equitable, hands‑on hardware experiences into existing curricula.
Hara Madhav Talasila
Radiometric Calibration of Radar Depth Sounder Data Products
When & Where:
Nichols Hall, Room 317 (Richard K. Moore Conference Room)
Committee Members:
Carl Leuschen, Chair
Patrick McCormick
James Stiles
Jilu Li
Leigh Stearns
Abstract
Although the Center for Remote Sensing of Ice Sheets (CReSIS) performs several radar calibration steps to produce Operation IceBridge (OIB) radar depth sounder data products, these datasets are not radiometrically calibrated and the swath array processing uses ideal (rather than measured [calibrated]) steering vectors. Any errors in the steering vectors, which describe the response of the radar as a function of arrival angle, will lead to errors in positioning and backscatter that subsequently affect estimates of basal conditions, ice thickness, and radar attenuation. Scientific applications that estimate physical characteristics of surface and subsurface targets from the backscatter are limited with the current data because it is not absolutely calibrated. Moreover, changes in instrument hardware and processing methods for OIB over the last decade affect the quality of inter-seasonal comparisons. Recent methods which interpret basal conditions and calculate radar attenuation using CReSIS OIB 2D radar depth sounder echograms are forced to use relative scattering power, rather than absolute methods.
As an active target calibration is not possible for past field seasons, a method that uses natural targets will be developed. Unsaturated natural target returns from smooth sea-ice leads or lakes are imaged in many datasets and have known scattering responses. The proposed method forms a system of linear equations with the recorded scattering signatures from these known targets, scattering signatures from crossing flight paths, and the radiometric correction terms. A least squares solution to optimize the radiometric correction terms is calculated, which minimizes the error function representing the mismatch in expected and measured scattering. The new correction terms will be used to correct the remaining mission data. The radar depth sounder data from all OIB campaigns can be reprocessed to produce absolutely calibrated echograms for the Arctic and Antarctic. A software simulator will be developed to study calibration errors and verify the calibration software. The software for processing natural targets and crossovers will be made available in CReSIS’s open-source polar radar software toolbox. The OIB data will be reprocessed with new calibration terms, providing the data user community, for the first time, with a complete set of radiometrically calibrated radar echograms for the CReSIS OIB radar depth sounder.
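A toy numpy sketch of the least-squares formulation described above, with synthetic known-target observations and per-channel radiometric correction terms; the real system also incorporates crossover observations, and all values here are synthetic rather than CReSIS data.

```python
import numpy as np

n_channels, n_obs = 4, 40
rng = np.random.default_rng(1)

# Each observation is a known natural target seen by one channel; each row
# of A selects the correction term for that channel (in dB).
true_corr = rng.normal(0, 2, n_channels)        # dB offsets to recover
A = np.zeros((n_obs, n_channels))
A[np.arange(n_obs), rng.integers(0, n_channels, n_obs)] = 1.0
expected = rng.uniform(-20, 0, n_obs)           # known target scattering (dB)
measured = expected + A @ true_corr + rng.normal(0, 0.1, n_obs)

# Least-squares fit minimizing the expected-vs-measured mismatch.
corr, *_ = np.linalg.lstsq(A, measured - expected, rcond=None)
print("recovered corrections (dB):", corr.round(2))
```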
Past Defense Notices
Brian McClannahan
Classification of Noncoding RNA Families using Deep Convolutional Neural Network
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Richard Wang
Abstract
In the last decade, the discovery of noncoding RNA (ncRNA) has exploded. Classifying these ncRNA is critical to determining their function. This thesis proposes a new method employing deep convolutional neural networks (CNNs) to classify ncRNA sequences. To this end, this thesis first proposes an efficient approach to convert the RNA sequences into images characterizing their base-pairing probability. As a result, classifying RNA sequences is converted to an image classification problem that can be efficiently solved by available CNN-based classification models. This thesis also considers the folding potential of the ncRNAs in addition to their primary sequence. Based on the proposed approach, a benchmark image classification dataset is generated from the RFAM database of ncRNA sequences. In addition, three classical CNN models and three Siamese network models have been implemented and compared to demonstrate the superior performance and efficiency of the proposed approach. Extensive experimental results show the great potential of using deep learning approaches for RNA classification.
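A minimal sketch of the sequence-to-image step: given a base-pairing probability matrix, e.g., from a partition-function folding tool such as ViennaRNA, render it as a fixed-size grayscale image suitable for a CNN classifier. The matrix here is a random stand-in, not a real folding output.

```python
import numpy as np
from PIL import Image

def bpp_to_image(P: np.ndarray, size: int = 224) -> Image.Image:
    # Scale probabilities to 8-bit grayscale and resize to the CNN input size.
    img = Image.fromarray((255 * P / max(P.max(), 1e-9)).astype(np.uint8))
    return img.resize((size, size))

# Stand-in for a real base-pairing probability matrix (symmetric, zero diag).
P = np.random.default_rng(0).random((60, 60))
P = np.triu(P, k=1)
P = P + P.T
bpp_to_image(P).save("ncrna_bpp.png")
```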
Waqar Ali
Deterministic Scheduling of Real-Time Tasks on Heterogeneous Multicore Platforms
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Heechul Yun, Chair
Esam Eldin Mohamed Aly
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri
Abstract
In recent years, the problem of real-time scheduling has become both more important and more complicated. The former is due to the proliferation of safety-critical systems into our day-to-day life, and the latter is caused by the escalating demand for high performance, which is driving multicore architectures toward consolidation of various kinds of heterogeneous computing resources into smaller and smaller SoCs. Motivated by these trends, this dissertation tackles the following fundamental question: how can we guarantee predictable real-time execution while preserving high utilization on heterogeneous multicore SoCs?
This dissertation presents new real-time scheduling techniques for predictable and efficient scheduling of mixed criticality workloads on heterogeneous SoCs. The contributions of this dissertation include the following: 1) a novel CPU-GPU scheduling framework, called BWLOCK++, that ensures predictable execution of critical GPU kernels on integrated CPU-GPU platforms; 2) a novel gang scheduling framework, called RT-Gang, that guarantees deterministic execution of parallel real-time tasks on the multicore CPU cluster of a heterogeneous SoC; 3) optimal and heuristic algorithms for gang formation that increase real-time schedulability under the RT-Gang framework, and their extension to incorporate scheduling on accelerators in a heterogeneous SoC; and 4) a case-study evaluation using an open-source autonomous driving application that demonstrates the analytical and practical benefits of the proposed scheduling techniques.
Josiah Gray
Implementing TPM Commands in the Copland Remote Attestation Language
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Perry Alexander, Chair
Andy Gill
Bo Luo
Abstract
So much of what we do on a daily basis is dependent on computers: email, social media, online gaming, banking, online shopping, virtual conference calls, and general web browsing to name a few. Most of the devices we depend on for these services are computers or servers that we do not own, nor do we have direct physical access to. We trust the underlying network to provide access to these devices remotely. But how do we know which computers/servers are safe to access, or verify that they are who they claim to be? How do we know that a distant server has not been hacked and compromised in some way?
Remote attestation is a method for establishing trust between remote systems. An "appraiser" can request information from a "target" system. The target responds with "evidence" consisting of run-time measurements, configuration information, and/or cryptographic information (i.e. hashes, keys, nonces, or other shared secrets). The appraiser can then evaluate the returned evidence to confirm the identity of the remote target, as well as determine some information about the operational state of the target, to decide whether or not the target is trustworthy.
A tool that may prove useful in remote attestation is the TPM, or "Trusted Platform Module". The TPM is a dedicated microcontroller that comes built-in to nearly all PC and laptop systems produced today. The TPM is used as a root of trust for storage and reporting, primarily through integrated cryptographic keys. This root of trust can then be used to assure the integrity of stored data or the state of the system itself. In this thesis, I will explore the various functions of the TPM and how they may be utilized in the development of the remote attestation language, "Copland".
Gordon Ariho
Multipass SAR Processing for Ice Sheet Vertical Velocity and Tomography Measurements and Application of Reduced Rank MMSE to Spectrally Efficient Radar Design
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Jim Stiles, Chair
John Paden (Co-Chair)
Shannon Blunt
Carl Leuschen
Emily Arnold
Abstract
First Topic: Ice sheets impact sea-level change, and hence their response to climatic variations needs to be continually monitored and studied. We propose to apply multipass differential interferometric synthetic aperture radar (DInSAR) techniques to data from the Multichannel Coherent Radar Depth Sounder (MCoRDS) to measure the vertical displacement of englacial layers within an ice sheet. DInSAR’s accuracy is usually on the order of a small fraction of the wavelength (e.g., millimeter-to-centimeter precision is common) in monitoring ground displacement along the radar line of sight (LOS). In the case of ice sheet internal layers, vertical displacement is estimated by compensating for the spatial baseline using precise trajectory information and estimates of the cross-track layer slope from direction-of-arrival analysis. Preliminary results from a high accumulation region near Camp Century in northwest Greenland and Summit Station in central Greenland are presented here. We propose to extend this work by implementing a maximum likelihood estimator that jointly estimates the vertical velocity, the cross-track internal layer slope, and the unknown baseline error due to GPS and INS errors. The multipass algorithm will be applied to additional flights from the decade-long NASA Operation IceBridge airborne mission that flew MCoRDS on many repeated flight tracks. We also propose to improve the accuracy of tomographic swaths produced from multipass measurements and investigate the possibility of using focusing matrices to improve wideband tomographic processing.
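For reference, the standard repeat-pass relation behind this sub-wavelength precision (a well-known result; sign conventions vary) maps baseline-compensated interferometric phase to line-of-sight displacement as

\Delta d_{\mathrm{LOS}} = \frac{\lambda}{4\pi}\,\Delta\phi ,

so measuring \Delta\phi to a small fraction of a cycle localizes displacement to a small fraction of the wavelength \lambda.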
Second Topic: With the increased demand for bandwidth-hungry applications in the telecommunications industry, radar applications can no longer enjoy the generous frequency allocations within the UHF band. Spectral efficiency, if achievable, frees portions of the radar bandwidth to facilitate spectrum sharing between radar and other wireless systems. A decrease in bandwidth leads to worse radar resolution. In certain scenarios, reduced resolution is acceptable, and bandwidth may be compromised for spectral efficiency. An iterative reduced rank MMSE algorithm based on marginal Fisher information is proposed and investigated to minimize the loss of resolution with the tradeoff of degraded side-lobe performance. The algorithm is applied to the radar measurement model with simulated range profiles, and performance results are discussed.
Kishanram Kaje
Complex Field Modulation in Direct Detection Systems
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Rongqing Hui, Chair
Christopher Allen
Victor Frost
Erik Perrins
Jie Han
Abstract
Even though fiber-optic communication provides a high-bandwidth channel for high-speed data transmission, there is still a need for higher spectral efficiency and faster data processing with reduced resource requirements, driven by ever-increasing data and media traffic. Various multilevel modulation and demodulation techniques are used to improve spectral efficiency. Although spectral efficiency is improved, doing so raises other challenges, such as the requirement for high-speed electronics, receiver sensitivity, chromatic dispersion, and operational flexibility. Here, we investigate complex high-speed field modulation techniques in direct detection systems to improve spectral efficiency while reducing the resources required for implementation and compensating for linear and nonlinear impairments in fiber-optic communication systems.
We first demonstrated a digital-analog hybrid subcarrier multiplexing (SCM) technique, which can reduce the requirement for high-speed electronics such as ADCs and DACs while providing wideband capability, high spectral efficiency, operational flexibility, and controllable data-rate granularity.
With conventional Quadrature Phase Shift Keying (QPSK), achieving maximum spectral efficiency requires highly spectrally efficient Nyquist filters, which consume substantial FPGA resources for digital signal processing (DSP). Hence, we investigated Quadrature Duobinary (QDB) modulation as a way to reduce the FPGA resources required for DSP while achieving a spectral efficiency of 2 bits/s/Hz. Currently, we are investigating an all-analog single sideband (SSB) complex field modulated direct detection system. Here, we aim to achieve higher spectral efficiency by using the QDB modulation scheme in comparison to QPSK while avoiding signal-signal beat interference (SSBI) through a guard-band based approach.
Another topic we investigated, both through simulation and experiments, is a way to compensate for nonlinearities generated by semiconductor optical amplifiers (SOAs) operated in gain saturation in field modulated direct detection systems. We successfully compensated for the SOA nonlinearities in the presence of fiber chromatic dispersion, which was post-compensated using electronic dispersion compensation after restoring the phase information of the received signal with a Kramers-Kronig receiver.
Theresa Moore
Array Manifold Calibration for Multichannel SAR Sounders
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
James Stiles, Chair
John Paden (Co-Chair)
Shannon Blunt
Carl Leuschen
Leigh Stearns
Abstract
Multichannel synthetic aperture radar (SAR) sounders with cross-track antenna arrays map ice sheet basal morphology in three dimensions with a single pass using tomography. The tomographic ice-sheet imaging method leverages parametric direction-finding techniques like the Maximum Likelihood Estimator and the Multiple Signal Classification algorithm to resolve scattering interfaces in elevation. These techniques have received considerable attention because of their potential to exceed the Rayleigh resolution limit of the receive array under certain conditions. This performance is predicated on having perfect knowledge of the frequency-dependent response of the array to directional sources, referred to as the array manifold. Even modest amounts of mismatch between the assumed and actual manifold model degrade the accuracy of parametric angle estimators and erode their sought-after superresolution potential.
Array manifold calibration refers to the step in the array processor of refining our representation of the directional array-response vectors by accounting for factors such as mutual coupling, geometric uncertainties, and channel-to-channel gain imbalances. Pilot calibration requires measuring the in-situ array over its field of view and storing the manifold in a look-up-table. Alternatively, the array transfer function may be modeled parametrically to leverage an estimation framework for characterizing mismatch. Parametric calibration theory for sensor position perturbations has been established for several decades. However, there remains a marked disconnect between the signal processing and antennas communities regarding how to include mutual coupling within the parametric framework. To date, literature lacks validated studies that address parameterization of the embedded element patterns for direction-finding arrays.
A manifold calibration methodology is proposed for an airborne, multichannel ice-penetrating SAR. The methodology departs from conventional approaches by extracting calibration targets from SAR imagery of well-understood terrain to empirically characterize the directional responses of the integrated array's embedded element patterns. This work presents a Maximum Likelihood Estimator for nonlinear parameters common across disjoint calibration sets that has the potential to improve the accuracy of our estimated geometric uncertainties by increasing the total Fisher information in our observations. The investigation contributes to specific gaps in array signal processing and remote sensing literature by treating the unique challenge of calibrating in-situ arrays used in direction-finding applications.
Dung Viet Nguyen
Particle Swarm Deep Reinforcement Learning for Base Station Optimization in Urban Areas
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Taejoon Kim, Chair
Morteza Hashemi
Heechul Yun
Abstract
Densifying the network by deploying many small cells has attracted significant interest from the wireless industry for its potential to facilitate the many data-intensive use cases proposed for fifth-generation (5G) networks. While such efforts are essential, there are gaps in fundamental research and practical deployment of small cells. It is clear that increased interference from adjacent cells, called intercell interference, is the major limiting factor. In order to address this issue, each base station's parameters should be properly controlled to mitigate the intercell interference. We call the task of designing the base station's parameters the base station optimization (BSO) problem in this work. Due to the large number of small cells and mobile users distributed over the network, solving BSO by precisely modeling the network conditions is almost infeasible. One popular approach that has attracted many researchers recently is a data-based framework called machine learning (ML). While supervised ML is prevalent, it requires pre-labeled offline data that are not available in many wireless scenarios. Unlike supervised ML, reinforcement learning (RL) can handle this situation because it is based on designing a good policy that finds the best exploration-and-exploitation tradeoff without a pre-labeled training dataset. Thus, in this work, we present a new approach to the BSO problem, based on the application of deep reinforcement learning (DRL), to enhance the quality of service (QoS) experienced by mobile users. To speed up the exploration of DRL, we employ particle swarm optimization (PSO), which shows improved QoS and convergence compared to conventional DRL.
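A minimal numpy sketch of the PSO update used to speed up exploration; it is applied here to a static toy objective, whereas the thesis couples PSO with DRL training rather than a fixed function, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 20, 4

def qos(params):                        # toy stand-in for a QoS objective
    return -np.sum((params - 1.5) ** 2, axis=-1)

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), qos(pos)          # per-particle bests
gbest = pbest[np.argmax(pbest_val)]              # swarm-wide best

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    # Inertia + cognitive (pbest) + social (gbest) velocity terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    val = qos(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]
print("best parameters:", gbest.round(3))        # converges near 1.5
```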
Dalton Hahn
Delving Into DevOps: Examining the Security Posture of State-of-Art Service Mesh Tools
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Alex Bardas, Chair
Drew Davidson
Fengjun Li
Abstract
The explosion in the use of containers and the shift in software engineering design from monolithic applications to a microservice model have driven a need for software solutions that can manage the deployment and networking of microservices at enterprise scale. Service meshes have emerged as a promising solution to the microservice eruption that enterprise software is currently experiencing. This work examines service meshes from the perspective of the security solutions offered within the tools and how the available security mechanisms impact the original goals of service meshes. As part of this study, we propose a threat model relevant to the service mesh domain and consider two different levels of configuration of these tools. The first configuration we study is the “idealized” configuration: one in which a system administrator has deep knowledge and the ability to properly configure and enable all available security mechanisms within a service mesh. The second configuration scenario is that of the default configuration deployment of service meshes. Through this work, we consider a range of adversarial approaches and scenarios that comprehensively cover the available attack surface of service meshes. Our experimental results show a distinct lack of security support in service meshes when deployed under default configurations; additionally, in many of the idealized scenarios studied, it is possible for attackers to achieve some of their adversarial goals, presenting tempting targets.
Calen Carabajal
Development of Compact UWB Transmit Receive Modules and Filters on Liquid Crystal Polymer for Radar
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Carl Leuschen, Chair
Fernando Rodriguez-Morales (Co-Chair)
Christopher Allen
Abstract
Microwave and mm-wave radars have been used extensively for remote sensing applications, and ultra-wideband (UWB) radars have provided particular utility in geophysical research due to their ability to resolve narrowly spaced targets or media interface levels. With the increased availability of unmanned aircraft systems (UAS) and the expanded application of microwave radars into other realms requiring portability, miniaturization of radar systems is a crucial goal. This thesis presents the design and development of various microwave components for a compact, airborne snow-probing radar with multi-gigahertz bandwidth and cm-scale vertical resolution.
First, a set of UWB, compact transmit and receive modules with custom power sequencing circuits is presented. These modules were rapid-prototyped as an initial step toward the miniaturization of the radar’s front-end, using a combination of custom and COTS circuits. The transmitter and receiver modules operate in the 2–18 GHz range. Laboratory and field tests are discussed, demonstrating performance that is comparable to previous, connectorized implementations of the system, while accomplishing a 5:1 size reduction.
Next, a set of miniaturized band-pass and low-pass filters is developed and demonstrated. This work addressed the lack of COTS circuits with adequate performance in a sufficiently small form factor that is compatible with the planar integration required in a multi-chip module.
The filters presented here were designed for manufacture on a multi-layer liquid crystal polymer (LCP) substrate. A detailed trade study to assess the effects of potential manufacturing tolerances is presented. A framework for the automated creation of panelized design variations was developed using CAD tools. Thirty-two design variations with two different types of launches (microstrip and grounded co-planar waveguide) were successfully simulated, fabricated and tested, showing good electrical performance both as individual filters and cascaded to offer outstanding out-of-band rejection. The size of the new filters is 1 cm x 1 cm x 150 µm, a vertical reduction of over 90% and a reduction of total cascaded length by over 80%.
Kunal Karnik
Augment drone GPS telemetry data onto its Optical Flow lines
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Andy Gill, Chair
Drew Davidson
Prasad Kulkarni
Abstract
Optical flow is the apparent displacement of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and the scene. This apparent displacement (parallax) is used to render optical flow lines for such objects, which hold valuable information about their motion. In this research, we apply this technique to study a video file. We will locate the pixels of objects with strong optical flow displacements, which will enable us to identify an aerial multirotor craft (drone) among candidate object pixels. Further, we will not only mark the drone's path using optical flow lines but also add value to the video file by augmenting the drone's 3D telemetry data onto its optical flow lines.
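A minimal OpenCV sketch of the flow-extraction step using Farneback dense optical flow; the video path, the strong-motion threshold, and the telemetry-overlay stage are placeholders and assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("drone.mp4")                 # placeholder video path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense flow field: per-pixel (dx, dy) displacement between frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    ys, xs = np.where(mag > np.percentile(mag, 99))  # strong-motion pixels
    for x, y in zip(xs[::50], ys[::50]):             # draw sparse flow lines
        dx, dy = flow[y, x]
        cv2.line(frame, (int(x), int(y)),
                 (int(x + dx), int(y + dy)), (0, 255, 0), 1)
    # A telemetry-overlay stage would annotate these lines with GPS data here.
    prev_gray = gray
cap.release()
```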