Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Mahmudul Hasan
Assertion-Based Security Assessment of Hardware IP Protection Methods
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Tamzidul Hoque, Chair
Esam El-Araby
Sumaiya Shomaji
Abstract
Combinational and sequential locking methods are promising solutions for protecting hardware intellectual property (IP) from piracy, reverse engineering, and malicious modifications by locking the functionality of the IP based on a secret key. To improve their security, researchers are developing attack methods to extract the secret key.
While attacks on combinational locking are mostly inapplicable to sequential designs without access to the scan chain, the few applicable attacks are generally evaluated against the basic random insertion of key gates. On the other hand, attacks on sequential locking techniques suffer from scalability issues and from evaluation on improperly locked designs. Finally, while most attacks provide an approximately correct key, they do not indicate which specific key bits are undetermined. This thesis proposes an oracle-guided attack that applies to both combinational and sequential locking without scan chain access. The attack applies lightweight design modifications that represent the oracle as a finite state machine and performs an assertion-based query of the unlocking key. We have analyzed the effectiveness of our attack against 46 sequential designs locked with various classes of combinational locking, including random, strong, logic cone-based, and anti-SAT-based locking. We further evaluated the attack against a sequential locking technique using 46 designs with various key sequence lengths and widths. Finally, we expand our framework to identify undetermined key bits, enabling complementary attacks on the smaller remaining key space.
Masoud Ghazikor
Distributed Optimization and Control Algorithms for UAV Networks in Unlicensed Spectrum Bands
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni
Abstract
UAVs have emerged as a transformative technology for various applications, including emergency services, delivery, and video streaming. Among these, video streaming services in areas with limited physical infrastructure, such as disaster-affected areas, play a crucial role in public safety. UAVs can be rapidly deployed in search and rescue operations to efficiently cover large areas and provide live video feeds, enabling quick decision-making and resource allocation strategies. However, ensuring reliable and robust UAV communication in such scenarios is challenging, particularly in unlicensed spectrum bands, where interference from other nodes is a significant concern. To address this issue, developing distributed transmission control and video streaming schemes is essential to maintaining a high quality of service, especially for UAV networks that rely on delay-sensitive data.
In this MSc thesis, we study the problem of distributed transmission control and video streaming optimization for UAVs operating in unlicensed spectrum bands. We develop a cross-layer framework that jointly considers three inter-dependent factors: (i) in-band interference introduced by ground-aerial nodes at the physical layer, (ii) limited-size queues with delay-constrained packet arrival at the MAC layer, and (iii) video encoding rate at the application layer. This framework is designed to optimize the average throughput and PSNR by adjusting fading thresholds and video encoding rates for an integrated aerial-ground network in unlicensed spectrum bands. Using a consensus-based distributed algorithm and coordinate descent optimization, we develop two algorithms: (i) Distributed Transmission Control (DTC), which dynamically adjusts fading thresholds to maximize the average throughput by mitigating trade-offs between low-SINR transmission errors and queue packet losses, and (ii) Joint Distributed Video Transmission and Encoder Control (JDVT-EC), which optimally balances packet loss probabilities and video distortions by jointly adjusting fading thresholds and video encoding rates. Through extensive numerical analysis, we demonstrate the efficacy of the proposed algorithms under various scenarios.
Ganesh Nurukurti
Customer Behavior Analytics and Recommendation System for E-Commerce
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Han Wang
Abstract
In the era of digital commerce, personalized recommendations are pivotal for enhancing user experience and boosting engagement. This project presents a comprehensive recommendation system integrated into an e-commerce web application, designed using Flask and powered by collaborative filtering via Singular Value Decomposition (SVD). The system intelligently predicts and personalizes product suggestions for users based on implicit feedback such as purchases, cart additions, and search behavior.
The foundation of the recommendation engine is built on user-item interaction data, derived from the Brazilian e-commerce Olist dataset. Ratings are simulated using weighted scores for purchases and cart additions, reflecting varying degrees of user intent. These interactions are transformed into a user-product matrix and decomposed using SVD, yielding latent user and product features. The model leverages these latent factors to predict user interest in unseen products, enabling precise and scalable recommendation generation.
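As a rough illustration of this factorization step (not the project's actual code), the sketch below builds a weighted user-product matrix from hypothetical purchase and cart events, decomposes it with truncated SVD, and ranks unseen products for a user; the event weights, latent dimension, and toy data are assumptions.

```python
# Illustrative sketch: SVD-based scoring of implicit feedback, with hypothetical
# event weights for purchases and cart additions.
import numpy as np
import pandas as pd
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Hypothetical interaction log: one row per (user, product, event).
events = pd.DataFrame({
    "user":    ["u1", "u1", "u2", "u2", "u3"],
    "product": ["p1", "p2", "p2", "p3", "p1"],
    "event":   ["purchase", "cart", "purchase", "cart", "purchase"],
})
weights = {"purchase": 5.0, "cart": 3.0}          # assumed weighting scheme
events["rating"] = events["event"].map(weights)

# Pivot to a user-product matrix and factorize with truncated SVD.
matrix = events.pivot_table(index="user", columns="product",
                            values="rating", fill_value=0.0)
R = csr_matrix(matrix.values, dtype=float)
k = min(R.shape) - 1                              # latent dimension (toy-sized here)
U, s, Vt = svds(R, k=k)
scores = U @ np.diag(s) @ Vt                      # predicted interest for all pairs

# Recommend the highest-scoring products a user has not interacted with yet.
def recommend(user, n=2):
    i = matrix.index.get_loc(user)
    seen = matrix.values[i] > 0
    order = np.argsort(-scores[i])
    return [matrix.columns[j] for j in order if not seen[j]][:n]

print(recommend("u3"))
```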
To further enhance personalization, the system incorporates real-time user activity. Recent search history is stored in an SQLite database and used to prioritize recommendations that align with the user’s current interests. A diversity constraint is also applied to avoid redundancy, limiting the number of recommended products per category.
The web application supports robust user authentication, product exploration by category, cart management, and checkout simulations. It features a visually driven interface with dynamic visualizations for product insights and user interactions. The home page adapts to individual preferences, showing tailored product recommendations and enabling users to explore categories and details.
In summary, this project demonstrates the practical implementation of a hybrid recommendation strategy combining matrix factorization with contextual user behavior. It showcases the importance of latent factor modeling, data preprocessing, and user-centric design in delivering an intelligent retail experience.
Srijanya Chetikaneni
Plant Disease Prediction Using Transfer Learning
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Han Wang
Abstract
Timely detection of plant diseases is critical to safeguarding crop yields and ensuring global food security. This project presents a deep learning-based image classification system to identify plant diseases using the publicly available PlantVillage dataset. The core objective was to evaluate and compare the performance of a custom-built Convolutional Neural Network (CNN) with two widely used transfer learning models—EfficientNetB0 and MobileNetV3Small.
All models were trained on augmented image data resized to 224×224 pixels, with preprocessing tailored to each architecture. The custom CNN used simple normalization, whereas EfficientNetB0 and MobileNetV3Small utilized their respective pre-processing methods to standardize the pretrained ImageNet domain inputs. To improve robustness, the training pipeline included data augmentation, class weighting, and early stopping.
Training was conducted using the Adam optimizer and categorical cross-entropy loss over 30 epochs, with performance assessed using accuracy, loss, and training time metrics. The results revealed that transfer learning models significantly outperformed the custom CNN. EfficientNetB0 achieved the highest accuracy, making it ideal for high-precision applications, while MobileNetV3Small offered a favorable balance between speed and accuracy, making it suitable for lightweight, real-time inference on edge devices.
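A minimal sketch of this kind of training setup is shown below, using Keras with an EfficientNetB0 base, Adam, categorical cross-entropy, and early stopping as described above; the dataset paths, class count, and frozen-base strategy are illustrative assumptions rather than the project's exact configuration.

```python
# Sketch of transfer learning with EfficientNetB0 (placeholders, not project code).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.applications.efficientnet import preprocess_input

NUM_CLASSES = 38   # PlantVillage commonly has 38 classes; adjust as needed

base = EfficientNetB0(include_top=False, weights="imagenet",
                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # train only the new classification head

inputs = layers.Input(shape=(224, 224, 3))
x = preprocess_input(inputs)                # model-specific preprocessing
x = base(x, training=False)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be tf.data pipelines of augmented 224x224 images, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "plantvillage/train", image_size=(224, 224), label_mode="categorical")
# model.fit(train_ds, validation_data=val_ds, epochs=30,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=3,
#                                                       restore_best_weights=True)])
```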
This study validates the effectiveness of transfer learning for plant disease detection tasks and emphasizes the importance of model-specific preprocessing and training strategies. It provides a foundation for deploying intelligent plant health monitoring systems in practical agricultural environments.
Ahmet Soyyigit
Anytime Computing Techniques for LiDAR-based Perception In Cyber-Physical Systems
When & Where:
Nichols Hall, Room 250 (Gemini Room)
Committee Members:
Heechul Yun, Chair
Michael Branicky
Prasad Kulkarni
Hongyang Sun
Shawn Keshmiri
Abstract
The pursuit of autonomy in cyber-physical systems (CPS) presents a challenging task of real-time interaction with the physical world, prompting extensive research in this domain. Recent advances in artificial intelligence (AI), particularly the introduction of deep neural networks (DNN), have significantly improved the autonomy of CPS, notably by boosting perception capabilities.
CPS perception aims to discern, classify, and track objects of interest in the operational environment, a task that is considerably challenging for computers in a three-dimensional (3D) space. For this task, the use of LiDAR sensors and processing their readings with DNNs has become popular because of their excellent performance. However, in CPS such as self-driving cars and drones, object detection must be not only accurate but also timely, posing a challenge due to the high computational demand of LiDAR object detection DNNs. Satisfying this demand is particularly challenging for on-board computational platforms due to size, weight, and power constraints. Therefore, a trade-off between accuracy and latency must be made to ensure that both requirements are satisfied. Importantly, the required trade-off depends on the operational environment and should be shifted toward accuracy or latency dynamically at runtime. However, LiDAR object detection DNNs cannot dynamically reduce their execution time by compromising accuracy (i.e., anytime computing). Prior research aimed at anytime computing for object detection DNNs using camera images is not applicable to LiDAR-based detection due to architectural differences. This thesis addresses these challenges by proposing three novel techniques: Anytime-LiDAR, which enables early termination with reasonable accuracy; VALO (Versatile Anytime LiDAR Object Detection), which implements deadline-aware input data scheduling; and MURAL (Multi-Resolution Anytime Framework for LiDAR Object Detection), which introduces dynamic resolution scaling. Together, these innovations enable LiDAR-based object detection DNNs to make effective trade-offs between latency and accuracy under varying operational conditions, advancing the practical deployment of LiDAR object detection DNNs.
Rahul Purswani
Finetuning Llama on custom data for QA tasks
When & Where:
Eaton Hall, Room 2001B
Committee Members:
David Johnson, Chair
Drew Davidson
Prasad Kulkarni
Abstract
Fine-tuning large language models (LLMs) for domain-specific use cases, such as question answering, offers valuable insights into how their performance can be tailored to specialized information needs. In this project, we focused on the University of Kansas (KU) as our target domain. We began by scraping structured and unstructured content from official KU webpages, covering a wide array of student-facing topics including campus resources, academic policies, and support services. From this content, we generated a diverse set of question-answer pairs to form a high-quality training dataset. LLaMA 3.2 was then fine-tuned on this dataset to improve its ability to answer KU-specific queries with greater relevance and accuracy. Our evaluation revealed mixed results—while the fine-tuned model outperformed the base model on most domain-specific questions, the original model still had an edge in handling ambiguous or out-of-scope prompts. These findings highlight the strengths and limitations of domain-specific fine-tuning, and provide practical takeaways for customizing LLMs for real-world QA applications.
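Purely as a hypothetical illustration of the data-preparation step, the sketch below shows one way scraped content could be packaged into instruction-style QA records for supervised fine-tuning; the field names, prompt template, and example pair are invented for illustration and are not from the project.

```python
# Hypothetical sketch: turning generated QA pairs into JSONL training records.
import json

qa_pairs = [
    {"question": "How do I contact the KU registrar?",
     "answer": "Contact information is listed on the Office of the Registrar page.",
     "source": "https://registrar.ku.edu/"},
    # ... many more pairs generated from scraped KU webpages ...
]

def to_training_record(pair):
    """Format one QA pair as a single supervised example."""
    return {
        "messages": [
            {"role": "system",
             "content": "You answer questions about the University of Kansas."},
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]
    }

with open("ku_qa_train.jsonl", "w") as f:
    for pair in qa_pairs:
        f.write(json.dumps(to_training_record(pair)) + "\n")
```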
Rithvij Pasupuleti
A Machine Learning Framework for Identifying Bioinformatics Tools and Database Names in Scientific Literature
When & Where:
LEEP2, Room 2133
Committee Members:
Cuncong Zhong, Chair
Dongjie Wang
Han Wang
Zijun Yao
Abstract
The absence of a single, comprehensive database or repository cataloging all bioinformatics databases and software creates a significant barrier for researchers aiming to construct computational workflows. These workflows, which often integrate 10–15 specialized tools for tasks such as sequence alignment, variant calling, functional annotation, and data visualization, require researchers to explore diverse scientific literature to identify relevant resources. This process demands substantial expertise to evaluate the suitability of each tool for specific biological analyses, alongside considerable time to understand their applicability, compatibility, and implementation within a cohesive pipeline. The lack of a central, updated source leads to inefficiencies and the risk of using outdated tools, which can affect research quality and reproducibility. Consequently, there is a critical need for an automated, accurate tool to identify bioinformatics databases and software mentions directly from scientific texts, streamlining workflow development and enhancing research productivity.
The bioNerDS system, a prior effort to address this challenge, uses a rule-based named entity recognition (NER) approach, achieving an F1 score of 63% on an evaluation set of 25 articles from BMC Bioinformatics and PLoS Computational Biology. By integrating the same set of features such as context patterns, word characteristics and dictionary matches into a machine learning model, we developed an approach using an XGBoost classifier. This model, carefully tuned to address the extreme class imbalance inherent in NER tasks through synthetic oversampling and refined via systematic hyperparameter optimization to balance precision and recall, excels at capturing complex linguistic patterns and non-linear relationships, ensuring robust generalization. It achieves an F1 score of 82% on the same evaluation set, significantly surpassing the baseline. By combining rule-based precision with machine learning adaptability, this approach enhances accuracy, reduces ambiguities, and provides a robust tool for large-scale bioinformatics resource identification, facilitating efficient workflow construction. Furthermore, this methodology holds potential for extension to other technological domains, enabling similar resource identification in fields like data science, artificial intelligence, or computational engineering.
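The sketch below illustrates the general shape of such a pipeline, assuming SMOTE as the synthetic oversampling method and using placeholder features and hyperparameters; it is not the system's actual code.

```python
# Rough sketch: token-level feature vectors fed to an XGBoost classifier, with
# synthetic oversampling (SMOTE assumed) to counter extreme class imbalance.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 40))            # stand-in for per-token feature vectors
y = (rng.random(5000) < 0.03).astype(int)  # ~3% positives: tool/database mentions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    subsample=0.8, colsample_bytree=0.8, eval_metric="logloss")
clf.fit(X_bal, y_bal)

print("F1 on held-out tokens:", f1_score(y_te, clf.predict(X_te)))
```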
Vishnu Chowdary Madhavarapu
Automated Weather Classification Using Transfer Learning
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Dongjie Wang
Abstract
This project presents an automated weather classification system utilizing transfer learning with pre-trained convolutional neural networks (CNNs) such as VGG19, InceptionV3, and ResNet50. Designed to classify weather conditions—sunny, cloudy, rainy, and sunrise—from images, the system addresses the challenge of limited labeled data by applying data augmentation techniques like zoom, shear, and flip to expand the dataset. By fine-tuning the final layers of pre-trained models, the solution achieves high accuracy while significantly reducing training time. VGG19 was selected as the baseline model for its simplicity, strong feature extraction capabilities, and widespread applicability in transfer learning scenarios. The system was trained using the Adam optimizer and evaluated on key performance metrics including accuracy, precision, recall, and F1 score. To enhance user accessibility, a Flask-based web interface was developed, allowing real-time image uploads and instant weather classification. The results demonstrate that transfer learning, combined with robust data preprocessing and fine-tuning, can produce a lightweight and accurate weather classification tool. This project contributes toward scalable, real-time weather recognition systems that can integrate into IoT applications, smart agriculture, and environmental monitoring.
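As a hedged sketch of how such a Flask interface might serve predictions (the saved-model path, class-label order, and VGG19 preprocessing are assumptions, not the project's configuration):

```python
# Sketch of a Flask endpoint for real-time classification of an uploaded image.
import numpy as np
from flask import Flask, request, jsonify
from PIL import Image
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("weather_vgg19.h5")             # hypothetical saved model
CLASSES = ["cloudy", "rainy", "sunny", "sunrise"]  # assumed label order

@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(request.files["image"].stream).convert("RGB").resize((224, 224))
    x = preprocess_input(np.expand_dims(np.asarray(img, dtype="float32"), axis=0))
    probs = model.predict(x)[0]
    return jsonify({"label": CLASSES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(debug=True)
```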
RokunuzJahan Rudro
Using Machine Learning to Classify Driver Behavior from Psychological Features: An Exploratory Study
When & Where:
Eaton Hall, Room 1A
Committee Members:
Sumaiya Shomaji, Chair
David Johnson
Zijun Yao
Alexandra Kondyli
Abstract
Driver inattention and human error are the primary causes of traffic crashes. However, little is known about the relationship between driver aggressiveness and safety. Although several studies that group drivers into different classes based on their driving performance have been conducted, little has been done to explore how behavioral traits are linked to driver behavior. The study aims to link different driver profiles, assessed through psychological evaluations, with their likelihood of engaging in risky driving behaviors, as measured in a driving simulation experiment. By incorporating psychological factors into machine learning algorithms, our models were able to successfully relate self-reported decision-making and personality characteristics with actual driving actions. Our results hold promise toward refining existing models of driver behavior by understanding the psychological and behavioral characteristics that influence the risk of crashes.
Md Mashfiq Rizvee
Energy Optimization in Multitask Neural Networks through Layer Sharing
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
Han Wang
Abstract
Artificial Intelligence (AI) is being widely used in diverse domains such as industrial automation, traffic control, precision agriculture, and smart cities for major heavy lifting in terms of data analysis and decision making. However, the AI life cycle is a major source of greenhouse gas (GHG) emissions, leading to devastating environmental impact. This is due to expensive neural architecture searches, the training of countless models per day across the world, in-field AI processing of data in billions of edge devices, and advanced security measures across the AI life cycle. Modern applications often involve multitasking, which requires performing a variety of analyses on the same dataset. These tasks are usually executed on resource-limited edge devices, necessitating AI models that exhibit efficiency across various measures such as power consumption, frame rate, and model size. To address these challenges, we introduce a neural network architecture that incorporates a layer-sharing principle to optimize power usage. We propose Layer Shared Neural Networks, a novel architecture that merges multiple similar AI/NN tasks together (with shared layers) to create a single AI/NN model with reduced energy requirements and carbon footprint. The experimental findings reveal competitive accuracy and reduced power consumption. The layer shared model significantly reduces power consumption by 50% during training and 59.10% during inference, causing as much as an 84.64% and 87.10% decrease in CO2 emissions, respectively.
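A small PyTorch sketch of the layer-sharing idea is given below: several tasks share one trunk and keep only lightweight task-specific heads, so the bulk of the computation is performed once per input. Layer sizes and the number of tasks are made-up examples, not the proposed architecture's actual configuration.

```python
# Illustrative layer-shared multitask network: one shared trunk, several heads.
import torch
import torch.nn as nn

class LayerSharedNet(nn.Module):
    def __init__(self, num_classes_per_task=(10, 5, 2)):
        super().__init__()
        # Shared layers, executed once for all tasks.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
        )
        # One lightweight head per task.
        self.heads = nn.ModuleList(
            [nn.Linear(128, n) for n in num_classes_per_task]
        )

    def forward(self, x):
        features = self.shared(x)              # computed once
        return [head(features) for head in self.heads]

model = LayerSharedNet()
logits = model(torch.randn(4, 3, 32, 32))      # batch of 32x32 RGB inputs
print([t.shape for t in logits])               # one output tensor per task
```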
Fairuz Shadmani Shishir
Parameter-Efficient Computational Drug Discovery using Deep Learning
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun
Abstract
The accurate prediction of small molecule binding affinity and toxicity remains a central challenge in drug discovery, with significant implications for reducing development costs, improving candidate prioritization, and enhancing safety profiles. Traditional computational approaches, such as molecular docking and quantitative structure-activity relationship (QSAR) models, often rely on handcrafted features and require extensive domain knowledge, which can limit scalability and generalization to novel chemical scaffolds. Recent advances in language models (LMs), particularly those adapted to chemical representations such as SMILES (Simplified Molecular Input Line Entry System), have opened new ways for learning data-driven molecular representations that capture complex structural and functional properties. However, achieving both high binding affinity and low toxicity through a resource-efficient computational pipeline is inherently difficult due to the multi-objective nature of the task. This study presents a novel dual-paradigm approach to critical challenges in drug discovery: predicting small molecules with high binding affinity and low cardiotoxicity profiles. For binding affinity prediction, we implement a specialized graph neural network (GNN) architecture that operates directly on molecular structures represented as graphs, where atoms serve as nodes and bonds as edges. This topology-aware approach enables the model to capture complex spatial arrangements and electronic interactions critical for protein-ligand binding. For toxicity prediction, we leverage chemical language models (CLMs) fine-tuned with Low-Rank Adaptation (LoRA), allowing efficient adaptation of large pre-trained models to specialized toxicological endpoints while maintaining the generalized chemical knowledge embedded in the base model. Our hybrid methodology demonstrates significant improvements over existing computational approaches, with the GNN component achieving an average area under the ROC curve (AUROC) of 0.92 on three protein targets and the LoRA-adapted CLM reaching an AUROC of 0.90 with a 60% reduction in parameter usage in predicting cardiotoxicity. This work establishes a powerful computational framework that accelerates drug discovery by enabling the identification of compounds with both high binding affinity and low toxicity, with optimized efficacy and safety profiles.
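The sketch below illustrates the LoRA-based parameter-efficient fine-tuning pattern for a SMILES language model using the Hugging Face peft library; the base checkpoint, target modules, rank, and label set are assumptions for illustration, not the study's settings.

```python
# Hedged sketch of LoRA fine-tuning for SMILES toxicity classification.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "seyonec/ChemBERTa-zinc-base-v1"    # example SMILES language model
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForSequenceClassification.from_pretrained(base_name, num_labels=2)

lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.1,
    target_modules=["query", "value"],          # attention projections to adapt
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()              # only the LoRA adapters train

batch = tokenizer(["CCO", "c1ccccc1O"], padding=True, return_tensors="pt")
outputs = model(**batch)                        # logits for (non-toxic, toxic)
print(outputs.logits.shape)
```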
Soma Pal
Truths about compiler optimization for state-of-the-art (SOTA) C/C++ compilers
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Prasad Kulkarni, Chair
Esam El-Araby
Drew Davidson
Tamzidul Hoque
Jiang Yunfeng
Abstract
Compiler optimizations are critical for performance and have been extensively studied, especially for C/C++ language compilers. Our overall goal in this thesis is to investigate and compare the properties and behavior of optimization passes across multiple contemporary, state-of-the-art (SOTA) C/C++ compilers to understand whether they adopt similar optimization implementation and orchestration strategies. Given the maturity of pre-existing knowledge in the field, it seems conceivable that different compiler teams will adopt consistent optimization passes, pipelines, and application techniques. However, our preliminary results indicate that such an expectation may be misguided. If so, we will attempt to understand the differences, and study and quantify their impact on the performance of generated code.
In our first work, we study and compare the behavior of profile-guided optimizations (PGO) in two popular SOTA C/C++ compilers, GCC and Clang. This study reveals many interesting, and several counter-intuitive, properties about PGOs in C/C++ compilers. The behavior and benefits of PGOs also vary significantly across our selected compilers. We present our observations, along with plans to further explore these inconsistencies in this report. Likewise, we have also measured noticeable differences in the performance delivered by optimizations across our compilers. We propose to explore and understand these differences in this work. We present further details regarding our proposed directions and planned experiments in this report. We hope that this work will show and suggest opportunities for compilers to learn from each other and motivate researchers to find mechanisms to combine the benefits of multiple compilers to deliver higher overall program performance.
Nyamtulla Shaik
AI Vision to Care: A QuadView of Deep Learning for Detecting Harmful Stimming in Autism
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Bo Luo
Dongjie Wang
Abstract
Stimming refers to repetitive actions or behaviors used to regulate sensory input or express feelings. Children with developmental disorders like autism spectrum disorder (ASD) frequently perform stimming, including arm flapping, head banging, finger flicking, and spinning. Stimming is exhibited by 80-90% of children with autism, a condition seen in about 1 in 36 children in the US. Head banging is one of these self-stimulatory habits that can be harmful. If these behaviors are automatically identified and flagged through live video monitoring, parents and other caregivers can better watch over and assist children with ASD.
Classifying these actions is important for recognizing harmful stimming, so this study focuses on developing a deep learning-based approach for stimming action recognition. We implemented and evaluated four models leveraging three deep learning architectures based on Convolutional Neural Networks (CNNs), Autoencoders, and Vision Transformers. For the first time in this area, we use skeletal joints extracted from video sequences. Previous works relied solely on raw RGB videos, which are vulnerable to lighting and environmental changes. This research explores deep learning-based skeletal action recognition and data processing techniques for a small unstructured dataset consisting of 89 home-recorded videos collected from publicly available sources like YouTube. Our robust data cleaning and pre-processing techniques enabled the integration of skeletal data in stimming action recognition, which performed better than the state of the art with a classification accuracy of up to 87%.
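One possible way to obtain such per-frame skeletal joints (an assumption for illustration; the thesis does not state which pose estimator was used) is MediaPipe Pose, as sketched below; the resulting joint sequences would then feed the action-recognition models.

```python
# Illustrative skeletal-joint extraction from a video using MediaPipe Pose.
import cv2
import numpy as np
import mediapipe as mp

def extract_skeleton(video_path, max_frames=300):
    pose = mp.solutions.pose.Pose(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    frames = []
    while cap.isOpened() and len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:                       # 33 landmarks per frame
            joints = [(lm.x, lm.y, lm.z) for lm in result.pose_landmarks.landmark]
            frames.append(joints)
    cap.release()
    pose.close()
    return np.array(frames)        # shape: (num_frames, 33, 3)

skeleton = extract_skeleton("stimming_clip.mp4")        # hypothetical file
print(skeleton.shape)
```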
In addition to using traditional deep learning models like CNNs for action recognition, this study is among the first to apply data-hungry models like Vision Transformers (ViTs) and Autoencoders for stimming action recognition on the dataset. The results prove that using skeletal data reduces the processing time and significantly improves action recognition, promising a real-time approach for video monitoring applications. This research advances the development of automated systems that can assist caregivers in more efficiently tracking stimming activities.
Alexander Rodolfo Lara
Creating a Faradaic Efficiency Graph Dataset Using Machine Learning
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Zijun Yao, Chair
Sumaiya Shomaji
Kevin Leonard
Abstract
Just as the internet-of-things leverages machine learning over a vast amount of data produced by an innumerable number of sensors, the Internet of Catalysis program uses similar strategies with catalysis research. One application of the Internet of Catalysis strategy is treating research papers as datapoints, rich with text, figures, and tables. Prior research within the program focused on machine learning models applied strictly over text.
This project is the first step of the program in creating a machine learning model from the images of catalysis research papers. Specifically, this project creates a dataset of faradaic efficiency graphs using transfer learning from pretrained models. The project utilizes FasterRCNN_ResNet50_FPN, LayoutLMv3SequenceClassification, and computer vision techniques to recognize figures, extract all graphs, then classify the faradaic efficiency graphs.
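A hedged sketch of this detection stage is shown below: a COCO-pretrained FasterRCNN_ResNet50_FPN from torchvision has its box predictor replaced so it can be fine-tuned to localize figures, whose crops could then be classified downstream (e.g., with a LayoutLMv3-based classifier). The class count and inference input are placeholders.

```python
# Sketch: adapting torchvision's FasterRCNN_ResNet50_FPN to detect figures.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on COCO, then swap the box predictor so it
# outputs two classes: background and "figure".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# Fine-tuning would use page images with figure bounding boxes; inference then
# returns boxes that can be cropped and passed to a downstream classifier to
# pick out the faradaic efficiency graphs.
model.eval()
page = torch.rand(3, 800, 600)                 # stand-in for a rendered page image
with torch.no_grad():
    detections = model([page])[0]
print(detections["boxes"].shape, detections["labels"].shape)
```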
Downstream of this project, researchers will create a graph reading model to integrate with large language models. This could potentially lead to a multimodal model capable of fully learning from images, tables, and texts of catalysis research papers. Such a model could then guide experimentation on reaction conditions, catalysts, and production.
Amin Shojaei
Scalable and Cooperative Multi-Agent Reinforcement Learning for Networked Cyber-Physical Systems: Applications in Smart Grids
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Alex Bardas
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri
Abstract
Significant advances in information and networking technologies have transformed Cyber-Physical Systems (CPS) into networked cyber-physical systems (NCPS). A noteworthy example of such systems is smart grid networks, which include distributed energy resources (DERs), renewable generation, and the widespread adoption of Electric Vehicles (EVs). Such complex NCPS require intelligent and autonomous control solutions. For example, the increasing number of EVs introduces significant sources of demand and user behavior uncertainty that can jeopardize grid stability during peak hours. Traditional model-based demand-supply controls fail to accurately model and capture the complex nature of smart grid systems in the presence of different uncertainties and as the system size grows. To address these challenges, data-driven approaches have emerged as an effective solution for informed decision-making, predictive modeling, and adaptive control to enhance the resiliency of NCPS in uncertain environments.
As a powerful data-driven approach, Multi-Agent Reinforcement Learning (MARL) enables agents to learn and adapt in dynamic and uncertain environments. However, MARL techniques introduce complexities related to communication, coordination, and synchronization among agents. In this PhD research, we investigate autonomous control for smart grid decision networks using MARL. First, we examine the issue of imperfect state information, which frequently arises due to the inherent uncertainties and limitations in observing the system state.
Second, we focus on the cooperative behavior of agents in distributed MARL frameworks, particularly under the central training with decentralized execution (CTDE) paradigm. We provide theoretical results and variance analysis for stochastic and deterministic cooperative MARL algorithms, including Multi-Agent Deep Deterministic Policy Gradient (MADDPG), Multi-Agent Proximal Policy Optimization (MAPPO), and Dueling MAPPO. These analyses highlight how coordinated learning can improve system-wide decision-making in uncertain and dynamic environments like EV networks.
Third, we address the scalability challenge in large-scale NCPS by introducing a hierarchical MARL framework based on a cluster-based architecture. This framework organizes agents into coordinated subgroups, improving scalability while preserving local coordination. We conduct a detailed variance analysis of this approach to demonstrate its effectiveness in reducing communication overhead and learning complexity. This analysis establishes a theoretical foundation for scalable and efficient control in large-scale smart grid applications.
Asrith Gudivada
Custom CNN for Object State Classification in Robotic Cooking
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
David Johnson, Chair
Prasad Kulkarni
Dongjie Wang
Abstract
This project presents the development of a custom Convolutional Neural Network (CNN) designed to classify object states—such as sliced, diced, or peeled—in cooking environments. Recognizing fine-grained object states is essential for context-aware manipulation but remains challenging due to visual similarity between states and a limited dataset. To address these challenges, I built a lightweight CNN from scratch, deliberately avoiding pretrained models to maintain domain specificity and efficiency. The model was enhanced through data augmentation and optimized dropout layers, with additional experiments incorporating batch normalization, Inception modules, and residual connections. While these advanced techniques offered incremental improvements during experimentation, the final model—a combination of data augmentation, dropout, and batch normalization—achieved ~60% validation accuracy and demonstrated stable generalization. This work highlights the trade-offs between model complexity and performance in constrained environments and contributes toward real-time state recognition with potential applications in assistive technologies.
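A small Keras sketch with the ingredients mentioned above (convolutional blocks, batch normalization, and dropout) is given below; the filter counts, input size, and number of state classes are illustrative placeholders rather than the project's exact architecture.

```python
# Illustrative lightweight CNN for object-state classification.
from tensorflow.keras import layers, models

NUM_STATES = 7   # e.g., sliced, diced, peeled, grated, julienne, whole, other

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_STATES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```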
Past Defense Notices
PEGAH NOKHIZ
Understanding User Behavior in Social Networks Using Quantified Moral Foundations
When & Where:
246 Nichols Hall
Committee Members:
Fengjun Li, Chair
Bo Luo
Cuncong Zhong
Abstract
Moral inclinations expressed in user-generated content such as online reviews or tweets can provide useful insights to understand users’ behavior and activities in social networks, for example, to predict users’ rating behavior, perform customer feedback mining, and study users' tendency to spread abusive content on these social platforms. In this work, we want to answer two important research questions. First, whether the moral attributes of social network data can provide additional useful information about users' behavior, and how to utilize this information to enhance our understanding. To answer this question, we used Moral Foundations Theory and Doc2Vec, a Natural Language Processing technique, to compute the quantified moral loadings of user-generated textual contents in social networks. We used conditional relative frequency and the correlations between the moral foundations as two measures to study the moral breakdown of the social network data, utilizing a dataset of Yelp reviews and a dataset of tweets on abusive user-generated content. Our findings indicated that these moral features are tightly bound with users' behavior in social networks. The second question we want to answer is whether we can use the quantified moral loadings as new boosting features to improve the differentiation, classification, and prediction of social network activities. To test our hypothesis, we adopted our new moral features in a multi-class classification approach to distinguish hateful and offensive tweets in a labeled dataset, and compared with a baseline approach that only uses conventional text mining features such as tf-idf features, Part of Speech (PoS) tags, etc. Our findings demonstrated that the moral features improved the performance of the baseline approach in terms of precision, recall, and F-measure.
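A minimal gensim sketch of the Doc2Vec step is given below: user texts and per-foundation seed texts are embedded in the same space, and a document's loading on each foundation is approximated by cosine similarity. The seed words, toy corpus, and similarity-based scoring are simplified stand-ins for the method described above.

```python
# Illustrative Doc2Vec embedding of reviews and moral-foundation seed texts.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

foundations = {
    "care":      "kindness compassion nurture empathy protect",
    "fairness":  "fairness equality justice rights honesty",
    "loyalty":   "loyal solidarity patriot family betrayal",
    "authority": "obey duty respect tradition order",
    "purity":    "purity sacred wholesome clean disgust",
}
reviews = ["The staff treated us with such kindness and care.",
           "Totally unfair pricing, they cheated every customer."]

docs = [TaggedDocument(simple_preprocess(t), [f"rev{i}"]) for i, t in enumerate(reviews)]
docs += [TaggedDocument(simple_preprocess(t), [name]) for name, t in foundations.items()]

model = Doc2Vec(vector_size=50, min_count=1, epochs=100)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for i, text in enumerate(reviews):
    vec = model.dv[f"rev{i}"]
    loadings = {name: cosine(vec, model.dv[name]) for name in foundations}
    print(text[:40], loadings)
```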
MUSTAFA AL-QADI
Laser Phase Noise and Performance of High-Speed Optical Communication Systems
When & Where:
2001B Eaton Hall
Committee Members:
Ron Hui, Chair
Chris Allen
Victor Frost
Erik Perrins
Jie Han*
Abstract
The never-ending growth of data traffic resulting from the continuing emergence of high-data-rate-demanding applications sets huge capacity requirements on optical interconnects and transport networks. This requires optical communication schemes in these networks to make the best possible use of the available optical spectrum per optical channel to enable transmission of multiple tens of terabits per second per fiber core in high-capacity transport networks. Therefore, advanced modulation formats are required to be used in conjunction with energy-efficient and robust transceiver schemes. Important challenges facing these goals are the stringent requirements on the characteristics of the optical components comprising these systems, especially the laser sources. Laser phase noise is one of the most important performance-limiting factors in systems with high spectral efficiency. In this research work, we study the effects of different laser phase noise characteristics on the performance of different optical communication schemes. A novel, simple, and accurate phase noise characterization technique is proposed. Experimental results show that the proposed technique is very accurate in estimating the performance of lasers in coherent systems employing digital phase recovery techniques. A novel multi-heterodyne scheme for characterizing the phase noise of laser frequency comb sources is also proposed and validated by experimental results. This proposed scheme is the first of its type capable of measuring the differential phase noise between multiple spectral lines instantaneously in a single measurement. Moreover, extended relations between system performance and detailed characteristics of laser phase noise are also analyzed and modeled. The results of this study show that the commonly used metric to estimate the performance of lasers with a specific phase recovery scheme, the linewidth-symbol-period product, is not necessarily accurate for all types of lasers, and a description of the FM-noise power spectral profile is required for accurate performance estimation. We also propose an energy- and cost-efficient transmission scheme suitable for metro and long-reach data-center-interconnect links based on direct detection of field-modulated optical signals with advanced modulation formats, allowing for higher spectral efficiency. The proposed system combines the Kramers-Kronig coherent receiver technique with the use of quantum-dot multi-mode laser sources to generate and transmit multi-channel optical signals using a single diode laser source. Experimental results of the proposed system show that high-order modulation formats can be employed, with high robustness against laser phase noise and frequency drifting.
MARK GREBE
Domain Specific Languages for Small Embedded Systems
When & Where:
250 Nichols Hall
Committee Members:
Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Suzanne Shontz
Kyle Camarda
Abstract
Resource-limited embedded systems provide a great challenge to programming using functional languages. Although these embedded systems cannot be programmed directly with Haskell, I show that an embedded domain-specific language can be used to program them, providing a user-friendly environment for both prototyping and full development. The Arduino line of microcontroller boards provides a versatile, low-cost, and popular platform for development of these resource-limited systems, and I use these boards as the platform for my DSL research.
First, I provide a shallowly embedded domain specific language, and a firmware interpreter, allowing the user to program the Arduino while tethered to a host computer. Shallow EDSLs allow a programmer to program using many of the features of a host language and its syntax, but sacrifice performance. Next, I add a deeply embedded version, allowing the interpreter to run standalone from the host computer, as well as allowing the code to be compiled to C and then machine code for efficient operation. Deep EDSLs provide better performance and flexibility, through the ability to manipulate the abstract syntax tree of the DSL program, but sacrifice syntactical similarity to the host language. Using Haskino, my EDSL designed for Arduino microcontrollers, and a compiler plugin for the Haskell GHC compiler, I show a method for combining the best aspects of shallow and deep EDSLs. The programmer is able to write in the shallow EDSL, and have it automatically transformed into the deep EDSL. This allows the EDSL user to benefit from powerful aspects of the host language, Haskell, while meeting the demanding resource constraints of the small embedded processing environment.
ALI ABUSHAIBA
Extremum Seeking Maximum Power Point Tracking for a Stand-Alone and Grid-Connected Photovoltaic Systems
When & Where:
Room 1 Eaton Hall
Committee Members:
Reza Ahmadi, Chair
Ken Demarest
Glenn Prescott
Alessandro Salandrino
Prajna Dhar*
Abstract
The drive to harvest energy from solar sources more efficiently has sparked interest in many communities in developing new energy harvesting applications for renewable energy. Advanced technical methods are required to ensure the maximum available power is harnessed from the photovoltaic (PV) system. This dissertation proposes a new discrete-in-time extremum-seeking (ES) based technique for tracking the maximum power point of a photovoltaic array. The proposed method is a true maximum power point tracker that can be implemented with reasonable processing effort on an inexpensive digital controller. The dissertation presents a stability analysis of the proposed method to guarantee the convergence of the algorithm.
Two types of PV systems were designed, and a comprehensive control-design framework was considered for a stand-alone system and a three-phase grid-connected system.
Grid-tied systems commonly have a two-stage power electronics interface, necessitated by the inherent limitation of the DC-AC (inverter) power conversion stage. However, a one-stage converter topology, denoted as the quasi-Z-source inverter (q-ZSI), was selected to interface the PV panel; it overcomes the inverter limitations to harvest the maximum available power.
A powerful control scheme called Model Predictive Control with Finite Set (MPC-FS) was designed to control the grid connected system. The predictive control was selected to achieve a robust controller with superior dynamic response in conjunction with the extremum-seeking algorithm to enhance the system behavior.
The proposed method exhibited better performance in comparison to conventional Maximum Power Point Tracking (MPPT) methods and requires less computational effort than complex mathematical methods.
JUSTIN DAWSON
The Remote Monad
When & Where:
246 Nichols Hall
Committee Members:
Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Bo Luo
Kyle Camarda
Abstract
Remote Procedure Calls are an integral part of the internet of things and cloud computing. However, remote procedures, by their very nature, have an expensive overhead cost of a network round trip. There have been many optimizations to amortize the network overhead cost, including asynchronous remote calls and batching requests together.
In this dissertation, we present a principled way to batch procedure calls together, called the Remote Monad. The support for monadic structures in languages such as Haskell can be utilized to build a staging mechanism for chains of remote procedures. Our specific formulation of remote monads uses natural transformations to make modular and composable network stacks which can automatically bundle requests into packets by breaking up monadic actions into ideal packets. By observing the properties of these primitive operations, we can leverage a number of tactics to maximize the size of the packets.
We have created a framework which has been successfully used to implement the industry-standard JSON-RPC protocol, a graphical browser-based library, an efficient byte string implementation, a library to communicate with an Arduino board, and database queries, all of which have automatic bundling enabled. The result of this investigation is that the cost of implementing bundling for remote monads can be amortized almost for free, given a user-supplied packet transportation mechanism.
JOSEPH St AMAND
Learning to Measure: Distance Metric Learning with Structured Sparsity
When & Where:
246 Nichols Hall
Committee Members:
Arvin Agah, Chair
Prasad Kulkarni
Jim Miller
Richard Wang
Bozenna Pasik-Duncan*
Abstract
Many important machine learning and data mining algorithms rely on a measure to provide a notion of distance or dissimilarity. Naive metrics such as the Euclidean distance are incapable of leveraging task-specific information, and consider all features as equal. A learned distance metric can become much more effective by honing in on structure specific to a task. Additionally, it is often extremely desirable for a metric to be sparse, as this vastly increases the ability to interpret the distance metric. In this dissertation, we explore several current problems in distance metric learning and put forth solutions which make use of structured sparsity.
The first contribution of this dissertation begins with a classic approach in distance metric learning and addresses a scenario where distance metric learning is typically inapplicable, i.e., the case of learning on heterogeneous data in a high-dimensional input space. We construct a projection-free distance metric learning algorithm which utilizes structured sparse updates and successfully demonstrate its application by learning a metric with over a billion parameters.
The second contribution of this dissertation focuses on an intriguing regression-based approach to distance metric learning. Under this regression approach there are two sets of parameters to learn: those which parameterize the metric, and those defining the so-called "virtual points". We begin with an exploration of the metric parameterization and develop a structured sparse approach to robustify the metric to noisy, corrupted, or irrelevant data. We then focus on the virtual points and develop a new method for learning the metric and constraints together in a simultaneous manner. It is demonstrated through empirical means that our approach results in a distance metric which is more effective than the current state-of-the-art.
Machine learning algorithms have recently become ingrained in an incredibly diverse range of technologies. The focus of this dissertation is to develop more effective techniques to learn a distance metric. We believe that this work has the potential for broad-reaching impact, as learning a more effective metric typically results in more accurate metric-based machine learning algorithms.
SHIVA RAMA VELMA
An Implementation of the LEM2 Algorithm Handling Numerical Attributes
When & Where:
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Perry Alexander
Prasad Kulkarni
Abstract
Data mining is a computing process of finding meaningful patterns in large sets of data. These patterns are then analyzed and used to make predictions for the future. One form of data mining is to extract rules from data sets. There are various rule induction algorithms, such as LEM1 (Learning from Examples Module Version 1), LEM2 (Learning from Examples Module Version 2), and MLEM2 (Modified Learning from Examples Module Version 2). Most rule induction algorithms require input data with only discretized attributes. If the input data contains numerical attributes, we need to convert them into discrete values (intervals) before performing rule induction; this process is called discretization. In this project, we discuss an implementation of LEM2 that generates rules from data with numerical and symbolic attributes. The accuracy of the rules generated by LEM2 is measured by computing the error rate with a rule-checker program, using ten-fold cross-validation and holdout methods.
SURYA NIMMAKAYALA
Heuristics to Predict and Eagerly Translate Code in DBTs
When & Where:
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Fengjun Li
Bo Luo
Shawn Keshmiri*
Abstract
Dynamic Binary Translators (DBTs) have a variety of uses, like instrumentation, profiling, security, portability, etc. In order for an application to run with these enhanced additional features (not originally part of its design), it must be run under the control of a Dynamic Binary Translator. The application can be thought of as the guest application, run within the controlled environment of the translator, which is the host application. That way, the intended application execution flow can be enforced by the translator, thereby inducing the desired behavior in the application on the host platform (the combination of operating system and hardware). Depending on the implementation of the translator (host application), the guest application can either have code compiled for the host platform or for a different platform. It is the responsibility of the translator to make the appropriate code/binary translation of the guest application code to run on the host platform.
However, there will be a run-time/execution-time overhead in the translator when performing the additional tasks to run the guest application in a controlled fashion. This run-time overhead has limited the usage of DBTs on a large scale, where response times can be critical. There is often a trade-off between the benefits of using a DBT and the overall application response time. So, there is a need to research and explore ways of achieving faster application execution through DBTs (given their large code base).
With the evolution of multi-core and GPU hardware architectures, software can be parallelized through multiple threads, which can concurrently run parts of the code and potentially do more work at the same time. The proper design of parallel applications, or parallelizing parts of existing code, can lead to faster application run-times by taking advantage of the hardware architecture's support for parallel programs.
We explore the possibility of improving the performance of a DBT named DynamoRIO. The basic idea is to improve its performance by speeding up the process of guest code translation, with multiple threads translating multiple pieces of code concurrently. In an ideal case, all the code blocks required for application execution would be available ahead of time (eager translation), without any wait or overhead at run-time, while still providing the enhanced features of the DBT. Efficient run-time eager translation also needs heuristics to better predict the next likely code block to be executed, which could reduce the number of unproductive code translations at run-time. The goal is to achieve application speed-up through eager translation, coupled with block prediction heuristics, leading to an execution time close to that of a native run.
PATRICK McCORMICK
Design and Optimization of Physical Waveform-Diverse Emissions
When & Where:
246 Nichols Hall
Committee Members:
Shannon Blunt, Chair
Chris Allen
Alessandro Salandrino
Jim Stiles
Emily Arnold*
Abstract
With the advancement of arbitrary waveform generation techniques, new radar transmission modes can be designed via precise control of the waveform's time-domain signal structure. The finer degree of emission control for a waveform (or multiple waveforms via a digital array) presents an opportunity to reduce ambiguities in the estimation of parameters within the radar backscatter. While this freedom opens the door to new emission capabilities, one must still consider the practical attributes for radar waveform design. Constraints such as constant amplitude (to maintain sufficient power efficiency) and continuous phase (for spectral containment) are still considered prerequisites for high-powered radar waveforms. These criteria are also applicable to the design of multiple waveforms emitted from an antenna array in a multiple-input multiple-output (MIMO) mode.
In this work, two spatially-diverse radar emission design methods are introduced that provide constant amplitude, spectrally-contained waveforms. The first design method, denoted as spatial modulation, designs the radar waveforms via a polyphase-coded frequency-modulated (PCFM) framework to steer the coherent mainbeam of the emission within a pulse. The second design method is an iterative scheme to generate waveforms that achieve a desired wideband and/or widebeam radar emission. However, a wideband and widebeam emission can place a portion of the emitted energy into what is known as the `invisible' space of the array, which is related to the storage of reactive power that can damage a radar transmitter. The proposed design method purposefully avoids this space and a quantity denoted as the Fractional Reactive Power (FRP) is defined to assess the quality of the result.
The design of FM waveforms via traditional gradient-based optimization methods is also considered. A waveform model is proposed that is a generalization of the PCFM implementation, denoted as coded-FM (CFM), which defines the phase of the waveform via a summation of weighted, predefined basis functions. Therefore, gradient-based methods can be used to minimize a given cost function with respect to a finite set of optimizable parameters. A generalized integrated sidelobe metric is used as the optimization cost function to minimize the correlation range sidelobes of the radar waveform.
RAKESH YELLA
A Comparison of Two Decision Tree Generating Algorithms CART and Modified ID3
When & Where:
2001B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Man Kong
Prasad Kulkarni
Abstract
In data mining, a decision tree is a type of classification model that uses a tree-like data structure to organize the data and obtain meaningful information. Decision trees may be used for important predictive analysis in data mining.
In this project, we compare two decision tree generating algorithms, CART and the modified ID3 algorithm, using different datasets with discrete and continuous numerical values. A new approach to handling continuous numerical values is implemented in this project, since the basic ID3 algorithm is inefficient in handling them. In the modified ID3 algorithm, we discretize the continuous numerical values by creating cut-points. The decision trees generated by the modified algorithm contain fewer nodes and branches compared to basic ID3.
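As a small illustration of the cut-point idea (not the project's implementation), the sketch below generates candidate cut-points as midpoints between consecutive distinct values of a numerical attribute and maps each value to one of the two resulting intervals:

```python
# Illustrative cut-point discretization for a numerical attribute.
def candidate_cutpoints(values):
    """Midpoints between consecutive distinct sorted values."""
    distinct = sorted(set(values))
    return [(a + b) / 2.0 for a, b in zip(distinct, distinct[1:])]

def discretize(value, cutpoint):
    """Map a numerical value to one of two intervals around the cut-point."""
    return f"<{cutpoint}" if value < cutpoint else f">={cutpoint}"

temperatures = [64, 65, 68, 69, 70, 71, 72, 75, 80, 81, 83, 85]
cuts = candidate_cutpoints(temperatures)
print(cuts[:3])                                  # 64.5, 66.5, 68.5
print([discretize(t, 71.5) for t in temperatures])
```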
The results from the experiments indicate that there is a statistically insignificant difference between CART and modified ID3 in terms of accuracy on test data. On the other hand, the size of the decision tree generated by CART is smaller than that of the decision tree generated by modified ID3.