Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

No upcoming defense notices for now!

Past Defense Notices


Koyel Pramanick

Detect Evidence of Compiler Triggered Security Measures in Binary Code

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

The primary goal of this thesis is to develop and explore techniques to identify security measures added by compilers in software binaries. These measures, added automatically during the build process, include runtime security checks like stack canaries, AddressSanitizer (ASan), and Control Flow Integrity (CFI), which help protect against memory errors, buffer overflows, and control flow attacks. This work also investigates how unresolved compiler warnings, especially those related to security, can be identified in binaries when the source code is unavailable. By studying the patterns and markers left by these compiler features, this thesis provides methods to analyze and verify the security provisions embedded in software binaries. These efforts aim to bridge the gap between compile-time diagnostics and binary-level analysis, offering a way to better understand the security protections applied during software compilation. Ultimately, this work seeks to make software more transparent and give users the tools to independently assess the security measures present in compiled software, fostering greater trust and accountability in software systems.
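
To make the idea concrete, below is a minimal sketch of one way compiler-added protections can be surfaced from a binary: scanning its symbol tables for well-known marker symbols. The tool choice (nm), the marker list, and the example path are illustrative assumptions, not the detection techniques developed in the thesis.

```python
import subprocess

# Marker symbols that commonly indicate compiler-added protections
# (illustrative, not exhaustive; names vary by compiler and platform).
MARKERS = {
    "stack canaries": ["__stack_chk_fail", "__stack_chk_guard"],
    "AddressSanitizer": ["__asan_init", "__asan_report_load"],
    "UBSan": ["__ubsan_handle"],
}

def detect_protections(binary_path):
    """Scan a binary's symbol tables for telltale hardening symbols."""
    out = ""
    for args in (["nm", "-D", binary_path], ["nm", binary_path]):
        try:
            out += subprocess.run(args, capture_output=True, text=True).stdout
        except FileNotFoundError:
            pass  # nm is not installed on this system
    return {feature: any(sym in out for sym in symbols)
            for feature, symbols in MARKERS.items()}

if __name__ == "__main__":
    print(detect_protections("/bin/ls"))  # example path only
```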


Srinitha Kale

Automating Symbol Recognition in Spot It!: Advancing AI-Powered Detection

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Esam El-Araby
Prasad Kulkarni


Abstract

The "Spot It!" game, featuring 55 cards each with 8 unique symbols, presents a complex challenge of identifying a single matching symbol between any two cards. Addressing this challenge, machine learning has been employed to automate symbol recognition, enhancing gameplay and extending applications into areas like pattern recognition and visual search. Due to the scarcity of available datasets, a comprehensive collection of 57 distinct Spot It symbols was created, with each class consisting of 1,800 augmented images. These images were manipulated through techniques such as scaling, rotation, and resizing to represent various visual scenarios. Then developed a convolutional neural network (CNN) with five convolutional layers, batch normalization, and dropout layers, and employed the Adam optimizer to train model to accurately recognize these symbols. The robust dataset included over 102,600 images, each subject to extensive augmentation to improve the model's ability to generalize across different orientation and scaling conditions. 

The model was evaluated using 55 scanned "Spot It!" cards, where symbols were extracted and preprocessed for prediction. It achieved high accuracy in symbol identification, demonstrating significant resilience to common challenges such as rotations and scaling. This project illustrates the effective integration of data augmentation, deep learning, and computer vision techniques in tackling complex pattern recognition tasks, proving that artificial intelligence can significantly enhance traditional gaming experiences and create new opportunities in various fields. This project delves into the design, implementation, and testing of the CNN, providing a detailed analysis of its performance and highlighting its potential as a transformative tool in image recognition and categorization.


Sudha Chandrika Yadlapalli

BERT-Driven Sentiment Analysis: Automated Course Feedback Classification and Ratings

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Automating the analysis of unstructured textual data, such as student course feedback, is crucial for gaining actionable insights. This project focuses on developing a sentiment analysis system leveraging the DeBERTa-v3-base model, a variant of BERT (Bidirectional Encoder Representations from Transformers), to classify feedback sentiments and generate corresponding ratings on a 1-to-5 scale.

A dataset of 100,000+ student reviews was preprocessed, and the model was fine-tuned on it to handle class imbalances and capture contextual nuances. Training was conducted on high-performance A100 GPUs, which enhanced computational efficiency and reduced training times significantly. The trained BERT sentiment model demonstrated superior performance compared to traditional machine learning models, achieving ~82% accuracy in sentiment classification.
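
For orientation, a minimal sketch of the model loading and inference side of such a pipeline is shown below, assuming the Hugging Face transformers API and the public microsoft/deberta-v3-base checkpoint; the direct mapping from class index to a 1-to-5 rating is an assumption, not necessarily the project's exact setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Five output labels, one per rating on the 1-to-5 scale (assumed mapping).
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=5)

def predict_rating(review: str) -> int:
    """Classify a single review and map the predicted class to a 1-5 rating."""
    inputs = tokenizer(review, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1)) + 1  # class 0..4 -> rating 1..5
```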

The model was seamlessly integrated into a functional web application, providing a streamlined approach to evaluate and visualize course reviews dynamically. Key features include a course ratings dashboard, allowing students to view aggregated ratings for each course, and a review submission functionality where new feedback is analyzed for sentiment in real time. For the department, an admin page provides secure access to detailed analytics, such as the distribution of positive and negative reviews, visualized trends, and individual course reviews with their corresponding sentiment scores.

This project includes a comprehensive pipeline, starting from data preprocessing and model training to deploying an end-to-end application. Traditional machine learning models, such as Logistic Regression and Decision Tree, were initially tested but yielded suboptimal results. The adoption of BERT, trained on a large dataset of 100k reviews, significantly improved performance, showcasing the benefits of advanced transformer-based models for sentiment analysis tasks.


Shriraj K. Vaidya

Exploring DL Compiler Optimizations with TVM

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Dongjie Wang
Zijun Yao


Abstract

Deep Learning (DL) compilers, also called Machine Learning (ML) compilers, take a computational graph representation of a ML model as input and apply graph-level and operator-level optimizations to generate optimized machine-code for different supported hardware architectures. DL compilers can apply several graph-level optimizations, including operator fusion, constant folding, and data layout transformations to convert the input computation graph into a functionally equivalent and optimized variant. The DL compilers also perform kernel scheduling, which is the task of finding the most efficient implementation for the operators in the computational graph. While many research efforts have focused on exploring different kernel scheduling techniques and algorithms, the benefits of individual computation graph-level optimizations are not as well studied. In this work, we employ the TVM compiler to perform a comprehensive study of the impact of different graph-level optimizations on the performance of DL models on CPUs and GPUs. We find that TVM's graph optimizations can improve model performance by up to 41.73% on CPUs and 41.6% on GPUs, and by 16.75% and 21.89%, on average, on CPUs and GPUs, respectively, on our custom benchmark suite.
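
As a point of reference, the snippet below sketches how TVM exposes graph-level optimizations through the opt_level of a PassContext when building a Relay module; the target string and levels shown are illustrative, and the thesis's benchmark harness is not reproduced here.

```python
import tvm
from tvm import relay

def compile_with_graph_opts(mod, params, opt_level=3, target="llvm"):
    """Build a Relay module with TVM's graph-level optimizations enabled.

    opt_level=0 disables most passes, while opt_level=3 enables passes such
    as operator fusion, constant folding, and layout transformations.
    """
    with tvm.transform.PassContext(opt_level=opt_level):
        return relay.build(mod, target=target, params=params)
```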


Rizwan Khan

Fatigue Crack Segmentation of Steel Bridges Using Deep Learning Models: A Comparative Study

When & Where:


Learned Hall, Room 3131

Committee Members:

David Johnson, Chair
Hongyang Sun



Abstract

Structural health monitoring (SHM) is crucial for maintaining the safety and durability of infrastructure. To address the limitations of traditional inspection methods, this study leverages cutting-edge deep learning-based segmentation models for autonomous crack identification. Specifically, we utilized the recently launched YOLOv11 model, alongside the established DeepLabv3+ model, for crack segmentation. Mask R-CNN, a widely recognized model in crack segmentation studies, is used as the baseline approach for comparison. Our approach integrates the CREC cropping strategy to optimize dataset preparation and employs post-processing techniques, such as dilation and erosion, to refine segmentation results. Experimental results demonstrate that our method, which combines state-of-the-art models, innovative data preparation strategies, and targeted post-processing, achieves superior mean Intersection-over-Union (mIoU) performance compared to the baseline, showcasing its potential for precise and efficient crack detection in SHM systems.
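
A minimal sketch of the post-processing and evaluation steps mentioned above is given below, using OpenCV morphology and a simple IoU computation on binary masks; the kernel size and iteration counts are assumptions rather than the study's exact settings.

```python
import cv2
import numpy as np

def refine_mask(mask: np.ndarray, kernel_size: int = 3) -> np.ndarray:
    """Dilate then erode a predicted crack mask to fill small gaps
    (kernel size and single iteration are illustrative choices)."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(mask, kernel, iterations=1)
    return cv2.erode(dilated, kernel, iterations=1)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union for binary masks (values 0/1)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0
```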


Zhaohui Wang

Enhancing Security and Privacy of IoT Systems: Uncovering and Resolving Cross-App Threats

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to our daily lives, enabling users to customize automation rules and develop IoT apps to meet their specific needs. However, as IoT devices interact with multiple apps across various platforms, users are exposed to complex security and privacy risks. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats.

In this work, we introduce two innovative approaches to uncover and address these concealed threats in IoT environments. The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app chains that are formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores to evaluate the risks based on inferences. Additionally, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks. To systematically detect cross-app interference threats, we propose to apply principles of logical fallacies to formalize conflicts in rule interactions. We identify and categorize cross-app interference by examining relations between events in IoT apps. We define new risk metrics for evaluating the severity of these interferences and use optimization techniques to resolve interference threats efficiently. This approach ensures comprehensive coverage of cross-app interference, offering a systematic solution compared to the ad hoc methods used in previous research.
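
As a toy illustration of cross-app interference (not the logical-fallacy formalization developed in this work), the sketch below flags pairs of hypothetical trigger-action rules that drive the same device to opposing states; all rule and attribute names are made up for the example.

```python
from collections import defaultdict

# Hypothetical trigger-action rules; app, trigger, and device names are illustrative.
rules = [
    {"app": "ComfortApp",  "trigger": "temperature > 78F", "device": "window", "action": "open"},
    {"app": "SecurityApp", "trigger": "user_leaves_home",  "device": "window", "action": "close"},
    {"app": "EnergyApp",   "trigger": "temperature > 78F", "device": "ac",     "action": "on"},
]

def find_conflicts(rules):
    """Flag pairs of rules that command the same device into different states."""
    by_device = defaultdict(list)
    for r in rules:
        by_device[r["device"]].append(r)
    conflicts = []
    for device, rs in by_device.items():
        for i in range(len(rs)):
            for j in range(i + 1, len(rs)):
                if rs[i]["action"] != rs[j]["action"]:
                    conflicts.append((rs[i]["app"], rs[j]["app"], device))
    return conflicts

print(find_conflicts(rules))  # [('ComfortApp', 'SecurityApp', 'window')]
```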

To enhance forensic capabilities within IoT, we integrate blockchain technology to create a secure, immutable framework for digital forensics. This framework enables the identification, tracing, storage, and analysis of forensic information to detect anomalous behavior. Furthermore, we developed a large-scale, manually verified, comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. Each of these approaches has been evaluated using our dataset of real-world apps, collectively offering valuable insights and tools for enhancing IoT security and privacy against cross-app threats.


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant in the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, most of them based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

 In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our algorithm using the multidimensional Poisson equation as a case study. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. We will also focus on extending these techniques to PDEs relevant to computational fluid dynamics and financial modeling, further bridging the gap between theoretical quantum algorithms and practical applications.
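
To ground the terminology, the sketch below shows a textbook finite-difference Laplacian for the 1D Poisson equation and a simple amplitude-encoding form of classical-to-quantum (C2Q) state preparation in Qiskit; it illustrates the building blocks only and is not the proposed algorithm or its CCD variant.

```python
import numpy as np
from qiskit import QuantumCircuit

def poisson_1d_fdm(n):
    """Second-order finite-difference Laplacian on n interior grid points."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def classical_to_quantum(vec):
    """Amplitude-encode a classical vector into a quantum state
    (one simple form of C2Q; assumes len(vec) is a power of two)."""
    amplitudes = vec / np.linalg.norm(vec)
    n_qubits = int(np.log2(len(vec)))
    qc = QuantumCircuit(n_qubits)
    qc.initialize(amplitudes, range(n_qubits))
    return qc

b = np.sin(np.pi * np.linspace(0, 1, 8))  # illustrative right-hand side
A = poisson_1d_fdm(8)
qc = classical_to_quantum(b)
```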


Hao Xuan

A Unified Algorithmic Framework for Biological Sequence Alignment

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Sequence alignment is pivotal in both homology searches and the mapping of reads from next-generation sequencing (NGS) and third-generation sequencing (TGS) technologies. Currently, the majority of sequence alignment algorithms utilize the “seed-and-extend” paradigm, designed to filter out unrelated or nonhomologous sequences when no highly similar subregions are detected. A well-known implementation of this paradigm is BLAST, one of the most widely used multipurpose aligners. Over time, this paradigm has been optimized in various ways to suit different alignment tasks. However, while these specialized aligners often deliver high performance and efficiency, they are typically restricted to one or a few alignment applications. To the best of our knowledge, no existing aligner can perform all alignment tasks while maintaining superior performance and efficiency.
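
For readers unfamiliar with the paradigm, a minimal textbook sketch of seed-and-extend is shown below: exact k-mer seeds are looked up in a hash index and each hit is extended ungapped with a mismatch budget. This illustrates the general idea only, not VAT's seeding or indexing design.

```python
from collections import defaultdict

def build_kmer_index(reference: str, k: int = 11):
    """Index every k-mer position in the reference (the 'seeding' structure)."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def seed_and_extend(query: str, reference: str, index, k: int = 11, max_mismatches: int = 3):
    """Place the query: exact k-mer seeds, then ungapped extension with a mismatch budget."""
    hits = set()
    for q in range(len(query) - k + 1):
        for r in index.get(query[q:q + k], []):
            start = r - q
            if start < 0 or start + len(query) > len(reference):
                continue
            mismatches = sum(a != b for a, b in zip(query, reference[start:start + len(query)]))
            if mismatches <= max_mismatches:
                hits.add((start, mismatches))
    return sorted(hits)

ref = "ACGTACGTTAGCCGATAGCTAGGCTA"
print(seed_and_extend("TAGCCGATAGCT", ref, build_kmer_index(ref)))
```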

In this work, we introduce a unified sequence alignment framework to address this limitation. Our alignment framework is built on the seed-and-extend paradigm but incorporates novel designs in its seeding and indexing components to maximize both flexibility and efficiency. The resulting software, the Versatile Alignment Toolkit (VAT), allows the users to switch seamlessly between nearly all major alignment tasks through command-line parameter configuration. VAT was rigorously benchmarked against leading aligners for DNA and protein homolog searches, NGS and TGS read mapping, and whole-genome alignment. The results demonstrated VAT’s top-tier performance across all benchmarks, underscoring the feasibility of using a unified algorithmic framework to handle diverse alignment tasks. VAT can simplify and standardize bioinformatic analysis workflows that involve multiple alignment tasks. 


Venkata Sai Krishna Chaitanya Addepalli

A Comprehensive Approach to Facial Emotion Recognition: Integrating Established Techniques with a Tailored Model

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Facial emotion recognition has become a pivotal application of machine learning, enabling advancements in human-computer interaction, behavioral analysis, and mental health monitoring. Despite its potential, challenges such as data imbalance, variation in expressions, and noisy datasets often hinder accurate prediction.

 This project presents a novel approach to facial emotion recognition by integrating established techniques like data augmentation and regularization with a tailored convolutional neural network (CNN) architecture. Using the FER2013 dataset, the study explores the impact of incremental architectural improvements, optimized hyperparameters, and dropout layers to enhance model performance.
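
As an illustration of the data augmentation component, the sketch below uses Keras' ImageDataGenerator with FER2013's 48x48 grayscale images; the specific augmentation parameters and directory layout are assumptions, since the abstract does not state them.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings; the study's exact parameters are not stated.
augmenter = ImageDataGenerator(
    rotation_range=15,        # small random rotations
    width_shift_range=0.1,    # horizontal shifts
    height_shift_range=0.1,   # vertical shifts
    zoom_range=0.1,           # random zoom
    horizontal_flip=True,     # mirror faces
    rescale=1.0 / 255,        # normalize pixel values
)

# FER2013 images are 48x48 grayscale; the directory layout is an assumption.
train_flow = augmenter.flow_from_directory(
    "fer2013/train",
    target_size=(48, 48),
    color_mode="grayscale",
    class_mode="sparse",
    batch_size=64,
)
```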

 The proposed model effectively addresses issues related to data imbalance and overfitting while achieving enhanced accuracy and precision in emotion classification. The study underscores the importance of feature extraction through convolutional layers and optimized fully connected networks for efficient emotion recognition. The results demonstrate improvements in generalization, setting a foundation for future real-time applications in diverse fields. 


Tejarsha Arigila

Benchmarking Aggregation Free Federated Learning using Data Condensation: Comparison with Federated Averaging

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Fengjun Li, Chair
Bo Luo
Sumaiya Shomaji


Abstract

This project investigates the performance of Aggregation-Free Federated Learning (FedAF) compared to traditional federated learning methods under non-independent and identically distributed (non-IID) data conditions, characterized by Dirichlet distribution parameters (alpha = 0.02, 0.05, 0.1). Utilizing the MNIST and CIFAR-10 datasets, the study benchmarks FedAF against Federated Averaging (FedAVG) in terms of accuracy, convergence speed, communication efficiency, and robustness to label and feature skews.
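
For context, the sketch below shows one common way to produce the non-IID client splits described above, drawing per-class client proportions from a Dirichlet distribution with concentration alpha; this is a standard partitioning recipe, not necessarily the exact procedure used in the study.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.05, seed=0):
    """Split sample indices across clients with per-class proportions drawn from
    Dirichlet(alpha); smaller alpha yields a more skewed (non-IID) split.

    labels: 1-D numpy array of integer class labels (e.g., MNIST targets).
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices
```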

Traditional federated learning approaches like FedAVG aggregate locally trained models at a central server to form a global model. However, these methods often encounter challenges such as client drift in heterogeneous data environments, which can adversely affect model accuracy and convergence rates. FedAF introduces an innovative aggregation-free strategy wherein clients collaboratively generate a compact set of condensed synthetic data. This data, augmented by soft labels from the clients, is transmitted to the server, which then uses it to train the global model. This approach effectively reduces client drift and enhances resilience to data heterogeneity. Additionally, by compressing the representation of real data into condensed synthetic data, FedAF improves privacy by minimizing the transfer of raw data.  

The experimental results indicate that while FedAF converges faster, it struggles to stabilize in highly heterogeneous environments due to the limited capacity of condensed synthetic data to represent real data.