Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to the presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the existing inability of blockchain systems to authenticate noisy data.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5× faster query processing and significantly reduced storage requirements compared to large-scale tracking.
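The hierarchical idea at the core of these contributions can be illustrated with a minimal two-level Bloom filter sketch. This is a generic illustration only, not the DHBF/PHBF design from the dissertation; the class names and parameters are invented for the example, and dynamic deletion (which DHBF supports) would additionally require counting filters.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter over a fixed-size bit array."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        # k deterministic positions derived from salted SHA-256 digests
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        return all(self.bits[p] for p in self._positions(item))

class HierarchicalBloomFilter:
    """Two-level hierarchy: a root filter routes queries to leaf filters,
    so a negative root lookup skips all leaves (the scalability win)."""
    def __init__(self, n_leaves=8):
        self.root = BloomFilter()
        self.leaves = [BloomFilter() for _ in range(n_leaves)]

    def _leaf_for(self, item):
        return int(hashlib.md5(str(item).encode()).hexdigest(), 16) % len(self.leaves)

    def enroll(self, item):
        self.root.add(item)
        self.leaves[self._leaf_for(item)].add(item)

    def query(self, item):
        if not self.root.query(item):   # cheap early rejection
            return False
        return self.leaves[self._leaf_for(item)].query(item)
```

A non-enrolled identity is rejected at the root with high probability, without touching any leaf filter.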


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM)-based approaches; and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and the acoustic standing waves resonating in the radial direction between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. Because the spectral envelope of electrostriction-induced propagation loss is antisymmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape is specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
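The de-chirping principle can be sketched numerically: mixing the echo with an identically chirped local oscillator maps round-trip delay to a beat frequency far below the chirp bandwidth itself. A minimal sketch with illustrative constants (not the experimental parameters of this work):

```python
C = 3e8  # speed of light, m/s

def beat_frequency(range_m, chirp_rate_hz_per_s):
    """De-chirped beat frequency for a static target: the round-trip
    delay tau = 2R/c maps to f_b = S * tau for chirp rate S."""
    tau = 2.0 * range_m / C
    return chirp_rate_hz_per_s * tau

def range_from_beat(f_beat_hz, chirp_rate_hz_per_s):
    """Invert the mapping: R = c * f_b / (2 * S)."""
    return C * f_beat_hz / (2.0 * chirp_rate_hz_per_s)

# A 100 m target with a 1 THz/s chirp rate yields a beat near 667 kHz,
# so the receiver bandwidth can be orders of magnitude below the
# chirp bandwidth swept during the same interval.
```

A moving target would add a Doppler shift on top of the delay term, which is how range and velocity are separated in practice.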

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices

Liangqin Ren

Understanding and Mitigating Security Risks towards Trustworthy Deep Learning Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Bo Luo
Zijun Yao
Xinmai Yang

Abstract

Deep learning is widely used in healthcare, finance, and other critical domains, raising concerns about system trustworthiness. However, deep learning models and data still face three types of critical attacks: model theft, identity impersonation, and abuse of AI-generated content (AIGC). To address model theft, homomorphic encryption has been explored for privacy-preserving inference, but it remains highly inefficient. To counter identity impersonation, prior work focuses on detection, disruption, and tracing—yet fails to protect source and target images simultaneously. To prevent AIGC abuse, methods like evaluation, watermarking, and machine unlearning exist, but text-driven image editing remains largely unprotected.

This report addresses the above challenges through three key designs. First, to enable privacy-preserving inference while accelerating homomorphic encryption, we propose PrivDNN, which selectively encrypts the most critical model parameters, significantly reducing encrypted operations. We design a selection score to evaluate neuron importance and use a greedy algorithm to iteratively secure the most impactful neurons. Across four models and datasets, PrivDNN reduces encrypted operations by 85%–98%, and cuts inference time and memory usage by over 97% while preserving accuracy and privacy. Second, to counter identity impersonation in deepfake face-swapping, where both the source and target can be exploited, we introduce PhantomSeal, which embeds invisible perturbations to encode a hidden “cloak” identity. When used as a target, the resulting content displays visible artifacts; when used as a source, the generated deepfake is altered to resemble the cloak identity. Evaluations across two generations of deepfake face-swapping show that PhantomSeal reduces attack success from 97% to 0.8%, with 95% of outputs recognized as the cloak identity, providing robust protection against manipulation. Third, to prevent AIGC abuse, we construct a comprehensive dataset, perform large-scale human evaluation, and establish a benchmark for detecting AI-generated artwork to better understand abuse risks in AI-generated content. Building on this direction, we propose Protecting Copyright against Image Editing (PCIE) to address copyright infringement in text-driven image editing. PCIE embeds an invisible copyright mark into the original image, which transforms into a visible watermark after text-driven editing to automatically reveal ownership upon unauthorized modification.
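The greedy neuron-selection step described for PrivDNN can be illustrated with a simplified sketch. This is not the actual implementation: in the real system the selection score is recomputed as neurons are secured, whereas the hypothetical version below assumes a fixed score per neuron.

```python
def select_neurons_greedy(scores, budget):
    """Greedy sketch: repeatedly pick the highest-scoring unselected
    neuron until `budget` neurons are chosen for encryption.
    `scores` maps neuron id -> importance score (higher = more critical)."""
    selected = []
    remaining = dict(scores)
    while remaining and len(selected) < budget:
        best = max(remaining, key=remaining.get)  # most impactful neuron
        selected.append(best)
        del remaining[best]
    return selected
```

Encrypting only the selected neurons is what lets the system cut homomorphic operations dramatically while keeping the unencrypted remainder insufficient to reconstruct the model.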


Andrew Stratmann

Efficient Index-Based Multi-User Scheduling for Mobile mmWave Networks: Balancing Channel Quality and User Experience

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Morteza Hashemi, Chair
Prasad Kulkarni
Erik Perrins


Abstract

Millimeter Wave (mmWave) communication technologies have the potential to establish high data rates for next-generation wireless networks, as well as enable novel applications that were previously untenable due to high throughput requirements. Yet reliable and efficient mmWave communication remains challenged by intermittent link quality due to user mobility and frequent line-of-sight (LoS) blockage, thereby making the links unavailable or more costly to use. These factors are further exacerbated in multi-user settings where beam alignment overhead, limited RF chains, and heterogeneous user requirements must be balanced. In this work, we present a hybrid multi-user scheduling solution that jointly accounts for mobility- and blockage-induced unavailability to enhance user experience in mmWave video streaming applications. Our approach integrates two key components: (i) a blockage-aware scheduling strategy modeled via a Restless Multi-Armed Bandit (RMAB) formulation and prioritized using Whittle Indexing, and (ii) a mobility-aware geometric model that estimates beam alignment overhead cost as a function of receiver motion. We develop a comprehensive and efficient index-based scheduler that fuses these models and leverages contextual information, such as receiver distance, mobility history, and queue state, to schedule multiple users in order to maximize throughput. Simulation results demonstrate that our approach reduces system queue backlog and improves fairness compared to round-robin and traditional index-based baselines.
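The index-based scheduling idea can be sketched abstractly: compute a per-user priority index and serve the top users up to the RF-chain budget. The toy index below (queue backlog weighted by link quality, minus alignment cost) is a hypothetical stand-in for the Whittle index, which in the actual work is derived from the RMAB formulation.

```python
def schedule_users(users, n_rf_chains):
    """Index-based scheduling sketch: rank users by a priority index and
    serve the top n_rf_chains of them this slot. Each user is a dict with
    invented keys: id, queue (backlog), link_quality (0..1, blockage-aware),
    alignment_cost (beam-alignment overhead estimate)."""
    def index(u):
        # Toy priority: backlog discounted by link quality,
        # penalized by the cost of re-aligning the beam.
        return u["queue"] * u["link_quality"] - u["alignment_cost"]
    ranked = sorted(users, key=index, reverse=True)
    return [u["id"] for u in ranked[:n_rf_chains]]

demo = [
    {"id": 1, "queue": 10, "link_quality": 0.9, "alignment_cost": 2.0},
    {"id": 2, "queue": 4,  "link_quality": 1.0, "alignment_cost": 0.5},
    {"id": 3, "queue": 8,  "link_quality": 0.2, "alignment_cost": 3.0},
]
```

User 3 has a large backlog but a blocked link, so a blockage-aware index deprioritizes it rather than wasting an RF chain on a costly slot.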


Tianxiao Zhang

Efficient and Effective Object Detection and Recognition: from Convolutions to Transformers

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Bo Luo, Chair
Prasad Kulkarni
Fengjun Li
Cuncong Zhong
Guanghui Wang

Abstract

With the development of Convolutional Neural Networks (CNNs), computer vision has entered a new era, significantly enhancing the performance of tasks such as image classification, object detection, segmentation, and recognition. Furthermore, the introduction of Transformer architectures has brought the attention mechanism and a global perspective to computer vision, advancing the field to a new level. The inductive bias inherent in CNNs makes convolutional models particularly well-suited for processing images and videos. On the other hand, the attention mechanism in Transformer models allows them to capture global relationships between tokens. While Transformers often require more data and longer training periods compared to their convolutional counterparts, they have the potential to achieve comparable or even superior performance when the constraints of data availability and training time are mitigated.

In this work, we propose more efficient and effective CNNs and Transformers to increase the performance of object detection and recognition. (1) A novel approach is proposed for real-time detection and tracking of small golf balls by combining object detection with the Kalman filter. Several classical object detection models were implemented and compared in terms of detection precision and speed. (2) To address the domain shift problem in object detection, we employ generative adversarial networks (GANs) to generate images from different domains. The original RGB images are concatenated with the corresponding GAN-generated images to form a 6-channel representation, improving model performance across domains. (3) A dynamic strategy for improving label assignment in modern object detection models is proposed. Rather than relying on fixed or statistics-based adaptive thresholds, a dynamic paradigm is introduced to define positive and negative samples. This allows more high-quality samples to be selected as positives, reducing the gap between classification and IoU scores and producing more accurate bounding boxes. (4) An efficient hybrid architecture combining Vision Transformers and convolutional layers is introduced for object recognition, particularly for small datasets. Lightweight depth-wise convolution modules bypass the entire Transformer block to capture local details that the Transformer backbone might overlook. The majority of the computations and parameters remain within the Transformer architecture, resulting in significantly improved performance with minimal overhead. (5) An innovative Multi-Overlapped-Head Self-Attention mechanism is introduced to enhance information exchange between heads in the Multi-Head Self-Attention mechanism of Vision Transformers. By overlapping adjacent heads during self-attention computation, information can flow between heads, leading to further improvements in vision recognition.
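For contribution (1), the detection-plus-tracking combination relies on a Kalman filter to smooth noisy per-frame detections. A minimal one-dimensional constant-velocity sketch follows (illustrative noise parameters, not the thesis implementation, which would track 2-D image coordinates):

```python
def kalman_1d(z_measurements, dt=1.0, q=1e-3, r=0.25):
    """Minimal 1-D constant-velocity Kalman filter: state is [pos, vel].
    q is process noise, r is measurement noise (illustrative values).
    Returns the filtered position estimates."""
    x, v = z_measurements[0], 0.0       # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    out = []
    for z in z_measurements:
        # Predict: x' = x + dt*v under the constant-velocity model.
        x, v = x + dt * v, v
        P = [[P[0][0] + dt*(P[1][0] + P[0][1]) + dt*dt*P[1][1] + q,
              P[0][1] + dt*P[1][1]],
             [P[1][0] + dt*P[1][1],
              P[1][1] + q]]
        # Update with the measured position z (H = [1, 0]).
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x                        # innovation
        x, v = x + K0 * y, v + K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        out.append(x)
    return out
```

In the detection-plus-tracking pipeline, the predict step also bridges frames where the small, fast-moving ball is missed by the detector.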


Faris El-Katri

Source Separation using Sparse Bayesian Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
James Stiles


Abstract

Wireless communication in recent decades has allowed for a substantial increase in both the speed and volume of information that may be transmitted over large distances. However, given expanding societal needs coupled with a finite available spectrum, the question arises of how to increase the efficiency with which information may be transmitted. One natural answer lies in spectrum sharing, that is, in allowing multiple noncooperative agents to inhabit the same spectrum bands. To achieve this, we must be able to reliably separate the desired signals from those of other agents in the background. However, since our agents are noncooperative, we must develop a model-agnostic approach to this problem. For this work, we consider cohabitation between radar signals and communication signals, with the former being the desired signal and the latter being the noncooperative agent. To approach such problems involving highly underdetermined linear systems, we propose utilizing Sparse Bayesian Learning and present our results on selected problems.


Koyel Pramanick

Detect Evidence of Compiler Triggered Security Measures in Binary Code

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

The primary goal of this thesis is to develop and explore techniques to identify security measures added by compilers in software binaries. These measures, added automatically during the build process, include runtime security checks like stack canaries, AddressSanitizer (ASan), and Control Flow Integrity (CFI), which help protect against memory errors, buffer overflows, and control flow attacks. This work also investigates how unresolved compiler warnings, especially those related to security, can be identified in binaries when the source code is unavailable. By studying the patterns and markers left by these compiler features, this thesis provides methods to analyze and verify the security provisions embedded in software binaries. These efforts aim to bridge the gap between compile-time diagnostics and binary-level analysis, offering a way to better understand the security protections applied during software compilation. Ultimately, this work seeks to make software more transparent and give users the tools to independently assess the security measures present in compiled software, fostering greater trust and accountability in software systems.


Srinitha Kale

Automating Symbol Recognition in Spot It: Advancing AI-Powered Detection

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Esam El-Araby
Prasad Kulkarni


Abstract

The "Spot It!" game, featuring 55 cards each with 8 unique symbols, presents the challenge of identifying the single matching symbol between any two cards. To address this challenge, machine learning has been employed to automate symbol recognition, enhancing gameplay and extending applications into areas like pattern recognition and visual search. Due to the scarcity of available datasets, a comprehensive collection of 57 distinct Spot It symbols was created, with each class consisting of 1,800 augmented images. These images were manipulated through techniques such as scaling, rotation, and resizing to represent various visual scenarios. A convolutional neural network (CNN) with five convolutional layers, batch normalization, and dropout layers was then developed and trained with the Adam optimizer to accurately recognize these symbols. The resulting dataset included over 102,600 images, each subject to extensive augmentation to improve the model's ability to generalize across different orientation and scaling conditions.

The model was evaluated using 55 scanned "Spot It!" cards, where symbols were extracted and preprocessed for prediction. It achieved high accuracy in symbol identification, demonstrating significant resilience to common challenges such as rotations and scaling. This project illustrates the effective integration of data augmentation, deep learning, and computer vision techniques in tackling complex pattern recognition tasks, proving that artificial intelligence can significantly enhance traditional gaming experiences and create new opportunities in various fields. This project delves into the design, implementation, and testing of the CNN, providing a detailed analysis of its performance and highlighting its potential as a transformative tool in image recognition and categorization.


Sudha Chandrika Yadlapalli

BERT-Driven Sentiment Analysis: Automated Course Feedback Classification and Ratings

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Automating the analysis of unstructured textual data, such as student course feedback, is crucial for gaining actionable insights. This project focuses on developing a sentiment analysis system leveraging the DeBERTa-v3-base model, a variant of BERT (Bidirectional Encoder Representations from Transformers), to classify feedback sentiments and generate corresponding ratings on a 1-to-5 scale.

A dataset of 100,000+ student reviews was preprocessed, and the model was fine-tuned on it to handle class imbalances and capture contextual nuances. Training was conducted on high-performance A100 GPUs, which enhanced computational efficiency and significantly reduced training times. The trained BERT sentiment model demonstrated superior performance compared to traditional machine learning models, achieving ~82% accuracy in sentiment classification.

The model was seamlessly integrated into a functional web application, providing a streamlined approach to evaluate and visualize course reviews dynamically. Key features include a course ratings dashboard, allowing students to view aggregated ratings for each course, and a review submission functionality where new feedback is analyzed for sentiment in real-time. For the department, an admin page provides secure access to detailed analytics, such as the distribution of positive and negative reviews, visualized trends, and the access to view individual course reviews with their corresponding sentiment scores.

This project includes a comprehensive pipeline, starting from data preprocessing and model training to deploying an end-to-end application. Traditional machine learning models, such as Logistic Regression and Decision Tree, were initially tested but yielded suboptimal results. The adoption of BERT, trained on a large dataset of 100k reviews, significantly improved performance, showcasing the benefits of advanced transformer-based models for sentiment analysis tasks.


Shriraj K. Vaidya

Exploring DL Compiler Optimizations with TVM

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Dongjie Wang
Zijun Yao


Abstract

Deep Learning (DL) compilers, also called Machine Learning (ML) compilers, take a computational graph representation of an ML model as input and apply graph-level and operator-level optimizations to generate optimized machine code for different supported hardware architectures. DL compilers can apply several graph-level optimizations, including operator fusion, constant folding, and data layout transformations, to convert the input computation graph into a functionally equivalent and optimized variant. DL compilers also perform kernel scheduling, which is the task of finding the most efficient implementation for the operators in the computational graph. While many research efforts have focused on exploring different kernel scheduling techniques and algorithms, the benefits of individual computation graph-level optimizations are not as well studied. In this work, we employ the TVM compiler to perform a comprehensive study of the impact of different graph-level optimizations on the performance of DL models on CPUs and GPUs. We find that TVM's graph optimizations can improve model performance by up to 41.73% on CPUs and 41.6% on GPUs, and by 16.75% and 21.89%, on average, on CPUs and GPUs, respectively, on our custom benchmark suite.
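Constant folding, one of the graph-level optimizations studied, can be illustrated on a toy graph representation. This is a simplified sketch of the general technique, not TVM's actual Relay pass; the dict-based graph encoding is invented for the example.

```python
def fold_constants(graph):
    """Toy constant-folding pass over a computation graph encoded as
    {node_name: ("const", value) | (op_name, input_a, input_b)}.
    Any node whose inputs are all constants is evaluated at compile
    time and replaced by a constant, repeating until a fixed point."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    changed = True
    while changed:
        changed = False
        for name, node in list(graph.items()):
            if node[0] in ops:
                a, b = graph[node[1]], graph[node[2]]
                if a[0] == "const" and b[0] == "const":
                    graph[name] = ("const", ops[node[0]](a[1], b[1]))
                    changed = True
    return graph
```

A graph whose weights and biases are known at compile time collapses entirely, so no arithmetic for those nodes remains at inference time.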


Rizwan Khan

Fatigue Crack Segmentation of Steel Bridges Using Deep Learning Models: A Comparative Study

When & Where:


Learned Hall, Room 3131

Committee Members:

David Johnson, Chair
Hongyang Sun



Abstract

Structural health monitoring (SHM) is crucial for maintaining the safety and durability of infrastructure. To address the limitations of traditional inspection methods, this study leverages cutting-edge deep learning-based segmentation models for autonomous crack identification. Specifically, we utilized the recently launched YOLOv11 model, alongside the established DeepLabv3+ model, for crack segmentation. Mask R-CNN, a widely recognized model in crack segmentation studies, is used as the baseline approach for comparison. Our approach integrates the CREC cropping strategy to optimize dataset preparation and employs post-processing techniques, such as dilation and erosion, to refine segmentation results. Experimental results demonstrate that our method, combining state-of-the-art models, innovative data preparation strategies, and targeted post-processing, achieves superior mean Intersection-over-Union (mIoU) performance compared to the baseline, showcasing its potential for precise and efficient crack detection in SHM systems.


Zhaohui Wang

Enhancing Security and Privacy of IoT Systems: Uncovering and Resolving Cross-App Threats

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to our daily lives, enabling users to customize automation rules and develop IoT apps to meet their specific needs. However, as IoT devices interact with multiple apps across various platforms, users are exposed to complex security and privacy risks. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats.

In this work, we introduce two innovative approaches to uncover and address these concealed threats in IoT environments. The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app chains that are formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores to evaluate the risks based on inferences. Additionally, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks. To systematically detect cross-app interference threats, we propose to apply principles of logical fallacies to formalize conflicts in rule interactions. We identify and categorize cross-app interference by examining relations between events in IoT apps. We define new risk metrics for evaluating the severity of these interferences and use optimization techniques to resolve interference threats efficiently. This approach ensures comprehensive coverage of cross-app interference, offering a systematic solution compared to the ad hoc methods used in previous research.
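The cross-app chain idea can be sketched with toy trigger-action rules: a chain forms when one rule's action matches another rule's trigger, and a conflict is flagged when the chained action undoes the first. The device names and opposites table below are hypothetical; the actual work formalizes conflicts via logical fallacies and evaluates them with risk metrics.

```python
def find_interference(rules):
    """Sketch of cross-app chain analysis. `rules` maps rule name ->
    (trigger_event, action_event). A pair (A, B) is flagged when A's
    action triggers B and B's action is the opposite of A's action,
    i.e. rule B silently undoes what rule A just did."""
    opposite = {"door.lock": "door.unlock", "door.unlock": "door.lock",
                "heater.on": "heater.off", "heater.off": "heater.on"}
    conflicts = []
    for a_name, (trig_a, act_a) in rules.items():
        for b_name, (trig_b, act_b) in rules.items():
            if a_name != b_name and act_a == trig_b \
                    and opposite.get(act_a) == act_b:
                conflicts.append((a_name, b_name))
    return conflicts

demo_rules = {
    "away_lock": ("user.away", "door.lock"),     # app 1: lock when away
    "pet_door":  ("door.lock", "door.unlock"),   # app 2: undoes the lock
    "night":     ("sunset",    "light.on"),      # benign, no chain
}
```

Individually each rule looks harmless; only the chain analysis reveals that installing both door apps leaves the home unlocked whenever the user leaves.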

To enhance forensic capabilities within IoT, we integrate blockchain technology to create a secure, immutable framework for digital forensics. This framework enables the identification, tracing, storage, and analysis of forensic information to detect anomalous behavior. Furthermore, we developed a large-scale, manually verified, comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. Each of these approaches has been evaluated using our dataset of real-world apps, collectively offering valuable insights and tools for enhancing IoT security and privacy against cross-app threats.