Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Christopher Ord

A Hardware-Agnostic Simultaneous Transmit And Receive (STAR) Architecture for the Transmission of Non-Repeating FMCW Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rachel Jarvis, Chair
Shannon Blunt
Patrick McCormick


Abstract

As the usable RF spectrum grows more congested, it is increasingly necessary for communication and radar systems to share the same frequencies without disturbing one another. To accomplish this, research has focused on designing a class of non-repeating radar waveforms that appear as noise at the receivers of uncooperative systems, but the peak power from high-power pulsed systems can still overwhelm nearby in-band systems. Therefore, to minimize peak power while maximizing the total energy on target, radar systems must transition to operating at a 100% duty cycle, which inherently requires Simultaneous Transmit and Receive (STAR) operation.

One inherent difficulty when operating monostatic STAR systems is the direct-path coupling interference that can saturate components in the radar's receive chain, rendering digital processing methods that remove this interference ineffective. This thesis proposes a method to reduce the self-interference between the radar's transmitter and receiver ahead of the receiver's sensitive components, thereby increasing the power at which the radar can transmit. By using a combination of tests that manipulate the timing, phase, and magnitude of a secondary waveform injected into the radar just before the receiver, upwards of 35.0 dB of self-interference cancellation is achieved for radar waveforms with bandwidths of up to 100 MHz at both S-band and X-band, in both simulation and open-air testing.
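
As a rough illustration of the cancellation step (a minimal sketch with hypothetical coupling parameters and waveform settings, not the hardware architecture developed in the thesis), the direct-path interference can be modeled as a delayed, scaled, phase-rotated copy of the transmit waveform and suppressed by injecting a matching replica:

    import numpy as np

    fs = 200e6                                        # sample rate (assumed)
    t = np.arange(4096) / fs
    tx = np.exp(1j * np.pi * (100e6 / t[-1]) * t**2)  # 100 MHz LFM chirp (assumed)

    # Direct-path coupling: attenuated, delayed, phase-shifted copy of the transmission.
    delay, gain, phase = 12, 0.1, 0.7                 # hypothetical coupling parameters
    coupling = gain * np.exp(1j * phase) * np.roll(tx, delay)

    # Cancellation waveform: a replica whose timing, phase, and magnitude have been
    # tuned (here assumed nearly perfectly estimated); estimation error leaves a residual.
    est = 0.99 * gain * np.exp(1j * (phase + 0.01)) * np.roll(tx, delay)
    residual = coupling - est

    cancellation_db = 10 * np.log10(np.sum(np.abs(coupling)**2) / np.sum(np.abs(residual)**2))
    print(f"{cancellation_db:.1f} dB of self-interference cancellation")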


Fatima Al-Shaikhli

Optical Fiber Measurements: Leveraging Coherent FMCW Techniques

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical fiber technology have proven to be invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical fiber sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceiver systems to develop novel measurement techniques for characterizing optical fiber properties. Specifically, our goal is to leverage a digitally chirped frequency-modulated continuous wave (FMCW) to extract detailed information about optical fiber characteristics, as well as target range. Through this approach, we aim to enable more accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on intensity-modulated electrostriction response, (2) self-homodyne coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (OFDR) system.

Electrostriction in an optical fiber arises from the interaction between the forward-propagating optical signal and acoustic standing waves that resonate radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through the measurement of the intensity-modulation-induced electrostriction response. Because the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
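
For reference, a Maxwellian envelope has the standard single-parameter form (shown here only as a reminder of the shape, in generic notation; the relation between the shape parameter and the mode field radius is established in the thesis itself):

    f(\nu) \propto \nu^{2} \exp\left(-\frac{\nu^{2}}{2a^{2}}\right)

so fitting the measured loss-spectrum envelope to this curve recovers the single parameter a, from which the mode field radius follows.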

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Multi-target detection is demonstrated experimentally, and while only amplitude modulation is required in the LiDAR transmitter, the phase-diversity coherent receiver enables simultaneous detection of both range and velocity for each target, along with the sign of the target’s velocity.
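
To make the de-chirping relationship concrete, here is a minimal sketch (with an assumed chirp slope and carrier wavelength, not the experimental parameters) of how a target's beat frequency encodes both range and signed velocity after self-homodyne mixing:

    c = 3e8                         # speed of light, m/s
    wavelength = 1550e-9            # optical carrier wavelength (assumed)
    S = 4e9 / 100e-6                # chirp slope: 4 GHz swept in 100 us (hypothetical)

    def beat_frequency(range_m, velocity_mps):
        """Beat tone after de-chirping for a target at range_m closing at velocity_mps."""
        f_range = 2 * S * range_m / c               # range term, proportional to chirp slope
        f_doppler = 2 * velocity_mps / wavelength   # Doppler term, signed, so a
        return f_range + f_doppler                  # phase-diversity receiver resolves direction

    print(beat_frequency(10.0, 0.5))                # e.g. a target at 10 m closing at 0.5 m/s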

In addition, we demonstrate a polarization-sensitive OFDR system utilizing a commercially available digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity of the chirp and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately , a chirping bandwidth, and a measurement time, this system enables precise birefringence measurements. By launching the optical signal in three mutually orthogonal SOPs, we can measure birefringence vectors along the fiber, providing not only the magnitude of birefringence but also the direction of any external pressure applied to the fiber.


Landen Doty

Assessing the Effects of Source Language on Binary Similarity Tools

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Alex Bardas
Drew Davidson

Abstract

Binary similarity is a fundamental technique that enables software analysis practitioners to compare machine-level code at scale and with fine granularity. With applications in software reverse engineering, vulnerability research, malware attribution, and more, state-of-the-art (SOTA) binary similarity tools have undergone thorough research and development to account for variations in compilers, optimizations, machine architectures, and even obfuscations. Although these tools aim to compare and detect binary-level code segments generated from similar or identical source code, no preexisting work has investigated the effects of source languages other than C and C++. This thesis addresses this research gap by presenting a thorough investigation of SOTA binary similarity tools when applied to modern compiled languages, Rust and Golang.

To adequately evaluate the capabilities of the available binary similarity approaches, this work includes three distinct tools: BSim, a new component of the Ghidra Software Reverse Engineering Framework, which uses a clustering-based similarity mechanism; BinDiff, an industry-recognized tool using graph-based comparisons; and jTrans, a BERT-based model fine-tuned for the binary similarity task. First, to enable this work, we introduce a new dataset of Rust and Golang binaries compiled from leading open-source projects in the Homebrew and Arch Linux repositories. Comprising 800 binaries and over 1 million functions, this dataset was built to represent a broad range of implementation styles, application diversity, and source-language features. Next, the main investigation of this thesis is presented, wherein we assess each approach's ability to accurately report semantically equivalent functions compiled from the same source code. Results across the three tools reveal a systematic degradation of precision when comparing binaries produced by Rust and Go rather than those produced by C and C++. Finally, we provide a technical demonstration that highlights the implications of these results and discuss near- and long-term solutions to more adequately equip binary analysis practitioners.
 


Liangqin Ren

Understanding and Mitigating Security Risks towards Trustworthy Deep Learning Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Bo Luo
Zijun Yao
Xinmai Yang

Abstract

Deep learning is widely used in healthcare, finance, and other critical domains, raising concerns about system trustworthiness. However, deep learning models and data still face three types of critical attacks: model theft, identity impersonation, and abuse of AI-generated content (AIGC). To address model theft, homomorphic encryption has been explored for privacy-preserving inference, but it remains highly inefficient. To counter identity impersonation, prior work focuses on detection, disruption, and tracing—yet fails to protect source and target images simultaneously. To prevent AIGC abuse, methods like evaluation, watermarking, and machine unlearning exist, but text-driven image editing remains largely unprotected.

This report addresses the above challenges through three key designs. First, to enable privacy-preserving inference while accelerating homomorphic encryption, we propose PrivDNN, which selectively encrypts the most critical model parameters, significantly reducing encrypted operations. We design a selection score to evaluate neuron importance and use a greedy algorithm to iteratively secure the most impactful neurons. Across four models and datasets, PrivDNN reduces encrypted operations by 85%–98%, and cuts inference time and memory usage by over 97% while preserving accuracy and privacy. Second, to counter identity impersonation in deepfake face-swapping, where both the source and target can be exploited, we introduce PhantomSeal, which embeds invisible perturbations to encode a hidden “cloak” identity. When used as a target, the resulting content displays visible artifacts; when used as a source, the generated deepfake is altered to resemble the cloak identity. Evaluations across two generations of deepfake face-swapping show that PhantomSeal reduces attack success from 97% to 0.8%, with 95% of outputs recognized as the cloak identity, providing robust protection against manipulation. Third, to prevent AIGC abuse, we construct a comprehensive dataset, perform large-scale human evaluation, and establish a benchmark for detecting AI-generated artwork to better understand abuse risks in AI-generated content. Building on this direction, we propose Protecting Copyright against Image Editing (PCIE) to address copyright infringement in text-driven image editing. PCIE embeds an invisible copyright mark into the original image, which transforms into a visible watermark after text-driven editing to automatically reveal ownership upon unauthorized modification.
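
A minimal sketch of the greedy selection loop at the core of the PrivDNN idea (the importance_score function and the encryption budget below are hypothetical stand-ins for the thesis's selection score and parameters):

    def greedy_select(model, candidate_neurons, importance_score, budget):
        """Iteratively pick the neurons whose encryption is most impactful."""
        selected = []
        remaining = list(candidate_neurons)
        while len(selected) < budget and remaining:
            # choose the neuron that maximizes the selection score when added
            best = max(remaining, key=lambda n: importance_score(model, selected + [n]))
            selected.append(best)
            remaining.remove(best)
        return selected   # only these neurons are computed under homomorphic encryption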


Andrew Stratmann

Efficient Index-Based Multi-User Scheduling for Mobile mmWave Networks: Balancing Channel Quality and User Experience

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Morteza Hashemi, Chair
Prasad Kulkarni
Erik Perrins


Abstract

Millimeter Wave (mmWave) communication technologies have the potential to establish high data rates for next-generation wireless networks, as well as enable novel applications that were previously untenable due to high throughput requirements. Yet reliable and efficient mmWave communication remains challenged by intermittent link quality due to user mobility and frequent line-of-sight (LoS) blockage, which make links unavailable or more costly to use. These factors are further exacerbated in multi-user settings, where beam alignment overhead, limited RF chains, and heterogeneous user requirements must be balanced. In this work, we present a hybrid multi-user scheduling solution that jointly accounts for mobility- and blockage-induced unavailability to enhance user experience in mmWave video streaming applications. Our approach integrates two key components: (i) a blockage-aware scheduling strategy modeled via a Restless Multi-Armed Bandit (RMAB) formulation and prioritized using Whittle indexing, and (ii) a mobility-aware geometric model that estimates beam alignment overhead cost as a function of receiver motion. We develop a comprehensive and efficient index-based scheduler that fuses these models and leverages contextual information, such as receiver distance, mobility history, and queue state, to schedule multiple users so as to maximize throughput. Simulation results demonstrate that our approach reduces system queue backlog and improves fairness compared to round-robin and traditional index-based baselines.
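
A minimal sketch of the index-based scheduling step (the index and user state below are toy placeholders; the actual Whittle indices and mobility/overhead models are those developed in the thesis): compute a priority index per user and serve the highest-index users subject to the number of RF chains.

    def schedule(users, rf_chains, index_fn):
        """Serve the rf_chains users with the highest priority indices."""
        ranked = sorted(users, key=index_fn, reverse=True)
        return ranked[:rf_chains]

    def example_index(u):
        # toy index: backlog weighted by LoS availability, discounted by alignment cost
        return u["queue_len"] * u["p_los"] - u["align_cost"]

    users = [
        {"id": 1, "queue_len": 40, "p_los": 0.9, "align_cost": 5.0},
        {"id": 2, "queue_len": 70, "p_los": 0.4, "align_cost": 1.0},
        {"id": 3, "queue_len": 20, "p_los": 0.8, "align_cost": 0.5},
    ]
    print([u["id"] for u in schedule(users, rf_chains=2, index_fn=example_index)])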


Tianxiao Zhang

Efficient and Effective Object Detection and Recognition: from Convolutions to Transformers

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Bo Luo, Chair
Prasad Kulkarni
Fengjun Li
Cuncong Zhong
Guanghui Wang

Abstract

With the development of Convolutional Neural Networks (CNNs), computer vision has entered a new era, significantly enhancing the performance of tasks such as image classification, object detection, segmentation, and recognition. Furthermore, the introduction of Transformer architectures has brought the attention mechanism and a global perspective to computer vision, advancing the field to a new level. The inductive bias inherent in CNNs makes convolutional models particularly well-suited for processing images and videos. On the other hand, the attention mechanism in Transformer models allows them to capture global relationships between tokens. While Transformers often require more data and longer training periods compared to their convolutional counterparts, they have the potential to achieve comparable or even superior performance when the constraints of data availability and training time are mitigated.

In this work, we propose more efficient and effective CNNs and Transformers to increase the performance of object detection and recognition. (1) A novel approach is proposed for real-time detection and tracking of small golf balls by combining object detection with the Kalman filter. Several classical object detection models were implemented and compared in terms of detection precision and speed. (2) To address the domain shift problem in object detection, we employ generative adversarial networks (GANs) to generate images from different domains. The original RGB images are concatenated with the corresponding GAN-generated images to form a 6-channel representation, improving model performance across domains. (3) A dynamic strategy for improving label assignment in modern object detection models is proposed. Rather than relying on fixed or statistics-based adaptive thresholds, a dynamic paradigm is introduced to define positive and negative samples. This allows more high-quality samples to be selected as positives, reducing the gap between classification and IoU scores and producing more accurate bounding boxes. (4) An efficient hybrid architecture combining Vision Transformers and convolutional layers is introduced for object recognition, particularly for small datasets. Lightweight depth-wise convolution modules bypass the entire Transformer block to capture local details that the Transformer backbone might overlook. The majority of the computations and parameters remain within the Transformer architecture, resulting in significantly improved performance with minimal overhead. (5) An innovative Multi-Overlapped-Head Self-Attention mechanism is introduced to enhance information exchange between heads in the Multi-Head Self-Attention mechanism of Vision Transformers. By overlapping adjacent heads during self-attention computation, information can flow between heads, leading to further improvements in vision recognition.
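
As context for contribution (3), here is a minimal sketch of a statistics-based adaptive assignment (the kind of baseline the proposed dynamic paradigm moves beyond), in which each ground-truth box receives its own IoU threshold derived from the statistics of its candidate anchors:

    import numpy as np

    def assign_labels(ious):                # ious: shape (num_anchors, num_gts)
        labels = np.zeros(ious.shape[0], dtype=int)   # 0 = negative sample
        for g in range(ious.shape[1]):
            cand = ious[:, g]
            thr = cand.mean() + cand.std()            # per-GT adaptive threshold
            labels[cand >= thr] = 1                   # 1 = positive sample for this GT
        return labels

    ious = np.random.rand(100, 3)                     # toy IoU matrix
    print(assign_labels(ious).sum(), "anchors assigned as positives")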


Faris El-Katri

Source Separation using Sparse Bayesian Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
James Stiles


Abstract

Wireless communication in recent decades has allowed a substantial increase in both the speed and the volume of information that can be transmitted over large distances. However, given expanding societal needs coupled with a finite available spectrum, the question arises of how to transmit information more efficiently. One natural answer lies in spectrum sharing, that is, in allowing multiple noncooperative agents to inhabit the same spectrum bands. To achieve this, we must be able to reliably separate the desired signals from those of other agents in the background. However, since the agents are noncooperative, we must develop a model-agnostic approach to this problem. In this work, we consider cohabitation between radar signals and communication signals, with the former being the desired signal and the latter being the noncooperative agent. To approach such problems, which involve highly underdetermined linear systems, we propose utilizing Sparse Bayesian Learning and present our results on selected problems.
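
A minimal sketch of textbook Sparse Bayesian Learning updates for an underdetermined system y = Ax + n (generic EM-style updates with a fixed noise variance; not the specific formulation or radar/communication signal models used in this work):

    import numpy as np

    def sbl(A, y, noise_var=1e-2, iters=50):
        """Estimate a sparse x from y = A x + n via per-coefficient variances gamma."""
        n, m = A.shape
        gamma = np.ones(m)
        for _ in range(iters):
            Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
            mu = Sigma @ A.T @ y / noise_var
            gamma = mu**2 + np.diag(Sigma)       # EM update of the prior variances
        return mu                                 # posterior mean; tiny gamma -> pruned

    A = np.random.randn(20, 100)                  # highly underdetermined system
    x_true = np.zeros(100); x_true[[3, 40, 77]] = [1.0, -2.0, 0.5]
    y = A @ x_true + 0.01 * np.random.randn(20)
    print(np.argsort(-np.abs(sbl(A, y)))[:3])     # indices of the dominant coefficients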


Past Defense Notices


Koyel Pramanick

Detect Evidence of Compiler Triggered Security Measures in Binary Code

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

The primary goal of this thesis is to develop and explore techniques to identify security measures added by compilers in software binaries. These measures, added automatically during the build process, include runtime security checks like stack canaries, AddressSanitizer (ASan), and Control Flow Integrity (CFI), which help protect against memory errors, buffer overflows, and control flow attacks. This work also investigates how unresolved compiler warnings, especially those related to security, can be identified in binaries when the source code is unavailable. By studying the patterns and markers left by these compiler features, this thesis provides methods to analyze and verify the security provisions embedded in software binaries. These efforts aim to bridge the gap between compile-time diagnostics and binary-level analysis, offering a way to better understand the security protections applied during software compilation. Ultimately, this work seeks to make software more transparent and give users the tools to independently assess the security measures present in compiled software, fostering greater trust and accountability in software systems.
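
As one concrete example of the kind of binary-level marker involved (a minimal sketch of a single heuristic, not the thesis's detection methodology): code built with GCC/Clang stack protection calls __stack_chk_fail, so a dynamically linked binary that uses stack canaries typically carries that symbol name in its string data.

    def has_stack_canary_marker(path):
        """Heuristic: look for the stack-protector failure handler's symbol name."""
        with open(path, "rb") as f:
            data = f.read()
        return b"__stack_chk_fail" in data

    print(has_stack_canary_marker("/bin/ls"))   # example target; the result varies by system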


Srinitha Kale

Automating Symbol Recognition in Spot It: Advancing AI-Powered Detection

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Esam El-Araby
Prasad Kulkarni


Abstract

The "Spot It!" game, featuring 55 cards each with 8 unique symbols, presents a complex challenge of identifying a single matching symbol between any two cards. Addressing this challenge, machine learning has been employed to automate symbol recognition, enhancing gameplay and extending applications into areas like pattern recognition and visual search. Due to the scarcity of available datasets, a comprehensive collection of 57 distinct Spot It symbols was created, with each class consisting of 1,800 augmented images. These images were manipulated through techniques such as scaling, rotation, and resizing to represent various visual scenarios. Then developed a convolutional neural network (CNN) with five convolutional layers, batch normalization, and dropout layers, and employed the Adam optimizer to train model to accurately recognize these symbols. The robust dataset included over 102,600 images, each subject to extensive augmentation to improve the model's ability to generalize across different orientation and scaling conditions. 

The model was evaluated using 55 scanned "Spot It!" cards, where symbols were extracted and preprocessed for prediction. It achieved high accuracy in symbol identification, demonstrating significant resilience to common challenges such as rotations and scaling. This project illustrates the effective integration of data augmentation, deep learning, and computer vision techniques in tackling complex pattern recognition tasks, proving that artificial intelligence can significantly enhance traditional gaming experiences and create new opportunities in various fields. This project delves into the design, implementation, and testing of the CNN, providing a detailed analysis of its performance and highlighting its potential as a transformative tool in image recognition and categorization.


Sudha Chandrika Yadlapalli

BERT-Driven Sentiment Analysis: Automated Course Feedback Classification and Ratings

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Automating the analysis of unstructured textual data, such as student course feedback, is crucial for gaining actionable insights. This project focuses on developing a sentiment analysis system leveraging the DeBERTa-v3-base model, a variant of BERT (Bidirectional Encoder Representations from Transformers), to classify feedback sentiments and generate corresponding ratings on a 1-to-5 scale.

A dataset of 100,000+ student reviews was preprocessed, and the model was fine-tuned on it to handle class imbalances and capture contextual nuances. Training was conducted on high-performance A100 GPUs, which enhanced computational efficiency and significantly reduced training times. The trained BERT sentiment model demonstrated superior performance compared to traditional machine learning models, achieving ~82% accuracy in sentiment classification.
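
A minimal sketch of the fine-tuning setup with Hugging Face Transformers (the tiny placeholder dataset and hyperparameters are hypothetical; the model name and the 5-class rating head follow the description above):

    from datasets import Dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "microsoft/deberta-v3-base", num_labels=5)      # ratings 1-5 as classes

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

    # Placeholder reviews; the real pipeline uses the preprocessed 100,000+ review dataset.
    train_ds = Dataset.from_dict({"text": ["Great course!", "Too much homework."],
                                  "label": [4, 1]}).map(tokenize, batched=True)

    args = TrainingArguments(output_dir="sentiment_model", num_train_epochs=3,
                             per_device_train_batch_size=32)
    Trainer(model=model, args=args, train_dataset=train_ds).train()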

The model was integrated into a functional web application, providing a streamlined way to evaluate and visualize course reviews dynamically. Key features include a course ratings dashboard, which allows students to view aggregated ratings for each course, and a review submission feature where new feedback is analyzed for sentiment in real time. For the department, an admin page provides secure access to detailed analytics, such as the distribution of positive and negative reviews, visualized trends, and individual course reviews with their corresponding sentiment scores.

This project includes a comprehensive pipeline, starting from data preprocessing and model training to deploying an end-to-end application. Traditional machine learning models, such as Logistic Regression and Decision Tree, were initially tested but yielded suboptimal results. The adoption of BERT, trained on a large dataset of 100k reviews, significantly improved performance, showcasing the benefits of advanced transformer-based models for sentiment analysis tasks.


Shriraj K. Vaidya

Exploring DL Compiler Optimizations with TVM

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Dongjie Wang
Zijun Yao


Abstract

Deep Learning (DL) compilers, also called Machine Learning (ML) compilers, take a computational graph representation of a ML model as input and apply graph-level and operator-level optimizations to generate optimized machine-code for different supported hardware architectures. DL compilers can apply several graph-level optimizations, including operator fusion, constant folding, and data layout transformations to convert the input computation graph into a functionally equivalent and optimized variant. The DL compilers also perform kernel scheduling, which is the task of finding the most efficient implementation for the operators in the computational graph. While many research efforts have focused on exploring different kernel scheduling techniques and algorithms, the benefits of individual computation graph-level optimizations are not as well studied. In this work, we employ the TVM compiler to perform a comprehensive study of the impact of different graph-level optimizations on the performance of DL models on CPUs and GPUs. We find that TVM's graph optimizations can improve model performance by up to 41.73% on CPUs and 41.6% on GPUs, and by 16.75% and 21.89%, on average, on CPUs and GPUs, respectively, on our custom benchmark suite.
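
A minimal sketch of the Relay build flow with graph-level optimizations enabled (the ONNX model file and targets are placeholders; opt_level selects which graph passes, such as operator fusion and constant folding, are applied):

    import onnx
    import tvm
    from tvm import relay

    onnx_model = onnx.load("model.onnx")                      # hypothetical model file
    mod, params = relay.frontend.from_onnx(onnx_model)

    # Higher opt_level values enable more graph-level passes before code generation.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)  # CPU target; "cuda" for GPU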


Rizwan Khan

Fatigue Crack Segmentation of Steel Bridges Using Deep Learning Models: A Comparative Study

When & Where:


Learned Hall, Room 3131

Committee Members:

David Johnson, Chair
Hongyang Sun



Abstract

Structural health monitoring (SHM) is crucial for maintaining the safety and durability of infrastructure. To address the limitations of traditional inspection methods, this study leverages cutting-edge deep learning-based segmentation models for autonomous crack identification. Specifically, we utilized the recently released YOLOv11 model alongside the established DeepLabv3+ model for crack segmentation. Mask R-CNN, a widely recognized model in crack segmentation studies, is used as the baseline for comparison. Our approach integrates the CREC cropping strategy to optimize dataset preparation and employs post-processing techniques, such as dilation and erosion, to refine segmentation results. Experimental results demonstrate that our method, combining state-of-the-art models, innovative data preparation strategies, and targeted post-processing, achieves superior mean Intersection-over-Union (mIoU) performance compared to the baseline, showcasing its potential for precise and efficient crack detection in SHM systems.
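
A minimal sketch of the dilation/erosion post-processing and of the per-image IoU that is averaged into mIoU (the toy masks and kernel size are hypothetical):

    import cv2
    import numpy as np

    def postprocess(mask, ksize=3, iterations=1):
        kernel = np.ones((ksize, ksize), np.uint8)
        mask = cv2.dilate(mask, kernel, iterations=iterations)   # bridge small gaps in the crack
        return cv2.erode(mask, kernel, iterations=iterations)    # restore the original thickness

    def iou(pred, gt):
        inter = np.logical_and(pred > 0, gt > 0).sum()
        union = np.logical_or(pred > 0, gt > 0).sum()
        return inter / union if union else 1.0

    pred = np.zeros((64, 64), np.uint8); pred[30:34, 5:60] = 255   # toy predicted crack mask
    gt = np.zeros((64, 64), np.uint8);   gt[31:35, 5:60] = 255     # toy ground-truth mask
    print(iou(postprocess(pred), gt))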


Zhaohui Wang

Enhancing Security and Privacy of IoT Systems: Uncovering and Resolving Cross-App Threats

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to our daily lives, enabling users to customize automation rules and develop IoT apps to meet their specific needs. However, as IoT devices interact with multiple apps across various platforms, users are exposed to complex security and privacy risks. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats.

In this work, we introduce two innovative approaches to uncover and address these concealed threats in IoT environments. The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app chains that are formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores to evaluate the risks based on inferences. Additionally, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks. To systematically detect cross-app interference threats, we propose to apply principles of logical fallacies to formalize conflicts in rule interactions. We identify and categorize cross-app interference by examining relations between events in IoT apps. We define new risk metrics for evaluating the severity of these interferences and use optimization techniques to resolve interference threats efficiently. This approach ensures comprehensive coverage of cross-app interference, offering a systematic solution compared to the ad hoc methods used in previous research.

To enhance forensic capabilities within IoT, we integrate blockchain technology to create a secure, immutable framework for digital forensics. This framework enables the identification, tracing, storage, and analysis of forensic information to detect anomalous behavior. Furthermore, we developed a large-scale, manually verified, comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. Each of these approaches has been evaluated using our dataset of real-world apps, collectively offering valuable insights and tools for enhancing IoT security and privacy against cross-app threats.


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, long execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

 In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our algorithm using the multidimensional Poisson equation as a case study. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. We will also focus on extending these techniques to PDEs relevant to computational fluid dynamics and financial modeling, further bridging the gap between theoretical quantum algorithms and practical applications.
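
For context, a minimal sketch of the classical finite-difference system for the 2-D Poisson equation that a C2Q-style encoding would map onto a quantum state (the grid size and right-hand side are hypothetical; the quantum circuits themselves are not shown):

    import numpy as np

    def poisson_matrix_2d(n):
        """Second-order FDM Laplacian on an n x n interior grid (Dirichlet boundaries)."""
        T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D second-difference matrix
        I = np.eye(n)
        return np.kron(I, T) + np.kron(T, I)                    # 2-D operator of size n^2 x n^2

    A = poisson_matrix_2d(4)
    b = np.ones(16)               # discretized right-hand side (hypothetical source term)
    u = np.linalg.solve(A, b)     # classical reference solution for comparison
    print(u.reshape(4, 4))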


Hao Xuan

A Unified Algorithmic Framework for Biological Sequence Alignment

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Sequence alignment is pivotal in both homology searches and the mapping of reads from next-generation sequencing (NGS) and third-generation sequencing (TGS) technologies. Currently, the majority of sequence alignment algorithms follow the "seed-and-extend" paradigm, designed to filter out unrelated or nonhomologous sequences when no highly similar subregions are detected. A well-known implementation of this paradigm is BLAST, one of the most widely used multipurpose aligners. Over time, this paradigm has been optimized in various ways to suit different alignment tasks. However, while these specialized aligners often deliver high performance and efficiency, they are typically restricted to one or a few alignment applications. To the best of our knowledge, no existing aligner can perform all alignment tasks while maintaining superior performance and efficiency.
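
A minimal toy sketch of the seed-and-extend paradigm itself (exact k-mer seeding followed by ungapped X-drop extension; this shows the generic idea, not VAT's indexing or seeding design):

    def build_index(ref, k=11):
        """Map every k-mer of the reference to the positions where it occurs."""
        index = {}
        for i in range(len(ref) - k + 1):
            index.setdefault(ref[i:i + k], []).append(i)
        return index

    def extend(ref, query, r, q, match=1, mismatch=-2, x_drop=5):
        """Ungapped rightward extension until the score falls x_drop below its best."""
        score = best = 0
        while r < len(ref) and q < len(query) and best - score <= x_drop:
            score += match if ref[r] == query[q] else mismatch
            best = max(best, score)
            r += 1; q += 1
        return best

    def seed_and_extend(ref, query, k=11):
        index = build_index(ref, k)
        hits = []
        for q in range(len(query) - k + 1):
            for r in index.get(query[q:q + k], []):       # exact seed hits
                hits.append((r, q, extend(ref, query, r + k, q + k)))
        return hits

    print(seed_and_extend("ACGTACGTGGTACCGTACGTTT", "GGTACCGTACG", k=5))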

In this work, we introduce a unified sequence alignment framework to address this limitation. Our alignment framework is built on the seed-and-extend paradigm but incorporates novel designs in its seeding and indexing components to maximize both flexibility and efficiency. The resulting software, the Versatile Alignment Toolkit (VAT), allows users to switch seamlessly between nearly all major alignment tasks through command-line parameter configuration. VAT was rigorously benchmarked against leading aligners for DNA and protein homolog searches, NGS and TGS read mapping, and whole-genome alignment. The results demonstrate VAT's top-tier performance across all benchmarks, underscoring the feasibility of using a unified algorithmic framework to handle diverse alignment tasks. VAT can simplify and standardize bioinformatic analysis workflows that involve multiple alignment tasks.


Venkata Sai Krishna Chaitanya Addepalli

A Comprehensive Approach to Facial Emotion Recognition: Integrating Established Techniques with a Tailored Model

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Facial emotion recognition has become a pivotal application of machine learning, enabling advancements in human-computer interaction, behavioral analysis, and mental health monitoring. Despite its potential, challenges such as data imbalance, variation in expressions, and noisy datasets often hinder accurate prediction.

 This project presents a novel approach to facial emotion recognition by integrating established techniques like data augmentation and regularization with a tailored convolutional neural network (CNN) architecture. Using the FER2013 dataset, the study explores the impact of incremental architectural improvements, optimized hyperparameters, and dropout layers to enhance model performance.
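
A minimal sketch of the kind of on-the-fly augmentation applied in front of the CNN (the specific transforms and magnitudes are hypothetical), using Keras preprocessing layers:

    from tensorflow.keras import layers, models

    augment = models.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),          # up to roughly 36 degrees of rotation
        layers.RandomZoom(0.1),
        layers.RandomTranslation(0.1, 0.1),
    ])
    # Placed in front of the classifier so each epoch sees perturbed copies of the
    # 48x48 grayscale FER2013 faces; dropout in the head provides regularization.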

 The proposed model effectively addresses issues related to data imbalance and overfitting while achieving enhanced accuracy and precision in emotion classification. The study underscores the importance of feature extraction through convolutional layers and optimized fully connected networks for efficient emotion recognition. The results demonstrate improvements in generalization, setting a foundation for future real-time applications in diverse fields. 


Tejarsha Arigila

Benchmarking Aggregation Free Federated Learning using Data Condensation: Comparison with Federated Averaging

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Fengjun Li, Chair
Bo Luo
Sumaiya Shomaji


Abstract

This project investigates the performance of Federated Learning Aggregation-Free (FedAF) compared to traditional federated learning methods under non-independent and identically distributed (non-IID) data conditions, characterized by Dirichlet distribution parameters (alpha = 0.02, 0.05, 0.1). Utilizing the MNIST and CIFAR-10 datasets, the study benchmarks FedAF against Federated Averaging (FedAVG) in terms of accuracy, convergence speed, communication efficiency, and robustness to label and feature skews.  
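
A minimal sketch of the Dirichlet-based non-IID partitioning behind the quoted alpha values (the label array and client count are placeholders; smaller alpha yields a more skewed per-client label distribution):

    import numpy as np

    def dirichlet_partition(labels, num_clients, alpha, seed=0):
        """Split sample indices across clients, class by class, with Dirichlet shares."""
        rng = np.random.default_rng(seed)
        client_idx = [[] for _ in range(num_clients)]
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            rng.shuffle(idx)
            shares = rng.dirichlet(alpha * np.ones(num_clients))
            cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
            for client, part in enumerate(np.split(idx, cuts)):
                client_idx[client].extend(part.tolist())
        return client_idx

    labels = np.random.randint(0, 10, size=60000)          # e.g. an MNIST-sized label set
    parts = dirichlet_partition(labels, num_clients=10, alpha=0.05)
    print([len(p) for p in parts])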

Traditional federated learning approaches like FedAVG aggregate locally trained models at a central server to form a global model. However, these methods often encounter challenges such as client drift in heterogeneous data environments, which can adversely affect model accuracy and convergence rates. FedAF introduces an innovative aggregation-free strategy wherein clients collaboratively generate a compact set of condensed synthetic data. This data, augmented by soft labels from the clients, is transmitted to the server, which then uses it to train the global model. This approach effectively reduces client drift and enhances resilience to data heterogeneity. Additionally, by compressing the representation of real data into condensed synthetic data, FedAF improves privacy by minimizing the transfer of raw data.  

The experimental results indicate that while FedAF converges faster, it struggles to stabilize in highly heterogeneous environments due to the limited capacity of the condensed synthetic data to represent the real data.