Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. Consequently, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle.  The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while deep learning approaches (LSTM-based models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
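As a concrete illustration of threshold adjustment for minority classes, the sketch below tunes a per-label decision threshold to maximize F1 on validation data; the toy probabilities and labels are invented, not drawn from the Kaggle dataset, and this is not the project's actual implementation.

```python
# Sketch: per-label threshold tuning for imbalanced multi-label
# classification. With a rare label, the default 0.5 threshold can
# predict no positives at all; sweeping a grid recovers the minority class.

def f1_at_threshold(y_true, y_prob, threshold):
    """Compute F1 for one label at a given decision threshold."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(y_true, y_prob, grid=None):
    """Pick the threshold that maximizes F1 on validation data."""
    grid = grid or [i / 20 for i in range(1, 20)]
    return max(grid, key=lambda t: f1_at_threshold(y_true, y_prob, t))

# A rare "toxic" label: only 2 positives out of 8 comments.
y_true = [0, 0, 1, 0, 0, 0, 1, 0]
y_prob = [0.10, 0.05, 0.40, 0.20, 0.02, 0.15, 0.35, 0.08]

t = best_threshold(y_true, y_prob)
print(t, f1_at_threshold(y_true, y_prob, t))
```

At the default threshold of 0.5 this label scores F1 = 0; the tuned threshold recovers both positives.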


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, high execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated the proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
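As background for the FDM step that both variants share, the sketch below sets up and solves the classical finite-difference system for a 1D Poisson problem; the grid size and the test problem (f = pi^2 sin(pi x), exact solution u = sin(pi x)) are illustrative choices, not taken from the thesis, and no quantum encoding is shown.

```python
# Sketch: the classical finite-difference discretization that a quantum
# PDE solver would subsequently encode. Solves -u''(x) = f(x) on [0, 1]
# with u(0) = u(1) = 0 via the standard tridiagonal system.
import math

def solve_poisson_1d(f, n):
    """Solve -u'' = f with n interior grid points (Thomas algorithm)."""
    h = 1.0 / (n + 1)
    x = [(i + 1) * h for i in range(n)]
    b = [h * h * f(xi) for xi in x]          # right-hand side
    # Tridiagonal matrix: 2 on the diagonal, -1 off-diagonal.
    diag = [2.0] * n
    # Forward elimination of the subdiagonal.
    for i in range(1, n):
        m = -1.0 / diag[i - 1]
        diag[i] -= m * (-1.0)   # becomes 2 - 1/diag[i-1]
        b[i] -= m * b[i - 1]
    # Back substitution.
    u = [0.0] * n
    u[-1] = b[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (b[i] + u[i + 1]) / diag[i]
    return x, u

x, u = solve_poisson_1d(lambda x: math.pi**2 * math.sin(math.pi * x), 63)
err = max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u))
print(err)  # second-order accurate: error shrinks like h^2
```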


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices

SOUMYAROOP NANDI

Robust Object Tracking and Adaptive Detection for Autonavigation of Unmanned Aerial Vehicle

When & Where:


246 Nichols Hall

Committee Members:

Richard Wang, Chair
Jim Rowland
Jim Stiles


Abstract

Object detection and tracking are important research topics in computer vision with numerous practical applications. Although great progress has been made in both object detection and tracking over the last decade, they remain challenging in real-time applications such as automated navigation of an unmanned aerial vehicle and collision avoidance with a forward-looking camera. An automated and robust object tracking approach is proposed by integrating a kernelized correlation filter framework with an adaptive object detection technique based on the minimum barrier distance transform. The proposed tracker is automatically initialized with salient object detection, and the detected object is localized in the image frame with a rectangular bounding box. An adaptive object re-detection strategy is proposed to refine the location and boundary of the object when the tracking correlation response drops below a certain threshold. In addition, reliable pre-processing and post-processing methods are applied to the image frames to accurately localize the object. Extensive quantitative and qualitative experimentation on challenging datasets has been performed to verify the proposed approach. Furthermore, the proposed approach is comprehensively compared with six other recent state-of-the-art trackers, demonstrating that it greatly outperforms them in both tracking speed and accuracy.
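The re-detection strategy described above can be sketched as a simple per-frame control loop; the track and redetect stubs below are placeholders for the kernelized correlation filter and the minimum-barrier-distance detector, and the threshold value is illustrative, not the thesis's tuned setting.

```python
# Sketch: trust the tracker while its peak correlation response stays
# above a threshold; otherwise fall back to detection to re-acquire.

def run_tracker(frames, track, redetect, threshold=0.3):
    """Per-frame loop combining tracking with adaptive re-detection."""
    box = redetect(frames[0])            # initialize from detection
    history, redetections = [box], 0
    for frame in frames[1:]:
        box, response = track(frame, box)
        if response < threshold:         # tracker lost confidence
            box = redetect(frame)
            redetections += 1
        history.append(box)
    return history, redetections

# Toy stand-ins: each "frame" is a (box, response) pair the stubs replay.
frames = [((0, 0), 1.0), ((1, 0), 0.9), ((2, 0), 0.1), ((3, 0), 0.8)]
def track(frame, box):
    return frame                         # pretend tracker output
def redetect(frame):
    return frame[0]                      # pretend detector output

history, redetections = run_tracker(frames, track, redetect)
print(history, redetections)             # one low-confidence frame triggers redetection
```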


TRUC ANH NGUYEN

ResTP: A Configurable and Adaptable Multipath Transport Protocol for Future Internet Resilience

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Bo Luo
Gary Minden
Justin Rohrer

Abstract

With the motivation to develop a resilient and survivable networking system that can cope with the challenges posed by the rapid growth in networking technologies and use paradigms, as well as the impairments of TCP and UDP, we propose a general-purpose, configurable, and adaptable multipath-capable transport-layer protocol called ResTP. By supporting cross-layering, ResTP allows service tuning by the upper application layer while promptly reacting to underlying network dynamics using feedback from the lower layer. Our composable ResTP not only has the flexibility to provide services to different application classes operating across various network environments; its selection of mechanisms also increases the resilience of the system in which it is deployed, since the design of ResTP is guided by a set of principles derived from the ResiliNets framework. Moreover, the implementation of ResTP employs modular programming to minimize complexity while increasing extensibility; the addition of any new algorithm to ResTP requires only small changes to the existing code. Finally, many ResTP components, including its header, are optimized to reduce unnecessary overhead. In this proposal, we introduce ResTP's key functionalities, present preliminary simulation results comparing ResTP with TCP and UDP in ns-3, and discuss our plan toward the completion and analysis of the protocol. The results show that ResTP is a promising transport-layer protocol for Future Internet (FI) resilience.


JUSTIN DAWSON

Remote Monads and Remote Applicatives

When & Where:


246 Nichols Hall

Committee Members:

Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Bo Luo
Kyle Camarda

Abstract

Remote Procedure Calls (RPCs) are an integral part of the Internet of Things. Since the introduction of RPCs, there have been a number of optimizations to amortize the network overhead, including the addition of asynchronous calls and the batching of requests. In Haskell, we have discovered a principled way to compose procedure calls using the remote monad mechanism. A remote monad has primitive operations that evaluate outside the local runtime system and is a generalization of RPCs. Remote monads use natural transformations to build modular and composable network stacks that can automatically bundle requests into packets in a principled way, making them easy to adapt to a number of applications. We have created a framework that has been successfully used to implement JSON-RPC, a graphical browser-based library, an efficient bytestring implementation, and database queries. The result of this investigation is that, given a user-supplied packet transport mechanism, the cost of implementing bundling for remote monads can be amortized almost for free.
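The bundling idea can be sketched in Python (the actual framework is in Haskell, built on monads and natural transformations): result-free commands are queued, and a result-returning procedure flushes the queue so the whole batch crosses the network in one packet. The class and message names below are illustrative, not the framework's API.

```python
# Sketch of remote-monad-style bundling: commands need no result and
# are merely queued; a procedure needs a result, so it flushes the
# queue and everything travels in a single packet. The transport
# function is a user-supplied stand-in.

class RemoteSession:
    def __init__(self, transport):
        self.transport = transport   # sends one packet, returns replies
        self.queue = []              # pending commands

    def command(self, msg):
        """Queue a result-free call; nothing is sent yet."""
        self.queue.append(msg)

    def procedure(self, msg):
        """A call that needs a result: flush queued commands with it."""
        packet = self.queue + [msg]
        self.queue = []
        return self.transport(packet)[-1]   # reply to the procedure

# Stand-in transport: echoes each message, recording packets sent.
packets_sent = []
def transport(packet):
    packets_sent.append(packet)
    return [f"ack:{m}" for m in packet]

s = RemoteSession(transport)
s.command("draw line")
s.command("draw circle")
result = s.procedure("read pixel")
print(len(packets_sent), result)   # one packet carried all three calls
```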


GHAITH SHABSIGH

Covert Communications in the RF Band of Primary Wireless Networks

When & Where:


250 Nichols Hall

Committee Members:

Victor Frost, Chair
Shannon Blunt
Lingjia Liu
Erik Perrins
Tyrone Duncan

Abstract

Covert systems are designed to operate at a low probability of detection in order to provide system protection at the physical layer. The classical approach to covert communications hides the covert signal in noise by lowering the power spectral density of the signal to a level that makes it indistinguishable from that of the noise. However, the increasing demand for modern covert systems that can provide better protection against intercept receivers (IRs) and provide higher data rates has shifted the focus to the design of ad hoc covert networks (ACNs) that can hide their transmissions in the RF spectrum of primary networks (PNs). Early work on exploiting the RF band of other wireless systems has been promising; however, the difficulties in modeling such environments and analyzing the impact on and from the primary network have limited the work on this crucial subject. In this work, we provide the first comprehensive analysis of a covert network that exploits the RF band of an OFDM-based primary network to achieve covertness. A spectrum access algorithm is presented that allows the ACN to transmit in the RF spectrum of the PN with minimum interference. Next, we use stochastic geometry to model both the OFDM-based PN and the ACN. Stochastic geometry also allows us to provide a comprehensive analysis of two metrics, an aggregate metric and a ratio metric, which quantify the covertness and performance of the covert network from the perspectives of the IR and the ACN, respectively. The two metrics are used to determine the detectability limits of an ACN by an IR. Together with the proposed spectrum access algorithm, they provide a comprehensive basis for designing the ACN for a target covertness level and for analyzing the effect of the PN parameters on the ACN's expected performance. This work also addresses the trade-off between the ACN's covertness and its achievable throughput.
The overall research work illustrates the strong potential for using man-made transmissions as a mask for covert communications. 


RAHUL BAID

Applying Machine Learning through Programming Labs

When & Where:


2001B Eaton Hall

Committee Members:

Nicole Beckage, Chair
Jerzy Grzymala-Busse
Fengjun Li


Abstract

The goal of this project is to bring together core mathematics and programming skills by coding machine learning algorithms that can be incorporated into programming labs and exercises for graduate and undergraduate machine learning students.
The aim of building the labs is to provide students with a learning tool to gain a better understanding of the inner workings of machine learning algorithms. Additionally, the labs aim to expose the challenges that each algorithm brings on its own. SAS Analytics puts machine learning methods into perspective by explaining that machine learning enables “high-value predictions that can guide better decisions and smart actions in real time without human intervention.”[2] Machine learning methods can be applied to a wide spectrum of domains; therefore, rather than attempting to cover all the algorithms, I have incorporated those that are widely applicable and that explore key mathematical concepts. These machine learning labs will give students a learning approach to working through the intricacies of the underlying mathematical principles, and will also help students make better decisions about algorithm design and develop more accurate model predictions.
Since each machine learning lab focuses on a particular algorithm, each program comes with a different challenge. To write these labs, I first had to master the material, which entailed finding the purpose of the algorithm and the statistical knowledge involved. Through these findings, I developed labs with specific designs, datasets, and evaluation metrics. A key difference between this approach and many other machine learning textbook approaches is that the students are building up these individual labs from scratch. They are asked to write, for a variety of different algorithms, the cost/loss function, the optimization procedure and even basic evaluation metrics. While it may be easier to call a function within a programming language, it is also easy to violate assumptions or requirements of these algorithms. By programming algorithms from scratch, as students must do in this lab, they are better able to draw parallels between the applied and theoretical underpinnings of these algorithms. 
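A minimal example of the from-scratch style the labs require: logistic regression with a hand-written loss, hand-derived gradient step, and accuracy metric, with no ML library calls. The toy data and hyperparameters below are illustrative, not taken from the labs themselves.

```python
# Sketch: logistic regression built from scratch, as a lab would ask.
import math

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))   # sigmoid

def loss(w, b, xs, ys):
    """Mean cross-entropy, written out rather than imported."""
    eps = 1e-12
    return -sum(y * math.log(predict(w, b, x) + eps) +
                (1 - y) * math.log(1 - predict(w, b, x) + eps)
                for x, y in zip(xs, ys)) / len(xs)

def train(xs, ys, lr=0.5, steps=500):
    """Plain gradient descent with hand-derived gradients."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = sum((predict(w, b, x) - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum((predict(w, b, x) - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * gw
        b -= lr * gb
    return w, b

xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]   # separable toy data
w, b = train(xs, ys)
acc = sum((predict(w, b, x) >= 0.5) == bool(y) for x, y in zip(xs, ys)) / len(xs)
print(acc)
```

Writing the loss and gradients by hand is exactly where the "violated assumptions" the abstract mentions become visible to students.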


AKHILESH MISHRA

Multi-look SAR Processing and Array Optimization Applied to Radio Echo Sounding of Ice Sheets

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Stephen Yan
Prasad Gogineni


Abstract

Sea level rise is a problem of global importance because of its impact on infrastructure and residents in coastal regions. Airborne and satellite observations have shown that the margins of the Greenland and Antarctic ice sheets are melting and retreating, steadily increasing their contribution to sea level rise over the last decade. To understand ice dynamics and develop models that generate accurate estimates of the ice sheets' future contribution to sea level rise, more information on ice thickness and basal conditions is required. Airborne ice-penetrating radars are routinely deployed on long-range aircraft to perform ice thickness measurements, which are needed to derive information on bed topography and basal conditions. Acquiring useful radar reflections from the ice-bed interface is very challenging in regions where ice sheets are exhibiting the most rapid changes, because returns from the ice bed are very weak and often masked by off-nadir surface clutter. Advanced signal processing techniques, such as synthetic aperture radar (SAR) and array processing, are required to filter the clutter and extract weak bed echoes buried in the noise. However, past attempts to detect these signals have not been completely successful because system- and target-induced errors in SAR and array processing are not fully compensated. SAR processing in areas with significant surface slope degrades the signal-to-noise ratio. Also, systematic and random errors in amplitude and phase between receive channels degrade the performance of the array processors used to synthesize the cross-track beam pattern.
A novel Multi-look Time Domain Back Projection (MLTDBP) parallel processor has been developed to accurately model electromagnetic wave propagation through the ice and generate echograms with better SNCR (signal-to-noise-and-clutter ratio) in the along-track dimension. A novel dynamic channel equalization method (based on null optimization) has been developed to adaptively calibrate the receive channels, giving an improved SNCR for the cross-track processing algorithms. The two-dimensional processing algorithms have been shown to be effective in extracting weak bed echoes, sloped internal ice layers, and deep internal ice layers; these results are also used to generate a 3D ice-bed map of the fast-flowing Kangiata Nunaata Sermia (KNS) glacier in southwest Greenland.


SUSOBHAN DAS

Tunable Nano-photonic Devices

When & Where:


246 Nichols Hall

Committee Members:

Ron Hui, Chair
Alessandro Salandrino
Chris Allen
Jim Stiles
Judy Wu

Abstract

High-speed photonic systems and networks require electro-optic modulators to encode electronic signals onto optical carriers. The central focus of this research is twofold. First, the tunable properties and tuning mechanisms of optical materials such as graphene, vanadium dioxide (VO2), and indium tin oxide (ITO) are characterized systematically at the 1550 nm telecommunication wavelength. These materials are then used to design novel nano-photonic devices with high efficiency and a miniature footprint suitable for photonic integration.
Specifically, we experimentally investigated the complex index of graphene at near-infrared (NIR) wavelengths through reflectivity measurements on a SiO2/Si substrate. The measured change of reflectivity as a function of applied gate voltage is highly consistent with the Kubo formula. Using a fiber-optic pump-probe setup, we demonstrated that short optical pulses can be translated from the pump wavelength to the probe wavelength through the dielectric-to-metal phase transition of VO2. In this process, the optical phase modulation induced on the probe by the pump's leading edge is converted into an intensity modulation through an optical frequency discriminator. We also theoretically modeled the permittivity of ITO with different doping concentrations in the NIR region.
We proposed an ultra-compact electro-optic modulator based on switching the plasmonic resonance of ITO-on-graphene on and off by tuning the graphene chemical potential through electrical gating. The plasmonic resonance of ITO-on-graphene significantly enhances the field interaction with graphene, which allows a size reduction compared to graphene-based modulators without ITO. We presented a scheme for a mode-multiplexed NIR modulator that tunes the ITO permittivity as a function of carrier density through an applied voltage. Suitably patterned ITO on top of an SOI ridge waveguide enables independent, simultaneous modulation of two orthogonal modes, which enhances functionality per unit area. We proposed a theoretical model of a tunable anisotropic metamaterial composed of periodic layers of graphene and hafnium oxide, whose transversal permittivity can be tuned by changing the chemical potential of graphene. A novel metamaterial-assisted tunable photonic coupler is designed by inserting the proposed tunable metamaterial into the coupling region of a waveguide coupler; the coupling efficiency can then be tuned by changing the permittivity of the metamaterial through electrical gating.


PRANAV BAHL

WOLF (machine learning WOrk fLow management Framework)

When & Where:


246 Nichols Hall

Committee Members:

Luke Huan, Chair
Fengjun Li
Bo Luo


Abstract

Recently, machine learning has been making great strides in many fields, such as health, finance, education, and sports, which has driven demand for machine learning systems. By definition, machine learning automates the task of learning, in terms of rule induction, classification, regression, etc. This is then used to draw knowledgeable insights and to forecast an event before it actually takes place. Despite this automation, machine learning still does not automate the task of selecting the best algorithm(s) for a specific dataset. With the rapidly growing number of machine learning algorithms, it has become difficult for novices as well as researchers to choose the best one. The crux of a machine learning system is (1) to preprocess the data to help the machine learning algorithm understand it better; (2) to choose meaningful features, thereby reducing the noise in the data; and (3) to choose the best-performing machine learning algorithm, which is done by running a grid search over the hyperparameters of various machine learning algorithms and then comparing metrics across all outcomes. These are the problems addressed by Wolf.
Automation is the fuel that drives Wolf. Automating time-consuming and repeatable tasks is the defining characteristic of the project. The rising scope of artificial intelligence (AI) and machine learning increases the need for automation to simplify the process, helping researchers and data scientists dig deeper into the problem and understand it well, rather than spending time tweaking algorithms. The positive correlation between growing intelligence and the complexity of solutions has shifted the trend from AI to Automated Intelligence, the paradigm on which Wolf is based.
Wolf has been built to have an impact on a wide audience. Automating the machine learning pipeline saves roughly 40% of the work effort spent implementing and testing algorithms. It serves people with different levels of expertise and requirements: it helps novices identify the best combinations of algorithms without in-depth knowledge of the algorithms, and helps researchers and businesses deepen their machine learning knowledge and determine the best-performing hyperparameters.
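The grid-search-and-compare loop at the heart of such a system can be sketched as follows; the parameter names and the stand-in scoring function are hypothetical, not Wolf's actual API or pipeline.

```python
# Sketch: exhaustive grid search over hyperparameter combinations,
# then metric comparison to pick the best. The "model" here is a
# stand-in scoring function rather than a real training run.
from itertools import product

param_grid = {
    "max_depth": [2, 4, 8],
    "learning_rate": [0.01, 0.1, 0.3],
}

def validation_score(max_depth, learning_rate):
    """Stand-in for training and evaluating a real model."""
    # Pretend the sweet spot is depth 4 and learning rate 0.1.
    return 1.0 - abs(max_depth - 4) * 0.05 - abs(learning_rate - 0.1)

def grid_search(grid, score):
    """Try every combination; keep the one with the best metric."""
    names = list(grid)
    best = None
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        s = score(**params)
        if best is None or s > best[1]:
            best = (params, s)
    return best

best_params, best_score = grid_search(param_grid, validation_score)
print(best_params, best_score)
```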


FARHAD MAHMOOD

Modeling and Analysis of Energy Efficiency in Wireless Handset Transceiver Systems

When & Where:


250 Nichols Hall

Committee Members:

Erik Perrins, Chair
Lingjia Liu
Shannon Blunt
Victor Frost
Bozenna Pasik-Duncan

Abstract

As they become a significant part of our daily lives, wireless mobile handsets have become faster and smarter. One of the main remaining user requirements is longer-lasting wireless cellular devices. Many techniques have been used to increase battery capacity (ampere-hours), but these raise safety concerns.
Instead, it is better to have mobile handsets that consume less energy, i.e., to increase energy efficiency. Therefore, in this research proposal, we study and analyze the energy consumption of the radio frequency (RF) transceiver, the largest energy consumer in a cellular device. We consider a model with a large number of parameters in order to make it more realistic. First, the transmitter energy of a single-antenna device is considered for a fixed target probability of error at the receiver for multilevel quadrature amplitude modulation (MQAM). We find that the power amplifier (PA) consumes the largest portion of transceiver energy due to its low efficiency.
Furthermore, when MQAM and a raised-cosine filter are used, the impact of the peak-to-average ratio (PAR) on the PA becomes another source of energy waste. This issue is analyzed in this research proposal along with a number of promising solutions. The analysis of energy consumption for single-antenna devices will help us analyze the energy consumption of multiple-antenna devices. In this regard, we discuss the energy efficiency of multiple-input multiple-output (MIMO) antennas with known channel state information (CSI) at the transmitter. The study of the energy efficiency of MIMO without CSI, using space-time coding, will be our next step.
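To illustrate the PAR issue discussed above, the sketch below computes the constellation peak-to-average power ratio for square M-QAM symbols; pulse shaping with a raised-cosine filter raises the PAR further, and that effect is omitted here as a simplification.

```python
# Sketch: the constellation peak-to-average power ratio (PAR) that
# stresses the power amplifier, for square M-QAM. Higher-order QAM
# has a larger PAR, so the PA must back off further from saturation.
import math

def mqam_constellation(m):
    """Square M-QAM symbol points, e.g. m = 16 gives levels +/-1, +/-3."""
    k = int(math.isqrt(m))
    levels = [2 * i - (k - 1) for i in range(k)]
    return [(i, q) for i in levels for q in levels]

def papr_db(points):
    """Peak symbol power over average symbol power, in dB."""
    powers = [i * i + q * q for i, q in points]
    peak = max(powers)
    avg = sum(powers) / len(powers)
    return 10 * math.log10(peak / avg)

for m in (4, 16, 64):
    print(m, round(papr_db(mqam_constellation(m)), 2))
```

4-QAM has equal-power symbols (0 dB PAR), while 16-QAM and 64-QAM reach about 2.55 dB and 3.68 dB at the symbol level alone.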


THEODORE LINDSEY

Interesting Rule Induction Module: Adding Support for Unknown Attribute Values

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
Prasad Kulkarni


Abstract

IRIM (Interesting Rule Induction Module) is a rule induction system designed to induce particularly strong, simple rule sets. Additionally, IRIM does not require prior discretization of numerical attribute values. IRIM does not necessarily produce consistent rules that fully describe the target concepts; however, the rules induced by IRIM often lead to novel revelations of hidden relationships in a dataset. In this work, we extend the IRIM system to handle missing attribute values (in particular, lost and do-not-care attribute values) more thoroughly than by simply ignoring the cases they belong to. Further, we include an implementation of IRIM in the modern programming language Python, written for easy inclusion within a Python data mining package or library. The provided implementation makes use of the Pandas module, which is built on top of a C back end, for performance well beyond what is normally found with pure Python.
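The two missing-value semantics can be illustrated with a small rule-scoring sketch: a lost value ("?") never matches a rule condition, while a do-not-care value ("*") matches any condition. This is an illustration of the distinction only, not IRIM's actual induction procedure, and the toy dataset is invented.

```python
# Sketch: scoring one-condition rules by support and confidence under
# the two missing-value interpretations the work distinguishes.

def matches(value, condition):
    if value == "?":        # lost: unusable, never matches
        return False
    if value == "*":        # do-not-care: matches every condition
        return True
    return value == condition

def rule_stats(cases, attr, condition, concept):
    """Support and confidence of the rule (attr = condition) -> concept."""
    covered = [c for c in cases if matches(c[attr], condition)]
    support = len(covered)
    correct = sum(1 for c in covered if c["class"] == concept)
    confidence = correct / support if support else 0.0
    return support, confidence

cases = [
    {"temp": "high", "class": "flu"},
    {"temp": "high", "class": "flu"},
    {"temp": "?",    "class": "flu"},      # lost: excluded from coverage
    {"temp": "*",    "class": "cold"},     # do-not-care: always covered
    {"temp": "low",  "class": "cold"},
]

print(rule_stats(cases, "temp", "high", "flu"))
```

The "?" case is never covered, while the "*" case is covered by every condition, which lowers the rule's confidence; simply dropping both cases would hide this difference.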