Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. Automated content moderation has therefore emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while deep learning approaches (the LSTM models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
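To make the threshold-adjustment idea concrete, the sketch below tunes a separate decision threshold per label on held-out validation data; the model, data shapes, metric, and threshold grid are placeholder assumptions, not the project's actual pipeline.

```python
# Minimal sketch: per-label threshold tuning for an imbalanced
# multi-label classifier (illustrative, not the project's code).
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
    """Pick, per label, the threshold that maximizes F1 on validation data."""
    n_labels = y_true.shape[1]
    thresholds = np.full(n_labels, 0.5)
    for j in range(n_labels):
        scores = [f1_score(y_true[:, j], (y_prob[:, j] >= t).astype(int),
                           zero_division=0) for t in grid]
        thresholds[j] = grid[int(np.argmax(scores))]
    return thresholds

# Usage: probs = model.predict_proba(X_val)
#        thr = tune_thresholds(y_val, probs)
#        y_pred = (probs >= thr).astype(int)   # per-label cutoffs, not 0.5
```

Lowering a rare label's cutoff below 0.5 trades some precision for recall on that minority class, which is the trade-off the abstract highlights.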


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow large amounts of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques for solving PDEs are currently available, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, high execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
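As a concrete anchor for the finite-difference stage, the sketch below sets up and classically solves the 1D Poisson problem with the standard second-difference matrix; the C2Q encoding, CCD, and quantum execution steps described in the abstract are not shown, and the grid size is illustrative.

```python
# Classical FDM setup for -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0;
# this is the discretization stage that precedes quantum encoding.
import numpy as np

def poisson_1d(f, n=64):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                 # interior grid points
    # Tridiagonal second-difference operator, scaled by 1/h^2
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return x, np.linalg.solve(A, f(x))           # classical reference solve

x, u = poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
# Exact solution is sin(pi*x); the FDM solution converges to it as n grows.
```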


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.
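Since the notice does not show DSDL's actual syntax, the following is a purely hypothetical sketch of what a Python-embedded design-space declaration with parameters and constraints might look like; every class, method, and parameter name here is invented for illustration and is not the DSDL API.

```python
# Hypothetical sketch in the spirit of a Python-based design-space DSL;
# names are invented, not the actual DSDL interface.
from dataclasses import dataclass, field
from itertools import product

@dataclass
class DesignSpace:
    params: dict = field(default_factory=dict)
    constraints: list = field(default_factory=list)

    def param(self, name, values):
        self.params[name] = list(values)

    def constrain(self, predicate):
        self.constraints.append(predicate)

    def points(self):
        """Yield every parameter combination satisfying all constraints."""
        names = list(self.params)
        for combo in product(*self.params.values()):
            point = dict(zip(names, combo))
            if all(c(point) for c in self.constraints):
                yield point

space = DesignSpace()
space.param("l1_size_kb", [16, 32, 64])
space.param("issue_width", [2, 4, 8])
space.constrain(lambda p: p["l1_size_kb"] * p["issue_width"] <= 256)
candidates = list(space.points())  # points fed to the simulate/optimize loop
```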

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices

JAISNEET BHANDAL

Classification of Private Tweets using Tweets Content

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Jerzy Grzymala-Busse
Prasad Kulkarni


Abstract

Online social networks (OSNs) like Twitter provide an open platform for users to easily convey their thoughts and ideas from personal experiences to breaking news. With the increasing popularity of Twitter and the explosion of tweets, we have observed large amounts of potentially sensitive/private messages being published to OSNs inadvertently or voluntarily. The owners of these messages may become vulnerable to online stalkers or adversaries, and they often regret posting such messages. Therefore, identifying tweets that reveal private/sensitive information is critical for both the users and the service providers. However, the definition of sensitive information is subjective and different from person to person. To develop a privacy protection mechanism that is customizable to fit the needs of diverse audiences, it is essential to accurately and automatically identify and classify potentially sensitive tweets. 
In this project, we adopted a two-step approach: private tweet identification, followed by private tweet classification. We make the first attempt to classify tweets into two main categories, sensitive and nonsensitive (private tweet identification), and then categorize the sensitive tweets into 13 pre-defined topics (private tweet classification). We consider identification and classification to be dual problems: progress on one eventually benefits the other. We used a 2-layer classification approach, in which we explore different combinations of classifiers and analyze the performance of each combination.
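A minimal sketch of one such 2-layer combination is shown below, using TF-IDF features with logistic regression at both stages; this particular pairing is only an assumption for illustration, since the project compared several classifier combinations.

```python
# Sketch of a 2-layer pipeline: stage 1 flags a tweet as sensitive or
# nonsensitive; stage 2 assigns a topic to sensitive tweets only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stage1 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
stage2 = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

def train(tweets, is_sensitive, topics):
    stage1.fit(tweets, is_sensitive)                     # identification
    sensitive = [t for t, s in zip(tweets, is_sensitive) if s]
    sensitive_topics = [c for c, s in zip(topics, is_sensitive) if s]
    stage2.fit(sensitive, sensitive_topics)              # 13 topic labels

def classify(tweet):
    if stage1.predict([tweet])[0]:                       # layer 1
        return stage2.predict([tweet])[0]                # layer 2
    return "nonsensitive"
```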


JONATHAN LYLE

A Digital Approach to Bistatic Radar Synchronization via GPS PPS

When & Where:


246 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Chris Allen
Jilu Li


Abstract

Bistatic radar systems utilize physically separate transmit and receive systems to collect information that monostatic systems cannot. One issue in developing bistatic systems is guaranteeing synchronization between the transmitters and receivers. This project presents a purely digital method for improving synchronization of a bistatic system based on the GPS PPS signal, using step-time for both transmitter and receiver timing. The bistatic synchronization problem is first simulated in Matlab, and the simulation is then modified to use the proposed step-time adjustment, showing that the method works in theory. The method is then implemented on the digital system of CReSIS's 'HF Sounder' radar and tested to verify that it can be realized in hardware and that it improves performance.
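As a toy illustration of the step-time idea (not the radar's actual implementation), the sketch below disciplines a receiver's sample step to the 1 Hz PPS interval; the clock rate and oscillator error are made-up numbers.

```python
# Toy model: the receiver counts its own samples between consecutive GPS
# PPS edges (exactly 1 s apart) and re-derives its step-time from that
# count, removing most of the local oscillator's frequency error.
nominal_hz = 10e6                        # assumed nominal sample clock
true_hz = nominal_hz * (1 + 12.34e-6)    # oscillator running ~12 ppm fast

step = 1.0 / nominal_hz                  # initial (wrong) step-time
drift_before = abs(true_hz * step - 1.0) * 1e9   # ns of drift per second

counted = round(true_hz * 1.0)           # integer samples per PPS interval
step = 1.0 / counted                     # corrected step-time from PPS
drift_after = abs(true_hz * step - 1.0) * 1e9

print(f"drift before: {drift_before:.1f} ns/s, after: {drift_after:.1f} ns/s")
```

After correction, the residual is bounded by the one-count quantization of the PPS interval measurement rather than by the oscillator's full frequency error.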


TYLER WADE

AOT Vs. JIT: Impact of Profile Data on Code Quality

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Heechul Yun


Abstract

Just-in-time (JIT) compilation during program execution and ahead-of-time (AOT) compilation during software installation are alternate techniques used by managed language virtual machines (VM) to generate optimized native code while simultaneously achieving binary code portability and high execution performance. JIT compilers typically collect profile information at run-time to enable profile-guided optimizations (PGO) to customize the generated native code to different program inputs/behaviors. AOT compilation removes the speed and energy overhead of online profile collection and dynamic compilation, but may not be able to achieve the quality and performance of customized native code. The goal of this work is to investigate and quantify the implications of the AOT compilation model on the quality of the generated native code for current VMs.
First, we quantify the quality of native code generated by the two compilation models for a state-of-the-art (HotSpot) Java VM. Second, we determine how the amount of profile data collected affects the quality of generated code. Third, we develop a mechanism to determine the accuracy or similarity of different profile data for a given program run, and investigate how the accuracy of profile data affects its ability to effectively guide PGOs. Finally, we categorize the profile data types in our VM and explore the contribution of each such category to performance.
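The notice does not define its similarity mechanism; as one plausible illustration, two runs' profiles can be compared as count vectors keyed by program location, e.g. with cosine similarity, as sketched below (keys and counts are invented).

```python
# Illustrative profile-similarity measure (not HotSpot's actual mechanism):
# treat each profile as a sparse vector of event counts per code location
# and compute the cosine similarity between two runs.
import math

def profile_similarity(p1: dict, p2: dict) -> float:
    keys = set(p1) | set(p2)
    v1 = [p1.get(k, 0) for k in keys]
    v2 = [p2.get(k, 0) for k in keys]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0

run_a = {"loop@foo:12": 980, "branch@bar:33": 40}    # hypothetical locations
run_b = {"loop@foo:12": 1010, "branch@bar:33": 25}
print(profile_similarity(run_a, run_b))  # near 1.0 => similar run behavior
```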


LOHITH NANUVALA

An Implementation of the MLEM2 Algorithm

When & Where:


1 Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Richard Wang


Abstract

Data mining is the process of finding meaningful information in data. Data mining can be used in several areas, such as business, medicine, and education. It allows us to find patterns in the data and make predictions for the future. One form of data mining is to extract rules from data sets. In this project we discuss an implementation of one of the data mining algorithms, called MLEM2 (Modified Learning from Examples Module, version 2). This algorithm uses the concept of blocks of attribute-value pairs. It is also robust, generating rules for both complete and incomplete data sets with numeric and symbolic attributes. A rule checker has been developed to evaluate the rule sets produced by MLEM2. The accuracy of the rules is measured by computing the error rate, which is the ratio of the number of incorrectly classified cases to the total number of cases. Experiments are conducted on different kinds of data sets (complete, incomplete, numeric, and symbolic) using the 10-fold cross-validation method.
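A minimal sketch of the error-rate measure described above follows, with rule matching simplified to "all attribute-value pairs agree" and the first matching rule winning; the actual rule checker may resolve conflicts differently.

```python
# Sketch of a rule checker: error rate = incorrectly classified / total.
# Each case is a dict of attribute values plus its true "decision".
def matches(rule, case):
    return all(case.get(attr) == val for attr, val in rule["conditions"])

def classify(rules, case):
    for rule in rules:                 # first matching rule wins (assumed)
        if matches(rule, case):
            return rule["decision"]
    return None                        # unmatched cases count as errors

def error_rate(rules, cases):
    wrong = sum(classify(rules, c) != c["decision"] for c in cases)
    return wrong / len(cases)

rules = [{"conditions": [("temp", "high")], "decision": "flu"}]
cases = [{"temp": "high", "decision": "flu"},
         {"temp": "low", "decision": "healthy"}]
print(error_rate(rules, cases))        # 0.5: second case is unmatched
```

For 10-fold cross-validation, the same measure is averaged over ten train/test splits of the data set.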


ASHWINI BALACHANDRA

Implementation of Truncated Lévy Walk Mobility Model in ns-3

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li


Abstract

Mobility models generate the mobility patterns of the nodes in a given system, helping us analyze and study the characteristics of new and existing systems. The various mobility models implemented in network simulation tools like ns-3 do not model the patterns of human mobility. The main idea of this project is to implement the truncated Lévy walk mobility model in ns-3. The model has two variations: in the first, the flight lengths and pause times of the nodes are drawn from a truncated Pareto distribution; in the second, a Lévy distribution models the flight length and pause time distributions, with values obtained from a Lévy α-stable random number generator. The mobility patterns of the nodes are generated and analyzed by varying the model's attributes. Further studies can be done to understand the behavior of these models under different ad hoc networking protocols.
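The first variant's sampling step can be illustrated with inverse-CDF draws from a truncated Pareto distribution, as below; the exponent and bound values are illustrative, not those used in the project (and the project itself is in ns-3/C++, so this is only a numerical sketch).

```python
# Inverse-CDF sampling from a Pareto(alpha, x_min) truncated at x_max,
# as used for flight lengths and pause times in a truncated Levy walk.
import numpy as np

def truncated_pareto(alpha, x_min, x_max, size, rng=None):
    rng = rng or np.random.default_rng()
    u = rng.random(size)
    tail = 1.0 - (x_min / x_max) ** alpha
    # Invert F(x) = (1 - (x_min/x)^alpha) / (1 - (x_min/x_max)^alpha)
    return x_min * (1.0 - u * tail) ** (-1.0 / alpha)

flights = truncated_pareto(alpha=1.5, x_min=1.0, x_max=1000.0, size=10_000)
pauses = truncated_pareto(alpha=1.8, x_min=0.5, x_max=100.0, size=10_000)
# Heavy-tailed but bounded: occasional long flights, none beyond x_max.
assert flights.min() >= 1.0 and flights.max() <= 1000.0
```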


PAVAN KUMAR MOTURU

Image Processing Techniques in Matlab GUI

When & Where:


246 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Chris Allen
Fernando Rodriguez-Morales


Abstract

Identifying missing bed echoes in radar data is very important for studying sea level change. Rising sea level is a problem of global importance because of its impact on infrastructure. Ice sheets in Greenland and Antarctica are melting, and their contribution to sea level change has increased over the last decade. Measuring ice sheet thickness is required to estimate sea level rise. Extracting the weak bed echoes requires several algorithms and pre-defined functions, but Matlab lacks a tool that bundles these important algorithms the way ImageJ does. At the same time, not all of the data can be processed in ImageJ: Matlab produces better results because some functions, such as windowing and symmetric selection around the center in the FFT domain, are not implemented in ImageJ.
In this project, we investigate the application of several image processing techniques using a GUI developed for analyzing ice sounding radargrams. One key advantage of the tool is that the image processing techniques are applied in a single GUI instead of separately. We apply these techniques to data that has already undergone extensive signal processing, and then compare the processed data with the original data.
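As an illustration of the kind of FFT-domain operation mentioned above, the sketch below keeps a symmetric band of frequencies around the spectrum's center and inverts the transform; it is a generic low-pass example in Python, not the Matlab GUI's actual code, and the array sizes are placeholders.

```python
# Symmetric selection around the center in the FFT domain: keep a centered
# block of the shifted 2D spectrum, zero the rest, and invert.
import numpy as np

def fft_symmetric_lowpass(img, keep_frac=0.2):
    spec = np.fft.fftshift(np.fft.fft2(img))     # zero frequency at center
    rows, cols = img.shape
    r, c = int(rows * keep_frac / 2), int(cols * keep_frac / 2)
    mask = np.zeros((rows, cols))
    mask[rows // 2 - r: rows // 2 + r, cols // 2 - c: cols // 2 + c] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

radargram = np.random.rand(256, 512)             # stand-in for real data
filtered = fft_symmetric_lowpass(radargram)      # high-frequency clutter cut
```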


MOHSEN ALEENEJAD

New Modulation Methods and Control Strategies for Power Converters

When & Where:


1 Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
Jim Stiles
Huazhen Fang

Abstract

DC-to-AC power converters (inverters) are widely used in industrial applications. Multilevel inverters are becoming increasingly popular in industrial apparatus aimed at medium- to high-power conversion applications. In comparison to conventional inverters, they feature superior characteristics such as lower total harmonic distortion (THD), higher efficiency, and lower switching voltage stress. Nevertheless, these superior characteristics come at the price of a more complex topology with an increased number of power electronic switches. As a general rule, as the number of power electronic switches in an inverter topology increases, the chance of a fault occurring in one of the switches increases, and thus the inverter's reliability decreases. Due to the extreme monetary ramifications of interruptions of operation in commercial and industrial applications, high reliability for power inverters used in these sectors is critical. As a result, developing fault-tolerant operation schemes for multilevel inverters has long been an interesting topic for researchers in related areas. The purpose of this proposal is to develop new control and fault-tolerant strategies for multilevel power inverters. In the event of a fault, the line voltages of the faulty inverter are unbalanced and cannot be applied to three-phase loads. The proposed fault-tolerant strategy generates balanced line voltages without bypassing any healthy and operative inverter element, makes better use of the inverter capacity, and generates higher output voltage. The strategy exploits the advantages of the Selective Harmonic Elimination (SHE) method in conjunction with a slightly modified Fundamental Phase Shift Compensation technique to generate balanced voltages and manipulate voltage harmonics at the same time. However, due to the strategy's distinctive requirement to manipulate both the amplitude and the angle of the harmonics, the conventional SHE technique is not a suitable basis for it. Therefore, in this project a modified Unbalanced SHE technique is developed to serve as the basis for the fault-tolerant strategy. The proposed strategy is applicable to several classes of multilevel inverters with three or more voltage levels.
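The conventional SHE step that the strategy builds on can be sketched as a small nonlinear solve: choose switching angles that realize a target fundamental while zeroing selected low-order harmonics. The 7-level setup, modulation index, and initial guess below are illustrative assumptions; the proposed Unbalanced SHE, which additionally controls harmonic phase, is not shown.

```python
# Conventional SHE sketch for three switching angles (e.g. a 7-level
# cascaded inverter): set the per-unit fundamental to m and null the
# 5th and 7th harmonics.
import numpy as np
from scipy.optimize import fsolve

m = 0.8  # target per-unit fundamental amplitude (illustrative)

def she_equations(theta):
    t1, t2, t3 = theta
    return [np.cos(t1) + np.cos(t2) + np.cos(t3) - 3 * m,   # fundamental
            np.cos(5 * t1) + np.cos(5 * t2) + np.cos(5 * t3),  # kill 5th
            np.cos(7 * t1) + np.cos(7 * t2) + np.cos(7 * t3)]  # kill 7th

angles = fsolve(she_equations, x0=np.radians([10, 30, 60]))
print(np.degrees(angles))
# A solution is physically valid only if 0 < t1 < t2 < t3 < 90 degrees;
# multiple solution sets may exist for a given m.
```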


SIVA RAM DATTA BOBBA

Rule Induction For Numerical Data using PRISM

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
James Miller


Abstract

Rule induction is one of the basic and important techniques of data mining. Inducing a rule set for symbolic data is simple and straightforward, but it becomes complex when the attributes are numerical. Several algorithms are available that perform rule induction on symbolic data. One such algorithm is PRISM, which uses conditional probability for attribute-value pair selection when inducing a rule.
In real-world scenarios, data may comprise either symbolic or numerical attributes, and it is difficult to induce a discriminant rule set on data with numerical attributes. This project provides an implementation of PRISM that handles numerical data. First, it takes as input a dataset with numerical attributes and converts them into discrete values using the multiple scanning approach, which identifies the cut-points for intervals using minimum conditional entropy. Once discretization completes, PRISM uses these discrete values to induce a rule set for each decision. Thus, this project makes it possible to induce modular rule sets over a numerical dataset.
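The cut-point selection at the heart of the discretization step can be sketched as follows: score each candidate cut-point of a numerical attribute by the conditional entropy of the decision given the induced two-interval split, and keep the minimizer. This is a single-cut sketch; the multiple scanning approach repeats such scans across attributes.

```python
# Minimum-conditional-entropy cut-point selection for one numerical
# attribute (candidate cuts are midpoints between distinct sorted values).
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_cutpoint(values, labels):
    pairs = sorted(zip(values, labels))
    cuts = {(pairs[i][0] + pairs[i + 1][0]) / 2
            for i in range(len(pairs) - 1) if pairs[i][0] != pairs[i + 1][0]}
    def cond_entropy(cut):
        left = [l for v, l in pairs if v <= cut]
        right = [l for v, l in pairs if v > cut]
        n = len(pairs)
        return len(left) / n * entropy(left) + len(right) / n * entropy(right)
    return min(cuts, key=cond_entropy)

print(best_cutpoint([1.0, 1.2, 3.5, 4.0], ["no", "no", "yes", "yes"]))  # 2.35
```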