Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while deep learning approaches (LSTM models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
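The threshold adjustment strategy mentioned above can be sketched in a few lines: instead of the default 0.5 cutoff, pick a per-label decision threshold that maximizes F1 on validation data, which helps rare (minority) toxicity labels. This is an illustrative sketch only; the labels, probabilities, and grid below are made up and not taken from the project.

```python
# Per-label threshold tuning for multi-label classification: a minimal sketch
# of the thresholding strategy discussed in the abstract. All data here is
# illustrative, not from the project.

def f1(y_true, y_pred):
    """F1 score for one binary label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def best_threshold(y_true, probs, grid=None):
    """Pick the decision threshold that maximizes F1 for one label."""
    grid = grid or [i / 20 for i in range(1, 20)]
    return max(grid, key=lambda t: f1(y_true, [1 if p >= t else 0 for p in probs]))

# A rare "toxic" label: the default 0.5 threshold would miss both positives.
y_true = [1, 0, 0, 0, 1, 0, 0, 0]
probs  = [0.45, 0.10, 0.20, 0.05, 0.40, 0.15, 0.30, 0.10]
t = best_threshold(y_true, probs)
preds = [1 if p >= t else 0 for p in probs]
```

In this toy example the tuned threshold drops below 0.5, recovering both positive cases without introducing false positives.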


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, high execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated the proposed concepts on a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach to solving PDEs with quantum computing for engineering and scientific applications.
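The finite-difference step the abstract names can be illustrated classically: discretize a 1D Poisson problem into a linear system, which a C2Q-style encoding would then map onto qubit amplitudes. This is a purely classical sketch of the FDM starting point, not the thesis algorithm itself; the grid size and source term are chosen for illustration.

```python
# Finite-difference discretization of the 1D Poisson equation -u''(x) = f(x),
# u(0) = u(1) = 0 -- the classical system that a classical-to-quantum (C2Q)
# encoding would map onto qubit states. Classical sketch only.
import numpy as np

n = 7                        # interior grid points (2^k - 1 suits qubit encodings)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Tridiagonal FDM operator for -u'' with Dirichlet boundaries
A = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

f = np.pi**2 * np.sin(np.pi * x)   # source chosen so the exact solution is sin(pi x)
u = np.linalg.solve(A, f)

err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) discretization error
```

With h = 1/8 the maximum error is on the order of 1e-2, consistent with the second-order accuracy of the central-difference stencil.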


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.
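A design space exploration like the one Co-Pilot performs starts from a declarative description of parameters and constraints. The sketch below shows that idea with a plain Python dict and a filter; it is hypothetical and does not reflect DSDL's actual syntax, and the power-budget constraint is an invented illustrative formula.

```python
# Enumerating a constrained architectural design space -- the kind of
# specification a DSL like DSDL makes declarative. Hypothetical sketch;
# parameter names and the constraint formula are illustrative.
from itertools import product

params = {
    "cores":     [1, 2, 4, 8],
    "l2_kb":     [256, 512, 1024],
    "clock_ghz": [1.0, 2.0, 3.0],
}

def feasible(cfg):
    """Illustrative constraint: a crude power budget."""
    return cfg["cores"] * cfg["clock_ghz"] <= 12

# Cross-product of all parameter values, then prune infeasible points.
space = [dict(zip(params, vals)) for vals in product(*params.values())]
candidates = [cfg for cfg in space if feasible(cfg)]
```

Each surviving configuration would then be handed to gem5 for cycle-accurate simulation and to McPAT for power/area estimates, with the LLM guiding which candidates to evaluate next.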

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices


NILISHA MANE

Tools to Explore Run-time Program Properties

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Gary Minden


Abstract

The advancement of embedded technology has resulted in its extensive use in almost all modern electronic devices. Hence, unlike in the past, there is a crucial need to develop system security tools for these devices. So far, most research has concentrated either on security for general computer systems or on static analysis of embedded systems. In this project, we develop tools that explore and monitor the run-time properties of programs and applications as well as their inter-process communication. We also present case studies in which these tools are used on a Gumstix (an embedded system) running the Poky Linux distribution to monitor a particular program and to print a graph of all inter-process communication on the system.


BRIAN MACHARIA

UWB Microwave Filters on Multilayer LCP Substrates: A Feasibility Study

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Fernando Rodriguez-Morales
Chris Allen


Abstract

Having stable dielectric properties extending to frequencies over 110 GHz, Liquid Crystal Polymer (LCP) materials are a new and promising substrate alternative for low-cost production of planar microwave circuits. This project focused on the design of several microwave filter structures using multiple layers for operation in the 2-18 GHz and 10-14 GHz bands. Circuits were simulated and optimized using EDA tools, obtaining good results over the bands of interest. The results show that it is feasible to fabricate these structures on dielectric substrates compatible with off-site manufacturing facilities. It is likewise shown that LCP technology can yield a 3-5x area reduction as compared to cavity-type filters, making them much easier to integrate in a planar circuit.


Md. MOSHFEQUR RAHMAN

OpenFlow based Multipath Communication for Resilience

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li


Abstract

A cross-layer framework in the Software Defined Networking domain is proposed to study resilience in OpenFlow-based multipath communication. A testbed has been built using Brocade OpenFlow switches and Dell PowerEdge servers, and the framework is evaluated against regional challenges. Various topologies are built from different adjacency matrices. The behavior of OpenFlow multipath-based communication is studied in the case of a single path failure, with traffic splitting, and with multipath TCP (MPTCP) enabled traffic. The behavior of different coupled congestion control algorithms for MPTCP is also studied. A web framework is presented to demonstrate the OpenFlow experiments by importing the network topologies and then executing and analyzing user-defined regional attacks.
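The adjacency-matrix-driven experiments above can be sketched in a few lines: build a topology from a 0/1 matrix, fail one link, and check that an alternate path keeps the endpoints connected. The diamond topology and node numbering below are illustrative, not taken from the testbed.

```python
# Building a topology from an adjacency matrix and checking that traffic can
# survive a single link failure -- a minimal sketch of the multipath-resilience
# experiment described in the abstract (topology is illustrative).
from collections import deque

def edges_from_matrix(adj):
    """Undirected edge list from a 0/1 adjacency matrix."""
    n = len(adj)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i][j]]

def reachable(n, edges, src, dst):
    """BFS reachability over the given edge list."""
    nbrs = {v: set() for v in range(n)}
    for a, b in edges:
        nbrs[a].add(b); nbrs[b].add(a)
    seen, q = {src}, deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return True
        for w in nbrs[v] - seen:
            seen.add(w); q.append(w)
    return False

# Diamond topology: two disjoint paths between node 0 and node 3.
adj = [[0, 1, 1, 0],
       [1, 0, 0, 1],
       [1, 0, 0, 1],
       [0, 1, 1, 0]]
edges = edges_from_matrix(adj)

# Fail link (0, 1): the alternate path 0-2-3 keeps 0 and 3 connected.
survives = reachable(4, [e for e in edges if e != (0, 1)], 0, 3)
```

In the actual testbed the rerouting is performed by OpenFlow flow rules rather than a BFS, but the connectivity question being evaluated is the same.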


RAGAPRABHA CHINNASWAMY

A Comparison of Maximal Consistent Blocks and Characteristic Sets for Incomplete Data Sets

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo


Abstract

One of the main applications of rough set theory is rule induction. If the input data set contains inconsistencies, using rough set theory leads to inducing certain and possible rule sets. 
In this project, the concept of a maximal consistent block is applied to formulate a new approximation of a concept in an incomplete data set with a higher level of accuracy. This method does not require changing the size of the original incomplete data set. Two interpretations of missing attribute values are discussed: lost values and “do not care” conditions. The main objective is to compare maximal consistent blocks and characteristic sets in terms of the cardinality of lower and upper approximations. Four incomplete data sets with varying levels of missing information are used in the experiments. A further objective is to compare the decision rules induced and the cases covered by both techniques. The experiments show that both techniques provide the same lower approximations for all data sets with “do not care” conditions. The best results for upper approximations are achieved by maximal consistent blocks on three data sets, with a tie on the fourth.
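The lower and upper approximations being compared can be computed directly once the blocks are known: a block contributes to the lower approximation if it is entirely inside the concept, and to the upper approximation if it merely intersects it. The sketch below shows this for blocks from complete data; the project's maximal consistent blocks and characteristic sets are computed from incomplete data, and the case numbers here are illustrative.

```python
# Lower and upper approximations of a concept from a set of blocks -- the core
# rough-set computation the project compares across maximal consistent blocks
# and characteristic sets. Blocks below are illustrative.

def approximations(blocks, concept):
    """Return (lower, upper) approximations of `concept` w.r.t. `blocks`."""
    lower = set().union(*([b for b in blocks if b <= concept] or [set()]))
    upper = set().union(*([b for b in blocks if b & concept] or [set()]))
    return lower, upper

# Cases 1..6 grouped into blocks of mutually indistinguishable cases.
blocks = [{1, 2}, {3}, {4, 5}, {6}]
concept = {1, 2, 3, 4}          # cases labeled with the target decision

lower, upper = approximations(blocks, concept)
```

Here case 4 is in the upper but not the lower approximation, because its block {4, 5} straddles the concept boundary; certain rules are induced from the lower approximation and possible rules from the upper one.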


PRAVEEN YARLAGADDA

A Comparison of Rule Sets Generated by Algorithms: AQ, C4.5, and CART

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
Jim Miller


Abstract

In data mining, rules are the most popular symbolic representation of knowledge. Classifying data and extracting classification rules from data is a difficult process, and there are different approaches to it. One such approach is inductive learning, in which a system tries to induce a set of rules from a set of observed examples. Different inductive learning methods produce distinct concept descriptions even when given identical training data, which raises questions about the quality of the resulting rule sets. This project compares and analyzes the rule sets induced by different inductive learning systems. Three algorithms, AQ, CART, and C4.5, are used to induce rule sets from several data sets, and the results are analyzed in terms of the total number of rules and the total number of conditions in those rules. These space-complexity measures show that AQ tends to produce more complex rule sets than C4.5 and CART. The AQ algorithm was implemented as part of this project and used to induce the rule sets.
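The two space-complexity measures used in the comparison are simple to compute once rule sets are represented as lists of conditions. The rule sets below are illustrative stand-ins, not actual output of AQ, C4.5, or CART.

```python
# Comparing induced rule sets by the two measures used in the project:
# total rule count and total condition count. Rules are illustrative.

def complexity(rule_set):
    """(number of rules, total number of conditions), where each rule
    is a list of (attribute, value) conditions."""
    return len(rule_set), sum(len(rule) for rule in rule_set)

rules_a = [  # a more specific rule set (AQ tends this way per the abstract)
    [("color", "red"), ("size", "big"), ("shape", "round")],
    [("color", "blue"), ("size", "small")],
    [("shape", "square")],
]
rules_b = [  # a more compact rule set
    [("size", "big")],
    [("color", "blue")],
]

ca, cb = complexity(rules_a), complexity(rules_b)
```

On these toy inputs the first rule set is larger on both measures, which is the shape of result the project reports for AQ relative to C4.5 and CART.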


DIVYA GUPTA

Investigation of a License Plate Recognition Algorithm

When & Where:


250 Nichols Hall

Committee Members:

Glenn Prescott, Chair
Erik Perrins
Jim Stiles


Abstract

License plate recognition is a technique for detecting license plate numbers in vehicle images. With ever-increasing traffic and crime, it has become an important part of everyday life. It uses computer vision and pattern recognition technologies; various techniques have been proposed so far, and each works best within certain boundaries. Recognition is a three-stage process: license plate detection, character segmentation, and character recognition. The first stage is the extraction of the number plate, as it occupies only a small portion of the whole image. After the license plate is located, the characters are segmented. Character recognition is the last stage, and template matching is the most common method used for it. The results achieved in this experiment were quite accurate, which demonstrates the robustness of the investigated algorithm.
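Template matching, the recognition method named above, can be sketched as a sliding-window search for the offset with the smallest sum of squared differences between image patch and character template. The toy binary image below is illustrative; a real system would match against a full template set per character.

```python
# Template matching for character recognition: slide the template over the
# image and pick the offset with the smallest sum of squared differences.
# Minimal sketch with a toy 0/1 image (illustrative data).
import numpy as np

def match_template(image, template):
    """Return (row, col) of the best template match by SSD."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = np.zeros((5, 6))
template = np.array([[1.0, 1.0],
                     [1.0, 0.0]])
image[2:4, 3:5] = template       # embed the "character" at row 2, column 3

pos = match_template(image, template)
```

Production systems typically use normalized cross-correlation rather than raw SSD so that matching is robust to brightness and contrast changes, but the search structure is the same.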


NAZMA KOTCHERLA

Hybrid Mobile and Responsive Web Application - KU Quick Quiz

When & Where:


2001B Eaton Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Jerzy Grzymala-Busse


Abstract

The objective of this project is to leverage the open-source AngularJS, Node.js, and Ionic frameworks, along with Cordova, to develop a hybrid mobile application for students and a responsive web application for professors to conduct classroom-centered dynamic tests. Dynamic tests are test-taking environments in which questions can be posted to students as quizzes during class. Guided by the specifications set by the professor, students answer and submit the quiz from their mobile devices. The results are generated immediately after the test session completes and can be viewed by the professor. The web application performs statistical analysis of the responses based on the factors the professor has set to measure student performance. This methodology is highly beneficial because it gives the professor a clear picture of every student's level of understanding of a chosen topic immediately after the test, helping to improve teaching methods. It also benefits students: since their scores directly show the professor their level of understanding, they are encouraged to raise questions they might otherwise hesitate to ask. Overall, the application improves the classroom experience and helps students reach higher standards.


JYOTHI PRASAD PANGLURI SREEHARINAIDU

Implementation of ChiMerge Algorithm for Discretization of Numerical Attributes

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Perry Alexander
Prasad Kulkarni


Abstract

Most present classification algorithms require input data with discretized attributes. If the input data contains numerical attributes, such attributes must be converted into discrete values (intervals) before classification. Discretization algorithms for real-valued attributes are very important for applications in artificial intelligence and machine learning. In this project we discuss an implementation of ChiMerge, a robust algorithm for the discretization of numerical attributes that uses the χ² statistic to determine interval similarity as it constructs intervals in a bottom-up merging process. ChiMerge provides a reliable summarization of numerical attributes and determines the number of intervals.
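One bottom-up merging step of ChiMerge can be sketched directly: compute the χ² statistic for each pair of adjacent intervals from their per-class counts and merge the pair with the lowest value (the most similar class distributions). The interval boundaries and counts below are illustrative; the full algorithm repeats this until all χ² values exceed a significance threshold.

```python
# One merging step of ChiMerge: chi-squared over adjacent intervals'
# class-count rows, then merge the most similar pair. Counts illustrative.

def chi2(row_a, row_b):
    """Chi-squared statistic for two adjacent intervals' class counts."""
    n = sum(row_a) + sum(row_b)
    cols = [row_a[j] + row_b[j] for j in range(len(row_a))]
    stat = 0.0
    for row in (row_a, row_b):
        r = sum(row)
        for j, a in enumerate(row):
            e = r * cols[j] / n          # expected count under independence
            if e > 0:
                stat += (a - e) ** 2 / e
    return stat

def merge_once(intervals):
    """Merge the adjacent pair of intervals with the lowest chi-squared."""
    i = min(range(len(intervals) - 1),
            key=lambda k: chi2(intervals[k][1], intervals[k + 1][1]))
    (lo, ca), (_, cb) = intervals[i], intervals[i + 1]
    merged = (lo, [x + y for x, y in zip(ca, cb)])
    return intervals[:i] + [merged] + intervals[i + 2:]

# Each interval: (lower bound, [count of class 0, count of class 1])
intervals = [(0, [4, 1]), (5, [3, 2]), (10, [0, 5])]
intervals = merge_once(intervals)
```

Here the first two intervals have similar class mixes (low χ²) and are merged, while the third, dominated by class 1, is kept separate.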


MOHAN KRISHNA VEERAMACHINENI

A Graphical User Interface System for Rule Visualization

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
Prasad Kulkarni


Abstract

The primary goal of data visualization is to communicate information clearly and efficiently via statistical graphs, plots, and information graphics; it makes complex data more accessible, understandable, and usable. The goal of this project is to build a graphical user interface called RULEVIZ to visualize, in the form of directed graphs, the rules induced by the LERS (Learning from Examples using Rough Set Theory) data mining system. LERS induces a set of rules from examples given in the form of a decision table; such rules are used to classify unseen data. RULEVIZ is developed as a web application: the user uploads the rule set and the data set, and the rule set is visualized graphically and rendered in the web browser. Each rule is processed sequentially, and all conditions of the rule are visualized as nodes connected by undirected edges; the last condition is connected to the concept by a directed edge. RULEVIZ offers custom filtering options that let the user filter rules by factors such as the number of conditions, conditional probability, and strength, and it provides interactive capabilities for filtering rule sets and manipulating the generated graph for a better look and feel.
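The rule-to-graph mapping described above, conditions chained by undirected edges with the last condition pointing to the concept, can be sketched directly. The rule below is illustrative and the node-naming scheme is an assumption, not RULEVIZ's actual format.

```python
# Turning one rule into the node/edge structure RULEVIZ renders: conditions
# chained by undirected edges, last condition linked to the concept by a
# directed edge. Plain-Python sketch; the rule and naming are illustrative.

def rule_to_graph(conditions, concept):
    """Return (undirected_edges, directed_edge) for one rule."""
    nodes = [f"{attr}={val}" for attr, val in conditions]
    undirected = list(zip(nodes, nodes[1:]))       # chain of condition nodes
    directed = (nodes[-1], f"concept:{concept}")   # last condition -> concept
    return undirected, directed

conds = [("temperature", "high"), ("headache", "yes")]
undirected, directed = rule_to_graph(conds, "flu=yes")
```

A renderer would then draw each edge list with a graph library, applying the user's condition-count or strength filters before layout.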


HARA MADHAV TALASILA

Modular Frequency Multiplier and Filters for the Global Hawk Snow Radar

When & Where:


317 Nichols Hall

Committee Members:

John Paden, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales

Abstract

Remote sensing with radar systems on airborne platforms is key for wide-area data collection to estimate the impact of ice and snow masses on rising sea levels. NASA P-3B and DC-8 aircraft, as well as other platforms, successfully flew with multiple versions of the Snow Radar developed at CReSIS. Compared to these manned missions, the Global Hawk UAV can support flights with long endurance, complex flight paths, and flexible altitude operation up to 70,000 ft. This thesis documents the process of adapting the 2-18 GHz Snow Radar to meet the requirements for operation on manned and unmanned platforms from 700 ft to 70,000 ft. The primary focus of this work is the development of an improved microwave chirp generator implemented with frequency multipliers. The x16 frequency multiplier is composed of a series of x2 frequency multiplication stages, overcoming some of the limitations encountered in previous designs. At each stage, undesired harmonics are filtered and kept out of band. The miniaturized design presented here reduces reflections in the chain, as well as overall size and weight, compared to the earlier large and heavy connectorized chain. Each stage is implemented as a drop-in modular design operating at microwave and millimeter-wave frequencies, realized with commercial surface-mount ICs, wire-bondable chips, and custom filters. DC circuits for power regulation and sequencing are developed as well. Another focus of this thesis is the development of band-pass filters using different distributed-element filter technologies. Multiple edge-coupled band-pass filters are fabricated on alumina substrate based on design and optimization in computer-aided design (CAD) tools. Interdigital cavity filter models developed in-house are validated by full-wave EM simulation and measurements. Overall, the measured results of the modular frequency multiplier and filters match the expected responses from the original design and co-simulation outputs. The design files, test setups, and simulation models are generalized for use with any similar or new designs in the future.