Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while the LSTM-based deep learning models offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, balancing accuracy requirements against operational constraints.
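The per-label threshold adjustment mentioned above can be illustrated with a minimal sketch. The data below is synthetic and hypothetical (not the Kaggle dataset, and not the project's actual models): each label's decision threshold is tuned on validation scores to maximize F1, which typically helps rare labels that a fixed 0.5 cutoff misses.

```python
import numpy as np

def f1(y, yhat):
    """Binary F1 score from 0/1 arrays."""
    tp = int(np.sum((y == 1) & (yhat == 1)))
    fp = int(np.sum((y == 0) & (yhat == 1)))
    fn = int(np.sum((y == 1) & (yhat == 0)))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def tune_thresholds(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
    """Pick a per-label decision threshold maximizing validation F1,
    instead of the default 0.5 cutoff."""
    th = np.full(y_true.shape[1], 0.5)
    for j in range(y_true.shape[1]):
        scores = [f1(y_true[:, j], (y_prob[:, j] >= t).astype(int)) for t in grid]
        th[j] = grid[int(np.argmax(scores))]
    return th

# Synthetic validation set: label 1 is rare (~5% positive), mimicking class imbalance.
rng = np.random.default_rng(0)
y_true = (rng.random((1000, 2)) < np.array([0.5, 0.05])).astype(int)
# Imperfect scores: positives land in [0.35, 0.65), negatives in [0, 0.3).
y_prob = 0.35 * y_true + 0.3 * rng.random((1000, 2))
th = tune_thresholds(y_true, y_prob)
```

On this toy data the tuned thresholds fall below 0.5, recovering positives of the rare label that the default cutoff would drop.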


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow large amounts of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated the proposed concepts on a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
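As a purely classical point of reference for the finite-difference discretization underlying both variants, here is a minimal FDM solve of a 1D Poisson problem (a toy baseline sketch; the quantum algorithm itself encodes such systems via C2Q and is not shown):

```python
import numpy as np

# 1D Poisson problem: -u''(x) = f(x) on (0,1), with u(0) = u(1) = 0.
n = 63                              # number of interior grid points
h = 1.0 / (n + 1)                   # grid spacing
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)    # chosen so the exact solution is sin(pi x)

# Standard second-order stencil: (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f[i].
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)

# Discretization error is O(h^2) for this smooth solution.
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The same tridiagonal (and, in higher dimensions, Kronecker-structured) system matrix is what a quantum PDE solver must represent; the classical solve above is the accuracy yardstick.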


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices


HAO XUE

Understanding Information Credibility in Social Networks

When & Where:


246 Nichols Hall

Committee Members:

Fengjun Li, Chair
Luke Huan
Prasad Kulkarni
Bo Luo
Hyunjin Seo

Abstract

With the advancement of the Internet, increasing portions of people's social and communicative activities now take place in the digital world. The growth and popularity of online social networks have tremendously facilitated online interaction and information exchange. More people now rely on online information for news, opinions, and social networking. As representatives of online social-collaborative platforms, online review systems have enabled people to share information effectively and efficiently. A large volume of user-generated content is produced daily, which allows people to make reasonable judgments about the quality of service or product of an unknown provider. However, the freedom and ease of publishing information online have made these systems no longer sources of reliable information. Not only does biased and misleading information exist, but financial incentives also drive individual and professional spammers to insert deceptive reviews to manipulate review ratings and content. Worse still, advanced artificial intelligence has made it possible to generate realistic-looking reviews automatically. In this proposal, we present our work on measuring the credibility of information in online review systems. We first propose to utilize social relationships and rating deviations to assist in computing the trustworthiness of users. Secondly, we propose a content-based trust propagation framework that extracts the opinions expressed in review content. The opinion extraction approach we used was a supervised-learning-based method, which has flexibility limitations. Thus, we propose an enhanced framework that not only automates the opinion mining process but also integrates social relationships with review content. Finally, we propose a study of the credibility of machine-generated reviews.


MOHAMMADREZA HAJIARBABI

A Face Detection and Recognition System for Color Images using Neural Networks with Boosting and Deep Learning

When & Where:


2001B Eaton Hall

Committee Members:

Arvin Agah, Chair
Prasad Kulkarni
Bo Luo
Richard Wang
Sara Wilson*

Abstract

A face detection and recognition system is a biometric identification mechanism which, compared to other methods, has been shown to be important both theoretically and practically. In principle, biometric identification methods use a wide range of techniques, such as machine learning, computer vision, image processing, pattern recognition, and neural networks. A face recognition system consists of two main components: face detection and recognition.
In this dissertation, a face detection and recognition system using color images with multiple faces is designed, implemented, and evaluated. In color images, skin color information is used to distinguish between skin pixels and non-skin pixels, dividing the image into several components. Neural networks and deep learning methods have been used to detect skin pixels in the image. To improve system performance, bootstrapping and parallel neural networks with voting have been used. Deep learning has been used as another method for skin detection and compared to the other methods. Experiments have shown that, for skin detection, deep learning and neural network methods produce better results in terms of precision and recall than the other methods in this field.
The step after skin detection is to decide which of these components belong to a human face. A template-based method has been modified to detect the faces. The designed algorithm also succeeds if there is more than one face in a component. A rule-based method has been designed to detect the eyes and lips in the detected components. After locating the eyes and lips in a component, the face can be detected.
After face detection, the detected faces are to be recognized. The appearance-based methods used in this work are among the most important methods in face recognition due to their robustness to head rotation, noise, low-quality images, and other challenges. Different appearance-based methods have been designed, implemented, and tested. Canonical correlation analysis has been used to increase the recognition rate.
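For readers unfamiliar with skin-pixel classification, a classic rule-based RGB heuristic gives a feel for the task. This is a well-known simple baseline (Kovac-style thresholds), not the dissertation's neural-network or deep learning detectors, and the sample pixel values are made up for illustration:

```python
import numpy as np

def skin_mask(rgb):
    """Rule-based RGB skin classifier (classic heuristic baseline).
    Returns a boolean mask the same height/width as the image."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    spread = rgb.max(-1).astype(int) - rgb.min(-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20)   # bright enough in each channel
            & (spread > 15)                  # not a gray/neutral pixel
            & (np.abs(r - g) > 15)           # red clearly dominates green
            & (r > g) & (r > b))             # red is the strongest channel

# Tiny 2x2 test image: one skin-like pixel, one blue pixel, two black pixels.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (220, 170, 140)   # typical light skin tone -> skin
img[1, 1] = (30, 90, 200)     # blue -> not skin
mask = skin_mask(img)
```

Connected regions of the resulting mask correspond to the "components" that the subsequent template-based face test examines.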


JASON GEVARGIZIAN

Automatic Measurement Framework: Expected Outcome Generation and Measurer Synthesis for Remote Attestation

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Arvin Agah
Perry Alexander
Andy Gill
Kevin Leonard

Abstract

A system is said to be trusted if it can be unambiguously identified and observed as behaving in accordance with expectations. Remote attestation is a mechanism to establish trust in a remote system.
Remote attestation requires measurement systems that can sample program state from a wide range of applications, each with different program features and expected behavior. Even where applications are similar in purpose, differences in attestation-critical structures and program variables render any one measurer incapable of sampling multiple applications. Furthermore, any set of behavioral expectations vague enough to match multiple applications would be too weak to serve as a rubric for establishing trust in any one of them. As such, measurement functionality must be tailored to each and every critical application on the target system.
Establishing behavioral expectations and customizing measurement systems to gather meaningful data evidencing those expectations is difficult. The process requires an expert, typically the application developer or a motivated appraiser, to analyze the application's source in order to detail the program behavioral expectations critical for establishing trust and to identify critical program structures and variables that can be sampled to evidence that trust. The effort required to customize measurement systems manually prohibits widespread adoption of remote attestation in trusted computing.
We propose automatic generation of expected outcomes and synthesis of measurement policies for a configurable general purpose measurer to enable large scale adoption of remote attestation for trusted computing. As such, we mitigate the cost incurred by existing systems that require manual measurement specification and design by an expert sufficiently skilled and knowledgeable regarding the target application and the methods for evidencing trust in the context of remote attestation.


SALLY SAJADIAN

Model Predictive Control of Impedance Source Inverter for Photovoltaic Applications

When & Where:


2001B Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
Jim Stiles
Huazhen Fang

Abstract

A model predictive controlled power electronics interface (PEI) based on an impedance source inverter for photovoltaic (PV) applications is proposed in this work. The proposed system is capable of operating in both grid-connected and islanded modes. Firstly, a model-predictive-based maximum power point tracking (MPPT) method is proposed for PV applications based on a single-stage grid-connected Z-source inverter (ZSI). This technique predicts the future behavior of the PV-side voltage and current using a digital observer that estimates the parameters of the PV module. By predicting a priori the behavior of the PV module and its corresponding effects on the system, it improves control efficacy. The proposed method adaptively updates the perturbation size of the PV voltage using the predicted model of the system to reduce oscillations and increase convergence speed. The operation of the proposed method is verified experimentally. The experimental results demonstrate fast dynamic response to changes in solar irradiance level, small oscillations around the maximum power point at steady state, and high MPPT effectiveness from low to high solar irradiance levels. The second part of this work focuses on the dual-mode operation of the proposed ZSI-based PEI, which can operate in both islanded and grid-connected modes. The transition from islanded to grid-connected mode, and vice versa, can cause significant deviations in voltage and current due to mismatches in the phase, frequency, and amplitude of the voltages. The proposed MPC-based controller offers seamless transition between the two modes of operation. The main predictive controller objectives are direct decoupled power control in grid-connected mode and load voltage regulation in islanded mode. The proposed direct decoupled active and reactive power control in grid-connected mode enables the dual-mode ZSI to behave as a power conditioning unit for ancillary services such as reactive power compensation.
The proposed controller features simplicity, seamless transition between modes of operation, fast dynamic response, and small steady-state tracking error in the controller objectives. The operation of the proposed system is verified experimentally.
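The adaptive perturbation-size idea can be sketched with a deliberately simplified hill-climbing loop on a hypothetical concave power-voltage curve. This is a toy perturb-and-observe illustration under made-up parameters (a quadratic P(V) with its peak at 30 V), not the dissertation's observer-based model-predictive scheme:

```python
# Toy adaptive perturb-and-observe on a hypothetical concave P(V) curve.
def pv_power(v, v_mpp=30.0, p_max=200.0):
    """Made-up PV power curve: peak p_max at v_mpp, falling off quadratically."""
    return max(0.0, p_max - 0.5 * (v - v_mpp) ** 2)

v, step = 20.0, 2.0          # initial operating voltage and perturbation size
p_prev = pv_power(v)
for _ in range(60):
    v += step                # perturb the operating voltage
    p = pv_power(v)
    if p < p_prev:           # power dropped: we overshot the peak,
        step = -0.5 * step   # so reverse direction and shrink the step
    p_prev = p
```

Shrinking the step near the peak is what reduces steady-state oscillation, while the large initial step gives fast convergence; the dissertation achieves the same trade-off by predicting the PV module's behavior instead of reacting to it.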


YI JIA

Online Spectral Clustering on Network Streams

When & Where:


December 10, 2012

Committee Members:

Luke Huan, Chair
Swapan Chakrabarti
Jerzy Grzymala-Busse
Bo Luo
Alfred Tat-Kei Ho

Abstract

Graphs are an extremely useful representation of a wide variety of practical systems in data analysis. Recently, with the fast accumulation of stream data from various types of networks, significant research interest has arisen in spectral clustering for network streams (or evolving networks). Compared with the general spectral clustering problem, the analysis of this new type of data may impose additional requirements, such as short processing time, scalability in distributed computing environments, and temporal variation tracking.

However, designing a spectral clustering method that satisfies these requirements presents non-trivial challenges. There are three major challenges for the new algorithm design. The first is online clustering computation: most existing spectral methods on evolving networks are off-line methods using standard eigensystem solvers, such as the Lanczos method, and they must re-compute solutions from scratch at each time point. The second is parallelization: parallelizing such algorithms is non-trivial, since standard eigensolvers are iterative and the number of iterations cannot be predetermined. The third is the very limited existing work; moreover, the existing method has multiple limitations, such as computational inefficiency under large similarity changes, the lack of a sound theoretical basis, and the lack of an effective way to handle accumulated approximation errors and large data variations over time.

In this thesis, we propose a new online spectral graph clustering approach with a family of three novel spectrum approximation algorithms. Our algorithms incrementally update the eigenpairs in an online manner to improve computational performance. Our approaches outperform the existing method in computational efficiency and scalability while retaining competitive or even better clustering accuracy. We derive our spectrum approximation techniques, GEPT and EEPT, through formal theoretical analysis; well-established matrix perturbation theory forms a solid theoretical foundation for our online clustering method. In addition, we discuss our preliminary work on approximate graph mining with an evolutionary process, non-stationary Bayesian network structure learning from non-stationary time series data, and Bayesian network structure learning with text priors imposed by non-parametric hierarchical topic modeling.
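The matrix-perturbation-theory foundation mentioned above can be sketched with the standard first-order eigenvalue update: when a symmetric affinity matrix changes slightly, each eigenvalue moves by approximately the Rayleigh quotient of the change. This generic numpy illustration uses a random symmetric matrix; GEPT and EEPT themselves are more involved than this one-step update.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); A = (A + A.T) / 2            # symmetric "affinity" matrix
dA = 1e-3 * rng.standard_normal((6, 6)); dA = (dA + dA.T) / 2  # small symmetric change

lam, V = np.linalg.eigh(A)          # eigenvalues ascending, eigenvectors in columns
# First-order perturbation update: lambda_i' ~= lambda_i + v_i^T dA v_i,
# computed for all i at once (no full re-decomposition needed).
lam_approx = lam + np.einsum('ji,jk,ki->i', V, dA, V)

lam_exact = np.linalg.eigh(A + dA)[0]
err = np.max(np.abs(lam_approx - lam_exact))   # second-order in ||dA||
```

The approximation error scales with the square of the perturbation size, which is why incremental updates stay accurate between periodic full re-computations.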


HAYDER ALMOSA

Downlink Achievable Rate Analysis for FDD Massive MIMO Systems

When & Where:


250 Nichols Hall

Committee Members:

Lingjia Liu, Chair
Shannon Blunt
Ron Hui
Erik Perrins
Hongyi Cai

Abstract

Multiple-Input Multiple-Output (MIMO) systems with large-scale transmit antenna arrays, often called massive MIMO, are a very promising direction for 5G due to their ability to increase capacity and enhance both spectrum and energy efficiency. To realize the benefits of massive MIMO systems, accurate downlink channel state information at the transmitter (CSIT) is essential for downlink beamforming and resource allocation. Conventional approaches to obtaining CSIT for FDD massive MIMO systems require downlink training and CSI feedback. However, such training causes a large overhead for massive MIMO systems because of the large dimensionality of the channel matrix. In this research proposal, we propose efficient downlink beamforming methods to address the challenge of downlink training overhead. First, we design an efficient downlink beamforming method based on partial CSI. By exploiting the relationship between uplink (UL) DoAs and downlink (DL) DoDs, we derive an expression for the estimated downlink DoDs, which are then used for downlink beamforming. Second, we derive an efficient downlink beamforming method based on downlink CSIT estimated at the BS. By exploiting the sparsity structure of the downlink channel matrix, we develop an algorithm that selects the best features from the measurement matrix to obtain efficient CSIT acquisition, reducing the downlink training overhead compared with conventional LS/MMSE estimators. In both cases, we compare the performance of our proposed beamforming methods with the traditional method in terms of downlink achievable rate, and simulation results show that our proposed methods outperform the traditional beamforming method.


ANDREW OZOR

Size Up: A Tool for Interactive Comparative Collection Analysis for Very Large Species Collections

When & Where:


2001B Eaton Hall

Committee Members:

Jim Miller, Chair
Man Kong
Brian Potetz


Abstract


BRYAN BANZ

A Framework for Model Development Using Dimension Reduction and Low-Cost Surrogate Functions

When & Where:


2001B Eaton Hall

Committee Members:

James Miller, Chair
Arvin Agah
Jerzy Grzymala-Busse
Nancy Kinnersley
John Doveton*

Abstract


SUSANNA MOSLEH

Intelligent Interference Mitigation for Multi-cell Multi-user MIMO Networks with Limited Feedback

When & Where:


250 Nichols Hall

Committee Members:

Lingjia Liu, Chair
Victor Frost
Ron Hui
Erik Perrins
Jian Li

Abstract

Nowadays, wireless communications are becoming tightly integrated into our daily lives, especially with the global spread of laptops, tablets, and smartphones. This has paved the way for dramatically increasing wireless network dimensions in terms of subscribers and the amount of flowing data. With rapidly growing data traffic, interference has become a major limitation in wireless networks. To deal with this issue and to increase the spectral efficiency of wireless networks, various interference mitigation techniques have been suggested, among which interference alignment (IA) has been shown to significantly improve network performance. However, how to practically use IA to mitigate inter-cell interference in downlink multi-cell multi-user MIMO networks remains an open problem. More recently, researchers' attention has been drawn to another technique for improving spectral efficiency, namely massive/full-dimension multiple-input multiple-output (FD-MIMO). Although massive MIMO/FD-MIMO brings a large diversity gain to the network, its practical implementation poses a research challenge, and techniques that can mitigate the impact of interference in such systems remain unexplored. To address these challenges, this proposed research aims to 1) develop an IA technique for downlink multi-cell multi-user MIMO networks; 2) mathematically characterize the performance of IA with limited feedback; and 3) evaluate the performance of the IA technique (with and without limited feedback) in massive MIMO/FD-MIMO networks. Preliminary results show that IA with limited feedback significantly increases the spectral efficiency of downlink multi-cell multi-user MIMO networks.


RACHAD ATAT

Enabling Cyber-Physical Communication in 5G Cellular Networks: Challenges, Solutions and Applications

When & Where:


246 Nichols Hall

Committee Members:

Lingjia Liu, Chair; Yang Yi, Co-Chair
Shannon Blunt
Jim Rowland
James Sterbenz
Jin Feng

Abstract

Cyber-physical systems (CPS) are expected to revolutionize the world through a myriad of applications in health care, disaster response, environmental management, vehicular networks, industrial automation, and so on. The continuous explosive increase in wireless data traffic, driven by the global rise of smartphones, tablets, video streaming, and online social networking applications, along with anticipated massive sensor deployments, will create a set of challenges for network providers, especially as future fifth-generation (5G) cellular networks will help facilitate CPS communications over the current network infrastructure.
In this dissertation, we first provide an overview of the CPS taxonomy along with its challenges in energy efficiency, security, and reliability. Then we present tractable analytical solutions based on different 5G technologies, such as device-to-device (D2D) communications, cell shrinking, and offloading, to enable CPS traffic over cellular networks. These technologies also provide CPS with several benefits, such as ubiquitous coverage, global connectivity, reliability, and security. By tuning specific network parameters, the proposed solutions allow balance and fairness to be achieved in spectral efficiency and minimum achievable throughput among cellular users and CPS devices. To conclude, we present a CPS mobile-health application as a case study in which the security of the medical cyber-physical space is discussed in detail.