Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures, with transformer models achieving superior contextual understanding at the cost of computational complexity, while deep learning approaches (LSTM-based models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
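The threshold adjustment strategy mentioned above can be illustrated with a small sketch (the function name, label count, and threshold grid are hypothetical, not from the study): per-label decision thresholds are tuned on validation data, which typically improves minority-class recall relative to the default 0.5 cutoff.

```python
import numpy as np

def tune_thresholds(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
    """Pick a per-label decision threshold that maximizes F1 on validation data.

    In an imbalanced multi-label setting, lowering a rare label's threshold
    below the default 0.5 often recovers positives the model scores weakly.
    """
    n_labels = y_true.shape[1]
    best = np.full(n_labels, 0.5)
    for j in range(n_labels):
        best_f1 = -1.0
        for t in grid:
            pred = (y_prob[:, j] >= t).astype(int)
            tp = np.sum(pred & (y_true[:, j] == 1))
            fp = np.sum(pred & (y_true[:, j] == 0))
            fn = np.sum((1 - pred) & (y_true[:, j] == 1))
            denom = 2 * tp + fp + fn
            f1 = 2 * tp / denom if denom else 0.0
            if f1 > best_f1:
                best_f1, best[j] = f1, t
    return best
```

In practice the tuned thresholds are frozen on a validation split and then applied unchanged at test time, so the gain in minority-class detection is not an artifact of tuning on the evaluation data.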


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. There are currently many quantum techniques available for solving PDEs, mainly based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations, including low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise-mitigation techniques. This work establishes a practical and effective approach to solving PDEs with quantum computing for engineering and scientific applications.
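For readers unfamiliar with the classical side of the pipeline, a minimal sketch of the finite-difference discretization underlying the FDM step is shown below (purely classical; the C2Q encoding and quantum execution stages described in the abstract are not modeled, and the function name and grid size are illustrative):

```python
import numpy as np

def poisson_1d(f, n):
    """Solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 using the
    standard second-order finite-difference stencil.

    The resulting linear system A u = h^2 f is the kind of system a
    C2Q-style encoding would load into quantum amplitudes.
    """
    h = 1.0 / (n + 1)                         # grid spacing
    x = np.linspace(h, 1 - h, n)              # interior grid points
    # Tridiagonal stencil matrix: 2 on the diagonal, -1 off-diagonal
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    u = np.linalg.solve(A, h**2 * f(x))
    return x, u
```

With f(x) = pi^2 sin(pi x), the exact solution is u(x) = sin(pi x), and the numerical error shrinks at the expected O(h^2) rate as the grid is refined.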


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices

Dates

ARNESH BOSE

Two-Stage Operational Amplifier using MOSFET CMOS Technology

When & Where:


2001B Eaton Hall

Committee Members:

Yang Yi, Chair
Ron Hui
Jim Stiles


Abstract

The operational amplifier is perhaps the most useful integrated device in existence today. It is widely used in analogue computer simulation systems and in a variety of electronic applications such as amplification, filtering, buffering, and comparison of signal levels. In this design project, we use the operational amplifier for amplification. The two-stage op-amp is one of the most commonly used op-amp architectures. A two-stage differential amplifier is designed with the objective of a minimum gain of 65 dB. The achieved gain is 74.6 dB with a 3-dB bandwidth of 71.4 MHz, which is useful for medium-frequency operation. The schematic circuit is constructed using metal-oxide-semiconductor field-effect transistors (MOSFETs), and the final layout uses complementary metal-oxide-semiconductor (CMOS) technology in Cadence.
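As a quick sanity check on the figures above, the 20·log10 voltage-gain convention can be inverted to recover the linear gain (a minimal illustrative sketch, not part of the project itself):

```python
import math

def db_to_linear(gain_db):
    """Convert a voltage gain in dB to a linear voltage ratio,
    using the 20*log10 convention for voltage quantities."""
    return 10 ** (gain_db / 20)

# A 74.6 dB gain corresponds to a linear voltage gain of roughly 5370,
# i.e. the output swing is about 5370 times the differential input.
```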


ISHA KHADKA

Multi-Controller SDN for Fault-Tolerant Resilient Network

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Fengjun Li
Gary Minden


Abstract

Software Defined Networking (SDN) decouples the control or logical plane of a network from its physical/data plane, thus enabling features such as centralized control, network programmability, virtualization, network application development, automation, and more. However, SDN is still vulnerable to attacks and failures, just like any non-SDN network. A failure in SDN can be either a link failure or a device failure. The controller is the central device, acting as the brain of the network, and its failure can propagate rapidly, rendering the underlying data plane dysfunctional. The concept of multi-controller SDN uses redundancy as an effective method to ensure resilience and fault tolerance in a Software-Defined Network. Multiple controllers are connected in a cluster to form a physically distributed but logically centralized network. The backup controllers ensure resilience against failures, attacks, disasters, and other network disruptions. In this project, we implement a multi-controller SDN and measure performance metrics such as high availability, reliability, latency, datastore persistence, and failure recovery time in a clustered environment.


MD AMIMUL EHSAN

Enabling Technologies for Three-dimensional (3D) Integrated Circuits (ICs): Through Silicon Via (TSV) Modeling and Analysis

When & Where:


246 Nichols Hall

Committee Members:

Yang Yi, Chair
Chris Allen
Ron Hui
Lingjia Liu
Judy Wu

Abstract

Three-dimensional (3D) integrated circuits (ICs) offer a promising near-term solution for pushing beyond Moore’s Law because of their compatibility with current technology. Through silicon vias (TSVs) provide electrical connections that pass vertically through wafers or dies to create high-performance interconnects, which allows for higher design densities through shortened connection lengths. In recent years, we have seen tremendous technological and economic progress in the adoption of 3D ICs with TSVs for mainstream commercial use.
Along with the need for low-cost and high-yield process technology, the successful application of TSV technology requires further optimization of TSV electrical modeling and design. In the millimeter-wave (mmW) frequency range, the root mean square (rms) height of the TSV sidewall roughness is comparable to the skin depth and hence becomes a critical factor in TSV modeling and analysis. The impact of TSV sidewall roughness on electrical performance, such as loss and impedance alteration in the mmW frequency range, is examined and analyzed. A second-order small-perturbation analytical method is applied to obtain a simple closed-form expression for the power absorption enhancement factor of the TSV. In this study, we propose an accurate and efficient electrical model for TSVs that considers the TSV sidewall roughness effect, the skin effect, and the metal-oxide-semiconductor (MOS) effect. The accuracy of the model is validated by comparing the circuit model's behavior with full-wave electromagnetic field simulations up to 100 GHz.
An advanced neuromorphic computing system that incorporates 3D integration could provide massive parallelism with fast and energy-efficient links. While a 3D neuro-inspired system offers an impressive level of integration, it is exceedingly difficult to model because of its innumerable interconnected elements. When a TSV array is used in a 3D neuromorphic system, crosstalk degrades the system's signal-to-noise ratio, deteriorating overall system performance. To counteract the crosstalk, we propose a novel optimized TSV array pattern obtained by applying a force-directed optimization algorithm.


ADAM PETZ

A Semantics for Attestation Protocols using Session Types in Coq

When & Where:


246 Nichols Hall

Committee Members:

Perry Alexander, Chair
Andy Gill
Prasad Kulkarni


Abstract

As our world becomes more connected, the average person must place more trust in cloud systems for everyday transactions. We rely on banks and credit card services to protect our money, hospitals to conceal and selectively disclose sensitive health information, and government agencies to protect our identity and uphold national security interests. However, establishing trust in remote systems is not a trivial task, especially in the diverse, distributed ecosystem of today's networked computers. Remote Attestation is a mechanism for establishing trust in a remotely running system in which an appraiser requests information from a target that can be used to evaluate its operational state. The target responds with evidence providing configuration information, run-time measurements, and authenticity meta-evidence used by the appraiser to determine whether it trusts the target system. For Remote Attestation to be applied broadly, we must have attestation protocols that perform operations on a collection of applications, each of which must be measured differently. Verifying that these protocols behave as expected and accomplish their diverse attestation goals is a unique challenge. An important first step is to understand the structural properties and execution patterns they share. In this thesis I present a semantic framework for attestation protocol execution within the Coq verification environment, including a protocol representation based on Session Types, a dependently typed model of perfect cryptography, and an operational execution semantics. The expressive power of dependent types constrains the structure of protocols and supports precise claims about their behavior. If we view attestation protocols as programming language expressions, we can borrow from standard language semantics techniques to model their execution. The proof framework ensures desirable properties of protocol execution, such as progress and termination, that hold for all protocols. It also ensures properties of authenticity and secrecy for individual protocols.


RACHAD ATAT

Communicating over the Internet of Things: Security, Energy-Efficiency, Reliability and Low-Latency

When & Where:


250 Nichols Hall

Committee Members:

Lingjia Liu, Chair
Yang Yi
Shannon Blunt
Jim Rowland
David Nualart

Abstract

The Internet of Things (IoT) is expected to revolutionize the world through its myriad applications in health care, public safety, environmental management, vehicular networks, industrial automation, and more. Some of the concepts related to IoT include Machine Type Communications (MTC), Low-power Wireless Personal Area Networks (LoWPAN), wireless sensor networks (WSN), and Radio-Frequency Identification (RFID). Characterized by large amounts of traffic and smart decision-making with little or no human interaction, these networks pose a set of challenges, among which security, energy, reliability, and latency are the most important. First, the open wireless medium and the distributed nature of the system introduce eavesdropping, data fabrication, and privacy violation threats. Second, the large number of IoT devices is expected to operate in a self-sustainable and self-sufficient manner without degrading system performance, which means energy efficiency is critical to prolonging devices' lifetimes. Third, many IoT applications, such as emergency response and health-care scenarios, require information to be transmitted reliably and in a timely manner. To address these challenges, we propose low-complexity approaches that exploit the physical layer and use stochastic geometry as a powerful tool to accurately model the spatial locations of "things". This provides a tractable analytical framework for addressing the aforementioned challenges of IoT.
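The stochastic-geometry modeling step can be illustrated with a minimal sketch (the function name, intensity, and window size are hypothetical): device locations are commonly modeled as a homogeneous Poisson point process (PPP), sampled by drawing a Poisson-distributed count and then placing that many points uniformly in the observation window.

```python
import numpy as np

def sample_ppp(intensity, width, height, rng=None):
    """Sample a homogeneous Poisson point process on [0, width] x [0, height].

    The number of points is Poisson(intensity * area); conditioned on the
    count, the points are independent and uniformly distributed, which is
    exactly what makes PPP models analytically tractable.
    """
    rng = np.random.default_rng(rng)
    n = rng.poisson(intensity * width * height)
    xs = rng.uniform(0, width, n)
    ys = rng.uniform(0, height, n)
    return np.column_stack([xs, ys])
```

From such samples one can empirically estimate quantities (e.g., nearest-interferer distance distributions) that stochastic geometry derives in closed form.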


OMAR BARI

Ensembles of Text and Time-Series Models for Automatic Generation of Financial Trading Signals

When & Where:


2001B Eaton Hall

Committee Members:

Arvin Agah, Chair
Joseph Evans
Andy Gill
Jerzy Grzymala-Busse
Sara Wilson

Abstract

Event studies in finance have focused on traditional news headlines to assess the impact an event has on a traded company. The increased proliferation of news and information produced by social media content has disrupted this trend. Although researchers have begun to identify trading opportunities from social media platforms such as Twitter, almost all techniques use general sentiment derived from large collections of tweets. Though useful, general sentiment does not indicate the specific events that affect stock prices.


AQSA PATEL

Interpretation of Radar Altimeter Waveforms using Ku-band Ultra-Wideband Altimeter Data

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Prasad Kulkarni
Ron Hui
John Paden
David Braaten

Abstract

The surface elevation of ice sheets and sea ice is currently measured using both satellite and airborne radar altimeters. These measurements are used to generate mass balance estimates of ice sheets and thickness estimates of sea ice. However, due to penetration of the altimeter signal into the snow, there is ambiguity between the surface tracking point and the actual surface location, which produces errors in the surface elevation measurement. To understand how signal penetration affects the shape of the return waveform, it is important to study the effects of sub-surface scattering and seasonal variations in snow properties on the return waveform so that satellite radar altimeter data can be correctly interpreted. To address this problem, an ultra-wide-bandwidth Ku-band radar altimeter was developed at the Center for Remote Sensing of Ice Sheets (CReSIS). The Ku-band altimeter operates over the frequency range of 12 to 18 GHz, providing very fine resolution to measure the ice surface and resolve the sub-surface features of the snow. It is designed to encompass the frequency bands of satellite radar altimeters. Data from the Ku-band altimeter can be used to simulate satellite radar altimeter data, and these simulated waveforms can help us understand the effect of signal penetration and sub-surface scattering on low-bandwidth satellite altimeter returns. The extensive dataset collected as part of the Operation IceBridge (OIB) campaign can be used to interpret satellite radar altimeter data over surfaces with varying snow conditions. The goal of this research is to use waveform modeling and inter-comparisons of full- and reduced-bandwidth data products from the Ku-band radar altimeter to investigate the effect of signal penetration and snow conditions on surface tracking, using threshold and waveform-fitting retracking algorithms, to improve the retrieval of surface elevation from satellite radar altimeters.
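A threshold retracker of the kind referenced above can be sketched as follows (an illustrative simplification, not the CReSIS production algorithm; the function name, fraction, and waveform are hypothetical): the retracking point is the first range bin where the return power crosses a fixed fraction of the peak, refined by linear interpolation between the bracketing bins.

```python
import numpy as np

def threshold_retrack(waveform, fraction=0.5):
    """Return the fractional range-bin index where the leading edge of the
    waveform first crosses `fraction` of its peak power.

    Linear interpolation between the last bin below the threshold and the
    first bin at or above it gives sub-bin precision on the surface return.
    """
    w = np.asarray(waveform, dtype=float)
    level = fraction * w.max()
    i = np.nonzero(w >= level)[0][0]   # first bin at or above the threshold
    if i == 0:
        return 0.0
    return (i - 1) + (level - w[i - 1]) / (w[i] - w[i - 1])
```

Because signal penetration shifts power into later (sub-surface) bins, the retracked index from a simple scheme like this drifts deeper for penetrating surfaces, which is the elevation bias the research investigates.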


VAISHNAVI YADALAM

Real Time Video Streaming over a Multihop Ad Hoc Network

When & Where:


1 Eaton Hall

Committee Members:

Aveek Dutta, Chair
Victor Frost
Richard Wang


Abstract

High-rate data transmission is very common in cellular and wireless local area networks. It is achievable because of the wired backbone, in which only the first or last hop is wireless, commonly known as the wireless “last-mile” link. With this type of infrastructure network, it is not surprising to achieve the desired performance for wirelessly transmitted video. The current challenge, however, is to transmit clear, high-quality real-time video over multiple wireless hops in an ad hoc network. The performance of multiple wireless hops in transmitting high-quality video is limited by the data rate, the bandwidth of the wireless channel, and interference from adjacent channels. These factors constrain the applications of a wireless multihop network but are fundamental to military tactical network solutions. The project addresses and studies the effects of packet sensitivity, latency, bitrate, and bandwidth on video quality for line-of-sight and non-line-of-sight test scenarios. It aims to achieve the best visual user experience at the receiver end for transmission over multiple wireless hops. Further, the project provides an algorithm for the placement of drones in subterranean environments to stream real-time video for border surveillance, monitoring and detecting unauthorized activity.


YANG TIAN

Integrating Textual Ontology and Visual Features for Content Based Search in an Invertebrate Paleontology Knowledgebase

When & Where:


246 Nichols Hall

Committee Members:

Bo Luo, Chair
Fengjun Li
Richard Wang


Abstract

The Treatise on Invertebrate Paleontology (TIP) is a definitive work completed by more than 300 authors in the field of paleontology, covering all categories of invertebrate animals. The digital version of TIP consists of multiple PDF files; however, these files are simply a clone of the paper version and are not well formatted, which makes it hard to extract structured data using straightforward methods. To make the fossil and extant records in TIP organized and searchable from a web interface, a digital library called the Invertebrate Paleontology Knowledgebase (IPKB) was built for sharing and querying TIP information. It consists of a database that stores records of all fossil and extant invertebrate animals and a web interface that provides online access.
The existing IPKB system provides a general framework for displaying and searching TIP information; however, it has very limited search functions, allowing users to query only by plain text. Details of the structural properties in the fossil descriptions are not carefully taken into consideration. Moreover, users sometimes cannot provide sufficiently accurate and rich query terms: although the authors of TIP are all paleontologists, the expected users of IPKB may not be experts.
To overcome this limitation and bring more powerful search features to the IPKB system, in this thesis we present a content-based search function that allows users to search using textual ontology descriptions and images of fossils. First, this thesis describes the work done in previous research on the IPKB system. In addition to the original text and image processing approaches, we also present our new efforts to improve these methods. Second, this thesis presents the algorithms and approaches adopted in constructing the content-based search system for IPKB. The search functions in the old IPKB system did not consider the differences among morphological details of certain regions of fossils. Three major parts are discussed in detail: (1) textual-ontology-based search, (2) image-based search, and (3) text-image-based search.


ANIL PEDIREDLA

Information Revelation and Privacy in Online Social Networks

When & Where:


250 Nichols Hall

Committee Members:

Bo Luo, Chair
Fengjun Li
Richard Wang


Abstract

Participation in social networking sites has dramatically increased in recent years. Services such as LinkedIn, Facebook, and Twitter allow millions of individuals to create online profiles and share personal information with vast networks of friends, and, often, unknown numbers of strangers. The relation between privacy and a person's social network is multi-faceted. On certain occasions we want information about ourselves to be known only to a limited set of people, and not to strangers. The privacy implications associated with online social networking depend on the level of identifiability of the information provided, its possible recipients, and its possible uses. Even social networking websites that do not openly expose their users' identities may provide enough information to identify a profile's owner.