Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle.  The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while the deep learning approaches based on LSTMs offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
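
The abstract does not include implementation details; purely as a minimal, hypothetical sketch of per-label threshold adjustment for this kind of multi-label task, the following assumes a scikit-learn TF-IDF plus logistic regression baseline and the label columns as named in the public Kaggle data (local file name train.csv assumed):

    # Minimal sketch (not the author's exact pipeline): per-label threshold tuning
    # for multi-label toxic comment classification with a TF-IDF + logistic
    # regression baseline.
    import numpy as np
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier

    LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

    df = pd.read_csv("train.csv")  # assumed local copy of the Kaggle training data
    X_train, X_val, y_train, y_val = train_test_split(
        df["comment_text"], df[LABELS].values, test_size=0.2, random_state=42)

    vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000, class_weight="balanced"))
    clf.fit(vec.fit_transform(X_train), y_train)

    probs = clf.predict_proba(vec.transform(X_val))

    # Pick the decision threshold that maximizes F1 for each label independently,
    # which typically helps rare labels such as "threat".
    for j, label in enumerate(LABELS):
        candidates = np.linspace(0.05, 0.95, 19)
        best = max(candidates,
                   key=lambda t: f1_score(y_val[:, j], probs[:, j] >= t, zero_division=0))
        print(f"{label}: threshold={best:.2f}")

Tuning each label's threshold separately, rather than using a fixed 0.5 cutoff, is one simple way to raise recall on the minority classes the abstract highlights.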


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant in the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, most of them based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated the proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
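
For reference, and independent of the specifics of the proposed algorithm, the standard second-order central finite-difference discretization of a 1-D Poisson problem -u''(x) = f(x) with homogeneous Dirichlet boundary conditions illustrates the kind of linear system that FDM produces and that a C2Q-encoded solver would then operate on:

    \[
      -\frac{u_{i-1} - 2u_i + u_{i+1}}{h^2} = f_i,
      \qquad i = 1, \dots, N-1, \qquad u_0 = u_N = 0.
    \]

Assembling this stencil over all interior grid points yields a tridiagonal matrix with 2/h^2 on the diagonal and -1/h^2 on the off-diagonals; in higher dimensions the analogous operator is a Kronecker sum of such 1-D matrices, which is why multidimensional problems grow so quickly in size and why scalability is the central concern.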


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to engage more effectively with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.
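
The abstract does not show DSDL's actual syntax; purely as a hypothetical illustration (invented names and fields, not the DSDL API), a design space with constraints might be captured in plain Python along these lines:

    # Hypothetical illustration only: a plain-Python declaration of a small
    # architectural design space with constraints, in the spirit of what a
    # DSE front end might capture. All names and fields are invented.
    from itertools import product

    design_space = {
        "l1d_size_kB":    [32, 64, 128],
        "l2_size_kB":     [256, 512, 1024],
        "issue_width":    [2, 4, 8],
        "core_clock_GHz": [1.0, 2.0, 3.0],
    }

    def satisfies_constraints(cfg):
        # Example constraints: L2 must be at least 4x L1D, and wide-issue cores
        # are only considered at lower clocks (a stand-in for a power budget).
        if cfg["l2_size_kB"] < 4 * cfg["l1d_size_kB"]:
            return False
        if cfg["issue_width"] == 8 and cfg["core_clock_GHz"] > 2.0:
            return False
        return True

    configs = [dict(zip(design_space, values))
               for values in product(*design_space.values())]
    feasible = [c for c in configs if satisfies_constraints(c)]
    print(f"{len(feasible)} feasible configurations out of {len(configs)}")

In a full DSE loop, each feasible configuration would then be handed to the simulator and power/area models, with the optimizer choosing which points to evaluate next.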

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices


SUNDEEP GANJI

A Hybrid Web Application For Conducting In Class Quizzes

When & Where:


1415A LEEP2

Committee Members:

Prasad Kulkarni, Chair
Jerzy Grzymala-Busse
Gary Minden


Abstract

Every student comes to class with a smartphone, and they are constantly distracted. It has become a tough challenge for instructors to keep students focused on the lectures. The idea of this project is to build a hybrid, responsive web application that helps instructors post questions during their lectures. Students can submit their responses instantly through their smartphones. This enables the instructor to gauge the students' understanding of the current topic through various statistics that are generated instantly. Instructors can improve their teaching methods, while less interactive students can add their voice alongside others in the class and check their own understanding.

This application allows the instructor to add or edit courses in their account, add students to their courses, create or edit quizzes beforehand, post questions in different formats to the students, and analyze results through various kinds of plots. Students, on the other hand, can view the courses their instructor has added them to and submit their responses to the posted quizzes. This application simplifies the process of conducting in-class quizzes and offers students and instructors an enhanced classroom experience.
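
As a rough, hypothetical sketch of the question-and-response flow described above (invented routes and fields, using Flask purely for illustration, not the project's actual code):

    # Hypothetical sketch only: an instructor posts a question and students
    # submit responses that are tallied instantly.
    from collections import Counter, defaultdict
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    questions = {}                       # question_id -> {"text": ..., "choices": [...]}
    responses = defaultdict(Counter)     # question_id -> tally of submitted answers

    @app.route("/questions", methods=["POST"])
    def post_question():
        data = request.get_json()
        qid = len(questions) + 1
        questions[qid] = {"text": data["text"], "choices": data["choices"]}
        return jsonify({"question_id": qid}), 201

    @app.route("/questions/<int:qid>/responses", methods=["POST"])
    def submit_response(qid):
        responses[qid][request.get_json()["answer"]] += 1
        return jsonify(dict(responses[qid]))      # instant per-choice tallies

    if __name__ == "__main__":
        app.run(debug=True)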


ALI MAHMOOD

Design, Integration, and Deployment of UAS-borne HF/VHF Ice Depth Sounding Radar and Antenna System

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Fernando Rodriguez-Morales
Chris Allen


Abstract

The dynamic thinning of fast-flowing glaciers is so poorly understood that its potential impact on sea level rise remains unpredictable. Therefore, there is a dire need to predict the behavior of these ice bodies by understanding their bed topography and basal conditions, particularly near their grounding lines (the limit between grounded ice and floating ice). The ability of previous VHF radars to detect returns in some key glacier regions is limited by strong clutter caused by severe ice-surface roughness, volume scatter, and increased attenuation induced by water inclusions and debris.
The work completed in the context of this thesis encompasses the design, integration, and field testing of a new compact, lightweight radar and antenna system suitable for low-frequency operation onboard Uninhabited Aerial Systems (UASs). Specifically, this thesis presents the development of two tapered dipole antennas compatible with a 4-meter-wingspan UAS. The bow-tie-shaped antenna resonates at 35 MHz, and the meandered, resistively loaded element radiates at 14 MHz. Also discussed are the methods and tools used to achieve the necessary bandwidth while mitigating the electromagnetic coupling between the antennas and the on-board avionics in a fully populated UAS. The influence of EM coupling on the 14 MHz antenna was nominal due to its relatively longer wavelength. However, its input impedance had to be modified by resistive loading in order to avoid high power reflections back to the transmitter. The antenna bandwidths were further enhanced by employing impedance matching networks, which resulted in 17.3% and 7.1% bandwidths at 35 MHz and 14 MHz, respectively.
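
For context, assuming the quoted percentages are fractional bandwidths referenced to the nominal center frequencies, the corresponding absolute bandwidths follow from the usual definition:

    \[
      \mathrm{FBW} = \frac{f_H - f_L}{f_c} \times 100\%,
      \qquad f_c = \frac{f_H + f_L}{2},
    \]

so 17.3% at 35 MHz corresponds to roughly 6.1 MHz of absolute bandwidth, and 7.1% at 14 MHz to roughly 1.0 MHz.
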
Finally, a compact 4-lb system was validated during the 2013-2014 Antarctic deployment, which led to echo sounding of more challenging temperate ice in the Arctic Circle. The thesis presents results obtained from data collected during a field test campaign over Russell Glacier in Greenland and compares them with previous data obtained with a VHF depth sounder system operated onboard a manned aircraft.


KELLY RODRIGUEZ

Analysis of Extracellular Recordings and Temporal Encoding in Delayed-Feedback Reservoir

When & Where:


1 Eaton Hall

Committee Members:

Yang Yi, Chair
Randolph Nudo
Shannon Blunt


Abstract

Technological advancements in analog and digital systems have enabled new approaches to studying networks of physical and artificial neurons. In biological systems, a standard method to record neuronal activity is through cortically implanted microelectrode arrays (MEAs). As advances in hardware continue to push the channel counts of commercial MEAs upwards, it becomes imperative to develop automatic methods for data acquisition and analysis with high accuracy and throughput. Reliable, low-latency methods are critical in closed-loop neuroprosthetic paradigms, such as spike-timing-dependent applications where the activity of a single neuron triggers specific stimuli with millisecond precision. This work presents an adapted version of an online spike detection algorithm, previously employed successfully on in vitro recordings, that has been improved to work in more stringent in vivo environments subject to additional sources of variability and noise. The algorithm's performance was compared with other commonly employed detection techniques for neural data on a newly developed and highly tunable extracellular recording model that features variable firing rates, adjustable SNRs, and multiple waveform characteristics. The testing framework was created from in vivo recordings collected during quiescence and electrical stimulation periods. The algorithm presents superior performance and efficiency in all evaluated conditions. Furthermore, we propose a methodology for online signal integrity analysis from MEA recordings and quantification of neuronal variability across different experimental settings. This work constitutes a stepping stone toward the creation of large-scale neural data processing pipelines and aims to facilitate reproducibility in activity-dependent experiments by offering a method for unifying various metrics calculated from single-unit activity.

Precise spike detection becomes crucial for experiments studying temporal coding in addition to rate coding mechanisms. To further study and exploit the potential of temporal coding, a delay-feedback-based reservoir (DFB) has been implemented in software. This artificial network is found to be capable of processing spikes encoded from a benchmark task with performance comparable to that of more complex networks. This work allows us to corroborate the capabilities of temporal coding in a minimally complex system suitable for implementation in physical hardware and inclusion in low-power circuit applications where computational power is also necessary.
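
The abstract does not spell out the detection algorithm itself; as a generic baseline for comparison only (not the thesis algorithm), amplitude-threshold detection with a median-absolute-deviation noise estimate is a common starting point for extracellular recordings:

    # Generic baseline sketch (not the thesis algorithm): amplitude-threshold
    # spike detection with a robust MAD-based noise estimate.
    import numpy as np

    def detect_spikes(signal, fs, k=4.5, refractory_ms=1.0):
        """Return sample indices of threshold crossings on a 1-D voltage trace."""
        sigma = np.median(np.abs(signal)) / 0.6745      # robust noise std estimate
        crossings = np.flatnonzero(np.abs(signal) > k * sigma)
        # Enforce a refractory period so one spike is not counted multiple times.
        refractory = int(refractory_ms * 1e-3 * fs)
        spikes, last = [], -refractory
        for idx in crossings:
            if idx - last >= refractory:
                spikes.append(idx)
                last = idx
        return np.array(spikes)

    # Example on synthetic data: unit-variance noise plus a few injected spikes.
    rng = np.random.default_rng(0)
    fs = 30_000
    trace = rng.normal(0, 1, fs)
    trace[[5_000, 12_000, 25_000]] += 8.0
    print(detect_spikes(trace, fs))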


SALEH ESHTAIWI

A New Model Predictive Control Technique Based Maximum Power Point Tracking For Photovoltaic Systems

When & Where:


2001B Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Chris Allen
Jerzy Grzymala-Busse
Ron Hui
Elaina Sutley

Abstract

Worldwide energy demand is increasing day by day and is anticipated to grow by 48% from 2012 to 2040. Distributed generation (DG), including renewable energy resources such as wind and solar, is part of the solution in terms of lowering electricity costs, improving power reliability, and addressing environmental concerns, and must therefore function efficiently. Designing a robust maximum power point tracking (MPPT) technique can ensure maximized energy harvesting from PV solar systems and increase conversion efficiency, the lack of which is a significant hindrance to their growth. The maximum power point (MPP) varies nonlinearly with intrinsic and climatic changes. Thus, MPPT methods are expected to seek the MPP regardless of solar-module and ambient changes. The proposed method is based on the concept of Model Predictive Control (MPC) and has unique properties. MPC is a powerful class of controllers that uses a system model to predict future behavior and optimize performance objectives. Unlike traditional techniques, which are prone to losing the tracking direction with consequences for stability, the proposed technique treats the photovoltaic (PV) module as a plant and uses a digital observer to predict the behavior of the PV module and track the MPP. Further, it combines simplicity of implementation with enhanced overall dynamic performance and robustness against atmospheric changes.
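
As an illustration of the "predict, then pick the best input" idea behind MPC-style MPPT only (not the author's controller, and using a crude placeholder PV model with an idealized boost-converter relation), a one-step-ahead search over candidate duty cycles might look like this:

    # Illustration only: a one-step-ahead predictive selection loop.
    # The PV model below is a crude placeholder, not a validated module model.
    import numpy as np

    V_OC, I_SC = 40.0, 9.0          # assumed open-circuit voltage / short-circuit current

    def pv_current(v):
        """Placeholder PV I-V curve with an exponential knee near V_OC."""
        return I_SC * (1.0 - np.exp((v - V_OC) / 2.5))

    def predict_power(duty, v_bus=48.0):
        """Predicted PV power if the converter applies this duty cycle."""
        v_pv = (1.0 - duty) * v_bus   # ideal boost converter in CCM (assumption)
        return v_pv * max(pv_current(v_pv), 0.0)

    duty_candidates = np.linspace(0.1, 0.9, 81)
    best = max(duty_candidates, key=predict_power)
    print(f"selected duty cycle: {best:.3f}, predicted power: {predict_power(best):.1f} W")

A full MPC formulation would replace the placeholder model with the digital observer described in the abstract and repeat this prediction-and-selection step at every control interval.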


ELI SYMM

Wavelets in Electromagnetic Profile Inversion

When & Where:


2001B Eaton Hall

Committee Members:

Jim Stiles, Chair
Chris Allen
Ron Hui


Abstract

Historical subsurface sensing methods applied to planar ice and snow sheets rely on underlying assumptions about the physical situation governing volumetric backscatter; namely, that the stratification of the natural medium under investigation consists of layered materials with distinctly different dielectric properties. While appropriate for recovering sharp spatial discontinuities in the relative permittivity, the layer stripping approach [1] is not applicable to smooth permittivity variations about a common mean. In this project we developed techniques to model both the forward scattering from a one-dimensional permittivity variation and the inverse problem: estimating the permittivity profile from the reflected energy. The underlying assumption is that smoothly varying inhomogeneities may be decomposed into wavelet basis functions, which efficiently represent natural perturbations about an effective mean. Potential applications for this method are in ground penetrating radar, ionospheric sounding, nondestructive evaluation, and medical imaging.
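
One standard way to write the decomposition described above is to expand the permittivity profile as an effective mean plus a set of wavelet coefficients:

    \[
      \epsilon_r(z) = \bar{\epsilon}_r + \sum_{j,k} c_{j,k}\,\psi_{j,k}(z),
      \qquad \psi_{j,k}(z) = 2^{j/2}\,\psi\!\left(2^{j}z - k\right),
    \]

so that the inverse problem becomes estimating the (ideally sparse) coefficients c_{j,k} from the reflected energy rather than locating sharp layer boundaries.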


MICHAEL STEES

Robust High Order Mesh Generation and Untangling

When & Where:


317 Nichols Hall

Committee Members:

Suzanne Shontz, Chair
Perry Alexander
Prasad Kulkarni
Jim Miller
Weizhang Huang

Abstract

Simulating the mechanics of a beating heart requires the numerical solution of partial differential equations. An application like this is a good candidate for high order computational methods that deliver higher solution accuracy at a lower cost than their low order counterparts. 
To fully leverage these high order computational methods, they must be paired with an accurate discretization of the domain. For a geometry like the heart, this requires a high order mesh. Thus, robust high order mesh generation is a critical component of the widespread adoption of high order computational methods for numerically solving partial differential equations. Toward this end, we are developing high order mesh generation and untangling methods. As our first step, we have developed an optimization-based second order mesh generation method that employs triangles and tetrahedra. We will also develop generation methods for quadrilateral and hexahedral elements. Finally, we will develop untangling methods that can be used to untangle our generated meshes, as well as untangle any tangled elements that occur during motion (e.g., the beating of the heart).
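
As a generic illustration of what "tangled" means for a high order element (not the authors' untangling method), an element is commonly flagged as invalid when the Jacobian determinant of its reference-to-physical map becomes non-positive somewhere inside it. The sketch below checks sample points of a quadratic (6-node) triangle, assuming the node ordering is three corners followed by the three midside nodes on edges 1-2, 2-3, 3-1:

    # Generic illustration: sign of the Jacobian determinant of a quadratic
    # (6-node) triangle at sample points in the reference element.
    import numpy as np

    def jacobian_dets(nodes, samples):
        """nodes: (6,2) array of node coordinates; samples: (xi, eta) points."""
        dets = []
        for xi, eta in samples:
            L1, L2, L3 = 1 - xi - eta, xi, eta
            dN_dxi  = np.array([-(4*L1 - 1), 4*L2 - 1, 0.0,
                                4*(L1 - L2), 4*L3, -4*L3])
            dN_deta = np.array([-(4*L1 - 1), 0.0, 4*L3 - 1,
                                -4*L2, 4*L2, 4*(L1 - L3)])
            J = np.stack([nodes.T @ dN_dxi, nodes.T @ dN_deta], axis=1)
            dets.append(np.linalg.det(J))
        return np.array(dets)

    samples = [(0.25, 0.25), (0.5, 0.25), (0.25, 0.5), (1/3, 1/3)]
    good = np.array([[0, 0], [1, 0], [0, 1], [0.5, 0], [0.5, 0.5], [0, 0.5]], float)
    bad = good.copy()
    bad[3] = [0.5, 0.6]                    # bottom-edge midside node pushed inside
    print(jacobian_dets(good, samples))    # all positive -> untangled
    print(jacobian_dets(bad, samples))     # a negative value flags tangling

Optimization-based untangling methods typically move nodes so that such determinants become (and stay) positive throughout every element.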


PRASANTH VIVEKANANDAN

A Simplex Architecture for Intelligent and Safe Unmanned Aerial Vehicles

When & Where:


250 Nichols Hall

Committee Members:

Heechul Yun, Chair
Prasad Kulkarni
Bo Luo


Abstract

Unmanned Aerial Vehicles (UAVs) are increasingly in demand for civil, military, and research purposes. However, they also pose serious threats to society because faults in UAVs can lead to physical damage or even loss of life. While increasing their intelligence, for example by adding vision-based sense-and-avoid capability, has the potential to reduce these safety threats, the increased software complexity and the need for higher computing performance create additional challenges, namely software bugs and transient hardware faults, that must be addressed to realize intelligent and safe UAV systems.
This work presents a fault-tolerant system design for UAVs. Our proposal is to use two heterogeneous hardware and software platforms with distinct reliability and performance characteristics: a High-Assurance (HA) platform and a High-Performance (HP) platform. The HA platform focuses on simplicity and verifiability in software and uses a simple, transient-fault-tolerant processor, while the HP platform focuses on intelligence and functionality in software and uses a complex, high-performance processor. During normal operation, the HP platform is responsible for controlling the UAV. However, if it fails due to transient hardware faults or software bugs, the HA platform takes over until the HP platform recovers.
We have implemented the proposed design on an actual UAV using a low-cost Arduino and a high-performance Tegra TK1 multicore platform. Our case studies show that our design can improve safety without compromising the performance and intelligence of the UAV.
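
As a minimal sketch of the takeover logic described above (invented heartbeat interface and timing, not the project's Arduino/Tegra implementation), the HA side can be reduced to a heartbeat monitor:

    # Minimal sketch: the HA side monitors heartbeats from the HP platform and
    # takes over control if the heartbeat stops, handing back once it resumes.
    import time

    HEARTBEAT_TIMEOUT_S = 0.2   # assumed takeover threshold

    class HASupervisor:
        def __init__(self):
            self.last_heartbeat = time.monotonic()
            self.in_control = False

        def on_heartbeat(self):
            """Called whenever a heartbeat arrives from the HP platform."""
            self.last_heartbeat = time.monotonic()
            if self.in_control:
                print("HP platform recovered; returning control")
                self.in_control = False

        def tick(self):
            """Called periodically by the HA control loop."""
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
                if not self.in_control:
                    print("HP heartbeat lost; HA platform taking over")
                self.in_control = True
            return "HA" if self.in_control else "HP"

    # Toy demonstration: heartbeats stop for a while, then resume.
    sup = HASupervisor()
    for step in range(10):
        if step < 3 or step > 7:
            sup.on_heartbeat()
        print(step, sup.tick())
        time.sleep(0.1)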


YUANWEI WU

Learning Deep Neural Networks for Object Detection and Tracking

When & Where:


317 Nichols Hall

Committee Members:

Richard Wang, Chair
Arvin Agah
Lingjia Liu
Bo Luo
Haiyang Chao

Abstract

Scene understanding in both static images and dynamic videos is the ultimate goal in computer vision. As two important sub-tasks of this endeavor, object detection and tracking have been extensively studied in the past decades; however, the problems are still not well addressed. The main challenge is that the appearance of objects is affected by a number of factors, such as scale, occlusion, and illumination. Recently, deep learning has attracted a lot of interest in the computer vision community. However, how to tackle these challenges in object detection and tracking is still an open problem. In this work, we propose a method for detecting objects in images using a single deep neural network, which can be optimized end-to-end and predicts the object bounding boxes and class probabilities in one evaluation. To handle the challenges in object tracking, we propose a framework that consists of a novel deep Convolutional Neural Network (CNN) to effectively generate robust spatial appearance features, and a Long Short-Term Memory (LSTM) network that incorporates temporal information to achieve long-term object tracking accuracy in real time.
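
As a hedged sketch of the overall shape of such a framework (not the authors' exact architecture), a per-frame CNN appearance encoder feeding an LSTM that regresses a bounding box per frame can be written in PyTorch as:

    # Hedged sketch: CNN appearance encoder per frame + LSTM over time +
    # bounding-box regression head, illustrating the CNN + LSTM tracking idea.
    import torch
    import torch.nn as nn

    class CNNLSTMTracker(nn.Module):
        def __init__(self, feat_dim=128, hidden_dim=256):
            super().__init__()
            self.cnn = nn.Sequential(                     # small appearance encoder
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.box_head = nn.Linear(hidden_dim, 4)      # (cx, cy, w, h) per frame

        def forward(self, clip):
            # clip: (batch, time, 3, H, W)
            b, t = clip.shape[:2]
            feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
            temporal, _ = self.lstm(feats)
            return self.box_head(temporal)                # (batch, time, 4)

    clip = torch.randn(2, 8, 3, 112, 112)                 # 2 clips of 8 frames
    print(CNNLSTMTracker()(clip).shape)                   # torch.Size([2, 8, 4])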


LAKSHMI KOUTHA

Advanced Encoding Schemes and their Hardware Implementations for Brain Inspired Computing

When & Where:


2001B Eaton Hall

Committee Members:

Yang Yi, Chair
Chris Allen
Glenn Prescott


Abstract

According to Moore's law, the number of transistors per square inch doubles every two years. Scaling down technology reduces size and cost; however, it also increases the number of problems. Our current computers, based on the von Neumann architecture, are seeing progressive difficulties not only from scaling down the technology but also from the bottleneck inherent in the architecture itself. As a solution, scientists came up with architectures whose function resembles that of the brain; they called these brain-inspired architectures neuromorphic computers. The building block of the brain is the neuron, which encodes, decodes, and processes data. The neuron accepts sensory information and converts this information into a spike train. The neuron encodes this spike train in different ways depending on the situation. Rate encoding, temporal encoding, population encoding, sparse encoding, and rate-order encoding are a few encoding schemes said to be used by the neuron. These different neural encoding schemes are discussed as the primary focus of the thesis. A comparison between these different schemes is also provided for better understanding, thus helping in the design of an efficient neuromorphic computer. This thesis also focuses on the hardware implementation of a neuron. A leaky integrate-and-fire neuron model, which uses spike-time-dependent encoding, has been used in this work. Different neuron models are discussed, with a comparison as to which model is effective under which circumstances. The electronic neuron model was implemented in 180 nm CMOS technology using GlobalFoundries PDK libraries. Simulation results for the neuron are presented for different inputs and different excitation currents. These results show the successful encoding of sensory information into a spike train.
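
As a software-level illustration of the leaky integrate-and-fire dynamics referred to above (not the 180 nm circuit itself, and with assumed parameter values), the membrane potential leaks toward rest, integrates the input current, and emits a spike with a reset on each threshold crossing:

    # Software illustration of leaky integrate-and-fire (LIF) dynamics with
    # assumed parameters; the spike count grows with the excitation current.
    import numpy as np

    def lif_spike_train(i_input, dt=1e-4, tau=20e-3, r_m=10e6,
                        v_rest=-70e-3, v_thresh=-50e-3, v_reset=-70e-3):
        v = v_rest
        spikes = np.zeros_like(i_input, dtype=bool)
        for n, i_n in enumerate(i_input):
            dv = (-(v - v_rest) + r_m * i_n) / tau     # leaky integration
            v += dv * dt
            if v >= v_thresh:                          # threshold crossing -> spike
                spikes[n] = True
                v = v_reset                            # reset after the spike
        return spikes

    # Constant 2.5 nA input for 200 ms of simulated time.
    current = np.full(2000, 2.5e-9)
    print(int(lif_spike_train(current).sum()), "spikes")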


PENG SENG TAN

Addressing Spectrum Congestion by Spectrally-Cooperative Radar Design

When & Where:


250 Nichols Hall

Committee Members:

Jim Stiles, Chair
Shannon Blunt
Chris Allen
Lingjia Liu
Tyrone Duncan

Abstract

The increasing need for Radio Frequency (RF) spectrum by mobile apps like Facebook and Instagram, and by high data-rate communication protocols like 5G and the Internet of Things, has led to the issue of spectrum congestion, as radar systems have traditionally maintained the largest share of the RF spectrum. To resolve the spectrum congestion problem, it has become necessary for users of both types of systems to coexist within a finite spectrum allocation. However, this coexistence leads to other problems, such as an increased likelihood of mutual interference experienced by all users sharing the finite spectrum.

In this dissertation, we propose to address the problem of spectrum congestion via a two-step approach. The first step involves designing an optimal sparse spectrum allocation scheme for radar systems such that the radar range resolution can be maintained with a smaller resulting bandwidth, at the cost of degraded sidelobe performance. The second step involves designing radar waveforms that possess good spectral containment by expanding the framework of Polyphase-Coded Frequency Modulated (PCFM) waveforms to higher-order representations, so that these waveforms mitigate the interference experienced by other systems coexisting within the same band.
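
For reference, in a standard first-order PCFM formulation (written here from general knowledge of PCFM, not from this dissertation), the transmitted waveform is a constant-envelope, continuous-phase signal built by integrating a filtered train of phase-change impulses:

    \[
      s(t) = \exp\!\big(j\,\phi(t)\big), \qquad
      \phi(t) = \phi_0 + \int_0^{t} \Big( g * \sum_{n=1}^{N} \alpha_n\,
                \delta\big(\cdot - (n-1)T_p\big) \Big)(\tau)\, d\tau,
    \]

where the \alpha_n are the phase changes of an underlying polyphase code, T_p is the chip duration, and g(t) is a unit-area shaping filter supported on [0, T_p]. The continuous phase is what gives the waveform its good spectral containment, and the higher-order representations proposed here extend this structure.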