Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without a structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed and non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the long-standing problem of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5 times faster query processing and significantly reduced storage requirements compared to large-scale tracking.
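To make the hierarchical idea concrete, the sketch below pairs a counting Bloom filter (which supports deletion without rebuilding the structure) with a simple two-level hierarchy. This is an illustrative reconstruction, not the DHBF implementation from the thesis; the bucket-partitioning rule and all parameters are invented for the example.

```python
import hashlib

class CountingBloomFilter:
    """Counting Bloom filter: supports insert, query, and delete
    by keeping a counter per slot instead of a single bit."""
    def __init__(self, size=4096, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counts = [0] * size

    def _indices(self, item):
        # Derive k slot indices from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def insert(self, item):
        for idx in self._indices(item):
            self.counts[idx] += 1

    def delete(self, item):
        for idx in self._indices(item):
            if self.counts[idx] > 0:
                self.counts[idx] -= 1

    def query(self, item):
        return all(self.counts[idx] > 0 for idx in self._indices(item))

class HierarchicalBloomFilter:
    """Two-level hierarchy: a root filter over all items plus per-bucket
    child filters, so a query touches a child only when the root matches."""
    def __init__(self, num_buckets=16):
        self.root = CountingBloomFilter()
        self.children = [CountingBloomFilter(size=1024)
                         for _ in range(num_buckets)]
        self.num_buckets = num_buckets

    def _bucket(self, item):
        # Hypothetical partition rule; a real system would bucket by
        # template features, not a generic hash.
        return hash(item) % self.num_buckets

    def insert(self, item):
        self.root.insert(item)
        self.children[self._bucket(item)].insert(item)

    def delete(self, item):
        self.root.delete(item)
        self.children[self._bucket(item)].delete(item)

    def query(self, item):
        # The root acts as a cheap pre-filter before any child is checked.
        return (self.root.query(item)
                and self.children[self._bucket(item)].query(item))
```

Because counters rather than bits are stored, deletion simply decrements the affected slots, which is the property that lets such a structure support updates without a rebuild.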


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential of existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection using different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is induced by the interaction between the forward-propagating optical signal and radial acoustic standing waves resonating between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. Because the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
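The de-chirping arithmetic behind joint range and velocity recovery can be sketched generically. This is the standard triangular-chirp FMCW relation, not the specific receiver implementation described above, and the sign convention is an assumption:

```python
C = 3e8  # speed of light, m/s

def fmcw_range_velocity(f_up, f_down, chirp_slope, wavelength):
    """Recover target range and radial velocity from the beat frequencies
    of the up- and down-chirp segments of a triangular FMCW waveform.

    f_up, f_down : beat frequencies (Hz) during up- and down-chirp
    chirp_slope  : chirp rate (Hz/s), i.e. bandwidth / sweep time
    wavelength   : optical carrier wavelength (m)
    Assumed sign convention: positive velocity = target approaching.
    """
    f_range = (f_up + f_down) / 2     # range-induced beat component
    f_doppler = (f_down - f_up) / 2   # Doppler component
    distance = C * f_range / (2 * chirp_slope)
    velocity = wavelength * f_doppler / 2
    return distance, velocity
```

For example, a 1 GHz sweep over 1 ms (slope 1e12 Hz/s) and a stationary target at 150 m would produce equal up- and down-chirp beat frequencies of 1 MHz; note that the beat frequency, not the full chirp bandwidth, sets the receiver bandwidth requirement, which is the simplification the abstract points out.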

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices


XUN WU

A Global Discretization Approach to Handle Numerical Attributes as Preprocessing

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Discretization is a common technique for handling numerical attributes in data mining; it divides continuous values into several intervals by defining multiple thresholds. Decision tree learning algorithms, such as C4.5 and random forests, are able to deal with numerical attributes by applying discretization and transforming them into nominal attributes based on an impurity-based criterion, such as information gain or Gini gain. However, a considerable number of distinct values inevitably fall into the same interval after discretization, so some of the information carried by the original continuous values is lost.
In this thesis, we propose a global discretization method that keeps the information within the original numerical attributes by expanding each of them into multiple nominal attributes, one for each candidate cut-point value. The discretized data set, which includes only nominal attributes, is derived from the original data set. We analyzed the problem by applying two decision tree learning algorithms, C4.5 and random forests, to each of twelve pairs of data sets (original and discretized) and evaluating the performance (prediction accuracy) of the obtained classification models in the Weka Experimenter. This is followed by two separate Wilcoxon tests (one per learning algorithm) to decide whether the differences between the paired data sets are statistically significant. Results of both tests indicate no clear difference in performance between the discretized data sets and the original ones.
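The expansion step can be sketched as follows: each candidate cut point (a midpoint between consecutive distinct values) becomes one binary nominal attribute, so the ordering information of the numeric attribute is preserved. This is an illustrative reconstruction, not the thesis code, and the attribute-naming scheme is invented:

```python
def candidate_cutpoints(values):
    """Candidate cut points: midpoints between consecutive distinct values."""
    distinct = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]

def expand_attribute(values, name="attr"):
    """Expand one numeric attribute into binary (nominal) attributes,
    one per candidate cut point. Returns the new attribute names and
    one row of yes/no values per original case."""
    cuts = candidate_cutpoints(values)
    header = [f"{name}<={c}" for c in cuts]
    rows = [[("yes" if v <= c else "no") for c in cuts] for v in values]
    return header, rows
```

For a column with values 1.0, 2.0, 3.0, the cut points are 1.5 and 2.5, yielding two nominal attributes `attr<=1.5` and `attr<=2.5`; every distinct original value still maps to a distinct combination of yes/no answers, which is the property the thesis exploits.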


YUFEI CHENG

Future Internet Routing Design for Massive Failures and Attacks

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li
Gary Minden
Michael Vitevitch

Abstract

With the increasing frequency of natural disasters and intentional attacks that challenge optical networks, vulnerability to cascading and regionally correlated challenges is escalating. Given the high complexity and heavy traffic load of optical networks, such correlated challenges can severely damage reliable network communication. We start our research by proposing a critical-region identification mechanism and studying different vulnerability scales using real-world physical network topologies. We further propose geographical diversity and incorporate it into a new graph resilience metric, cTGGD (compensated Total Geographical Graph Diversity), which is capable of characterizing and differentiating the resilience levels of different physical networks. We propose the path geodiverse problem (PGD) and two heuristics that solve it with lower complexity than the optimal algorithm. The geodiverse paths are optimized with a delay-skew optimization formulation for optimal traffic allocation. We implement GeoDivRP in ns-3 to employ the optimized paths and demonstrate their effectiveness compared to OSPF Equal-Cost Multi-Path (ECMP) routing in terms of both throughput and overall link utilization. From the attacker's perspective, we analyze the mechanisms attackers could use to maximize attack impact with a limited budget and demonstrate the effectiveness of different network restoration plans.
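As a toy illustration of what geographic path diversity measures, the sketch below computes the minimum pairwise distance between the interior (non-endpoint) nodes of two candidate paths; a larger value means a regional challenge is less likely to take down both paths at once. This is a simplified proxy, not the cTGGD metric or the PGD formulation:

```python
import math

def min_path_separation(interior_a, interior_b):
    """Minimum pairwise Euclidean distance between the interior nodes
    of two paths, given as lists of (x, y) coordinates. Endpoints are
    excluded by the caller, since geodiverse paths share them."""
    return min(math.dist(u, v) for u in interior_a for v in interior_b)
```

Two parallel paths whose interior nodes stay 3 km apart would score 3.0 under this proxy, whereas paths sharing any interior node score 0; a geodiverse routing objective pushes this separation above a chosen challenge radius.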


DARSHAN RAMESH

Configuration of Routing Protocols on Routers using Quagga

When & Where:


246 Nichols Hall

Committee Members:

Joseph Evans, Chair
Victor Frost
Glenn Prescott


Abstract

With the increasing number of devices being connected to the network, connecting those devices efficiently is very important. Routing protocols have evolved over time. I have used Mininet and Quagga to implement routing protocols in a topology with ten routers and eleven host machines. Initially, the basic configuration of the routers is performed to bring their interfaces administratively up and assign IP addresses. Static routes are configured on the routers using the Quagga zebra daemon. Because of the overhead observed, static routing is replaced with RIPv2 using the Quagga ripd daemon, and RIPv2 features such as MD5 authentication and split horizon are implemented. RIPv2 is then replaced with the OSPF routing protocol, and the differences between static and dynamic routing are observed. More complex OSPF configurations are implemented using the Quagga ospfd daemon, and the best route to neighboring routers is changed using the OSPF cost attribute. Next, the networks in the lab are assumed to belong to different autonomous systems, and BGP is implemented using the Quagga bgpd daemon. Routing updates are filtered using access lists, and the path to neighboring routers is changed using BGP attributes such as MED, weight, AS_PATH, and local_pref. Load balancing is also implemented, and the results are verified using traceroute and routing tables.
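As a concrete illustration of the kind of daemon configuration involved, the fragments below sketch hypothetical ospfd and bgpd stanzas in Quagga's configuration syntax. The interface names, addresses, AS numbers, and route-map name are invented examples, not the lab topology:

```
! ospfd.conf -- hypothetical fragment
interface eth0
 ip ospf cost 20
!
router ospf
 network 10.0.1.0/24 area 0
 network 10.0.2.0/24 area 0
!

! bgpd.conf -- hypothetical fragment
router bgp 65001
 neighbor 10.0.2.2 remote-as 65002
 neighbor 10.0.2.2 route-map SET-MED out
!
route-map SET-MED permit 10
 set metric 50
!
```

Raising `ip ospf cost` on an interface steers OSPF's best path away from it, and `set metric` in an outbound route-map advertises a MED to the BGP neighbor, mirroring the path-manipulation experiments described above.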


RUXIN XIE

Single-Fiber-Laser-Based Multimodal Coherent Raman System

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Shannon Blunt
Victor Frost
Carey Johnson

Abstract

A single-fiber-laser-based coherent Raman scattering (CRS) spectroscopy and microscopy system can automatically maintain frequency synchronization between the pump and Stokes beams, which dramatically simplifies the setup configuration. The Stokes frequency shift is generated by soliton self-frequency shift (SSFS) in a photonic crystal fiber. The impact of pulse chirping on the signal power of coherent anti-Stokes Raman scattering (CARS) and stimulated Raman scattering (SRS) has been investigated through theoretical analysis and experiment, and strategies for system optimization are discussed.
Our multimodal system provides measurement diversity among CARS, SRS, and photothermal imaging, which can be used for comparison and offers complementary information. The distributions of hemoglobin in human red blood cells and of lipids in sliced mouse brain samples have been imaged, and the frequency and power dependence of the photothermal signal is characterized.
Instead of using an intensity-modulated pump, the polarization-switched SRS method is applied to our system by switching the polarization of the pump. Based on the polarization dependence of the third-order susceptibility of the material, this method is able to separate the nonresonant photothermal signal from the resonant SRS signal. Red blood cells and sliced mouse brain samples were imaged to demonstrate the capability of the proposed technique; the results show that polarization-switched SRS removes most of the photothermal signal.


VENU GOPAL BOMMU

Performance Analysis of Various Implementations of Machine Learning Algorithms

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Luke Huan
Bo Luo


Abstract

Rapid development in technologies and database systems has resulted in the production and storage of large amounts of data. With such an enormous increase in data over the last few decades, data mining has become a useful tool for discovering the knowledge hidden in large data sets. Domain experts often use machine learning algorithms to find theories that would explain their data.
In this project we compare the Weka implementations of CART and C4.5 with their original implementations on different datasets from the University of California, Irvine (UCI) repository. The implementations are compared in terms of accuracy, decision tree complexity, and area under the ROC curve (AUC). Results from our experiments show that the decision tree complexity of C4.5 is much higher than that of CART, and that the original implementations of these algorithms perform slightly better than their corresponding Weka implementations in terms of accuracy and AUC.


SRI HARSHA KOMARINA

System Logging and Analysis using Time Series Databases

When & Where:


2001B Eaton Hall

Committee Members:

Joseph Evans, Chair
Prasad Kulkarni
Bo Luo


Abstract

Logging system information and metrics provides a valuable resource for monitoring a system for unusual activity and understanding the various factors affecting its performance. Though several tools are available to log and analyze a system locally, analyzing every system individually is inefficient and seldom effective in the case of hardware failure. Centralized access to this information aids system administrators in performing their operational tasks. Here we present a centralized logging solution for system logs and metrics using Time Series Databases (TSDBs). We provide reliable storage and efficient access to system information by storing parsed system logs and metrics in a TSDB. In this project, we develop a solution to store the system's default log stream, syslog, as well as system metrics such as CPU load, disk load, and network traffic load in a TSDB. We further extend our ability to monitor and analyze the data in the TSDB by using an open-source graphing tool.
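To make the storage step concrete, the sketch below formats one metric sample in InfluxDB-style line protocol (`measurement,tags fields timestamp`). The abstract does not name a specific TSDB, so the protocol choice, measurement name, and field names here are illustrative assumptions:

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Format one sample as an InfluxDB-style line-protocol record:
    measurement,tag1=v1,... field1=v1,... timestamp_ns
    Tags and fields are sorted for a deterministic output."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"
```

A collector on each host would emit such lines for CPU, disk, and network samples and ship them to the central TSDB, where a graphing tool can query them by host tag and time range.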


EVAN AUSTIN

Theorem Provers as Libraries — An Approach to Formally Verifying Functional Programs

When & Where:


246 Nichols Hall

Committee Members:

Perry Alexander, Chair
Arvin Agah
Andy Gill
Prasad Kulkarni
Erik Van Vleck

Abstract

Property-directed verification of functional programs tends to take one of two paths. 
First, is the traditional testing approach, where properties are expressed in the original programming language and checked with a collection of test data. 
Alternatively, for those desiring a more rigorous approach, properties can be written and checked with a formal tool; typically, an external proof system. 
This dissertation details a hybrid approach that captures the best of both worlds: the formality of a proof system paired with the native integration of an embedded, domain specific language (EDSL) for testing. 

At the heart of this hybridization is the titular concept -- \emph{a theorem prover as a library}. 
The verification capabilities of this prover, HaskHOL, are introduced to a Haskell development environment as a GHC compiler plugin. 
Operating at the compiler level provides for a comparatively simpler integration and allows verification to co-exist with the numerous other passes that stand between source code and program. 

The logical connection between language and proof library is formalized, and the open problems related to this connection are documented. 
Additionally, the resultant, novel verification workflow is applied to two major classes of problems, type class laws and polymorphic test cases, to judge the real-world feasibility of compiler-directed verification. 
These applications and formalization serve to position this work relative to existing work and to highlight potential, future extensions.


CAMERON LEWIS

Ice Shelf Melt Rates and 3D Imaging

When & Where:


317 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Chris Allen
Carl Leuschen
Fernando Rodriguez-Morales
Rick Hale

Abstract

Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne and satellite-based remote sensing systems can perform thickness measurements of ice shelves. Time-separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne and satellite-based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate-bandwidth VHF ice-penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra-wideband (UWB) UHF ice-penetrating radar to obtain finer-resolution data on ice shelves. These data reveal significant details about the basal interface, including the locations and depths of bottom crevasses and deviations from hydrostatic equilibrium. While our single-channel radar provides new insight into ice shelf structure, it images only a small swath of the shelf, which is assumed to represent the average behavior of the total shelf. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground-based multi-channel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider-swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.
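The hydrostatic-equilibrium assumption that altimeter-based studies rely on can be made concrete: equating the weight of a floating ice column with the weight of the seawater it displaces gives rho_ice * H = rho_water * (H - h_freeboard), so H = h_freeboard * rho_water / (rho_water - rho_ice). A minimal sketch, using typical textbook densities rather than values from this study:

```python
RHO_WATER = 1028.0  # seawater density, kg/m^3 (typical value)
RHO_ICE = 917.0     # ice-shelf ice density, kg/m^3 (typical value)

def thickness_from_freeboard(freeboard_m):
    """Infer floating ice-shelf thickness (m) from surface freeboard (m),
    assuming hydrostatic equilibrium:
        rho_ice * H = rho_water * (H - freeboard)."""
    return freeboard_m * RHO_WATER / (RHO_WATER - RHO_ICE)
```

With these densities, a 10 m freeboard implies roughly 93 m of total thickness; a small freeboard error is thus amplified nearly tenfold in the inferred thickness, which is why direct radar thickness measurements matter near grounding lines where the assumption itself also breaks down.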


RALPH BAIRD

Isomorphic Routing Protocol

When & Where:


250 Nichols Hall

Committee Members:

Victor Frost, Chair
Bo Luo
Hossein Saiedian


Abstract

A mobile ad-hoc network (MANET) routing algorithm defines the path packets take to reach their destination using measurements of attributes such as adjacency and distance. Graph theory is increasingly applied in many fields of research today to model the properties of data on a graph plane, and it is applied in networking to form structures from patterns of nodes. Conventional MANET protocols are often based on path measurements from wired network algorithms and do not implement mechanisms to mitigate route entropy, defined here as the progression of a converged path to a path-loss state as a result of increasing random movement. Graph isomorphism measures equality beginning with individual nodes and extending to sets of nodes and edges. In this research, the measurement of isomorphism is applied to form paths from an aggregate set of route inputs, such as adjacency, cardinality of impending nodes in a path, and network width. A routing protocol based on the presence of isomorphism in a MANET topology is then tested to measure its performance.


DAIN VERMAAK

Application of Half Spaces in Bounding Wireless Internet Signals for use in Indoor Positioning

When & Where:


246 Nichols Hall

Committee Members:

Joseph Evans, Chair
Jim Miller
Gary Minden


Abstract

The problem of outdoor positioning has been largely solved via the use of GPS. This thesis addresses the problem of determining position in areas where GPS is unavailable. No clear solution exists for indoor localization, and all approximation methods offer unique drawbacks. To mitigate the drawbacks, robust systems combine multiple complementary approaches. In this thesis, fusion of signals from wireless internet access points and inertial sensors was used to allow indoor positioning without the need for prior information regarding the surroundings. Implementation of the algorithm involved development of three separate systems. The first system combines inertial sensors on the Android Nexus 7 to form a step counter capable of providing marginally accurate initial measurements. Having achieved reliable initial measurements, the second system receives signal strength from nearby wireless internet access points, augmenting the sensor data in order to generate half-planes. The half-planes partition the available space and bound the possible region in which each access point can exist. Lastly, the third system addresses the tendency of the step counter to lose accuracy over time by using the recently established positions of the access points to correct flawed values. The resulting process forms a simple feedback loop.
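One plausible way to realize the half-plane construction is through pairwise signal-strength comparisons: if the RSSI from an access point is stronger at position p1 than at p2, then, under a monotone path-loss assumption, the access point lies on p1's side of the perpendicular bisector of the segment p1-p2. The sketch below follows that assumption and is not necessarily the thesis' exact construction:

```python
def half_plane_from_rssi(p_strong, p_weak):
    """Given two measurement positions where the signal was stronger at
    p_strong, return (a, b, c) such that the access point satisfies
    a*x + b*y <= c, i.e. it lies on p_strong's side of the perpendicular
    bisector of the segment (monotone path-loss assumption)."""
    (x1, y1), (x2, y2) = p_strong, p_weak
    a, b = x2 - x1, y2 - y1               # normal pointing toward p_weak
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2  # bisector passes through midpoint
    c = a * mx + b * my
    return a, b, c

def in_half_plane(plane, point):
    """Test whether a candidate access-point location satisfies the bound."""
    a, b, c = plane
    x, y = point
    return a * x + b * y <= c
```

Each new measurement pair adds one such constraint, so the feasible region for an access point shrinks as the user walks, which is how the access-point positions become accurate enough to feed back into step-counter corrections.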