Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, rapid population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without a structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the long-standing difficulty of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5 times faster query processing and significantly reduced storage requirements compared to large-scale tracking.
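
The DHBF construction itself is specified in the thesis; as a rough illustration of the one property emphasized above, namely supporting insertion, querying, and deletion without a structural rebuild, here is a minimal flat counting Bloom filter sketch in Python. All names and parameters are illustrative, and this sketch is neither hierarchical nor noise-tolerant:

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter: insert, query, and delete work
    without rebuilding the structure.  Illustrative only; the DHBF in
    the thesis is hierarchical and noise-tolerant, which this is not."""

    def __init__(self, size=1 << 16, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size  # counters instead of bits enable deletion

    def _indexes(self, item):
        # Derive num_hashes positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def insert(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def query(self, item):
        # True may be a false positive; False is always correct.
        return all(self.counters[idx] > 0 for idx in self._indexes(item))

    def delete(self, item):
        if self.query(item):
            for idx in self._indexes(item):
                self.counters[idx] -= 1
```

Replacing each bit with a small counter is what buys deletion: removing an element decrements its counters rather than clearing bits that other elements may share.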


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential of existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves in the radial direction, resonating between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
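
For reference, a Maxwellian envelope of the general form described above can be written as follows; the symbols here are illustrative, and the exact parameterization of the shape parameter in terms of the mode field radius is given in the dissertation:

```latex
% Maxwellian envelope with a single shape parameter a
f(\nu) \propto \frac{\nu^{2}}{a^{3}} \exp\!\left(-\frac{\nu^{2}}{2 a^{2}}\right)
```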

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
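
The de-chirped beat frequencies carry both range and Doppler information. As a sketch of the textbook recovery step for a triangular FMCW waveform (not the specific processing chain of the thesis, and with sign conventions chosen for illustration):

```python
C = 3e8  # speed of light, m/s

def fmcw_range_velocity(f_beat_up, f_beat_down, chirp_slope, wavelength):
    """Recover target range and radial velocity from the up- and
    down-chirp beat frequencies of a triangular FMCW waveform.
    Standard relations: the range-induced beat is 2*R*slope/c and the
    Doppler shift is 2*v/wavelength (illustrative sign convention)."""
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-induced beat, Hz
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler shift, Hz
    distance = C * f_range / (2.0 * chirp_slope)  # meters
    velocity = f_doppler * wavelength / 2.0       # m/s, toward the LiDAR
    return distance, velocity
```

Averaging the two beats cancels Doppler to isolate range; differencing cancels range to isolate Doppler, which is how a single triangular sweep resolves both quantities per target.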

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity of the chirp and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 μs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices


ASHWINI SHIKARIPUR NADIG

Statistical Approaches to Inferring Object Shape from Single Images

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Brian Potetz
Luke Huan
Jim Miller
Paul Selden

Abstract

Depth inference is a fundamental problem of computer vision with a broad range of potential applications. Monocular depth inference techniques, particularly shape from shading, date back to as early as the 1940s, when it was first used to study the shape of the lunar surface. Since then there has been ample research to develop depth inference algorithms using monocular cues. Most of these are based on physical models of image formation and rely on a number of simplifying assumptions that do not hold for real-world and natural imagery. Very few make use of the rich statistical information contained in real-world images and their 3D information. There have been a few notable exceptions, though. The study of the statistics of natural scenes has concentrated on outdoor natural scenes, which are cluttered. The statistics of scenes of single objects have been less studied, but single objects are an essential part of daily human interaction with the environment. This thesis focuses on studying the statistical properties of single objects and their 3D imagery, uncovering some interesting trends which can benefit shape inference techniques. I acquired two databases: the Single Object Range and HDR (SORH) database and the Eton Myers Database of single objects, including laser-acquired depth, binocular stereo, photometric stereo, and High Dynamic Range (HDR) photography. The fractal structure of natural images was previously well known and thought to be a universal property. However, my research showed that the fractal structure of single objects and surfaces is governed by a wholly different set of rules. Classical computer vision problems of binocular and multi-view stereo, photometric stereo, shape from shading, structure from motion, and others all rely on accurate and complete models of which 3D shapes and textures are plausible in nature, to avoid producing unlikely outputs.
Bayesian approaches are common for these problems, and hopefully the findings on the statistics of the shape of single objects from this work and others will both inform new and more accurate Bayesian priors on shape and enable more efficient probabilistic inference procedures.


STEVE PENNINGTON

Spectrum Coverage Estimation Using Large Scale Measurements

When & Where:


246 Nichols Hall

Committee Members:

Joseph Evans, Chair
Arvin Agah
Victor Frost
Gary Minden
Ronald Aust

Abstract

The work presented in this thesis explores the use of geographic data and geostatistical methods to estimate path loss for cognitive radio networks. Path loss models typically employed in this scenario use a general terrain type (i.e., urban, suburban, or rural) and possibly a digital elevation model to predict excess path loss over the free space model. Additional descriptive knowledge of the local environment can be used to make more accurate path loss predictions. This research focuses on the use of visible imagery, digital elevation models, and terrain classification systems for predicting localized propagation characteristics. A low-cost data collection platform was created and used to generate a sufficiently large spectrum measurement set for machine learning. A series of path loss models were fitted to the data using linear and nonlinear methods. These models were then used to create a radio environment map depicting estimated signal strength. All of the models created have good cross-validated prediction results when compared to existing path loss models, although some of the more flexible models had a tendency to overfit the data. A number of geostatistical models were fitted to the data as well.
These models have the advantage of not requiring the transmitter location in order to create a model. The geostatistical models performed very well when given a sufficient density of observations but were not able to generalize as well as some of the regression models. An analysis of the geographical data sets indicated that each had a significant measurable effect on path loss estimation, with the medium-resolution imagery and elevation data providing the greatest increase in accuracy. Finally, these models were compared to a number of existing path loss models, demonstrating a gain in usable spectrum for cognitive radio network use.
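
The thesis fits richer terrain- and imagery-aware models; as a point of comparison, the classic baseline they are measured against can be sketched as an ordinary least-squares fit of the log-distance path loss model (function and variable names are illustrative):

```python
import math

def fit_log_distance(distances_m, path_loss_db, d0=1.0):
    """Least-squares fit of the log-distance path loss model
    PL(d) = PL0 + 10*n*log10(d/d0), returning the reference loss PL0
    (dB at distance d0) and the path loss exponent n.  A simple
    baseline of the kind augmented with geographic data in the thesis."""
    xs = [10.0 * math.log10(d / d0) for d in distances_m]
    ys = list(path_loss_db)
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    # Simple linear regression: slope is the exponent n, intercept is PL0.
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    n_exp = num / den
    pl0 = mean_y - n_exp * mean_x
    return pl0, n_exp
```

Fitted exponents near 2 indicate free-space-like propagation, while values of 3 to 5 are typical of cluttered terrain, which is why terrain descriptors help refine the prediction.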


BENJAMIN EWY

Collaborative Approaches to Probabilistic Reasoning in Network Management

When & Where:


246 Nichols Hall

Committee Members:

Joseph Evans, Chair
Arvin Agah
Victor Frost
Gary Minden
Bozenna Pasik-Duncan

Abstract

Tactical networks, networks designed to facilitate command and control capabilities for militaries, have key attributes that differ from the commercial Internet. Characterizing, modeling, and exploiting our understanding of these differences is the focus of this research.
The differences between tactical and commercial networks can be found primarily in the areas of access bandwidth, access diversity, access latency, core latency, subnet distribution, and network infrastructure. In this work we characterize and model these differences. These key attributes affect research into issues such as peer-to-peer protocols, service discovery, and server selection among others, as well as the deployment of services and systems in tactical networks. Researchers traditionally struggle with measuring, analyzing, or testing new ideas on tactical networks due to a lack of direct access, and thus this characterization is crucial to evolving this research field. 
In this work we develop a topology generator that creates realistic tactical networks that can be visualized, analyzed, and emulated. 
Topological features including geographically constrained line-of-sight networks, high-density low-bandwidth satellite networks, and the latest high-bandwidth on-the-move networks are captured. All of these topological features can be mixed to create realistic networks for many different tactical scenarios. A web-based visualization tool is developed, as well as the ability to export topologies to the Mininet network virtualization environment.
Finally, state-of-the-art server selection algorithms are reviewed and found to perform poorly for tactical networks. We develop a collaborative algorithm tailored to the attributes of tactical networks, and utilize our generated networks to assess the algorithm, finding a reduction in utilized bandwidth and a significant reduction in client-to-server latency as key improvements.


MEENAKSHI MISHRA

Task Relationship Modeling in Multitask Learning with Applications to Computational Toxicity

When & Where:


246 Nichols Hall

Committee Members:

Luke Huan, Chair
Arvin Agah
Swapan Chakrabarti
Ron Hui
Zhou Wang

Abstract

Multitask learning is a learning framework that shares training information among multiple related tasks to improve the generalization error of each task. The benefits of multitask learning have been shown both empirically and theoretically. A number of fields benefit from multitask learning, including toxicology. However, current multitask learning algorithms make a key assumption: that all the tasks are related to each other in a similar fashion. Users often do not know which tasks are related and train all tasks together. This results in sharing of training information even among unrelated tasks. Training unrelated tasks together can cause negative transfer and deteriorate the performance of multitask learning. For example, consider the case of predicting the in vivo toxicity of chemicals at various endpoints from the chemical structure. Toxicities at the various endpoints are not all related. Since biological networks are highly complex, it is also not possible to predetermine which endpoints are related. Thus, training all the endpoints together may hurt overall performance. This proposal aims at developing algorithms that make use of task relationship models to further improve the generalization error and prevent transfer of information among unrelated tasks. The algorithms proposed here either learn the task relationships or utilize known task relationships in the learning framework. Further, these algorithms will be used to predict the toxicity of chemicals at various endpoints using the chemical structures and the results of multiple in vitro assays performed on these chemicals.


YINGYING MA

A Comparison of Two Discretization Options of the MLEM2 Algorithm

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Luke Huan
Prasad Kulkarni


Abstract

A rule set is a popular symbolic representation of knowledge derived from data. Rule induction is an important technique of data mining and machine learning. Many rule induction algorithms are widely used, such as LEM1, LEM2, and MLEM2. Some of these algorithms perform better on special data, e.g., on inconsistent data sets or data sets with missing attribute values. This work discusses the basic ideas of the MLEM2 algorithm, especially how it handles data sets with numeric attribute values. Additionally, a comparison of the performance of different discretization options of the MLEM2 algorithm is included.
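
The standard way MLEM2 handles a numeric attribute is to generate candidate cut points between consecutive distinct values and treat the resulting intervals as conditions. A minimal Python sketch of that idea (the names, the interval formatting, and the full rule-induction loop around it are illustrative, not the thesis implementation):

```python
def candidate_cutpoints(values):
    """Candidate cut points for a numeric attribute: midpoints between
    consecutive distinct sorted values (the usual MLEM2 convention)."""
    distinct = sorted(set(values))
    return [(a + b) / 2.0 for a, b in zip(distinct, distinct[1:])]

def intervals_for(values):
    """Each cut point c splits the attribute range into two interval
    conditions, values below c and values at or above c (illustrative
    labels; MLEM2 then selects among these blocks during induction)."""
    lo, hi = min(values), max(values)
    out = []
    for c in candidate_cutpoints(values):
        out.append((f"[{lo}..{c})", f"[{c}..{hi}]"))
    return out
```

The discretization options compared in the work differ in how such cut points are chosen and combined, but all start from this candidate set.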


FRANK MOLEY

Maintaining Privacy and Security of Personally Identifiable Information Data in a Connected System

When & Where:


280 Best

Committee Members:

Hossein Saiedian, Chair
Fengjun Li
Bo Luo


Abstract

The large data stores of Personally Identifiable Information (PII) in today's connected systems, coupled with the increased potential damage of identity theft, bring the need for architectures that provide secure collection, storage, and transmission of this data. This need has not yet been standardized in the industry in the way the Payment Card Industry (PCI) has standardized payment data security. At the same time, however, municipalities, states, and even countries are enacting legislation that requires business entities that store PII data to maintain adequate security of the data. The need has become clear for a set of processes, procedures, and systems that provide a framework for securely storing PII data. This project defines the lower-level datastore system and associated services for that PII data. It also outlines a network architecture prototype that provides segmented security zones, used to add more layers of security in a connected system.


KALYANI HARIDASYAM

AskMyNetwork: Finding Reliable Feedback and Reviews

When & Where:


280 Best

Committee Members:

Hossein Saiedian, Chair
Fengjun Li
Bo Luo


Abstract

We all consult online reviews before obtaining a product or service. However, not all reviews can be trusted. For example, in 2013, "Operation Clean Turf," a yearlong sting operation in New York State, caught 19 different companies writing fake reviews in online forums like Yelp for businesses that paid them. For my project, I have developed an application called AskMyNetwork. AskMyNetwork interfaces with Facebook to obtain feedback or input from a user's Facebook friends. The rationale for my project is that the feedback or inputs are from "friends" (personal friends, family members, or colleagues in a user's Facebook friends list) and can be trusted.

AskMyNetwork has four major components: Login, Search My Network, Ask My Network, and Notifications. Using the Login component, the user can log in to the application with Facebook credentials. Using the Search My Network component, the user can define search criteria (e.g., search for a restaurant in Kansas City) and search his or her Facebook data for relevant results. Using the Ask My Network component, the user can ask a group of friends a question about a product or service they would like an opinion on. The group of friends can be chosen either by name or by the friends' current location. Using the Notifications component, the user can view the responses to questions asked through AskMyNetwork.

I validated AskMyNetwork via a number of inquiries on topics such as restaurants, places to visit in a city, and the arts. The results of the validation were satisfactory.


MUHARREM ALI TUNC

LPTV-Aware Bit Loading and Channel Estimation in Broadband PLC for Smart Grid

When & Where:


246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Lingjia Liu
James Sterbenz
Atanas Stefanov

Abstract

Power line communication (PLC) has received steady interest over recent decades because of its economical use of existing power lines, and it is one of the communication technologies envisaged for Smart Grid infrastructure. However, power lines are not designed for data communication, and this brings unique challenges. In particular for broadband (BB) PLC, the channel exhibits linear periodically time-varying (LPTV) behavior synchronous to the AC mains cycle due to time-varying impedances, and impulsive noise due to switching events in the power line network is present in addition to background noise. In this work, we focus on two major aspects of an orthogonal frequency division multiplexing (OFDM) system for BB PLC LPTV channels: bit and power allocation, and channel estimation (CE).

For the problem of optimal bit and power allocation, we show that a power constraint averaged over many microslots can be exploited for further performance improvements through bit loading. Due to the matroid structure of the optimization problem, greedy-type algorithms are proven to be optimal for the new LPTV-aware bit and power loading. Next, two mechanisms are utilized to reduce the complexity of the optimal LPTV-aware bit loading and the peak microslot power levels: employing representative values from the microslot transfer functions, and power clipping.
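
The greedy family referred to above follows the classic Hughes-Hartogs pattern: repeatedly grant one more bit to the subcarrier whose incremental power cost is smallest. A minimal single-slot sketch under a simplified gap-free AWGN cost model (the thesis version additionally handles LPTV microslots and the averaged power constraint; names and parameters here are illustrative):

```python
def greedy_bit_loading(channel_gains, noise_power, total_power, max_bits=10):
    """Hughes-Hartogs-style greedy bit loading.  At each step, add one
    bit on the subcarrier with the smallest incremental power cost
    until the power budget is exhausted.  Going from b to b+1 bits on a
    gap-free AWGN subcarrier costs (2^(b+1) - 2^b) * N / |H|^2."""
    n = len(channel_gains)
    bits = [0] * n
    power_used = 0.0
    while True:
        best_k, best_cost = None, float("inf")
        for k in range(n):
            if bits[k] >= max_bits:
                continue
            cost = ((2 ** (bits[k] + 1) - 2 ** bits[k])
                    * noise_power / channel_gains[k] ** 2)
            if cost < best_cost:
                best_k, best_cost = k, cost
        if best_k is None or power_used + best_cost > total_power:
            break
        bits[best_k] += 1
        power_used += best_cost
    return bits, power_used
```

Because each incremental cost is nondecreasing in the bits already loaded, the greedy choice is globally optimal, which is the matroid-structure argument the abstract invokes.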

Next, we introduce a robust CE scheme with low overhead that addresses the respective drawbacks of block-type pilot arrangements (large estimation overhead) and decision-directed CE schemes (difficulty tracking sudden changes in the channel). A transform domain (TD) analysis approach is developed to determine the cause of changes in the channel estimates. The result of the TD analysis is then exploited in the proposed scheme to mitigate the effects of the LPTV channel and impulsive noise.

Our results indicate that the proposed reduced complexity LPTV-aware bit loading with power clipping algorithm performs close to the optimal scheme, and the proposed CE scheme based on TD analysis has low estimation overhead and is robust to changes in the channel and noise, making them good alternatives for BB PLC LPTV channels.


BRIAN CORDILL

Radar System Enhancement through High Fidelity Electromagnetic Modeling

When & Where:


129 Nichols

Committee Members:

Sarah Seguin, Chair
Shannon Blunt
Chris Allen
Jim Stiles
Mark Ewing

Abstract

Many of the innovative algorithms that permeate the field of array processing are based on a very simple signal model of an array. This simple, although powerful, model is at times a pale reflection of the complexities inherent in the physical world, and this model mismatch opens the door to performance degradation of any solution that the model underpins. This dissertation explores the impact of model mismatch on common array processing algorithms. Model mismatch is examined in two ways: first, by developing a blind array calibration routine that estimates model mismatch and incorporates that knowledge into the RISR direction-of-arrival estimation algorithm; second, by examining model mismatch between a transmitting and receiving antenna array and assessing the impact of this mismatch on prolific direction-of-arrival estimation algorithms. In both of these studies it is shown that engineers have traded algorithm performance for model simplicity, and that if we are willing to deal with the added complexity we can recapture that lost performance.


JOSHUA DAVIS

A Covert Channel Using Named Resources

When & Where:


246 Nichols Hall

Committee Members:

Victor Frost, Chair
Fengjun Li
Bo Luo


Abstract

A method of transmitting information clandestinely over a variety of network protocols is designed and discussed. A demonstrative implementation is created that utilizes the ubiquitous Hypertext Transfer Protocol (HTTP) and the World Wide Web. Key contributions include the use of access ordering to convey information and the modulation of transaction-level timing to emulate user behavior.
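
The access-ordering idea can be sketched in a few lines: if sender and receiver agree on a fixed set of n named resources, the permutation in which they are requested encodes log2(n!) bits. A minimal Python illustration using the factorial number system (illustrative only; the thesis implementation carries this over HTTP and additionally shapes request timing to emulate user behavior):

```python
from math import factorial

def encode_order(resources, payload):
    """Encode an integer payload as the ORDER in which a fixed set of
    named resources is requested.  A permutation of n resources
    carries log2(n!) bits of covert data."""
    items = sorted(resources)  # canonical ordering known to both ends
    n = len(items)
    assert 0 <= payload < factorial(n)
    order = []
    for i in range(n, 0, -1):
        # Peel off one factorial-base digit per position.
        idx, payload = divmod(payload, factorial(i - 1))
        order.append(items.pop(idx))
    return order

def decode_order(order):
    """Recover the integer payload from the observed request order."""
    items = sorted(order)
    payload = 0
    for i, resource in enumerate(order):
        idx = items.index(resource)
        payload += idx * factorial(len(order) - 1 - i)
        items.pop(idx)
    return payload
```

To an observer, each request individually looks like an ordinary resource fetch; only the sequence carries the payload, which is what makes the channel covert.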