Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Md Mashfiq Rizvee
Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli
Abstract
Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the longstanding inability of blockchain systems to authenticate noisy data.
Moreover, deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5× faster query processing and significantly reduced storage requirements compared to large-scale tracking.
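As a rough illustration of the hierarchical idea behind these architectures, the sketch below builds a two-level Bloom filter in Python: a root filter gives cheap early rejection, and per-group leaf filters narrow membership down. This is a minimal, generic sketch (the parameters, hashing scheme, and two-level layout are illustrative assumptions), not the DHBF evaluated in the thesis; in particular, deletion support would require counting filters, which are omitted here.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, [0] * m

    def _positions(self, item):
        # Derive k independent positions by salting one hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def query(self, item):
        return all(self.bits[p] for p in self._positions(item))

class HierarchicalBloomFilter:
    """Two-level hierarchy: a root filter for fast rejection of
    non-enrolled identities, plus per-group leaf filters that
    narrow membership down to candidate groups."""
    def __init__(self, groups):
        self.root = BloomFilter()
        self.leaves = {g: BloomFilter() for g in groups}

    def enroll(self, group, template):
        self.root.add(template)
        self.leaves[group].add(template)

    def query(self, template):
        if not self.root.query(template):   # cheap early rejection
            return None
        return [g for g, leaf in self.leaves.items() if leaf.query(template)]
```

Because Bloom filters store only bit positions derived from hashes, the structure never holds the raw template, which is the same property the on-chain commitments above rely on (at the cost of a small false-positive rate).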
Fatima Al-Shaikhli
Optical Measurements Leveraging Coherent Fiber Optics Transceivers
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu
Abstract
Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on intensity-modulated electrostriction response, (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.
Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves in the radial direction, resonating between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through the measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous multi-target range and velocity detection is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
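The de-chirping relation underlying FMCW ranging can be sketched numerically: a chirp of bandwidth B over duration T has slope S = B/T, and a target at range R produces a beat tone f_b = S · 2R/c after mixing with the local oscillator, which is why the receiver bandwidth only needs to cover the beat frequencies rather than the full chirp. The numbers below are illustrative, not the parameters of the system described above.

```python
C = 3.0e8  # speed of light, m/s

def beat_frequency(range_m, chirp_bw_hz, chirp_time_s):
    """Round-trip delay maps to a beat tone: f_b = S * 2R/c, S = B/T."""
    slope = chirp_bw_hz / chirp_time_s
    return slope * 2.0 * range_m / C

def range_from_beat(f_beat_hz, chirp_bw_hz, chirp_time_s):
    """Invert the de-chirped beat tone back to target range."""
    slope = chirp_bw_hz / chirp_time_s
    return f_beat_hz * C / (2.0 * slope)

# Illustrative chirp: 1 GHz bandwidth over 1 ms -> a 150 m target
# beats at only 1 MHz, far below the 1 GHz chirp bandwidth.
fb = beat_frequency(150.0, 1e9, 1e-3)
```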
In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 μs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.
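For context on the quoted spatial resolution, the ideal two-point resolution of an OFDR trace is c / (2 · n_g · Δf). With the 26 GHz chirp above and an assumed fiber group index of about 1.468 (an assumption; the abstract does not state it), this comes to roughly 3.9 mm, consistent with the reported ~5 mm once spectral windowing broadens the response.

```python
C = 3.0e8  # speed of light, m/s

def ofdr_resolution_m(chirp_bw_hz, group_index=1.468):
    """Ideal (rectangular-window) two-point spatial resolution of an
    OFDR trace: delta_z = c / (2 * n_g * delta_f)."""
    return C / (2.0 * group_index * chirp_bw_hz)

# 26 GHz chirp -> ~3.9 mm ideal resolution; windowing in practice
# broadens this toward the ~5 mm figure reported above.
```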
Past Defense Notices
ANDREW FARMER
HERMIT: Mechanized Reasoning during Compilation in the Glasgow Haskell Compiler
When & Where:
250 Nichols Hall
Committee Members:
Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Jim Miller
Chris Depcik
Abstract
It is difficult to write programs which are both correct and fast. A promising approach, functional programming, is based on the idea of using pure, mathematical functions to construct programs. With effort, it is possible to establish a connection between a specification written in a functional language, which has been proven correct, and a fast implementation, via program transformation.
When practiced in the functional programming community, this style of reasoning is still typically performed by hand, by either modifying the source code or using pen and paper. Unfortunately, performing such semi-formal reasoning by directly modifying the source code often obfuscates the program, and pen-and-paper reasoning becomes outdated as the program changes over time. Even so, this semi-formal reasoning prevails because formal reasoning is time-consuming and requires considerable expertise. Formal reasoning tools often only work for a subset of the target language, or require programs to be implemented in a custom language for reasoning.
This dissertation investigates a solution, called HERMIT, which mechanizes reasoning during compilation. HERMIT can be used to prove properties about programs written in the Haskell functional programming language, or transform them to improve their performance.
Reasoning in HERMIT proceeds in a style familiar to practitioners of pen-and-paper reasoning, and mechanization allows these techniques to be applied to real-world programs with greater confidence. HERMIT can also re-check recorded reasoning steps on subsequent compilations, enforcing a connection with the program as the program is developed.
HERMIT is the first system capable of directly reasoning about the full Haskell language. The design and implementation of HERMIT, motivated both by typical reasoning tasks and HERMIT's place in the Haskell ecosystem, is presented in detail. Three case studies investigate HERMIT's capability to reason in practice. These case studies demonstrate that semi-formal reasoning with HERMIT lowers the barrier to writing programs which are both correct and fast.
JAY McDANIEL
Design, Integration, and Miniaturization of a Multichannel Ultra-Wideband Snow Radar Receiver and Passive Microwave Components
When & Where:
129 Nichols
Committee Members:
Carl Leuschen, Chair
Stephen Yan
Prasad Gogineni
Abstract
To meet the demand for additional snow characterization from the Intergovernmental Panel on Climate Change (IPCC), a new “Airborne” Multichannel, Quad-Polarized 2-18 GHz Snow Radar has been proposed. Given the tight size and weight constraints of the airborne platforms deploying with the Naval Research Laboratory (NRL), integrated and miniaturized receivers are crucial for reducing the cost and size of future deployments.
A set of heterodyne microwave receivers was developed to enable snow thickness measurements from survey altitudes of 500 to 5000 feet while nadir-looking, and estimation of snow water equivalent (SWE) from polarimetric backscattered signals at a low elevation of 30 degrees off nadir. The individual receiver has undergone a five-fold size reduction with respect to the initial prototype design, while achieving a sensitivity of -125 dBm on average across the 2-18 GHz bandwidth, enabling measurements with a vertical range resolution of 1.64 cm in snow. A compact enclosure was designed to accommodate up to 18 individual receiver modules, allowing multichannel quad-polarized measurements over the entire 16 GHz bandwidth. The receiver bank was tested individually and with the entire system in a full multichannel loop-back measurement, using a 2.95 μs optical delay line, resulting in a beat frequency of 200 MHz with 20 dB range sidelobes. Because the data contain multi-angle, multi-polarization, and multi-frequency content, the number of free parameters in the SWE estimation can be significantly reduced.
Design equations have been derived and a new method for modeling Suspended Substrate Stripline (SSS) filters in ADS for rapid-prototyping has been accomplished. Two SSS filters were designed which include an Optimized Chebyshev SSS Low Pass Filter (LPF) with an 18 GHz cutoff frequency and a Broadside Coupled SSS High Pass Filter (HPF) with a 2 GHz cutoff frequency. Also, a 2-18 GHz three-port Transverse Electromagnetic (TEM) Mode Hybrid 8:1 power combiner was designed and modeled at CReSIS. This design will be integrated into the Vivaldi Dual Polarized antenna array with 8 active dual-polarized elements to implement a lightweight and compact array structure, eliminating cable and connector cost and losses.
VADIRAJ HARIBAL
Modelling of ATF-38143 P-HEMT Driven Resistive Mixer for VHF KNG P-150 Portable Radios
When & Where:
250 Nichols Hall
Committee Members:
Ron Hui, Chair
Chris Allen
Alessandro Salandrino
Abstract
FET resistive mixers play a key role in providing high linearity and low noise figure. HEMT technology with low threshold voltage has gained wide adoption in the mobile phone market and in millimeter-wave technologies. This project analyzes the operation of a down-conversion VHF FET resistive mixer designed using the ultra-low-noise ATF-38143 P-HEMT, which is widely used in KNG-P150 portable mobile radios manufactured by RELM Wireless Corporation. The mixer is designed to operate over an RF frequency range of 136-174 MHz at an IF frequency of 51.50 MHz. The Statz model has been used to simulate the P-HEMT under normal conditions. Transfer functions of the matching circuits at each port have been obtained using Simulink modelling, and the effect of changes in Q factor at the RF and IF ports has been considered. Analytical modelling of the mixer is performed, and simulated results are compared with experimental data obtained at a constant 5 dBm LO power. The IF transfer function has been modelled to closely match the practical circuit by applying adequate amplitude damping to the response of the LC circuits at the RF port, in order to provide the required IF bandwidth and conversion gain. The effects of stray capacitances and inductances have been neglected during the modelling, and the series resistances of the inductors at the RF and IF ports have been adjusted to match experimental results.
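The frequency plan implied by these numbers can be checked in a few lines: for a down-converting mixer, LO = RF − IF (low-side injection) or LO = RF + IF (high-side), with the image frequency a further IF beyond the LO. The abstract does not state the injection side, so low-side is an assumption in this sketch.

```python
IF_MHZ = 51.5  # IF frequency from the abstract, MHz

def lo_and_image(rf_mhz, low_side=True):
    """Return (LO, image) in MHz for a single-conversion plan.
    Low-side injection (LO below RF) is assumed by default."""
    if low_side:
        lo = rf_mhz - IF_MHZ
        image = lo - IF_MHZ   # image sits IF below the LO
    else:
        lo = rf_mhz + IF_MHZ
        image = lo + IF_MHZ   # image sits IF above the LO
    return lo, image

# Low edge of the band: RF = 136 MHz -> LO = 84.5 MHz, image = 33 MHz.
```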
MOHAMMED ALENAZI
Network Resilience Improvement and Evaluation Using Link Additions
When & Where:
246 Nichols Hall
Committee Members:
James Sterbenz, Chair
Victor Frost
Lingjia Liu
Bo Luo
Tyrone Duncan
Abstract
Computer networks increasingly provide services for most of our daily activities in education, business, health care, social life, and government. Publicly available computer networks are prone to targeted attacks and natural disasters that could disrupt normal operation and services. Building highly resilient networks is therefore an important aspect of their design and implementation. For existing networks, resilience against such challenges can be improved by adding more links. In fact, adding links to form a full mesh yields the most resilient network, but it incurs an infeasibly high cost. In this research, we investigate improving the resilience of real-world networks by adding a cost-efficient set of links. Finding an optimal set of links via exhaustive search is impractical for large networks. Instead, a greedy algorithm obtains a feasible solution by adding links that improve network connectivity, as measured by a graph robustness metric such as algebraic connectivity or total path diversity. We use a graph metric called flow robustness as a measure of network resilience. To evaluate the improved networks, we apply three centrality-based attacks and study the networks' resilience. The flow robustness results under these attacks show that the improved networks are more resilient than the non-improved networks.
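The greedy improvement loop can be sketched as follows. For brevity this sketch greedily maximizes flow robustness (the fraction of node pairs that remain connected) directly, whereas the thesis uses algebraic connectivity or total path diversity as the improvement objective; the structure of the loop is the same.

```python
from itertools import combinations

def flow_robustness(nodes, edges):
    """Fraction of node pairs that remain connected, computed
    from the sizes of the graph's connected components."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, connected_pairs = set(), 0
    for n in nodes:
        if n in seen:
            continue
        comp, stack = set(), [n]        # flood-fill one component
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        c = len(comp)
        connected_pairs += c * (c - 1) // 2
    total = len(nodes) * (len(nodes) - 1) // 2
    return connected_pairs / total

def greedy_add_links(nodes, edges, budget):
    """Greedily add, up to `budget` times, the non-edge whose
    addition most improves flow robustness."""
    edges = [tuple(sorted(e)) for e in edges]
    for _ in range(budget):
        candidates = [e for e in combinations(sorted(nodes), 2)
                      if e not in edges]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda e: flow_robustness(nodes, edges + [e]))
        edges.append(best)
    return edges
```

Each greedy step is quadratic in the number of candidate links, which is exactly why this heuristic is preferred over the exponential exhaustive search mentioned above.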
WENRONG ZENG
Content-Based Access Control
When & Where:
250 Nichols Hall
Committee Members:
Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei
Abstract
In conventional databases, the most popular access control model specifies policies explicitly, and manually, for each role of every user against each data object. In large-scale content-centric data sharing, such conventional approaches can be impractical due to the explosive growth of data and the sensitivity of data objects. Moreover, conventional database access control policies fail when the semantic content of data is expected to play a role in access decisions. Users are often over-privileged, and ex post facto auditing is enforced to detect misuse of privileges. Unfortunately, it is usually difficult to reverse the damage, as (potentially large amounts of) data have been disclosed already. In this dissertation, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, the CBAC model makes access control decisions automatically, based on the content similarity between user credentials and data content. In CBAC, each user is allowed by a meta-rule to access "a subset" of the designated data objects of a content-centric database, where the boundary of the subset is dynamically determined by the textual content of the data objects. We then present an enforcement mechanism for CBAC that exploits Oracle's Virtual Private Database (VPD) to implement row-wise access control and to prevent data objects from being abused by unnecessary access admission. To further improve performance, we introduce a content-based blocking mechanism that improves the efficiency of CBAC enforcement and reveals a more relevant portion of the data objects than using user credentials and data content alone. We also utilize several tagging mechanisms for more accurate textual content matching on short text snippets (e.g., short VarChar attributes), extracting topics beyond pure word occurrences to represent the content of data. With tagging, content similarity is calculated not purely from word occurrences but from the semantic topics underlying the text. Experimental results show that CBAC makes accurate access control decisions with a small overhead.
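A minimal sketch of the content-similarity decision at the heart of CBAC, using plain bag-of-words cosine similarity. The tokenization, threshold value, and example texts are illustrative assumptions; the dissertation's matching additionally uses semantic topic tags, which are not modeled here.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two text snippets."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cbac_decision(user_credentials, record_text, threshold=0.2):
    """Grant access when the record's content is similar enough to
    the user's credential text (the meta-rule boundary)."""
    return cosine_similarity(user_credentials, record_text) >= threshold
```

In the enforcement mechanism described above, a predicate like this would be evaluated per row (e.g., via VPD), so each user sees only the dynamically determined subset of records.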
RANJITH KRISHNAN
The Xen Hypervisor: Construction of a Test Environment and Validation by Performing Performance Evaluation of Native Linux versus Xen Guests
When & Where:
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Bo Luo
Heechul Yun
Abstract
Modern computers are powerful enough to comfortably run multiple operating systems at the same time. Enabling this is the Xen hypervisor, an open-source tool that is one of the most widely used system virtualization solutions on the market. Xen enables guest virtual machines to run at near-native speeds using a concept called paravirtualization. The primary goal of this project is to construct a development/test environment in which we can investigate the different types of virtualization Xen supports. We start on a base of Fedora, onto which Xen is built and installed. Once Xen is running, we configure both paravirtualized and hardware-virtualized guests.
The second goal of the project is to validate the constructed environment through a performance evaluation. Various performance benchmarks are run on native Linux, the Xen host, and the two important types of Xen guests. As expected, our results show that the performance of the Xen guest machines is close to native Linux. We also see why virtualization-aware paravirtualization performs better than hardware virtualization, which runs without any knowledge of the underlying virtualization infrastructure.
JUSTIN METCALF
Signal Processing for Non-Gaussian Statistics: Clutter Distribution Identification and Adaptive Threshold Estimation
When & Where:
129 Nichols
Committee Members:
Shannon Blunt, Chair
Luke Huan
Lingjia Liu
Jim Stiles
Tyrone Duncan
Abstract
We examine the problem of determining a decision threshold for the binary hypothesis test that naturally arises when a radar system must decide if there is a target present in a range cell under test. Modern radar systems require predictable, low, constant rates of false alarm (i.e. when unwanted noise and clutter returns are mistaken for a target). Measured clutter returns have often been fitted to heavy tailed, non-Gaussian distributions. The heavy tails on these distributions cause an unacceptable rise in the number of false alarms. We use the class of spherically invariant random vectors (SIRVs) to model clutter returns. SIRVs arise from a phenomenological consideration of the radar sensing problem, and include both the Gaussian distribution and most commonly reported non-Gaussian clutter distributions (e.g. K distribution, Weibull distribution).
We propose an extension of a prior technique called the Ozturk algorithm. The Ozturk algorithm generates a graphical library of points corresponding to known SIRV distributions. These points are generated from linked vectors whose magnitude is derived from the order statistics of the SIRV distributions. Measured data is then compared to the library and a distribution is chosen that best approximates the measured data. Our extension introduces a framework of weighting functions and examines both a distribution classification technique as well as a method of determining an adaptive threshold in data that may or may not belong to a known distribution. The extensions are then compared to neural networking techniques. Special attention is paid to producing a robust, adaptive estimation of the detection threshold. Finally, divergence measures of SIRVs are examined.
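The SIRV product form makes simulation straightforward: draw a texture τ from a mixing distribution and scale a Gaussian speckle sample by √τ; a Gamma-distributed texture yields K-distributed clutter. The sketch below pairs this with a crude order-statistic threshold estimate; it is a generic illustration of the model class, not the weighted Ozturk extension developed in the thesis, and all parameter values are illustrative.

```python
import random

def sirv_k_samples(n, shape=0.5, scale=1.0, seed=1):
    """Draw n real-valued K-distributed clutter samples via the SIRV
    product form x = sqrt(tau) * g, with texture tau ~ Gamma(shape,
    scale) modulating a zero-mean Gaussian speckle component g."""
    rng = random.Random(seed)
    return [(rng.gammavariate(shape, scale) ** 0.5) * rng.gauss(0.0, 1.0)
            for _ in range(n)]

def empirical_threshold(samples, pfa=0.01):
    """Order-statistic threshold: the (1 - pfa) quantile of the
    measured amplitudes, usable as an adaptive detection threshold
    when the underlying distribution is unknown."""
    amps = sorted(abs(x) for x in samples)
    idx = min(len(amps) - 1, int((1.0 - pfa) * len(amps)))
    return amps[idx]
```

The heavy Gamma texture (shape < 1) is what produces the long amplitude tail: a threshold fit to Gaussian statistics would be exceeded far more often than the designed false-alarm rate, which is the core problem the abstract describes.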
ALHANOOF ALTHNIAN
Evolutionary Learning of Goal-Oriented Communication Strategies in Multi-Agent Systems
When & Where:
246 Nichols Hall
Committee Members:
Arvin Agah, Chair
Jerzy Grzymala-Busse
Prasad Kulkarni
Bo Luo
Sara Kieweg
Abstract
Multi-agent systems are a common paradigm for building distributed systems in domains such as networking, health care, swarm sensing, robotics, and transportation. Performance goals can vary from one application to another according to the domain's specifications and requirements. Moreover, performance goals can vary over the course of task execution. For example, agents may initially be interested in completing the task as fast as possible, but if their energy hits a specific level while still working on the task, they might then need to switch their goal to minimizing energy consumption. Previous studies in multi-agent systems have observed that varying the type of information that agents communicate, such as goals and beliefs, has a significant impact on the performance of the system with respect to different, usually conflicting, performance metrics, such as speed of solution, communication efficiency, and travel distance/cost. Therefore, when designing a communication strategy for a multi-agent system, it is unlikely that one strategy can perform well with respect to all performance metrics. Yet it is not clear in advance which strategy or communication decisions will be best with respect to each metric. Previous approaches to communication decisions in multi-agent systems either manually design one or more fixed communication strategies, extend agents' capabilities and use heuristics, or learn a strategy with respect to a single predetermined performance goal. To address this issue, this research introduces a goal-oriented communication strategy, where communication decisions are determined based on the desired performance goal. This work proposes an evolutionary approach for learning a goal-oriented communication strategy in multi-agent systems. The approach enables learning an effective communication strategy with respect to simple or complex measurable performance goals.
The learned strategy will determine what, when, and to whom the information should be communicated during the course of task execution.
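A toy version of the evolutionary loop can be sketched as follows: a strategy is encoded as a bit-vector of communication decisions, and a user-supplied fitness function encodes the chosen performance goal. The representation, genetic operators, and parameter values here are illustrative assumptions, not the thesis's actual encoding or simulation environment.

```python
import random

def evolve_strategy(fitness, genome_len=8, pop_size=30,
                    generations=60, seed=0):
    """Minimal genetic algorithm. A genome is a bit-vector of
    communication decisions (e.g. bit i = 'share information type i'),
    scored by a performance-goal fitness function (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children              # elitist replacement
    return max(pop, key=fitness)

# Toy performance goal: communicate exactly the first three
# information types (negated Hamming distance to a target policy).
target = [1, 1, 1, 0, 0, 0, 0, 0]
best = evolve_strategy(lambda g: -sum(x != t for x, t in zip(g, target)))
```

Swapping in a different fitness function, e.g. one penalizing message count instead, reuses the same loop, which is the sense in which the learned strategy is goal-oriented.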
JASON GEVARGIZIAN
Executables from Program Slices for Java Programs
When & Where:
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Perry Alexander
Andy Gill
Abstract
Program slicing is a popular program decomposition and analysis technique that extracts only those program statements that are relevant to particular points of interest. Executable slices are program slices that are independently executable and that correctly compute the values in the slicing criteria. Executable slices can be used during debugging and to improve program performance through parallelization of partially overlapping slices.
While program slicing and the construction of executable slicers have been studied in the past, there are few acceptable executable slicers available, even for popular languages such as Java.
In this work, we provide an extension to the T. J. Watson Libraries for Analysis (WALA), an open-source Java application static analysis suite, to generate fully executable slices.
We analyze the problem of executable slice generation in the context of the capabilities provided and algorithms used by the WALA library. We then employ this understanding to augment the existing WALA static SSA slicer to efficiently track non-SSA data dependence, and couple this component with our executable slicer backend.
We evaluate our slicer extension and find that it produces accurate executable slices for all programs that fall within the limitations of the WALA SSA slicer itself. Our extension to generate executable program slices fulfills one of the requirements of our larger project: a Java application automatic partitioner and parallelizer.
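The backward data-dependence closure at the core of slicing can be illustrated on a toy straight-line program. This sketch tracks data dependence only (control dependence and the SSA machinery WALA provides are omitted), and the statement triples and variable names are invented for illustration.

```python
def backward_slice(statements, criterion_var):
    """Statements are (line_no, defs, uses) triples in execution
    order. Walk backward, keeping every statement that defines a
    variable currently relevant to the slicing criterion, and fold
    that statement's uses into the relevant set."""
    relevant = {criterion_var}
    slice_lines = []
    for line_no, defs, uses in reversed(statements):
        if defs & relevant:
            slice_lines.append(line_no)
            relevant = (relevant - defs) | uses
    return sorted(slice_lines)

# Toy program, slicing criterion: the value of r at line 5.
program = [
    (1, {"a"}, set()),   # a = input()
    (2, {"b"}, set()),   # b = input()
    (3, {"c"}, {"a"}),   # c = a + 1
    (4, {"d"}, {"b"}),   # d = b * 2     (irrelevant to r)
    (5, {"r"}, {"c"}),   # r = c * c     <- criterion
]
```

An *executable* slice additionally has to remain a compilable program that reproduces r's value, which is the harder property the WALA extension above provides.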
DAVID HARVIE
Targeted Scrum: Software Development Inspired by Mission Command
When & Where:
246 Nichols Hall
Committee Members:
Arvin Agah, Chair
Bo Luo
James Miller
Hossein Saiedian
Prajna Dhar
Abstract
Software engineering and mission command are two separate but similar fields, as both are instances of complex problem solving in environments with ever-changing requirements. Both fields have followed similar paths, from using industrial-age decomposition to deal with large problems to striving to be more agile and resilient. Our research hypothesis is that modifications to agile software development inspired by mission command can improve the software engineering process in terms of planning, prioritizing, and communicating software requirements and progress, as well as improving the overall software product. Targeted Scrum is a modification of Traditional Scrum based on three inspirations from Mission Command: End State, Line of Effort, and Targeting. These inspirations led to the introduction of the Product Design Meeting and to modifications of some current Scrum meetings and artifacts. We tested our research hypothesis using a semester-long undergraduate software engineering class. Student teams developed two software projects, one using Traditional Scrum and the other using Targeted Scrum. We then assessed how well each methodology assisted the software development teams in planning and developing the software architecture, prioritizing requirements, and communicating progress. We also evaluated the software product produced by each methodology. Targeted Scrum did better in assisting the teams with planning and prioritization of requirements. However, it had a negligible effect on improving the teams' external and internal communications. Finally, Targeted Scrum did not affect the product quality of the top-performing and worst-performing teams, though it did improve the product quality of teams in the middle of the performance spectrum.