Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.
Upcoming Defense Notices
Md Mashfiq Rizvee
Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli
Abstract
Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the long-standing difficulty of authenticating noisy data on a blockchain. Moreover, deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5x faster query processing and significantly reduced storage requirements compared to large-scale tracking baselines.
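The abstract above summarizes results rather than mechanics, but the counting-filter idea that lets a Bloom structure support deletion without a rebuild is compact enough to sketch. The Python below is a minimal illustration under assumed parameters; the filter size, hash count, and salted-SHA-256 indexing are choices made here for the example, not the DHBF design.

    import hashlib

    class CountingBloomFilter:
        """Minimal counting Bloom filter: supports insert, query, and delete
        without rebuilding the structure. The DHBF develops this idea
        hierarchically; parameters here are illustrative only."""

        def __init__(self, size=1 << 20, num_hashes=7):
            self.size = size
            self.num_hashes = num_hashes
            self.counters = [0] * size

        def _indexes(self, item: bytes):
            # Derive k hash positions from salted SHA-256 digests (an
            # assumed hashing scheme, not the one used in the thesis).
            for i in range(self.num_hashes):
                digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def insert(self, item: bytes):
            for idx in self._indexes(item):
                self.counters[idx] += 1

        def delete(self, item: bytes):
            # Valid only for items that were previously inserted.
            for idx in self._indexes(item):
                self.counters[idx] = max(0, self.counters[idx] - 1)

        def query(self, item: bytes) -> bool:
            # True means "probably enrolled" (false positives possible);
            # False is always correct.
            return all(self.counters[idx] > 0 for idx in self._indexes(item))

In a biometric setting, the inserted item would be a stable, non-invertible encoding of the template rather than a raw sample; a hierarchical arrangement of such filters is what allows noisy inputs to be accepted while the false-positive rate stays controlled.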
Fatima Al-Shaikhli
Optical Measurements Leveraging Coherent Fiber Optics Transceivers
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu
Abstract
Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent polarization-sensitive optical frequency domain reflectometer (P-OFDR) system.
Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves in the radial direction, resonating between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
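For reference, a Maxwellian envelope of the kind described has the standard single-parameter form

\[
L(f) \;\propto\; f^{2} \exp\!\left(-\frac{f^{2}}{2a^{2}}\right),
\]

where the shape parameter \(a\) is determined by the mode field radius \(w_0\) (the exact mapping \(a(w_0)\) is part of the thesis derivation and is not reproduced here). Fitting this one parameter to the measured loss-spectrum envelope is what makes single-parameter identification of the fiber type possible.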
We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous multi-target range and velocity detection is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
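The de-chirping arithmetic this relies on is the standard FMCW relation (generic textbook notation, not parameters from these experiments). A target at range \(R\) with radial velocity \(v\) produces a beat frequency

\[
f_b \;=\; \frac{2R}{c}\,\kappa \;\mp\; \frac{2v}{\lambda}, \qquad \kappa = \frac{B}{T},
\]

where \(\kappa\) is the chirp rate, \(B\) the chirp bandwidth, and \(T\) the chirp duration. With a triangular (up/down) chirp, the two beat frequencies decouple range and velocity, \(R = c\,(f_{\mathrm{up}} + f_{\mathrm{down}})/(4\kappa)\) and \(v = \lambda\,(f_{\mathrm{down}} - f_{\mathrm{up}})/4\), and the receiver bandwidth need only cover the largest beat frequency rather than the full chirp bandwidth \(B\).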
In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity of the chirp and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs for the launched optical signal, we measure relative birefringence vectors along the fiber.
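As a rough consistency check (the group index below is an assumed typical value for silica fiber, not a number from the defense), the standard OFDR two-point spatial resolution is

\[
\Delta z \;\approx\; \frac{c}{2\, n_g B} \;=\; \frac{3\times 10^{8}\ \text{m/s}}{2 \times 1.47 \times 26\times 10^{9}\ \text{Hz}} \;\approx\; 3.9\ \text{mm},
\]

which, allowing for windowing and processing overhead, is consistent with the approximately 5 mm resolution quoted above.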
Past Defense Notices
MARK CALNON
Assistive Robotics for the Elderly: Encouraging Trust and Acceptance Using Affective Body Language
When & Where:
2001B Eaton Hall
Committee Members:
Arvin Agah, Chair
Frank Brown
Jerzy Grzymala-Busse
Bo Luo
Richard Branham
Abstract
"Assistive robotics for the elderly has become a significant area of research, driven primarily by a rapidly aging global population. Between 2011 and 2050, the number of people aged 60 and over is expected to climb from 893 million to 2.4 billion. In addition to a rapidly aging global population, a growing shortage of caretakers has placed additional urgency on the search for alternative solutions.
Despite the potential benefits of assistive robotics, one significant hurdle that remains is designing robots that the elderly are willing to use. Not only must assistive robots be effective at monitoring and caring for the elderly, but they must also be acceptable to a wide range of elderly individuals. While a variety of factors can influence the acceptability of a robot, past research has focused primarily on the physical embodiment of the robot.
Social robotics, however, uses human-robot interactions to study the many social factors that can influence the acceptability of a robot, including affective behaviors, or behaviors that simulate personality and emotion. While the majority of research in affective behaviors has focused on facial expressions, these methods require sophisticated anthropomorphic representations, which are not generally preferred by the elderly.
However, in addition to facial expressions, body language can also be an effective communicator of emotions and personalities. This research will demonstrate the effectiveness of using non-verbal behaviors to simulate a variety of personalities and emotions on the Aldebaran Nao, as well as the impact these behaviors have on the elderly's trust and acceptance of the assistive robot. By adapting the personalities and emotions of an assistive robot both to the task it is performing and to individual elderly users, this research will ultimately enable assistive robots to perform as more effective caretakers for the elderly.
DAVID HARVIE
Targeted Scrum: Software Development Inspired by Mission Command
When & Where:
2001B Eaton Hall
Committee Members:
Arvin Agah, Chair
Bo Luo
Jim Miller
Hossein Saiedian
Prajna Dhar
Abstract
Software development has been and continues to be a difficult enterprise. As early as the 1960s, computer experts recognized the need to develop working, reliable software within budget and on time. One of the major obstacles to successful software development is the inevitability of changing requirements. Agile software development methods, such as Extreme Programming and Scrum, emerged in the 1990s as responses to constant change. However, agile software development methods have their own weaknesses. Two specific weaknesses of Scrum are a lack of initial planning and a lack of an overall architecture.
Military operations are another field that must deal with constantly changing requirements in complex environments. In response to this inescapable change, the modern military primarily employs mission command to direct operations. Mission command is the philosophy whereby a commander gives subordinates his or her intent and desired end state for an operation, and subordinates then have appropriate flexibility in how they operate to achieve that intent and end state. This research effort seeks to use inspirations from mission command to improve certain aspects of agile software development, namely Scrum, both in terms of the process and the product.
Specifically, this effort addresses the lack of initial planning in Scrum by adding a Product Design Meeting at the onset of the process. It also addresses the lack of an overall architecture using two artifacts derived from mission command: the product's end state and the lines of effort (LOEs) necessary to achieve that end state. The addition of the Product Design Meeting, product end state, and LOEs adds more formalism to an agile method. However, we hypothesize that the benefits of these techniques will offset any perceived loss of agility.
SAHANA RAGHUNANDAN
Analysis of Angle of Arrival Estimation Algorithms for Basal Ice Sheet Tomography
When & Where:
317 Nichols Hall
Committee Members:
John Paden, Chair
Shannon Blunt
Carl Leuschen
Abstract
One of the key requirements for deriving more realistic ice sheet models is a good set of basal measurements that enable accurate estimation of bed roughness and conditions. For this purpose, 3D tomography of the ice bed has been successfully implemented with the help of angle of arrival (AoA) estimation algorithms such as multiple signal classification (MUSIC) and maximum likelihood estimation (MLE) techniques. These methods have enabled fine resolution in the cross-track dimension using synthetic aperture radar (SAR) images obtained from single-pass multichannel data. This project analyzes and compares the results obtained from the spectral MUSIC algorithm, an alternating projection (AP) based MLE technique, and a relatively recent approach called the reiterative superresolution (RISR) algorithm. While the MUSIC algorithm is computationally more attractive than MLE, the latter's performance is known to be superior in the low signal-to-noise ratio regime. The RISR algorithm takes a completely different approach, using a recursive implementation of the minimum mean square error (MMSE) estimation technique instead of the sample covariance matrix (SCM) that is central to subspace-based algorithms. This renders the algorithm more robust in scenarios with very low sample support. The SAR-focused datasets provide a good case study for exploring the performance of the three techniques in the application of ice sheet bed elevation estimation.
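As a reference point for the comparison, spectral MUSIC itself is short to state. The Python sketch below uses generic uniform-linear-array notation (array geometry, names, and parameters are assumptions for illustration, not the actual tomography code); it shows the sample-covariance eigendecomposition and noise-subspace scan referred to above.

    import numpy as np

    def music_spectrum(snapshots, num_sources, angles_deg, d_over_lambda=0.5):
        """Spectral MUSIC pseudospectrum for a uniform linear array.
        snapshots: (num_sensors, num_snapshots) complex array."""
        num_sensors = snapshots.shape[0]
        # Sample covariance matrix (SCM).
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]
        # eigh returns eigenvalues in ascending order, so the noise
        # subspace spans the smallest (num_sensors - num_sources) vectors.
        _, vecs = np.linalg.eigh(R)
        noise = vecs[:, : num_sensors - num_sources]
        spectrum = []
        for theta in np.deg2rad(angles_deg):
            n = np.arange(num_sensors)
            steer = np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))
            # Peaks occur where the steering vector is (nearly)
            # orthogonal to the noise subspace.
            denom = np.real(steer.conj() @ noise @ noise.conj().T @ steer)
            spectrum.append(1.0 / denom)
        return np.array(spectrum)

MLE replaces this one-dimensional scan with a joint search over source angles (the alternating-projection trick reduces that to a sequence of one-dimensional searches), which is consistent with its better low-SNR behavior at higher computational cost.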
EHSAN HOSSEINI
Synchronization Techniques for Burst-Mode Continuous Phase Modulation
When & Where:
250 Nichols Hall
Committee Members:
Erik Perrins, Chair
Shannon Blunt
Lingjia Liu
Dave Petr
Tyrone Duncan
Abstract
Synchronization is a critical operation in digital communication systems; it establishes and maintains an operational link between the transmitter and the receiver. As digital modulation and coding schemes continue to advance, the synchronization task becomes increasingly challenging, since new standards require high-throughput functionality at low signal-to-noise ratios (SNRs). In this work, we address feedforward synchronization of continuous phase modulations (CPMs) using data-aided (DA) methods, which are best suited for burst-mode communications. In our transmission model, a known training sequence is appended to the beginning of each burst, which is then affected by additive white Gaussian noise (AWGN) and unknown frequency, phase, and timing offsets.
Based on our transmission model, we derive the optimum training sequence for DA synchronization of CPM signals using the Cramer-Rao bound (CRB), which is a lower bound on the estimation error variance. It is shown that the proposed sequence minimizes the CRB for all three synchronization parameters, and can be applied to the entire CPM family. We take advantage of the structure of the optimized training sequence in order to derive a maximum likelihood joint timing and carrier recovery algorithm. Moreover, a frame synchronization algorithm is proposed, and hence, a complete synchronization scheme is presented in this work.
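For reference, the bound being minimized has the standard estimation-theoretic form (generic notation; the CPM-specific Fisher information matrix is derived in the dissertation):

\[
\operatorname{var}(\hat{\theta}_i) \;\ge\; \big[\mathbf{F}^{-1}(\boldsymbol{\theta})\big]_{ii},
\qquad
F_{ij} \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln \Lambda(\mathbf{r};\boldsymbol{\theta})}{\partial \theta_i \,\partial \theta_j}\right],
\]

where \(\boldsymbol{\theta}\) collects the frequency, phase, and timing offsets and \(\Lambda\) is the likelihood function of the received burst. An optimum training sequence is one that drives the relevant diagonal entries of \(\mathbf{F}^{-1}\) to their minima simultaneously.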
The proposed training sequence and synchronization algorithm are extended to shaped-offset quadrature phase-shift keying (SOQPSK) modulation, which is being considered for next-generation aeronautical telemetry systems. Here, it is shown that the optimized training sequence outperforms the one defined in the draft telemetry standard in terms of estimation error variances. The overall bit error rate results suggest that a shorter optimized training sequence can be utilized such that the SNR loss is less than 0.5 dB relative to an ideal synchronization scenario.
MARTIN KUEHNHAUSEN
A Framework for Knowledge Derivation Incorporating Trust and Quality of Data
When & Where:
246 Nichols Hall
Committee Members:
Victor Frost, Chair
Luke Huan
Bo Luo
Gary Minden
Tyrone Duncan
Abstract
Today, across all major industries, gaining insight from data is seen as an essential part of business. However, while data gathering is becoming inexpensive and relatively easy, analyzing it and ultimately deriving knowledge from it are increasingly difficult. In many cases there is simply too much data, so important insights are hard to find. The problem is often not a lack of data but whether knowledge derived from it is trustworthy. This means distinguishing "good" from "bad" insights based on factors such as context and reputation. Still, modeling trust and quality of data is complex because of the various conditions and relationships in heterogeneous environments.
The new TrustKnowOne framework and architecture developed in this dissertation address these issues by describing an approach that fully incorporates trust and quality of data, with all their aspects, into the knowledge derivation process. This is based on Berlin, an abstract graph model we developed that can represent various approaches to trustworthiness and relationship assessment as well as decision-making processes. In particular, processing, assessment, and evaluation approaches are implemented as graph expressions that are evaluated on graph components modeling the data.
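Berlin is deliberately abstract, so concrete code can only caricature it. The toy Python below (all names, attributes, and weights are invented for illustration and are not the Berlin API) conveys the one idea that matters here: the assessment is an exchangeable expression evaluated over attributed graph components, so an analyst can swap the expression without touching the rest of the derivation pipeline.

    from dataclasses import dataclass

    @dataclass
    class SourceNode:
        # A graph component modeling one data source, with assumed
        # trust-related attributes on a 0..1 scale.
        name: str
        reputation: float
        freshness: float

    def trust_expression(node: SourceNode) -> float:
        # One possible assessment expression; the weighting is arbitrary
        # and exists only to show that the expression is pluggable.
        return 0.7 * node.reputation + 0.3 * node.freshness

    sources = [SourceNode("sensor-feed", 0.9, 0.8),
               SourceNode("forum-post", 0.3, 0.9)]
    ranked = sorted(sources, key=trust_expression, reverse=True)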
We have implemented and applied our framework to three complex scenarios using real data from public data repositories. As part of their evaluation, we highlighted how our approach exhibits both the formalization and the flexibility necessary to model each of the realistic scenarios. The implementation and evaluation of these scenarios confirm the advantages of the TrustKnowOne framework over current approaches.
YUANLIANG MENG
Building an Intelligent Knowledgebase of Brachiopod Paleontology
When & Where:
246 Nichols Hall
Committee Members:
Luke Huan, Chair
Brian Potetz
Bo Luo
Abstract
Science advances not only because of new discoveries, but also due to revolutionary ideas drawn from accumulated data. The quality of studies in paleontology, in particular, depends on the accessibility of fossil data. This research builds an intelligent system based on brachiopod fossil images and their descriptions published in the Treatise on Invertebrate Paleontology. The project is still under development, and some significant achievements are discussed here.
This thesis has two major parts. The first part describes the digitization, organization and integration of information extracted from the Treatise. The Treatise is in PDF format and it is non-trivial to convert large volumes into a structured, easily accessible digital library. Three important topics will be discussed: (1) how to extract data entries from the text, and save them in a structured manner; (2) how to crop individual specimen images from figures automatically, and associate each image with text entries; (3) how to build a search engine to perform both keyword search and natural language search. The search engine already has a web interface and many useful tasks can be done with ease.
Verbal descriptions are second-hand accounts of fossil images and thus have limitations. The second part of the thesis develops an algorithm to compare fossil images directly, without referring to textual information. Once similarities between fossil images are calculated, the results can be used in image search, fossil classification, and so on. The algorithm is based on deformable templates and utilizes expectation propagation to find the optimal deformation. Specifically, I superimpose a "warp" on each image. Each node of the warp encapsulates a vector of local texture features, and comparing two images involves two steps: (1) deform the warp to the optimal configuration, so that the energy function is minimized; and (2) based on the optimal configuration, compute the distance between the two images. Experimental results confirm that the method is reasonable and robust.
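To make the two-step comparison concrete, the sketch below shows one plausible form of the warp energy from step (1): a data term penalizing feature mismatch between corresponding nodes plus a smoothness term on neighboring displacements. It is a simplified toy (the feature layout, regularization weight, and discrete displacements are assumptions made here); the thesis minimizes its energy with expectation propagation rather than by direct evaluation.

    import numpy as np

    def warp_energy(features_a, features_b, displacement, smooth_weight=1.0):
        """Toy deformable-warp energy.
        features_a, features_b: (H, W, D) local texture features per node.
        displacement: (H, W, 2) integer node offsets mapping a onto b."""
        H, W, _ = features_a.shape
        data_term, smooth_term = 0.0, 0.0
        for i in range(H):
            for j in range(W):
                di, dj = displacement[i, j]
                ti = int(np.clip(i + di, 0, H - 1))
                tj = int(np.clip(j + dj, 0, W - 1))
                # Data term: feature mismatch between matched nodes.
                data_term += np.sum((features_a[i, j] - features_b[ti, tj]) ** 2)
                # Smoothness term: neighboring nodes should deform alike.
                if i + 1 < H:
                    smooth_term += np.sum((displacement[i, j] - displacement[i + 1, j]) ** 2)
                if j + 1 < W:
                    smooth_term += np.sum((displacement[i, j] - displacement[i, j + 1]) ** 2)
        return data_term + smooth_weight * smooth_term

The energy at the optimal configuration can then serve as the basis for the image distance in step (2).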
WILLIAM DINKEL
Instrumentation and Evaluation of Distributed Computations
When & Where:
246 Nichols Hall
Committee Members:
Victor Frost, Chair
Arvin Agah
Prasad Kulkarni
Abstract
Distributed computations are a very important aspect of modern computing, especially given the rise of distributed systems used for applications such as web search, massively multiplayer online games, financial trading, and cloud computing. When running these computations across several physical machines it becomes much more difficult to determine exactly what is occurring on each system at a specific point in time. This is due to each server having an independent clock, thus making event timestamps inherently inaccurate across machine boundaries. Another difficulty with evaluating distributed experiments is the coordination required to launch daemons, executables, and logging across all machines, followed by the necessary gathering of all related output data. The goal of this research is to overcome these obstacles and construct a single, global timeline of events from all servers.
We employ high-resolution clock synchronization to bring all servers' clocks to within microseconds of each other, as measured by a modified version of the Network Time Protocol implementation. Kernel- and user-level events with wall-clock timestamps are then logged during basic network socket experiments. These data are collected from each server, merged into a single dataset, sorted by timestamp, and plotted on a timeline. The entire experiment, from setup to teardown to data collection, is coordinated from a single server. The timeline visualizations provide a narrative of not only how packets flow between servers, but also how kernel interrupt handlers and other events shape an experiment's execution.
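Once the clocks agree to within microseconds, constructing the global timeline reduces to an n-way sorted merge of per-server logs. A minimal Python sketch (the tuple layout is an assumed format for illustration, not the project's actual log schema):

    import heapq

    def merge_event_logs(per_server_logs):
        """Merge locally sorted per-server event logs into one global
        timeline, assuming clocks are already synchronized. Each log is
        a list of (timestamp, server, event) tuples; heapq.merge
        performs an n-way merge without re-sorting."""
        return list(heapq.merge(*per_server_logs, key=lambda e: e[0]))

    timeline = merge_event_logs([
        [(1.000001, "srv-a", "send packet"), (1.000120, "srv-a", "irq exit")],
        [(1.000045, "srv-b", "recv packet")],
    ])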
DANIEL HEIN
Detecting Attack Prone Software Using Architecture and Repository Mined Change Metrics
When & Where:
2001B Eaton Hall
Committee Members:
Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Prasad Kulkarni
Reza Barati
Abstract
Billions of dollars are lost every year to successful cyber attacks that are fundamentally enabled by software vulnerabilities. Modern cyber attacks increasingly threaten individuals, organizations, and governments, causing service disruption, inconvenience, and costly incident response. Given that such attacks are primarily enabled by software vulnerabilities, this work examines whether existing change metrics, along with architectural modularity and maintainability metrics, can be used to predict modules and files that should be analyzed or tested further to excise vulnerabilities prior to release.
The problem addressed by this research is the residual vulnerability problem: vulnerabilities that evade detection and persist in released software. Many modern software projects exceed a million lines of code and are composed of reused components of varying maturity. The sheer size of modern software, along with the reuse of existing open source modules, complicates the questions of where to look, and in what order to look, for residual vulnerabilities. Prediction models based on various code and change metrics (e.g., churn) have shown promise as indicators of vulnerabilities at the file level.
This work investigates whether change metrics, along with architectural metrics quantifying modularity and maintainability, can be used to identify attack-prone modules. In addition to identifying or predicting attack-prone files, this work also examines prioritizing and ranking those predictions.
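As a generic illustration of this kind of prediction pipeline (the model, metrics, file names, and values below are invented for the example and are not the models evaluated in this work), one could train a classifier on historical per-file metrics and rank new files by predicted attack-proneness:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical per-file metrics:
    # [churn, num_authors, modularity, maintainability]
    X_train = np.array([[120, 5, 0.2, 0.4],
                        [ 10, 1, 0.8, 0.9],
                        [300, 9, 0.1, 0.3],
                        [ 40, 2, 0.7, 0.8]])
    y_train = np.array([1, 0, 1, 0])  # 1 = vulnerability later reported

    model = LogisticRegression().fit(X_train, y_train)

    files = ["net/parser.c", "ui/dialog.c"]
    X_new = np.array([[200, 7, 0.15, 0.35],
                      [ 15, 1, 0.75, 0.85]])
    # Rank files for further analysis by predicted attack-proneness.
    ranking = sorted(zip(files, model.predict_proba(X_new)[:, 1]),
                     key=lambda pair: pair[1], reverse=True)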
BEN PANZER
Estimating Geophysical Properties of Snow and Sea Ice from Data Collected by an Airborne, Ultra-Wideband Radar
When & Where:
317 Nichols Hall
Committee Members:
Carl Leuschen, Chair
Chris Allen
Prasad Gogineni
Fernando Rodriguez-Morales
Richard Hale
Abstract
Large-scale spatial observations of global sea ice thickness and distribution rely on multiple satellite-based altimeters. Laser altimeters, such as the GLAS instrument aboard ICESat-1 and the ATLAS instrument aboard ICESat-2, measure freeboard, the snow and ice height above mean sea level. Deriving sea-ice thickness from these data requires estimating the snow depth on the sea ice. Current means of estimating snow depth are climatological history, daily precipitation products, and/or data from passive microwave sensors such as AMSR-E. Radar altimeters, such as SIRAL aboard CryoSat-2, do not have sufficient vertical range resolution to resolve both the air-snow and snow-ice interfaces over sea ice. Additionally, there is significant ambiguity in the location of the peak return due to penetration into the snow cover. Regardless of the sensor, any error in snow-depth estimation amplifies sea-ice thickness errors due to the assumption of hydrostatic equilibrium used in deriving sea-ice thickness. There is clearly a need for an airborne sensor that provides spatially large-scale measurements of the snow cover in both polar regions to improve the accuracy of sea-ice thickness estimates and provide validation for the satellite-based sensors.
The Snow Radar was developed at the Center for Remote Sensing of Ice Sheets and has been deployed as part of NASA Operation IceBridge since 2009 to directly measure snow thickness over sea ice. The radar is an ultra-wideband, frequency-modulated continuous-wave radar, now operating over the frequency range of 2 GHz to 8 GHz and yielding a vertical range resolution of approximately 4 cm after post-processing. The radar has been shown to be capable of detecting snow depths over sea ice from 10 cm to more than 2 m, and results from the radar compare well with multiple in-situ measurements and passive-microwave measurements.
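As a rough consistency check on these numbers (the window factor and snow permittivity below are assumed typical values, not figures from the thesis), the range resolution of an FMCW radar in a medium of relative permittivity \(\varepsilon_s\) is

\[
\Delta R \;=\; \frac{k_w\, c}{2 B \sqrt{\varepsilon_s}},
\]

so with \(B = 6\) GHz (the 2 GHz to 8 GHz sweep), a window broadening factor \(k_w \approx 1.5\text{--}2\), and dry-snow permittivity \(\varepsilon_s \approx 1.5\text{--}2\), \(\Delta R\) falls in the 3 cm to 4 cm range, consistent with the approximately 4 cm quoted after post-processing.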
The focus of the proposed research is the estimation of useful geophysical properties of snow-covered sea ice beyond snow depth, along with subsequent refinement and validation of the snow-depth extraction. Geophysical properties of interest are: snow density and wetness, air-snow and snow-ice surface roughness, and sea ice temperature and salinity. Through forward modeling of the radar backscatter response and the corresponding inversion, large-scale estimation of these properties may be possible.
GOUTHAM SELVAKUMAR
Constructing an Environment and Providing a Performance Assessment of Android's Dalvik Virtual Machine on x86 and ARM
When & Where:
250 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Victor Frost
Xin Fu
Abstract
Android is one of the most popular operating systems (OS) for mobile touchscreen devices, including smartphones and tablet computers. Dalvik is a process virtual machine (VM) that provides an abstraction layer over the Android OS and runs the Java-based Android applications. The first goal of this project is to construct a development environment for conveniently investigating the properties of Android's Dalvik VM on contemporary x86 and ARM architectures. The normal development environment restricts the Dalvik VM to run on top of Android, and requires an updated Android image to be built and installed on the target device after any change to the Dalvik code. This update-build-install process unnecessarily slows down any Dalvik VM exploration. We have now discovered an (undisclosed) configuration that enables us to study the Dalvik VM as a stand-alone application on top of the Linux OS.
The second goal of this project is to understand the translation/compilation subsystem in the Dalvik VM, experiment with various modifications to determine the best translation parameters, and compare the Dalvik VM's just-in-time (JIT) compilation characteristics (such as quality of generated code and compilation time) on x86 and ARM systems with those of a state-of-the-art Java VM. As expected, we find that JIT compilation is able to significantly improve application performance over basic interpretation. Comparing Dalvik's generated code quality with the Java HotSpot VM, we observe that Dalvik's ARM target is much more mature than Dalvik-x86. However, Dalvik's simple trace-based compilation generates code quality that is much worse than HotSpot's. Finally, our experiments also reveal the most effective JIT compilation parameters for the Dalvik VM, and their effect on benchmark performance and memory usage.