Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.
Upcoming Defense Notices
Md Mashfiq Rizvee
Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli
Abstract
Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the long-standing problem of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5 times faster query processing and significantly reduced storage requirements compared to large-scale tracking.
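While the DHBF's exact hierarchical construction is beyond the scope of this notice, the core property it relies on (a probabilistic membership structure supporting insertion and deletion without a rebuild) can be sketched with a counting Bloom filter. The class and parameters below are illustrative, not the dissertation's implementation:

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter: supports insert, query, and delete
    without rebuilding the structure (counters replace single bits)."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counters = [0] * size

    def _indices(self, item):
        # Derive num_hashes independent indices from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def insert(self, item):
        for idx in self._indices(item):
            self.counters[idx] += 1

    def query(self, item):
        # May return a false positive, never a false negative.
        return all(self.counters[idx] > 0 for idx in self._indices(item))

    def delete(self, item):
        if self.query(item):
            for idx in self._indices(item):
                self.counters[idx] -= 1

bf = CountingBloomFilter()
bf.insert("template-0001")
print(bf.query("template-0001"))  # True
bf.delete("template-0001")
print(bf.query("template-0001"))  # False
```

A static Bloom filter stores single bits, so deletion would corrupt other entries; replacing bits with counters is what makes in-place updates possible.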
Fatima Al-Shaikhli
Optical Measurements Leveraging Coherent Fiber Optics Transceivers
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu
Abstract
Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential of existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent polarization-sensitive optical frequency domain reflectometer (P-OFDR) system.
Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves resonating radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape is specified by a single parameter determined by the mode field radius.
We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
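The de-chirping principle described above can be sketched numerically: mixing the delayed echo with an identically chirped local oscillator collapses the wideband chirp into a narrow beat tone whose frequency is proportional to range, which is why the receiver bandwidth can be far below the chirp bandwidth. A minimal simulation with illustrative parameters (not the experimental system's):

```python
import numpy as np

# Illustrative parameters (assumptions, not the experimental values)
c = 3e8          # speed of light (m/s)
B = 1e9          # chirp bandwidth: 1 GHz
T = 100e-6       # chirp duration
S = B / T        # chirp slope (Hz/s)
fs = 50e6        # receiver sampling rate, far below the 1 GHz chirp bandwidth
R = 30.0         # target range (m)
tau = 2 * R / c  # round-trip delay

t = np.arange(0, T, 1 / fs)
# De-chirping: mixing the delayed echo with the chirped local oscillator
# leaves a single beat tone at f_b = S * tau.
beat = np.cos(2 * np.pi * S * tau * t)

spectrum = np.abs(np.fft.rfft(beat))
f_b = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spectrum)]
R_est = c * f_b / (2 * S)
print(f"estimated range: {R_est:.1f} m")  # estimated range: 30.0 m
```

Here the beat tone sits at S * tau = 2 MHz, comfortably inside the 50 MHz receiver bandwidth even though the transmitted chirp spans 1 GHz.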
In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures chirp linearity and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By launching the optical signal with three mutually orthogonal SOPs, we measure relative birefringence vectors along the fiber.
Past Defense Notices
Ronald Andrews
Evaluating the Proliferation and Pervasiveness of Leaking Sensitive Data in the Secure Shell Protocol and in Internet Protocol Camera Frameworks
When & Where:
246 Nichols Hall
Committee Members:
Alex Bardas, Chair
Fengjun Li
Bo Luo
Abstract
In George Orwell's 1984, there is fear of what “Big Brother” knows, since even thoughts can be “heard”. Though we are not quite at that point, we should all be concerned about what data we transfer, both intentionally and unintentionally, and whether that data is being “leaked”. In this work, we consider the evolving landscape of IoT devices and the threat posed by the pervasive botnets that have been forming over the last several years. We examine two specific cases: the first is the practical application of a botnet system actively executing a man-in-the-middle attack against SSH, and the second leverages the same paradigm to eavesdrop on Internet Protocol (IP) cameras. For the latter case, we construct a web portal for interrogating IP cameras directly for information that they may be exposing.
Kevin Carr
Development of a Multichannel Wideband Radar Demonstrator
When & Where:
317 Nichols Hall, (Moore Conference Room)
Committee Members:
Carl Leuschen, Chair
Fernando Rodriguez-Morales
James Stiles
Abstract
With the rise of software-defined radios (SDRs) and the trend towards integrating more RF components into MMICs, the cost and complexity of multichannel radar development has gone down. High-speed RF data converters have seen continuous increases in both sampling rate and resolution, rendering a growing subset of components in an RF chain unnecessary. A recent development in this trend is the Xilinx RFSoC, which integrates multiple high-speed data converters into the same package as an FPGA. The Center for Remote Sensing of Ice Sheets (CReSIS) regularly upgrades its suite of sensor platforms, spanning from HF depth sounders to Ka-band altimeters. A radar platform was developed around the RFSoC to demonstrate the capabilities of the chip when acting as a digital backend and to evaluate its role in future radar designs at CReSIS. A new ultra-wideband (UWB) FMCW RF frontend was designed that consists of multiple transmit and receive modules operating at microwave frequencies with multi-GHz bandwidth. An antenna array was constructed out of printed-circuit elements to validate radar system performance. Firmware developed for the RFSoC enables radar features that will prove useful in future sensor platforms used for the remote sensing of snow, soil moisture, or crop canopies.
Ruturaj Kiran Vaidya
Implementing SoftBound on Binary Executables
When & Where:
2001 B Eaton Hall
Committee Members:
Prasad Kulkarni, Chair
Alex Bardas
Drew Davidson
Abstract
Though languages like C and C++ are known to be memory-unsafe, they are still used widely in industry because of their memory management features, low-level nature, and performance benefits. Also, as most systems software has been written in these languages, replacing them altogether with memory-safe languages is currently impossible. Memory safety violations remain commonplace, despite numerous attempts to conquer them using source code, compiler, and post-compilation based approaches. SoftBound is a compiler-based technique that enforces spatial memory safety for C/C++ programs. However, SoftBound depends on program information available in the high-level source code. The goal of our work is to develop a mechanism to efficiently and effectively implement a technique like SoftBound to provide spatial memory safety for binary executables. Our approach employs a combination of static analysis (using Ghidra) and dynamic instrumentation checks (using Pin). SoftBound is a pointer-based approach, which stores base and bound information per pointer. Our implementation determines the array and pointer access patterns statically using reverse engineering techniques in Ghidra. This static information is used by the Pin dynamic binary instrumentation tool to check the correctness of each load and store instruction at run time. Our technique works without any source code support, and no hardware or compiler alterations are needed. We evaluate the effectiveness, limitations, and performance of our implementation. Our tool detects spatial memory errors in about 57% of the test cases and induces about 6% average overhead over that caused by a minimal pintool.
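As a conceptual illustration of the pointer-based checking described above (written in Python purely for exposition; the actual tool instruments binaries with Pin), each pointer carries disjoint (base, bound) metadata, and every access is validated against it at run time:

```python
# Conceptual sketch of SoftBound-style spatial checking. All names here are
# illustrative; the real system tracks metadata for machine-level pointers.

memory = bytearray(64)   # stand-in for the process address space
metadata = {}            # pointer -> (base, bound), kept disjoint from data

def make_pointer(base, size):
    ptr = base
    metadata[ptr] = (base, base + size)   # base and one-past-the-end bound
    return ptr

def checked_load(ptr, offset):
    base, bound = metadata[ptr]
    addr = ptr + offset
    # The spatial-safety check: the access must stay within [base, bound).
    if not (base <= addr < bound):
        raise MemoryError(f"spatial memory violation at address {addr}")
    return memory[addr]

arr = make_pointer(base=16, size=8)   # an 8-byte "array" at address 16
checked_load(arr, 7)                  # in bounds: returns the byte at 23
try:
    checked_load(arr, 8)              # one past the end: the check fires
except MemoryError as e:
    print(e)                          # spatial memory violation at address 24
```

In the binary-only setting, the hard part is recovering the (base, size) pairs without source types, which is what the Ghidra-based static analysis supplies.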
Chinmay Ratnaparkhi
A comparison of data mining based on a single local probabilistic approximation and the MLEM2 algorithm
When & Where:
2001 B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Fengjun Li
Bo Luo
Abstract
Observational data produced in scientific experimentation and in day-to-day life is a valuable source of information for research. It can be challenging to extract meaningful inferences from large amounts of data. Data mining offers many algorithms to draw useful inferences from large pools of information based on observable patterns.
In this project, I implemented one such data mining algorithm for determining a single local probabilistic approximation, which also computes the corresponding ruleset, and compared it with two versions of the MLEM2 algorithm, which induce a certain rule set and a possible rule set, respectively. For experimentation, eight data sets with 35% missing values were used to induce the corresponding rulesets and classify unseen cases. Two different interpretations of missing values were used, namely lost values and do-not-care conditions. The k-fold cross-validation technique was employed with k = 10 to identify error rates in classification.
The goal of this project was to compare how accurately unseen cases are classified by the rulesets induced by each of the aforementioned algorithms. The error rate calculated from k-fold cross-validation was also used to observe how each interpretation of missing values affects the ruleset.
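The evaluation procedure above, k-fold cross-validation with k = 10, can be sketched as follows; the data set and classifiers here are toy stand-ins, not the project's rule-based classifiers:

```python
import random

def k_fold_error_rate(cases, classify, k=10, seed=0):
    """Estimate the classification error rate with k-fold cross-validation:
    split the cases into k folds, hold each fold out in turn as test data,
    and count misclassified unseen cases."""
    cases = cases[:]
    random.Random(seed).shuffle(cases)
    folds = [cases[i::k] for i in range(k)]
    errors = total = 0
    for i in range(k):
        test = folds[i]
        train = [c for j, fold in enumerate(folds) if j != i for c in fold]
        for attributes, label in test:
            if classify(train, attributes) != label:
                errors += 1
            total += 1
    return errors / total

# Toy example: 20 cases whose label depends on a single attribute, scored
# with a majority-class classifier (hypothetical, for illustration only).
data = [((x,), "yes" if x > 5 else "no") for x in range(20)]
majority = lambda train, attrs: max(
    set(lbl for _, lbl in train), key=[lbl for _, lbl in train].count)
print(k_fold_error_rate(data, majority, k=10))  # 0.3
```

Every case is used exactly once as an unseen test case, so the averaged error rate is a less optimistic estimate than testing on the training data itself.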
Govind Vedala
Digital Compensation of Transmission Impairments in Multi-Subcarrier Fiber Optic Transmission Systems
When & Where:
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Christopher Allen
Erik Perrins
Alessandro Salandrino
Carey Johnson
Abstract
Time and again, the fiber optic medium has proved to be the best means of transporting global data traffic, which is following an exponential growth trajectory. The rapid development of high-bandwidth applications over the past decade (virtual reality, 5G, and big data, to name a few) has resulted in a surge of research activities across the globe to maximize effective utilization of the available fiber bandwidth, which until then supported low-speed services like voice and low-bandwidth data traffic. To this end, higher-order modulation formats together with multi-subcarrier, superchannel-based fiber optic transmission systems have proved to enhance spectral efficiency and achieve multi-terabit-per-second data rates. However, spectrally efficient systems are extremely sensitive to transmission impairments stemming from both optical devices and the fiber itself. Therefore, such systems mandate the use of robust digital signal processing (DSP) to compensate and/or mitigate the undesired artifacts, thereby extending the transmission reach. The central theme of this dissertation is to propose and validate a few efficient DSP techniques that compensate specific impairments, as delineated in the next three paragraphs.
For short-reach applications, we experimentally demonstrate a digital compensation technique to undo semiconductor optical amplifier (SOA) and photodiode nonlinearity effects by digitally backpropagating the received signal through a virtual SOA with inverse gain characteristics, followed by an iterative algorithm to cancel the signal-signal beat interference arising from the photodiode. We characterize the phase dynamics of comb lines from a quantum-dot passively mode-locked laser based on a novel multi-heterodyne coherent detection technique. In the context of a multi-subcarrier, Nyquist pulse-shaped, superchannel transmission system with coherent detection, we demonstrate through measurements and numerical simulations an efficient phase noise compensation technique called “Digital Mixing” that operates using a shared pilot tone, exploiting the mutual phase coherence among the comb lines.
Finally, we propose and experimentally validate a practical pilot aided relative phase noise compensation technique for forward pumped distributed Raman amplified, digital subcarrier multiplexed coherent transmission systems.
Tong Xu
Real-time DSP-enabled digital subcarrier cross-connect (DSXC) for optical communication systems and networks
When & Where:
246 Nichols Hall
Committee Members:
Ron Hui, Chair
Christopher Allen
Esam Eldin Aly
Erik Perrins
Jie Han
Abstract
Elastic optical networking (EON) is intended to offer flexible channel wavelength granularity to meet the requirement of high spectral efficiency (SE) in today's optical networks. However, optical cross-connects (OXCs) and switches based on optical wavelength division multiplexing (WDM) are not flexible enough due to the coarse bandwidth granularity imposed by optical filtering. Thus, OXCs may not meet the requirements of many applications that need finer bandwidth granularities than that carried by an entire wavelength channel.
In order to achieve highly flexible and sufficiently fine bandwidth granularities, an electrical digital subcarrier cross-connect (DSXC) can be utilized in EON. As presented in this thesis, my research work focuses on the investigation and implementation of a real-time digital signal processing (DSP) enabled DSXC which can dynamically assign both bandwidth and power to each individual sub-wavelength channel, known as a subcarrier. This DSXC is based on digital subcarrier multiplexing (DSCM), a frequency division multiplexing (FDM) technique that multiplexes a large number of digitally created subcarriers on each optical wavelength. Compared with an OXC based on optical WDM, a DSXC based on DSCM offers much finer bandwidth granularity and greater flexibility for dynamic bandwidth allocation.
Based on a field-programmable gate array (FPGA) hardware platform, we have designed and implemented a real-time DSP-enabled DSXC which uses Nyquist FDM as the multiplexing scheme. For the first time, we demonstrated resampling filters for channel selection and frequency translation, which enabled real-time DSXC. This circuit-based DSXC supports flexible and fine data-rate subcarrier channel granularities, offering a low-latency data plane, transparency to modulation formats, and the capability of compensating transmission impairments in the digital domain. The experimentally demonstrated 8×8 DSXC makes use of a Virtex-7 FPGA platform, which supports any-to-any switching of eight subcarrier channels with mixed modulation formats and data rates. Digital resampling filters, which enable frequency selection and translation of multiple subcarrier channels, have much lower DSP complexity and reduced FPGA resource requirements (DSP slices used in the FPGA) in comparison to the traditional technique based on I/Q mixing and filtering.
We have also investigated the feasibility of using the distributed arithmetic (DA) architecture for real-time DSXC to completely eliminate the need for DSP slices in the FPGA implementation. For the first time, we experimentally demonstrated real-time frequency translation and channel selection based on the DA architecture on the same FPGA platform. Compared with resampling filters that leverage multipliers, the DA-based approach eliminates the need for DSP slices in the FPGA implementation and significantly reduces the hardware cost. In addition, by requiring only a few clock cycles, a DA-based resampling filter is significantly faster than a conventional FIR filter, whose overall latency is proportional to the filter order. The DA-based DSXC is therefore able to achieve not only improved spectral efficiency, programmability of multiple orthogonal subcarrier channels, and low hardware resource requirements, but also much reduced cross-connection latency when implemented on a real-time DSP hardware platform. This reduced cross-connect switching latency can be critically important for time-sensitive applications such as 5G mobile fronthaul, cloud radio access network (C-RAN), cloud-based robot control, tele-surgery, and network gaming.
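The basic frequency translation and channel selection that a DSXC performs can be sketched in NumPy using the traditional I/Q mixing-and-filtering approach mentioned above (the resampling-filter and DA designs are FPGA-oriented optimizations of this same operation); all parameters below are illustrative:

```python
import numpy as np

fs = 100e6                       # aggregate sampling rate (illustrative)
N = 4096
t = np.arange(N) / fs

# Two digitally created subcarriers, at 10 MHz and 25 MHz, on one wavelength
signal = np.exp(2j * np.pi * 10e6 * t) + np.exp(2j * np.pi * 25e6 * t)

# Select the 25 MHz subcarrier: mix it down to baseband (frequency
# translation), low-pass filter away the neighbor, then decimate
# (channel selection at a finer granularity than the whole wavelength).
mixed = signal * np.exp(-2j * np.pi * 25e6 * t)
taps = np.sinc(np.arange(-64, 65) / 8) / 8   # simple low-pass FIR prototype
baseband = np.convolve(mixed, taps, mode="same")[::8]

# The selected subcarrier now sits at DC in the decimated stream.
peak_bin = np.argmax(np.abs(np.fft.fft(baseband)))
print(peak_bin)  # 0
```

Each multiply in the mixing and filtering steps maps to a DSP slice in an FPGA, which is exactly the cost the resampling-filter and DA architectures reduce or eliminate.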
Levi Goodman
Dual Mode W-Band Radar for Range Finding, Static Clutter Suppression & Moving Target Detection
When & Where:
250 Nichols Hall
Committee Members:
Christopher Allen, Chair
Shannon Blunt
James Stiles
Abstract
Many radar applications today require accurate, real-time, unambiguous measurement of target range and radial velocity. Obstacles that frequently prevent target detection are the presence of noise and the overwhelming backscatter from other objects, referred to as clutter.
In this thesis, a method of static clutter suppression is proposed to increase detectability of moving targets in high clutter environments. An experimental dual-purpose, single-mode, monostatic FMCW radar, operating at 108 GHz, is used to map the range of stationary targets and determine range and velocity of moving targets. By transmitting a triangular waveform, which consists of alternating upchirps and downchirps, the received echo signals can be separated into two complementary data sets, an upchirp data set and a downchirp data set. In one data set, the return signals from moving targets are spectrally isolated (separated in frequency) from static clutter return signals. The static clutter signals in that first data set are then used to suppress the static clutter in the second data set, greatly improving detectability of moving targets. Once the moving target signals are recovered from each data set, they are then used to solve for target range and velocity simultaneously.
The moving target of interest for tests performed was a reusable paintball (reball). Reball range and velocity were accurately measured at distances up to 5 meters and at speeds greater than 90 m/s (200 mph) with a deceleration of approximately 0.155 m/s/ms (meters per second per millisecond). Static clutter suppression of up to 25 dB was achieved, while moving target signals only suffered a loss of about 3 dB.
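The simultaneous range-velocity solution enabled by the triangular waveform can be illustrated with a short calculation; the chirp bandwidth and duration below are assumptions, not the parameters of the 108 GHz system:

```python
# For a triangular FMCW waveform, a moving target shifts the upchirp and
# downchirp beat frequencies in opposite directions:
#   f_up   = f_range - f_doppler
#   f_down = f_range + f_doppler
# so measuring both lets us solve for range and velocity simultaneously.

c = 3e8          # speed of light (m/s)
f_c = 108e9      # carrier frequency, from the abstract
B = 1e9          # chirp bandwidth (illustrative assumption)
T = 100e-6       # single chirp duration (illustrative assumption)

def range_velocity(f_up, f_down):
    f_range = (f_up + f_down) / 2      # range-induced beat frequency
    f_doppler = (f_down - f_up) / 2    # Doppler shift
    R = c * T * f_range / (2 * B)      # beat frequency -> range
    v = c * f_doppler / (2 * f_c)      # Doppler shift -> radial velocity
    return R, v

# Beat frequencies consistent with a target at 5 m approaching at 90 m/s
# (reball-like numbers from the abstract)
R, v = range_velocity(f_up=268_533.3, f_down=398_133.3)
print(f"range {R:.2f} m, velocity {v:.1f} m/s")  # range 5.00 m, velocity 90.0 m/s
```

Averaging the two beat frequencies cancels the Doppler term, and differencing them cancels the range term, which is why the triangular waveform resolves the range-Doppler ambiguity of a single chirp.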
Ruoting Zheng
Algorithms for Computing Maximal Consistent Blocks
When & Where:
2001 B Eaton Hall
Committee Members:
Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo
Abstract
Rough set theory is a tool for dealing with uncertain and incomplete data. It has been successfully used in classification, machine learning, and automated knowledge acquisition. A maximal consistent block, defined using rough set theory, is used for rule acquisition.
The maximal consistent block technique is applied to acquire knowledge from incomplete data sets by analyzing the structure of a similarity class.
The main objective of this project is to implement and compare algorithms for computing maximal consistent blocks. The brute force, recursive, and hierarchical methods were designed for data sets with missing attribute values interpreted only as “do not care” conditions. In this project, we extend these algorithms so they can be applied to arbitrary interpretations of missing attribute values, and we introduce an approach for computing maximal consistent blocks on data sets with lost values. In addition, we found that the brute force and recursive methods have problems dealing with data sets for which the characteristic sets are not transitive, so the limitations of the algorithms and a simplified recursive method are provided as well.
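As a sketch of the brute force approach discussed above, maximal consistent blocks can be computed by enumerating all subsets of cases that are pairwise similar and keeping only the maximal ones; the similarity relation here is a toy example, not one induced from a real incomplete data set:

```python
from itertools import combinations

def is_consistent(block, similar):
    # A block is consistent if every pair of cases in it is mutually similar.
    return all(y in similar[x] and x in similar[y]
               for x, y in combinations(block, 2))

def maximal_consistent_blocks(cases, similar):
    """Brute force: enumerate every consistent subset of cases, then keep
    only the maximal ones (those not properly contained in another
    consistent subset). Exponential, so only viable for small data sets."""
    consistent = [frozenset(s)
                  for r in range(1, len(cases) + 1)
                  for s in combinations(cases, r)
                  if is_consistent(s, similar)]
    return [b for b in consistent
            if not any(b < other for other in consistent)]

# Toy similarity relation (hypothetical): case -> set of cases similar to it.
# Note it is not transitive: 2 ~ 1 and 1 ~ 3, but 2 is not similar to 3.
similar = {1: {1, 2, 3}, 2: {1, 2}, 3: {1, 3, 4}, 4: {3, 4}}
print(maximal_consistent_blocks([1, 2, 3, 4], similar))
```

The non-transitive relation above is exactly the situation where a case belongs to several overlapping maximal blocks, which is what makes naive methods tricky on such data sets.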
Hao Xue
Trust and Credibility in Online Social Networks
When & Where:
246 Nichols Hall
Committee Members:
Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Mei Liu
Abstract
Increasing portions of people's social and communicative activities now take place in the digital world. The growth and popularity of online social networks (OSNs) have tremendously facilitated online interaction and information exchange. Not only do ordinary users benefit from OSNs, as more people now rely on online information for news, opinions, and social networking, but so do companies and business owners, who utilize OSNs as platforms for gathering feedback and marketing activities. As OSNs enable people to communicate more effectively, a large volume of user-generated content (UGC) is produced daily. However, the freedom and ease of publishing information online have made these systems no longer reliable sources of information. Not only does biased and misleading information exist, but financial incentives also drive individual and professional spammers to insert deceptive content and promote harmful information, which jeopardizes the ecosystems of OSNs.
In this dissertation, we present our work on measuring the credibility of information and detecting content polluters in OSNs. First, we assume that review spammers spend less effort maintaining social connections, and we propose to utilize social relationships and rating deviations to assist in computing the trustworthiness of users. Compared to numeric ratings, textual content contains richer information about the actual opinion of a user toward a target. Thus, we propose a content-based trust propagation framework that extracts the opinions expressed in review content. In addition, we discover that the network surrounding a user can also provide valuable information about the user. Lastly, we study the problem of detecting social bots by utilizing the characteristics of surrounding neighborhood networks.
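A toy sketch of the intuition behind the first contribution (our own simplification for illustration, not the dissertation's actual model): base trustworthiness falls as a user's ratings deviate from the per-item consensus, and each score is then blended with the scores of the user's social connections, since spammers are assumed to invest less in maintaining them:

```python
def trust_scores(ratings, friends, alpha=0.5):
    """ratings: user -> {item: rating}; friends: user -> set of users.
    Returns a trustworthiness score in (0, 1] per user (toy model)."""
    # Consensus rating per item (simple mean over all raters).
    items = {i for r in ratings.values() for i in r}
    consensus = {i: sum(r[i] for r in ratings.values() if i in r) /
                    sum(1 for r in ratings.values() if i in r)
                 for i in items}
    # Base trust from rating deviation: spammers deviate more.
    base = {}
    for user, r in ratings.items():
        dev = sum(abs(v - consensus[i]) for i, v in r.items()) / len(r)
        base[user] = 1.0 / (1.0 + dev)
    # Blend each user's score with the average score of their friends.
    return {u: (1 - alpha) * base[u] +
               alpha * (sum(base[f] for f in friends.get(u, ())) /
                        max(len(friends.get(u, ())), 1))
            for u in ratings}

ratings = {"alice": {"cafe": 4, "book": 5},
           "bob":   {"cafe": 4, "book": 4},
           "spam":  {"cafe": 1, "book": 1}}
friends = {"alice": {"bob"}, "bob": {"alice"}, "spam": set()}
scores = trust_scores(ratings, friends)
print(scores["alice"] > scores["spam"])  # True
```

The isolated, heavily deviating account ends up with a low score on both signals, illustrating why combining rating deviation with social structure is more robust than either alone.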
Casey Sader
Taming WOLF: Building a More Functional and User-Friendly Framework
When & Where:
2001 B Eaton Hall
Committee Members:
Michael Branicky, Chair
Bo Luo
Suzanne Shontz
Abstract
Machine learning is all about automation. Many tools have been created to help data scientists automate repeated tasks and train models. These tools require varying levels of user experience to be used effectively. The “machine learning WOrk fLow management Framework” (WOLF) aims to automate the machine learning pipeline. One of its key uses is to discover which machine learning model and hyper-parameters are the best configuration for a dataset. In this project, features were explored that could be added to make WOLF behave as a full pipeline, helpful for novice and experienced data scientists alike. One feature that makes WOLF more accessible is a website version that can be accessed from anywhere and makes using WOLF much more intuitive. To keep WOLF aligned with the most recent trends and models, the ability to train a neural network using the TensorFlow framework and Keras library was added. This project also introduced the ability to pickle and save trained models. Saving the models in turn enables the option of using them within the WOLF framework to make predictions on another collection of data. Understanding how a model makes predictions is a beneficial component of machine learning. This project aids that understanding by calculating and reporting the relative importance of the dataset features for the given model. Incorporating all these additions makes WOLF a more functional and user-friendly framework for machine learning tasks.
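The model-saving and feature-importance additions can be sketched in miniature; the model class and the permutation-style importance measure below are hypothetical stand-ins (WOLF itself wraps existing libraries such as TensorFlow/Keras), used only to show the save-reload-explain cycle:

```python
import pickle
import statistics

# Minimal stand-in "model" (hypothetical, not WOLF's actual classes): a
# nearest-centroid classifier, used to illustrate pickling a trained model
# and reporting relative feature importance.
class CentroidModel:
    def fit(self, X, y):
        self.centroids = {
            label: [statistics.mean(row[j] for row, lbl in zip(X, y)
                                    if lbl == label)
                    for j in range(len(X[0]))]
            for label in set(y)}
        return self

    def predict(self, row):
        return min(self.centroids,
                   key=lambda lbl: sum((a - b) ** 2
                                       for a, b in zip(row,
                                                       self.centroids[lbl])))

X = [[0, 0], [1, 0], [0, 1], [1, 1]]
y = [0, 0, 1, 1]                        # the label follows feature 2
model = CentroidModel().fit(X, y)

# Save the trained model with pickle, then reload and reuse it on new data.
restored = pickle.loads(pickle.dumps(model))
print(restored.predict([0.1, 0.9]))     # 1

# Simple permutation-style importance: accuracy drop when a feature is zeroed.
baseline = sum(restored.predict(r) == t for r, t in zip(X, y)) / len(X)
for j in range(2):
    Xz = [[0 if k == j else v for k, v in enumerate(r)] for r in X]
    acc = sum(restored.predict(r) == t for r, t in zip(Xz, y)) / len(X)
    print(f"feature {j + 1} importance: {baseline - acc:.2f}")
```

Only the informative second feature shows an accuracy drop when disturbed, which is the kind of per-feature report that helps users understand a model's predictions.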