Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.
Upcoming Defense Notices
Md Mashfiq Rizvee
Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli
Abstract
Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large-scale population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the longstanding inability of blockchain systems to authenticate noisy data.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain-based biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5× faster query processing and significantly reduced storage requirements compared to large-scale tracking.
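The hierarchical lookup idea behind DHBF can be illustrated with a minimal sketch: a coarse root filter is checked first, and a second-level bucket filter is consulted only on a hit. The two-level layout, all parameter values, and the salting scheme below are illustrative assumptions, not the structure described in the thesis; deletion, which DHBF supports, is omitted here because plain bit-array Bloom filters cannot delete without counters.

```python
import hashlib

# Illustrative parameters -- hypothetical, not taken from the thesis.
NUM_BUCKETS = 16   # number of second-level filters
BITS = 1 << 12     # bits per filter
HASHES = 4         # hash functions per filter

def _positions(item: str, salt: int):
    """Derive HASHES bit positions for an item via salted SHA-256."""
    for i in range(HASHES):
        digest = hashlib.sha256(f"{salt}:{i}:{item}".encode()).digest()
        yield int.from_bytes(digest[:4], "big") % BITS

class HierarchicalBloomFilter:
    """Two-level Bloom hierarchy: a coarse root filter plus per-bucket
    leaf filters, queried root-first."""

    def __init__(self):
        self.root = bytearray(BITS)
        self.leaves = [bytearray(BITS) for _ in range(NUM_BUCKETS)]

    def _bucket(self, item: str) -> int:
        digest = hashlib.sha256(item.encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

    def insert(self, item: str) -> None:
        for p in _positions(item, 0):
            self.root[p] = 1
        b = self._bucket(item)
        for p in _positions(item, b + 1):
            self.leaves[b][p] = 1

    def query(self, item: str) -> bool:
        # Check the coarse filter first; descend to a leaf only on a hit.
        if not all(self.root[p] for p in _positions(item, 0)):
            return False
        b = self._bucket(item)
        return all(self.leaves[b][p] for p in _positions(item, b + 1))
```

The root filter rejects most non-enrolled queries without touching the second level, which is the source of the hierarchy's speed advantage over a single flat filter of equal capacity.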
Fatima Al-Shaikhli
Optical Measurements Leveraging Coherent Fiber Optics Transceivers
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu
Abstract
Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.
Electrostriction in an optical fiber arises from the interaction between the forward-propagating optical signal and acoustic standing waves in the radial direction, resonating between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode-field radius. We demonstrated a novel technique for identifying fiber types through the measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode-field radius.
We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
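For intuition on the de-chirping step, the beat frequencies left after mixing map directly to target range and radial velocity. The sketch below assumes a triangular (up/down) chirp and hypothetical parameter values; the actual system's waveform and numbers may differ.

```python
# Toy FMCW post-de-chirp arithmetic: with a triangular chirp, the
# up-sweep and down-sweep beat frequencies jointly encode range and
# Doppler. All parameter values here are illustrative assumptions.
C = 3e8  # speed of light, m/s

def range_velocity(f_up, f_down, bandwidth, sweep_time, wavelength):
    """Recover (range, radial velocity) from the two beat frequencies.

    For an approaching target, Doppler lowers the up-sweep beat and
    raises the down-sweep beat, so the mean isolates the range term
    and the half-difference isolates the Doppler term."""
    slope = bandwidth / sweep_time      # chirp rate, Hz/s
    f_range = (f_up + f_down) / 2       # Doppler cancels
    f_doppler = (f_down - f_up) / 2     # range term cancels
    rng = C * f_range / (2 * slope)     # round-trip delay -> distance
    vel = wavelength * f_doppler / 2    # Doppler shift -> velocity
    return rng, vel
```

For example, with a 1 GHz chirp over 1 ms at 1550 nm, beat frequencies of 0 Hz (up) and 2 MHz (down) correspond to a target at 150 m approaching at about 0.78 m/s.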
In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity of the chirp and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.
Past Defense Notices
ADAM CRIFASI
Framework of Real-Time Optical Nyquist-WDM Receiver using Matlab and Simulink
When & Where:
2001B Eaton Hall
Committee Members:
Ron Hui, Chair
Shannon Blunt
Erik Perrins
Abstract
I investigate an optical Nyquist-WDM Bit Error Rate (BER) detection system. A transmitter and receiver system is simulated, using Matlab and Simulink, to form a working algorithm and to study the effects of the different processes in the data chain. The inherent lack of phase information in the N-WDM scheme presents unique challenges and requires a precise phase recovery system to accurately decode a message. Furthermore, resource constraints are imposed by a cost-effective Field Programmable Gate Array (FPGA). To compensate for the speed, gate, and memory constraints of a budget FPGA, several techniques are employed to design the best possible receiver. I study the resource-intensive operations and vary their resource utilization to discover the effect on the BER. To conclude, a full VHDL design is delineated, including peripheral initialization, input data sorting and storage, timing synchronization, state machine and control signal implementation, N-WDM demodulation, phase recovery, QAM decoding, and BER calculation.
TIANCHEN LI
Radar Cross-Section Enhancement of a 40 Percent Yak54 Unmanned Aerial Vehicle
When & Where:
2001B Eaton Hall
Committee Members:
Chris Allen, Chair
Ken Demarest
Ron Hui
Abstract
With increasing civilian use of unmanned aerial vehicles (UAVs), the flight safety of these unmanned devices in populated areas has become one of the most pressing concerns among operators and users. To reduce the rate of collisions, anti-collision systems based on airborne radar and enhanced autopilot programs have been developed. However, because most civilian UAVs are made of non-metallic materials with considerably low radar cross-section (RCS), they are hard or even impossible for radars to detect. This project aims to design a lightweight UAV-mounted RCS enhancement device that increases the visibility of the UAV to airborne radars operating in the frequency band near 1.445 GHz. In this project, a 40% YAK54 radio-controlled UAV is used as the subject UAV. The report also concentrates on the design of a passive Van Atta array reflector approach.
REID CROWE
Development and Implementation of a VHF High Power Amplifier for the Multi-Channel Coherent Radar Depth Sounder/Imager System
When & Where:
317 Nichols Hall
Committee Members:
Fernando Rodriguez-Morales, Chair
Chris Allen
Carl Leuschen
Abstract
This thesis presents the implementation and characterization of a VHF high power amplifier developed for the Multi-channel Coherent Radar Depth Sounder/Imager (MCoRDS/I) system. MCoRDS/I is used to collect data on the thickness and basal topography of polar ice sheets, ice sheet margins, and fast-flowing glaciers from airborne platforms. Previous surveys have indicated that higher transmit power is needed to improve the performance of the radar, particularly when flying over challenging areas.
The VHF high power amplifier system presented here consists of a 50-W driver amplifier and a 1-kW output stage operating in Class C. Its performance was characterized and optimized to obtain the best tradeoff between linearity, output power, efficiency, and conducted and radiated noise. A waveform pre-distortion technique to correct for gain variations (dependent on input power and operating frequency) was demonstrated using digital techniques.
The amplifier system is a modular unit that can be expanded to handle a larger number of transmit channels as needed for future applications. The system can support sequential transmit/receive operations on a single antenna by using a high-power circulator and a duplexer circuit composed of two 90° hybrid couplers and anti-parallel diodes. The duplexer is advantageous over PIN-diode-based switches due to its moderately high power-handling capability and fast switching time. The system presented here is also smaller and lighter than previous implementations with comparable output power levels.
KENNETH DEWAYNE BROWN
A Mobile Wireless Channel State Recognition Algorithm
When & Where:
2001B Eaton Hall
Committee Members:
Glenn Prescott, Chair
Chris Allen
Gary Minden
Erik Perrins
Richard Hale
Abstract
The scope of this research is a blind mobile wireless channel state recognition (CSR) algorithm that detects channel time and frequency dispersion. Hidden Markov Models (HMMs) are utilized to represent the statistical relationship between the hidden channel dispersive state process and an observed received waveform process. The HMMs provide sufficient sensitivity to detect the hidden channel dispersive state process. First-order and second-order statistical features are assumed to be sufficient to discriminate channel state from the received waveform observations. State hard decisions provide sufficient information, and can be combined, to increase the reliability of a time-block channel state estimate. To investigate the feasibility of the proposed CSR algorithm, this research effort has architected, designed, and verified a blind statistical feature recognition process capable of detecting whether a mobile wireless channel is coherent, time-dispersive only, frequency-dispersive only, or doubly dispersive. Channel state waveforms are utilized to compute the transition and output probability parameters for a set of feature recognition HMMs. Time and frequency statistical features are computed from consecutive sample blocks and input into the set of trained HMMs, which compute a state sequence conditional probability for each feature. The conditional probabilities identify how well the input waveform statistically agrees with the previous training waveforms. Hard decisions are produced from each feature state probability estimate and combined to produce a single output channel dispersive state estimate for each input time block. To verify the CSR algorithm performance, combinations of state sequence blocks were input to the process and state recognition accuracy was characterized. Initial results suggest that CSR based on blind waveform statistical feature recognition is feasible.
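The scoring step described above, where each trained HMM assigns a conditional probability to an observed feature sequence and the best-scoring model yields the hard decision, can be sketched with the standard scaled forward algorithm. The discrete observation alphabet and the tiny models in the usage note are illustrative assumptions, not the actual feature set of the thesis.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | model) for a discrete-output HMM via the scaled
    forward algorithm. pi: initial state probabilities, A: transition
    matrix, B: per-state output probabilities over the alphabet."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    c = sum(alpha)
    log_lik = math.log(c)
    alpha = [a / c for a in alpha]          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
        c = sum(alpha)
        log_lik += math.log(c)              # likelihood = product of scales
        alpha = [a / c for a in alpha]
    return log_lik

def classify_channel_state(obs, models):
    """Hard decision: the label of the HMM that best explains obs.
    models maps a state label to its (pi, A, B) parameters."""
    return max(models, key=lambda m: forward_log_likelihood(obs, *models[m]))
```

As a toy usage, two single-state models whose output distributions favor different symbols are easily distinguished: `classify_channel_state([0, 0, 0], {"coherent": ([1.0], [[1.0]], [[0.9, 0.1]]), "dispersive": ([1.0], [[1.0]], [[0.1, 0.9]])})` selects `"coherent"`.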
WENRONG ZENG
Content-Based Access Control
When & Where:
250 Nichols Hall
Committee Members:
Bo Luo, Chair
Arvin Agah
Jerzy Grzymala-Busse
Prasad Kulkarni
Alfred Tat-Kei Ho
Abstract
In conventional database access control models, access control policies are explicitly and manually specified for each role against each data object. Nowadays, in large-scale content-centric data sharing, conventional approaches can be impractical due to the exponential explosion in the number of policies and the sensitivity of data objects. In this proposal, we first introduce Content-Based Access Control (CBAC), an innovative access control model for content-centric information sharing. As a complement to conventional access control models, the CBAC model makes access control decisions automatically, based on the content similarity between user credentials and data content. In CBAC, each user is allowed by a meta-rule to access “a subset” of the designated data objects of the whole database, while the boundary of the subset is dynamically determined by the textual content of the data objects. We then present an enforcement mechanism for CBAC that exploits Oracle’s Virtual Private Database (VPD). To further improve the performance of the proposed approach, we introduce a content-based blocking mechanism that improves the efficiency of CBAC enforcement by revealing only the part of the data objects most relevant to the user credentials. We also utilize a tagging mechanism for more accurate textual content matching of short text snippets (e.g., short VarChar attributes), extracting topics rather than pure word occurrences to represent the content of the data. Experimental results show that CBAC makes reasonable access control decisions with a small overhead.
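The core of the CBAC meta-rule, granting access when a data object's content is sufficiently similar to the user's credential text, can be sketched as follows. Cosine similarity over bag-of-words vectors and the 0.3 threshold are illustrative assumptions for this sketch; the thesis's actual similarity model and enforcement path (Oracle VPD) are not reproduced here.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two texts as bag-of-words vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def cbac_allow(credential: str, record: str, threshold: float = 0.3) -> bool:
    """Meta-rule sketch: the accessible subset of records is whatever
    scores above the similarity threshold against the credential."""
    return cosine_similarity(credential, record) >= threshold
```

For instance, a credential of "cancer oncology research" would admit a record about oncology research on cancer treatment but block an unrelated record, mirroring the content-determined subset boundary described above.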
MARIANNE JANTZ
Detecting and Understanding Dynamically Dead Instructions for Contemporary Machines
When & Where:
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Xin Fu
Man Kong
Abstract
Instructions executed by the processor are dynamically dead if the values they produce are not used by the program. Researchers have discovered that a surprisingly large fraction of executed instructions are dynamically dead. Dynamically dead instructions (DDI) can potentially slow down program execution and waste power. Unfortunately, although the issue of DDI is well-known, there has not been any comprehensive study to understand and explain the occurrence of DDI, evaluate its performance impact, and resolve the problem, especially for contemporary architectures.
The goals of our research are to quantify and understand the properties of DDI, as well as systematically characterize them for existing state-of-the-art compilers and popular architectures, in order to develop compiler and/or architectural techniques to avoid their execution at runtime. In this thesis, we describe our GCC-based framework to instrument binary programs to generate control-flow and data-flow (registers and memory) traces at runtime. We present the percentage of DDI in our benchmark programs, as well as the characteristics of the DDI. We show that context information can have a significant impact on the probability that an instruction will be dynamically dead. We show that a low percentage of static instructions actually contribute to the overall DDI in our benchmark programs. We also describe the outcome of our manual study to analyze and categorize the instances of dead instructions in our x86 benchmarks into seven distinct categories. We briefly describe our plan to develop compiler- and architecture-based techniques to eliminate each category of DDI in future programs. Finally, we find that x86 and ARM programs, compiled with GCC, generally contain a significant amount of DDI. However, x86 programs present fewer DDI than the ARM benchmarks, which display similar percentages of DDI as earlier research for other architectures. Therefore, we suggest that the ARM architecture observes a non-negligible fraction of DDI and should be examined further. Overall, we believe that a close synergy between static code generation and program execution techniques may be the most effective strategy to eliminate DDI.
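The defining criterion, that an executed instruction is dynamically dead if no later instruction reads its result before the result is overwritten, can be made concrete with a small trace-walking sketch. The trace format below is invented for illustration, and the sketch assumes no register is live past the end of the trace; the thesis's actual GCC-based instrumentation framework works on real binary traces.

```python
def find_dynamically_dead(trace):
    """trace: list of (dest_reg, src_regs) tuples, one per executed
    instruction, in execution order. dest_reg may be None (e.g. a
    store or branch). Returns the set of trace indices whose produced
    value is never read before being overwritten or the trace ends."""
    dead = set()
    last_def = {}   # register -> index of the most recent definition
    used = set()    # indices whose result has been read at least once
    for i, (dest, srcs) in enumerate(trace):
        for r in srcs:
            if r in last_def:
                used.add(last_def[r])      # the pending def is consumed
        if dest is not None:
            prev = last_def.get(dest)
            if prev is not None and prev not in used:
                dead.add(prev)             # overwritten before any use
            last_def[dest] = i
    # Definitions still pending at the end of the trace were never used.
    for r, i in last_def.items():
        if i not in used:
            dead.add(i)
    return dead
```

For example, in the trace `[("r1", []), ("r1", []), ("r2", ["r1"]), ("r3", ["r2"])]`, instruction 0 is dead (its `r1` is clobbered by instruction 1 before any read) and instruction 3 is dead (its `r3` is never read), while instructions 1 and 2 are live.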
YUHAO YANG
Protecting Attributes and Contents in Online Social Networks
When & Where:
2001B Eaton Hall
Committee Members:
Bo Luo, Chair
Arvin Agah
Luke Huan
Prasad Kulkarni
Alfred Tat-Kei Ho
Abstract
With the extreme popularity of online social networks, security and privacy issues become critical. In particular, it is important to protect users' privacy without preventing them from normal socialization. User privacy in the context of data publishing and structural re-identification attacks has been well studied. However, protection of attributes and data content has been mostly neglected in the research community. While social network data is rarely published, billions of messages are shared in various social networks on a daily basis. Therefore, it is more important to protect attributes and textual content in social networks.
We first study the vulnerabilities of user attributes and contents, in particular, the identifiability of the users when the adversary learns a small piece of information about the target. We have presented two attribute re-identification attacks that exploit information retrieval and web search techniques. We have shown that large portions of users with an online presence are highly identifiable, even with a small piece of seed information, and even when the seed information is inaccurate.
To protect user attributes and content, we will adopt the social circle model derived from the concepts of “privacy as user perception” and “information boundary”. Users will have different social circles, and share different information in different circles. We propose to automatically identify social circles based on three observations: (1) friends in the same circle are connected and share many friends in common; (2) friends in the same circle are more likely to interact; (3) friends in the same circle tend to have similar interests and share similar content. We propose to adopt multi-view clustering to model and integrate such observations to identify implicit circles in a user’s personal network. Moreover, we propose an evaluation mechanism that evaluates the quality of the clusters (circles).
Furthermore, we propose to exploit such circles for cross-site privacy protection for users: new messages (blogs, micro-blogs, updates, etc.) will be evaluated and distributed to the most relevant circle(s). We monitor information distributed to each circle to protect users against information aggregation attacks, and also enforce circle boundaries to prevent sensitive information leakage.
MICHAEL JANTZ
Automatic Cross-Layer Framework to Improve Memory Power and Efficiency
When & Where:
246 Nichols Hall
Committee Members:
Prasad Kulkarni, Chair
Xin Fu
Andy Gill
Bo Luo
Karen Nordheden
Abstract
Recent computing trends include an increased focus on power and energy consumption and the need to support multi-tenant use cases in which physical resources need to be multiplexed efficiently without causing performance interference. Many recent works have focused on how to best allocate CPU, storage and network resources to meet competing service quality objectives and reduce power. At the same time, data-intensive computing is placing larger demands on physical memory systems than ever before. In comparison to other resources, however, it is challenging to obtain precise control over distribution of memory capacity, bandwidth, or power, when virtualizing and multiplexing system memory. That is because these effects intimately depend upon the results of activity across multiple layers of the vertical execution stack, which are often not available in any individual component.
The goal of our proposed work is to exercise collaboration between the compiler, operating system, and memory controller for a hybrid memory architecture to reduce energy consumption, while balancing performance trade-offs. Analysis, data structure partitioning, and code layout transformations will be conducted by the compiler and two-way communication between the applications and OS will guide memory management. The OS, together with the hardware memory controller, will allocate, map, and migrate pages to minimize energy consumption for a specified performance tolerance.
NIRANJAN SUNDARARAJAN
Study of Balanced and Unbalanced RFID Tags Attached to Charge Pumps
When & Where:
246 Nichols Hall
Committee Members:
Ken Demarest, Chair
Dan Deavours
Jim Stiles
Abstract
Ultra-high-frequency radio frequency identification (UHF RFID) technology has gained wide prominence in recent years. The main drawback of a UHF RFID tag antenna is that it is sensitive to the environment in which it is placed; that is, the performance of an RFID tag deteriorates when it is placed on conductive or dielectric objects. Most UHF RFID antennas use variations of a balanced folded dipole, such as a T-match antenna. In this project, we answer the question: would it be beneficial to use an unbalanced version of a T-match antenna (a Gamma-match antenna) in an RFID tag rather than a conventional balanced T-match antenna? To test this, we analyzed the performance of Gamma-match and T-match antennas when attached to a charge pump, which generally acts as the load for an RFID antenna in a tag. We also propose a procedure for finding the best impedance to drive a charge pump and outline a simple procedure for designing a balanced T-match antenna for any desired input impedance. We then transform a balanced T-match antenna into an unbalanced Gamma-match antenna and show that the Gamma-match antenna delivers more power and voltage to a charge pump than the T-match antenna. Finally, we validate these results by studying and comparing the Z-parameters of the Gamma-match and T-match antennas.
HARIPRASAD SAMPATHKUMAR
A Framework for Information Retrieval and Knowledge Discovery from Online Healthcare Social Networks
When & Where:
246 Nichols Hall
Committee Members:
Bo Luo, Chair
Xue-Wen Chen
Jerzy Grzymala-Busse
Prasad Kulkarni
Jie Zhang
Abstract
Information used to assist biomedical research has largely comprised data available in published sources like scientific literature or clinical sources like patient health records. Information from such sources, though extensive and organized, is often not readily available due to its proprietary and/or privacy-sensitive nature. Collecting such information through clinical and pharmaceutical studies is expensive, and the information is limited by the diversity of the people involved in the study. With the growth of Web 2.0, more and more people openly share their health experiences with other similar patients on healthcare-related social networks. The data available in these networks can act as a new source of unrestricted, high-volume, highly diverse, and up-to-date information for assisting biomedical and pharmaceutical research. However, this data is often unstructured, noisy, and scattered, making it unsuitable for use in its current form. The goal of this research is to develop an Information Retrieval and Knowledge Discovery framework that is capable of automatically collecting such data from online healthcare networks, extracting useful information, and representing it in a form that facilitates knowledge discovery in biomedical and pharmaceutical research. Information retrieval, text mining, and ontology modeling techniques are employed in building this framework. An Adverse Drug Reaction discovery tool and a patient profiling tool are being developed to demonstrate the utility of this framework.