Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without a structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby addressing the open problem of authenticating noisy data on a blockchain.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold, along with substantial system-level improvements including over 10^5× faster query processing and significantly reduced storage requirements compared to large-scale tracking.
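The hierarchical filters described above build on the basic Bloom-filter membership test. As a rough illustration of that underlying principle only (this is not the DHBF itself, and it lacks the hierarchical layering, noise tolerance, and deletion support that are the dissertation's contributions), a minimal Bloom filter might look like this:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: probabilistic set membership with no false negatives."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _indexes(self, item):
        # Derive k independent bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def insert(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def query(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.insert("enrolled-template-001")
print(bf.query("enrolled-template-001"))  # True
print(bf.query("unknown-identity"))       # almost certainly False
```

Note that a plain Bloom filter like this cannot delete entries without rebuilding; supporting deletion and noisy queries at scale is precisely what the dynamic hierarchical design targets.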


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential of existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response; (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches; and (3) birefringence measurements using a coherent polarization-sensitive optical frequency domain reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves resonating radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrate a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous multi-target range and velocity detection is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
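As a back-of-the-envelope illustration of the de-chirping principle: mixing the echo with the identically chirped local oscillator yields a beat frequency that maps linearly to range, f_b = 2RB/(cT). The chirp parameters below are hypothetical and are not the parameters of the experimental system:

```python
C = 3e8  # speed of light, m/s

def fmcw_range(beat_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range from the de-chirped beat frequency: f_b = 2*R*B/(c*T) => R = f_b*c*T/(2*B)."""
    return beat_hz * C * chirp_duration_s / (2 * chirp_bandwidth_hz)

# Example: a 1 GHz chirp over 100 µs; a target at 10 m produces
# f_b = 2*10*1e9 / (3e8*1e-4) ≈ 667 kHz, well below the chirp bandwidth.
B, T = 1e9, 100e-6
print(fmcw_range(666_666.7, B, T))  # ≈ 10.0 (meters)
```

The example also shows why the receiver bandwidth requirement drops: the ~667 kHz beat tone is orders of magnitude narrower than the 1 GHz chirp it encodes.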

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures chirp linearity and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By launching the optical signal in three mutually orthogonal SOPs, we measure relative birefringence vectors along the fiber.


Past Defense Notices

Rohit Banerjee

Extraction and Analysis of Amazon Reviews

When & Where:


246 Nichols Hall

Committee Members:

Fengjun Li, Chair
Man Kong
Bo Luo


Abstract

Amazon.com is one of the largest online retail stores in the world. Besides selling millions of products on its website, Amazon provides a variety of Web services, including the Amazon review and recommendation system. Users are encouraged to write product reviews to help others understand product features and make purchase decisions. However, product reviews, as a type of user-generated content (UGC), suffer from quality and trust problems. To help evaluate the quality of reviews, Amazon also provides users with a helpfulness vote feature so that a user can support a review they consider helpful. In this project we aim to study the relation between helpfulness votes and the ranks of reviews. In particular, we are looking for answers to questions such as “how do helpfulness votes affect review ranks?” and “how do review rank and its presentation mechanism affect people’s voting behavior?” To investigate these questions, we built a crawler to collect reviews and their votes from Amazon on a daily basis. Then, we conducted an analysis on a dataset of over 50,000 Amazon reviews to identify voting patterns and their impact on review ranks. Our results show that there exists a positive correlation between review ranks and helpfulness votes.
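A rank-correlation computation of the kind reported might be sketched as follows. The data values here are hypothetical, and the project's actual analysis pipeline is not described in the abstract:

```python
def ranks(values):
    """Assign ranks (1 = smallest); ties are not handled, for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the rank-difference formula (no ties)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: review list position (1 = top of page) vs. helpfulness votes.
review_position = [1, 2, 3, 4, 5]
votes           = [120, 85, 60, 22, 5]
print(spearman(review_position, votes))  # -1.0
```

Note the sign convention: because position 1 denotes the highest-ranked review, a coefficient of -1.0 on raw positions corresponds to the positive association between rank and votes that the abstract describes.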


BIJAL PARIKH

A Comparison of Tolerance Relation and Valued Tolerance Relation for Incomplete Datasets

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo


Abstract

Rough set theory is a popular approach for decision rule induction. However, it requires the objects in the information system to be completely described. Many real-life data sets are incomplete, so rough set theory cannot be applied directly for rule induction. This project implements and compares two generalizations of rough set theory used for rule induction from incomplete data: the Tolerance Relation and the Valued Tolerance Relation. A comparative analysis is conducted for the lower and upper approximations and the decision rules induced by the two methods. Our experiments show that the Valued Tolerance Relation provides better approximations than the simple Tolerance Relation when the percentage of missing attribute values in the data sets is high.
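To illustrate the tolerance-relation idea on incomplete data, a minimal sketch might look like this. The dataset is a toy example with "?" marking a missing value; the project's actual implementation, the valued (graded) variant, and the rule-induction step are not shown:

```python
MISSING = "?"

def tolerant(a, b):
    """Tolerance relation: objects agree on every attribute unless a value is missing."""
    return all(x == y or MISSING in (x, y) for x, y in zip(a, b))

def approximations(objects, concept):
    """Lower/upper approximations of `concept` (a set of object ids) via tolerance classes."""
    classes = {i: {j for j in objects if tolerant(objects[i], objects[j])}
               for i in objects}
    lower = {i for i in concept if classes[i] <= concept}   # class fully inside concept
    upper = {i for i in objects if classes[i] & concept}    # class overlaps concept
    return lower, upper

# Hypothetical incomplete dataset: attribute vectors, "?" = missing.
objects = {
    1: ("high", "yes"),
    2: ("high", "?"),
    3: ("low",  "no"),
}
concept = {1, 2}  # say, objects with a positive decision value
print(approximations(objects, concept))  # ({1, 2}, {1, 2})
```

The valued tolerance relation refines this by replacing the all-or-nothing `tolerant` test with a probability-weighted degree of similarity, which is what gives it better approximations when many values are missing.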


ALHANOOF ALTHNIAN

Evolutionary Learning of Goal-Driven Multi-Agent Communication

When & Where:


2001B Eaton Hall

Committee Members:

Arvin Agah, Chair
Prasad Kulkarni
Fengjun Li
Bo Luo
Elaina Sutley

Abstract

Multi-agent systems are a common paradigm for building distributed systems in domains such as networking, health care, swarm sensing, robotics, and transportation. Such systems are usually designed or adjusted to reflect performance trade-offs made according to the characteristics of the mission requirements. 
Research has acknowledged the crucial role that communication plays in solving many performance problems. However, research efforts that address communication decisions are usually designed and evaluated with respect to a single predetermined performance goal. This work introduces Goal-Driven Communication, where communication in a multi-agent system is determined according to flexible performance goals. 
This work proposes an evolutionary approach that, given a performance goal, produces a communication strategy that can improve a multi-agent system’s performance with respect to the desired goal. The evolved strategy determines what, when, and to whom the agents communicate. The proposed approach further enables tuning the trade-off between the performance goal and communication cost, producing a strategy that achieves a good balance between the two objectives according to the system designer’s needs. 
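A toy sketch of the evolutionary idea: candidate strategies (here reduced to bit strings selecting which message types agents send) are evolved under a fitness that weighs the performance goal against communication cost. The representation, objective values, and cost weight below are all hypothetical, not the dissertation's actual encoding:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=42):
    """Minimal elitist GA over bit-string communication strategies."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = [[bit ^ (rng.random() < 0.1) for bit in p]  # 10% bit-flip mutation
                    for p in parents]
        pop = parents + children                             # elitism: parents survive
    return max(pop, key=fitness)

# Hypothetical objective: each bit enables one message type; fitness trades the
# messages' contribution to the goal against a tunable communication cost.
VALUE = [5, 1, 4, 1, 3, 1, 2, 1]  # per-message contribution to the performance goal
COST_WEIGHT = 2                    # the designer's goal-vs-cost tuning knob

def fitness(strategy):
    return sum(v * b for v, b in zip(VALUE, strategy)) - COST_WEIGHT * sum(strategy)

best = evolve(fitness)
print(best, fitness(best))  # tends to keep only messages worth more than their cost
```

Raising `COST_WEIGHT` drives the evolved strategy toward silence; lowering it toward chattier strategies, mirroring the tunable trade-off described above.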


JYOTI GANGARAJU

A Laboratory Manual for an Introduction to Communication Systems Course

When & Where:


2001B Eaton Hall

Committee Members:

Victor Frost, Chair
Dave Petr
Glenn Prescott


Abstract

The communication systems laboratory is a hands-on way to effectively visualize real-life applications of communication systems in their simplest form. Recently, hardware equipment such as the spectrum analyzer, oscilloscope, and function generator was replaced by PicoScope 6, a software-based data analyzer. PicoScope 6 is a user-friendly application that enables its users to capture and analyze analog and digital signals with comparatively higher accuracy. Additionally, it is an economically viable solution from both the procurement and maintenance standpoints. The current effort focuses on developing a laboratory manual, based on PicoScope 6, for undergraduates of the Department of Electrical Engineering and Computer Science (EECS). The series of laboratory exercises developed follows the course outline of Introduction to Communication Systems (EECS 562). The expected outcome of this laboratory manual is an improved understanding of analog modulation, digital modulation, and noise analysis of communication systems.


ARNESH BOSE

Two-Stage Operational Amplifier using MOSFET CMOS Technology

When & Where:


2001B Eaton Hall

Committee Members:

Yang Yi, Chair
Ron Hui
Jim Stiles


Abstract

The operational amplifier is perhaps the most useful integrated device in existence today. It is widely used in analog computer simulation systems and in a variety of electronic applications such as amplification, filtering, buffering, and comparison of signal levels. In this design project, we use the operational amplifier for amplification. The two-stage op-amp is one of the most commonly used op-amp architectures. A two-stage differential amplifier is designed with the objective of a minimum gain of 65 dB. The achieved gain is 74.6 dB with a 71.4 MHz 3-dB bandwidth, which is useful for medium-frequency operation. The schematic circuit is constructed using metal-oxide-semiconductor field-effect transistors (MOSFETs), and the final layout is implemented in complementary metal-oxide-semiconductor (CMOS) technology using Cadence.
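For reference, the reported dB figures convert to linear voltage gain via A_v = 10^(dB/20):

```python
def db_to_linear_gain(db):
    """Voltage gain corresponding to a gain in dB: A_v = 10**(dB/20)."""
    return 10 ** (db / 20)

print(db_to_linear_gain(65))    # ≈ 1778 V/V, the design minimum
print(db_to_linear_gain(74.6))  # ≈ 5370 V/V, the achieved gain
```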


ISHA KHADKA

Multi-Controller SDN for Fault-Tolerant Resilient Network

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Fengjun Li
Gary Minden


Abstract

Software Defined Networking (SDN) decouples the control (logical) plane of a network from its physical/data plane, enabling features such as centralized control, network programmability, virtualization, network application development, and automation. However, SDN is still vulnerable to attacks and failures, just like any non-SDN network. A failure in SDN can be either a link or a device failure. The controller is the central device, acting as the brain of the network, and its failure can propagate rapidly, rendering the underlying data plane dysfunctional. Multi-controller SDN uses redundancy as an effective method to ensure resilience and fault tolerance in a software-defined network. Multiple controllers are connected in a cluster to form a physically distributed but logically centralized network. The backup controllers ensure resilience against failure, attack, disaster, and other network disruptions. In this project, we implement multi-controller SDN and measure performance metrics such as high availability, reliability, latency, datastore persistency, and failure recovery time in a clustered environment.


MD AMIMUL EHSAN

Enabling Technologies for Three-dimensional (3D) Integrated Circuits (ICs): Through Silicon Via (TSV) Modeling and Analysis

When & Where:


246 Nichols Hall

Committee Members:

Yang Yi, Chair
Chris Allen
Ron Hui
Lingjia Liu
Judy Wu

Abstract

Three-dimensional (3D) integrated circuits (ICs) offer a promising near-term solution for pushing beyond Moore’s Law because of their compatibility with current technology. Through silicon vias (TSVs) provide electrical connections that pass vertically through wafers or dies to generate high-performance interconnects, which allows for higher design densities through shortened connection lengths. In recent years, we have seen tremendous technological and economic progress in adoption of 3D ICs with TSVs for mainstream commercial use. 
Along with the need for low-cost and high-yield process technology, the successful application of TSV technology requires further optimization of TSV electrical modeling and design. In the millimeter wave (mmW) frequency range, the root mean square (rms) height of the TSV sidewall roughness is comparable to the skin depth and hence becomes a critical factor in TSV modeling and analysis. The impact of TSV sidewall roughness on electrical performance, such as loss and impedance alteration in the mmW frequency range, is examined and analyzed. A second-order small-perturbation analytical method is applied to obtain a simple closed-form expression for the power absorption enhancement factor of the TSV. In this study, we propose an accurate and efficient electrical model for TSVs that considers the TSV sidewall roughness effect, the skin effect, and the metal-oxide-semiconductor (MOS) effect. The accuracy of the model is validated by comparing the circuit model behavior against full-wave electromagnetic field simulations up to 100 GHz. 
An advanced neuromorphic computing system incorporating 3D integration could provide massive parallelism with fast and energy-efficient links. While a 3D neuro-inspired system offers an impressive level of integration, it becomes inordinately arduous for the designer to model because of the innumerable interconnected elements. When a TSV array is utilized in a 3D neuromorphic system, crosstalk degrades the system’s signal-to-noise ratio, resulting in an overall deterioration of system performance. To counteract the crosstalk, we propose a novel optimized TSV array pattern obtained by applying a force-directed optimization algorithm. 
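To see why nanometer-scale sidewall roughness becomes significant at mmW frequencies, recall that the skin depth of a good conductor, δ = sqrt(2/(ωμσ)), shrinks to sub-micron values. A quick check for copper (an assumed conductor, not necessarily the TSV fill metal) at 100 GHz:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(freq_hz, conductivity):
    """Skin depth delta = sqrt(2 / (omega * mu * sigma)) for a good conductor."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 / (omega * MU0 * conductivity))

SIGMA_CU = 5.8e7  # copper conductivity, S/m
print(skin_depth(100e9, SIGMA_CU) * 1e6)  # ≈ 0.21 µm at 100 GHz
```

At ~0.2 µm, the skin depth is indeed on the same scale as typical rms sidewall roughness, which is why the roughness materially alters loss and impedance in this band.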


ADAM PETZ

A Semantics for Attestation Protocols using Session Types in Coq

When & Where:


246 Nichols Hall

Committee Members:

Perry Alexander, Chair
Andy Gill
Prasad Kulkarni


Abstract

As our world becomes more connected, the average person must place more trust in cloud systems for everyday transactions. We rely on banks and credit card services to protect our money, hospitals to conceal and selectively disclose sensitive health information, and government agencies to protect our identity and uphold national security interests. However, establishing trust in remote systems is not a trivial task, especially in the diverse, distributed ecosystem of today’s networked computers. Remote Attestation is a mechanism for establishing trust in a remotely running system, where an appraiser requests information from a target that can be used to evaluate its operational state. The target responds with evidence providing configuration information, run-time measurements, and authenticity meta-evidence used by the appraiser to determine whether it trusts the target system. For Remote Attestation to be applied broadly, we must have attestation protocols that perform operations on a collection of applications, each of which must be measured differently. Verifying that these protocols behave as expected and accomplish their diverse attestation goals is a unique challenge. An important first step is to understand the structural properties and execution patterns they share. In this thesis I present a semantic framework for attestation protocol execution within the Coq verification environment, including a protocol representation based on Session Types, a dependently typed model of perfect cryptography, and an operational execution semantics. The expressive power of dependent types constrains the structure of protocols and supports precise claims about their behavior. If we view attestation protocols as programming language expressions, we can borrow standard language semantics techniques to model their execution. The proof framework ensures desirable properties of protocol execution, such as progress and termination, that hold for all protocols. It also ensures properties of authenticity and secrecy for individual protocols.


RACHAD ATAT

Communicating over Internet Things: Security, Energy-Efficiency, Reliability and Low-Latency

When & Where:


250 Nichols Hall

Committee Members:

Lingjia Liu, Chair
Yang Yi
Shannon Blunt
Jim Rowland
David Nualart

Abstract

The Internet of Things (IoT) is expected to revolutionize the world through its myriad applications in health care, public safety, environmental management, vehicular networks, industrial automation, and more. Concepts related to the IoT include Machine Type Communications (MTC), Low-power Wireless Personal Area Networks (LoWPAN), wireless sensor networks (WSN), and Radio-Frequency Identification (RFID). Characterized by large amounts of traffic and smart decision making with little or no human interaction, these networks pose a set of challenges, among which security, energy, reliability, and latency are the most important. First, the open wireless medium and the distributed nature of the system introduce eavesdropping, data fabrication, and privacy violation threats. Second, the large number of IoT devices is expected to operate in a self-sustainable and self-sufficient manner without degrading system performance, which makes energy efficiency critical to prolonging device lifetimes. Third, many IoT applications, such as emergency response and health-care scenarios, require information to be transmitted reliably and in a timely manner. To address these challenges, we propose low-complexity approaches that exploit the physical layer and use stochastic geometry as a powerful tool to accurately model the spatial locations of “things”. This provides a tractable analytical framework for addressing the aforementioned challenges of the IoT.
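Stochastic geometry commonly models device locations as a homogeneous Poisson point process (PPP): the number of devices in a region is Poisson-distributed with mean proportional to the area, and positions are uniform. A generic sampling sketch (the intensity and region here are arbitrary illustrative values, not parameters from the dissertation):

```python
import math
import random

def poisson_point_process(intensity, width, height, rng=None):
    """Sample a homogeneous PPP on a width x height region."""
    rng = rng or random.Random()
    mean = intensity * width * height
    # Draw the point count ~ Poisson(mean) via Knuth's multiplication method.
    n, p, threshold = 0, 1.0, math.exp(-mean)
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    # Conditioned on the count, points are i.i.d. uniform over the region.
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

devices = poisson_point_process(intensity=0.1, width=50, height=50,
                                rng=random.Random(0))
print(len(devices))  # fluctuates around intensity * area = 250
```

The tractability mentioned above comes from closed-form results for such processes (e.g., interference and coverage statistics), which avoid per-topology simulation.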