Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby solving the long-standing problem of authenticating noisy data on a blockchain.
Moreover, deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5-times-faster query processing and significantly reduced storage requirements compared to large-scale tracking approaches.
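The DHBF structure itself is not spelled out in the notice, but the Bloom-filter primitive it builds on is standard. The following is a minimal illustrative sketch; the class name, sizes, and hashing scheme are assumptions for illustration, not the dissertation's implementation:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: probabilistic set membership with no false
    negatives and a tunable false-positive rate. Hierarchical designs
    such as DHBF compose many of these into levels."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def insert(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def query(self, item):
        # True can be a false positive; False is always correct.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.insert("template_0001")
print(bf.query("template_0001"))  # True
```

Note that a plain Bloom filter does not support deletion; variants such as counting filters (and, per the abstract, the DHBF hierarchy) add that capability.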


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity while exploring the potential of existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response, (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves in the radial direction, resonating between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape is specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
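The de-chirping described above maps each target's round-trip delay to a constant beat frequency. The standard textbook relation (the parameter values below are illustrative, not the dissertation's) can be checked numerically:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def fmcw_range(beat_freq_hz, chirp_bandwidth_hz, chirp_duration_s):
    """Range of a static target from the de-chirped beat frequency:
    beat = slope * round-trip delay, where slope = B / T."""
    slope = chirp_bandwidth_hz / chirp_duration_s
    round_trip_delay = beat_freq_hz / slope
    return C * round_trip_delay / 2.0

# Illustrative numbers: a 1 GHz chirp over 100 us and a 10 MHz beat tone
print(fmcw_range(10e6, 1e9, 100e-6))  # ~149.9 m
```

A moving target adds a Doppler shift on top of this beat frequency, which is why the system can estimate range and velocity jointly.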

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity of the chirp and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By launching the optical signal in three mutually orthogonal SOPs, we measure relative birefringence vectors along the fiber.
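As a sanity check on the reported figures, the ideal two-point resolution of an OFDR scales inversely with chirp bandwidth, roughly c/(2nB). The group index used below is a typical silica-fiber assumption, not a value taken from the thesis:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def ofdr_resolution_m(chirp_bandwidth_hz, group_index=1.468):
    """Ideal (transform-limited) OFDR two-point spatial resolution."""
    return C / (2.0 * group_index * chirp_bandwidth_hz)

dz_mm = ofdr_resolution_m(26e9) * 1000
print(round(dz_mm, 2), "mm")  # ~3.93 mm
```

Windowing applied to suppress sidelobes broadens this ideal figure, consistent with the approximately 5 mm resolution the abstract reports for a 26 GHz chirp.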


Past Defense Notices


Charles Mohr

Design and Evaluation of Stochastic Processes as Physical Radar Waveforms

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Recent advances in waveform generation and in computational power have enabled the design and implementation of new complex radar waveforms. Still, despite these advances, effective operation in a waveform-agile mode, where the radar transmits unique waveforms for every pulse or a nonrepeating signal continuously, can be difficult due to the waveform design requirements. In general, for radar waveforms to be both useful and physically robust they must achieve good autocorrelation sidelobes, be spectrally contained, and possess a constant amplitude envelope for high-power operation. Meeting these design goals represents a tremendous computational overhead that can easily impede real-time operation and the overall effectiveness of the radar. This work addresses this concern in the context of random FM (RFM) waveforms, which have been demonstrated in recent years, in both simulation and experiment, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a waveform-agile mode. However, while they are effective, the approaches used to design these waveforms require optimization of each individual waveform, making them subject to costly computational requirements.

 

This dissertation takes a different approach. Since RFM waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as sample functions of an underlying stochastic process called a waveform generating function (WGF). This approach enables the convenient generation of spectrally contained RFM waveforms for little more computational cost than pulling numbers from a random number generator (RNG). To do so, this work translates the traditional mathematical treatment of random variables and random processes to a more radar-centric perspective, such that WGFs can be analytically evaluated as a function of the usefulness of the radar waveforms they produce, via metrics such as the expected matched filter response and the expected power spectral density (PSD). Further, two WGF models, denoted pulsed stochastic waveform generation (Pulsed StoWGe) and continuous-wave stochastic waveform generation (CW-StoWGe), are devised as means to optimize WGFs to produce RFM waveforms with good spectral containment and design flexibility between the degree of spectral containment and autocorrelation sidelobe levels for both pulsed and CW modes. This goal is achieved by leveraging gradient descent optimization methods to reduce the expected frequency template error (EFTE) cost function. The EFTE optimization is shown, both analytically (using the metrics above as well as others defined in this work) and through simulation, to produce WGFs whose sample functions achieve these goals and thus produce useful random FM waveforms. To complete the theory-modeling-experimentation design life cycle, the resultant StoWGe waveforms are implemented in a loop-back configuration and shown to be amenable to physical implementation.

 


David Menager

Event Memory for Intelligent Agents

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Arvin Agah, Chair
Michael Branicky
Prasad Kulkarni
Andrew Williams
Sarah Robins

Abstract

This dissertation presents a novel theory of event memory, along with an associated computational model that embodies the claims of the view and is integrated within a cognitive architecture. Event memory is a general-purpose store for personal past experience. The literature on event memory reveals that people can remember events both through the successful retrieval of specific representations from memory and through the reconstruction of events via schematic representations. Prominent philosophical theories of event memory, i.e., causal and simulationist theories, fail to capture both capabilities because of their reliance on a single representational format. Consequently, they also struggle to account for the full range of human event memory phenomena. In response, we propose a novel view that remedies these issues by unifying the representational commitments of the causal and simulationist theories, making it a hybrid theory. We also describe an associated computational implementation of the proposed theory and conduct experiments showing the remembering capabilities of our system and its coverage of event memory phenomena. Lastly, we discuss our initial efforts to integrate our implemented event memory system into a cognitive architecture, and situate a tool-building agent with this extended architecture in the Minecraft domain in preparation for future event memory research.


Yiju Yang

Image Classification Based on Unsupervised Domain Adaptation Methods

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Taejoon Kim, Chair
Andrew Williams
Cuncong Zhong


Abstract

Convolutional Neural Networks (CNNs) have achieved great success in broad computer vision tasks. However, due to the lack of labeled data, many available CNN models cannot be widely used in real scenarios or suffer a significant performance drop. To address the lack of correctly labeled data, we explored the capability of existing unsupervised domain adaptation (UDA) methods for image classification and proposed two new methods to improve performance.

1. An Unsupervised Domain Adaptation Model based on Dual-module Adversarial Training: we proposed a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any model that utilizes domain-invariant features for UDA, improving its ability to extract such features. Through adversarial training, maximizing the loss between their feature distributions while minimizing the discrepancy between their prediction results, the two modules are encouraged to learn more domain-discriminative and domain-invariant features, respectively. Extensive comparative evaluations show that the proposed approach significantly outperforms the baseline method in all image classification tasks.

2. Exploiting maximum classifier discrepancy on multiple classifiers for unsupervised domain adaptation: adversarial training based on the maximum discrepancy between two classifier structures has been applied to the unsupervised domain adaptation task of image classification. This method is straightforward and has achieved very good results. However, based on our observations, we believe that the two-classifier structure, though simple, may not exploit the full power of the algorithm. Thus, we propose to add more classifiers to the model. In the proposed method, we construct a discrepancy loss function for multiple classifiers following the principle that the classifiers should differ from each other. With this loss function, we can add any number of classifiers to the original framework. Extensive experiments show that the proposed method achieves significant improvements over the baseline method.
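The abstract does not give the exact loss, but a natural way to generalize the two-classifier discrepancy to N classifiers, in line with the stated principle, is to sum a pairwise distance over all classifier pairs. A hypothetical sketch (the function name and the choice of L1 distance are illustrative assumptions, not the dissertation's formulation):

```python
def pairwise_discrepancy(prob_outputs):
    """Sum of mean absolute differences between the class-probability
    vectors of every pair of classifiers, for one input sample."""
    loss = 0.0
    n = len(prob_outputs)
    for i in range(n):
        for j in range(i + 1, n):
            p, q = prob_outputs[i], prob_outputs[j]
            loss += sum(abs(a - b) for a, b in zip(p, q)) / len(p)
    return loss

# Softmax outputs of three hypothetical classifiers for one sample
outs = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.5, 0.4, 0.1]]
print(round(pairwise_discrepancy(outs), 4))  # 0.2667
```

In the adversarial scheme the abstract describes, the classifiers are trained to maximize this quantity on target-domain samples while the feature extractor is trained to minimize it.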


Idhaya Elango

Detection of COVID-19 cases from chest X-ray images using COVID-NET, a deep convolutional neural network

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
Bo Luo
Heechul Yun


Abstract

COVID-19 is caused by the contagious SARS-CoV-2 virus. It has a devastating effect on human health, leading to high morbidity and mortality worldwide. Infected patients should be screened effectively to fight the virus. Chest X-ray (CXR) imaging is an important adjunct in detecting visual responses related to SARS-CoV-2 infection, and abnormalities in chest X-ray images can be identified for COVID-19 patients. COVID-Net, a deep convolutional neural network, is used here to detect COVID-19 cases from chest X-ray images. The COVIDx dataset used in this project is generated from five different open-access data repositories. COVID-Net's predictions are examined with an explainability method to gain insight into the critical factors related to COVID cases, and we perform quantitative and qualitative analyses to understand its decision-making behavior.


Blake Bryant

A Secure and Reliable Network Latency Reduction Protocol for Real-Time Multimedia Applications

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Bo Luo
Reza Barati

Abstract

Multimedia networking is the area of study associated with the delivery of heterogeneous data including, but not limited to, imagery, video, audio, and interactive content. Multimedia and communication network researchers have continually struggled to devise solutions for addressing the three core challenges in multimedia delivery: security, reliability, and performance. Solutions to these challenges typically exist in a spectrum of compromises achieving gains in one aspect at the cost of one or more of the others. Networked videogames represent the pinnacle of multimedia presented in a real-time interactive format. Continual improvements to multimedia delivery have led to tools such as buffering, redundant coupling of low-resolution alternative data streams, congestion avoidance, and forced in-order delivery of best-effort service; however, videogames cannot afford to pay the latency tax of these solutions in their current state.

This dissertation aims to address these challenges through the application of a novel networking protocol, leveraging emerging technology such as blockchain-enabled smart contracts, to provide security, reliability, and performance gains to distributed network applications. This work provides a comprehensive overview of contemporary networking approaches used in delivering videogame multimedia content and their associated shortcomings. Additionally, key elements of blockchain technology are identified as focal points for prospective solution development, notably through the application of distributed ledger technology, consensus mechanisms, and smart contracts. Preliminary results from empirical evaluation of contemporary videogame networking applications have confirmed security and performance flaws in well-funded AAA videogame titles. Ultimately, this work aims to solve challenges that the videogame industry has struggled with for over a decade.

The broader impact of this research is to improve the real-time delivery of interactive multimedia content. Positive results in the area will have wide reaching effects in the future of content delivery, entertainment streaming, virtual conferencing, and videogame performance.


Alaa Daffalla

Security & Privacy Practices and Threat Models of Activists during a Political Revolution

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Alexandru Bardas, Chair
Fengjun Li
Bo Luo


Abstract

Activism is a universal concept that has often played a major role in ending injustices and human rights abuses globally. Political activism in particular is a modern-day term coined to refer to a form of activism in which a group of people comes into collision with a far more powerful adversary, national or international governments, which often control the very telecommunications infrastructure activists need in order to organize and operate. As technology and social media use have become vital to the success of activism movements in the twenty-first century, our study focuses on surfacing the technical challenges and the defensive strategies that activists employ during a political revolution. We find that security and privacy behavior and app adoption are influenced by the specific societal and political context in which activists operate. In addition, a social media blockade or an internet blackout can trigger a series of anti-censorship approaches at scale and cripple activists’ technology use. To a large extent, the combination of low-tech defensive strategies employed by activists was sufficient against the threats of surveillance, arrests, and device confiscation. Throughout our results we surface a number of design principles, but also some design tensions that can occur between the security and usability needs of different populations. We thus present a set of observations that can help guide technology designers and policy makers.


Chiranjeevi Pippalla

Autonomous Driving Using Deep Learning Techniques

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Suzanne Shontz


Abstract

Recent advances in machine learning (ML), known as deep neural networks (DNN) or deep learning, have greatly improved the state-of-the-art for many ML tasks, such as image classification (He, Zhang, Ren, & Sun, 2016; Krizhevsky, Sutskever, & Hinton, 2012; LeCun, Bottou, Bengio, & Haffner, 1998; Szegedy et al., 2015; Zeiler & Fergus, 2014), speech recognition (Graves, Mohamed, & Hinton, 2013; Hannun et al., 2014; Hinton et al., 2012), complex games and learning from simple reward signals (Goodfellow et al., 2014; Mnih et al., 2015; Silver et al., 2016), and many other areas as well. Neural network and ML methods have been applied to the task of autonomously controlling a vehicle with only a camera image input to successfully navigate on-road (Bojarski et al., 2016). However, advances in deep learning have not yet been applied systematically to this task. In this work I used a simulated environment to implement and compare several methods for controlling autonomous navigation behavior using a standard camera input device to sense environmental state. The simulator contained a simulated car with a camera mounted on top to gather visual data while being operated by a human controller in a virtual driving environment. The gathered data was used to perform supervised training to build an autonomous controller that drives the same vehicle remotely over a local connection. I reproduced past results that used simple neural networks and other ML techniques to guide similar test vehicles using a camera, and compared these results with more complex deep neural network controllers to see whether they improve navigation performance over past methods on measures of speed, distance, and other performance metrics on unseen simulated road driving tasks.


Anna Fritz

Type Dependent Policy Language

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Perry Alexander, Chair
Alex Bardas
Andy Gill


Abstract

Remote attestation is the act of making trust decisions about a communicating party. During this process, an appraiser asks a target to execute an attestation protocol that generates and returns evidence. The appraiser can then make claims about the target by evaluating the evidence. Copland is a formally specified, executable language for representing attestation protocols. We introduce Copland-centered negotiation as a prerequisite to attestation, to find a protocol that meets the target’s needs for constrained disclosure and the appraiser’s desire for comprehensive information. Negotiation begins when the appraiser sends a request, a Copland phrase, to the target. The target gathers all protocols that satisfy the request and then, using its privacy policy, filters out the phrases that expose sensitive information. The target sends these phrases to the appraiser as a proposal. The appraiser then chooses the best phrase for attestation, based on situational requirements embodied in a selection function. Our focus is statically ensuring that the target does not share sensitive information through terms in the proposal, meeting its need for constrained disclosure. To accomplish this, we realize two independent implementations of the privacy and selection policies, using indexed types and subset types. With indexed types, the policy check is accomplished by indexing the term grammar with the type of evidence the term produces. This statically ensures that terms written in the language satisfy the privacy policy criteria. With subset types, we statically limit the collection of terms to those that satisfy the privacy policy. This type abides by the rules of set comprehension, building a set such that all elements of the set satisfy the privacy policy.
Combining our ideas for a dependently typed privacy policy and negotiation, we give the target the chance to suggest a term or terms for attestation that fits the appraiser’s needs while not disclosing sensitive information.


Sahithi Reddy Paspuleti

Real-Time Mask Recognition

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Andrew Gill


Abstract

COVID-19 is a disease that spreads from human to human, and its spread can be limited if people strictly maintain social distancing and properly use facial masks. Unfortunately, many people do not obey these rules, which speeds the spread of the virus. Detecting people who do not obey the rules and informing the corresponding authorities can help reduce the spread of the coronavirus. The proposed method detects the face in an image and then identifies whether it has a mask on or not. As a surveillance task performer, it can also detect a face along with a mask in motion. It has numerous applications, such as autonomous driving, education, surveillance, and so on.


Mugdha Bajjuri

Driver Drowsiness Monitoring System

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Andrew Gill


Abstract

Fatigue and microsleep at the wheel are often the cause of serious accidents and deaths. Unlike alcohol and drugs, which have clear key indicators and readily available tests, fatigue is difficult to measure or observe. Hence, detecting a driver's fatigue and signaling it is an active research area. Drowsiness can also negatively impact people in working and classroom environments; drowsiness in the workplace, especially while operating heavy machinery, may result in serious injuries similar to those that occur while driving drowsily. The proposed system for detecting driver drowsiness uses a webcam that records video of the driver, and the driver's face is detected in each frame using image processing techniques. Facial landmarks on the detected face are located, and the eye aspect ratio, mouth opening ratio, and nose length ratio are computed; depending on their values, drowsiness is detected. If drowsiness is detected, a warning or alarm is sent to the driver from the warning system.
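Of the three ratios mentioned, the eye aspect ratio (EAR) is the most widely documented: a common formulation uses six eye landmarks, and the eye is considered closed when the ratio falls below a threshold. The landmark coordinates below are made-up illustrative points, not output from the thesis's detector:

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six landmarks p1..p6: vertical distances (p2-p6, p3-p5)
    normalized by the horizontal distance (p1-p4)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (2, 0.2), (3, 0), (2, -0.2), (1, -0.2)]
print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # True
```

In a real pipeline, a frame counter is typically added so that the alarm fires only after the EAR stays below the threshold for several consecutive frames, filtering out ordinary blinks.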