Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains. The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments. Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby resolving the long-standing inability of blockchain systems to authenticate noisy data.
Moreover, the deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5-times faster query processing and significantly reduced storage requirements compared to large-scale tracking.
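The hierarchical architectures above build on the standard Bloom filter primitive. As a rough illustration only (this is a generic textbook sketch, not the DHBF or PHBF; sizes m and k are arbitrary), a minimal Bloom filter in Python:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array.

    Membership queries may yield false positives but never false
    negatives, which is what makes Bloom structures attractive for
    compact authentication-style lookups.
    """

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0  # Python int used as an m-bit array

    def _indexes(self, item):
        # Derive k indexes from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits |= 1 << idx

    def __contains__(self, item):
        return all((self.bits >> idx) & 1 for idx in self._indexes(item))
```

A hierarchical variant arranges many such filters in a tree and queries coarse filters before descending; rebuild-free deletion, as in the DHBF, requires additional bookkeeping that this sketch omits.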


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics, as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response, (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by interaction between the forward-propagating optical signal and acoustic standing waves that resonate radially between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrated a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. As the spectral envelope of the electrostriction-induced propagation loss is anti-symmetrical, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius.
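For reference, a Maxwellian (Maxwell-Boltzmann-type) envelope has the general form below; the symbols here are generic placeholders, not the author's notation:

```latex
% Maxwellian envelope: a single shape parameter a, which in this
% work is determined by the mode field radius.
f(\nu) \;\propto\; \nu^{2} \exp\!\left(-\frac{\nu^{2}}{2a^{2}}\right)
```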

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
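The de-chirped beat frequency maps linearly to target range, and the Doppler shift to radial velocity. A numerical sketch of these standard FMCW relations (parameter values below are illustrative, not taken from the experiments):

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range(f_beat, chirp_bandwidth, chirp_period):
    """Range from the de-chirped beat frequency: R = c * T * f_b / (2 * B)."""
    return C * chirp_period * f_beat / (2 * chirp_bandwidth)

def doppler_velocity(f_doppler, wavelength=1.55e-6):
    """Radial velocity from the Doppler shift: v = f_d * lambda / 2."""
    return f_doppler * wavelength / 2
```

For example, with a 1 GHz chirp swept over 1 ms, a 1 MHz beat frequency corresponds to a 150 m target range, comfortably within a receiver bandwidth far below the chirp bandwidth.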

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber, mixing with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 µs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices


Sushmitha Boddi Reddy

Conversational AI Chatbots

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Zijun Yao


Abstract

AI and machine learning are now used across industries and experiences. Chatbots are among the most visible applications of AI technology, and since chatbots entered the digital world, every industry has wanted to use them to help customers with common issues. Most chatbots present a messaging interface in which a bot, rather than a human, responds.

A chatbot is artificial intelligence software that can hold a human-like conversation with a real person based on its training. The conversations humans have with bots are powered by machine learning algorithms, which break down the received messages using Natural Language Processing techniques and respond to queries much as a human on the other side would. In this project, I trained a chatbot on a dataset so that it responds to our messages.
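The retrieval-style pipeline described above, in which messages are tokenized and matched against trained responses, can be caricatured in a few lines. This toy intent matcher uses bag-of-words overlap; the intents, utterances, and replies are invented for illustration and stand in for the trained model:

```python
def tokenize(text):
    return set(text.lower().split())

# Invented training data: intent -> (example utterances, canned reply)
INTENTS = {
    "greeting": ({"hello there", "hi how are you"},
                 "Hello! How can I help?"),
    "hours":    ({"what are your opening hours", "when are you open"},
                 "We are open 9am-5pm, Monday to Friday."),
}

def respond(message):
    """Pick the intent whose examples share the most words with the message."""
    words = tokenize(message)
    best_intent, best_overlap = None, 0
    for intent, (examples, _) in INTENTS.items():
        overlap = max(len(words & tokenize(ex)) for ex in examples)
        if overlap > best_overlap:
            best_intent, best_overlap = intent, overlap
    if best_intent is None:
        return "Sorry, I did not understand that."
    return INTENTS[best_intent][1]
```

A real chatbot replaces the word-overlap score with a learned classifier over the same structure: featurize the message, score each intent, return the best intent's response.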


Kailani Jones

Metrics Identifying Gaps in the SOC's Alert Handling Process

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Alexandru Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

In recent years, organizations' attack surfaces have continued to grow with the rise of data storage, application diversity, and ransomware attacks.

The response typically falls to the enterprise Security Operation Center (SOC). However, even with an expanding attack surface, organizations continue to decommission or completely remove their SOCs due to the uncertainty around their respective value. This work (1) traces and analyzes the SOC's effort to reimpose endpoint monitoring and content handling in the new and different sociotechnical environment created by COVID-19's “work from home” shift, and (2) proposes a metrics framework that captures the gaps within the SOC analysts' core function: the alert handling process. By intersecting historical analysis (starting in the 1970s) and ethnography (256 field notes analyzed and two rounds of semi-structured interviews performed across 770+ hours in a SOC over 26 months), complemented by quantitative interviews (covering 7 other SOCs), we find additional causal forces that, for decades, have pushed network management toward endpoints and content. With a similar ethnographic approach (participant observation paired with semi-structured interviews), we further locate expert judgement in the alert handling process and use its limitations as key performance indicators to identify gaps and capture the needs within the SOC.


Likitha Vemulapalli

Identification of Foliar Diseases in Plants using Deep Learning Techniques

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Suzanne Shontz


Abstract

Artificial intelligence has been gathering tremendous support lately by bridging the gap between humans and machines, and discoveries in numerous fields are paving the way for state-of-the-art technologies. Deep learning has shown immense progress in computer vision in recent years, with neural networks repeatedly pushing the frontier of visual recognition technology. Recent years have also witnessed an exponential increase in the use of mobile and embedded devices, and with the great success of deep learning there is an emerging trend to deploy deep learning models on them. This is not a simple task: the limited resources of mobile and embedded devices make it challenging to meet the intensive computation and storage demands of deep learning models, and state-of-the-art Convolutional Neural Networks (CNNs) require billions of floating-point operations per second (FLOPS), which inhibits their use on mobile and embedded devices. Mobile convolutional neural networks therefore use depthwise and group convolutions rather than standard “fully-connected” convolutions. In this project we apply mobile convolutional models to identify diseases in plants. Plant diseases are responsible for serious economic losses every year: crops are affected by climate conditions, various kinds of diseases, heavy usage of pesticides, and many other factors, and the rise in pesticide use is causing farmers severe losses, so reduced pesticide use can help crop production. Using these mobile CNNs we can identify plant diseases from leaf images, so that pesticides can be applied according to the type of disease. The main goal is an efficient model that can assist farmers in recognizing leaf symptoms and provide targeted information for rational use of pesticides.
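The computational saving from the depthwise convolutions mentioned above is easy to quantify. A sketch of the standard multiply-accumulate counts for the two layer types (the layer sizes in the example are arbitrary):

```python
def standard_conv_macs(h, w, c_in, c_out, k):
    """Multiply-accumulates for a standard k x k convolution layer."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """Depthwise k x k conv per channel, then a 1 x 1 pointwise conv."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise
```

For a 32x32 feature map with 64 input channels, 128 output channels, and a 3x3 kernel, the separable form costs a fraction 1/c_out + 1/k^2 (about 12%) of the standard convolution, which is the saving MobileNet-style architectures exploit.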


Truc Anh Ngoc Nguyen

ResTP: A Configurable and Adaptable Multipath Transport Protocol for Future Internet Resilience

When & Where:


2001B Eaton Hall

Committee Members:

Victor Frost, Chair
Morteza Hashemi
Taejoon Kim
Bo Luo
Hyunjin Seo

Abstract

Motivated by the shortcomings of common transport protocols, e.g., TCP, UDP, and MPTCP, in modern networking, and by the belief that the Internet needs a general-purpose transport-layer protocol that can operate efficiently over diverse network environments while providing the services desired by various application types, we design a new transport protocol, ResTP. The rapid advancement of networking technology and use paradigms is continually supporting new applications. The configurable and adaptable multipath-capable ResTP is distinct from the standard protocols not only in its flexibility in satisfying the requirements of different traffic classes given the characteristics of the underlying networks, but also in its emphasis on providing resilience. Resilience is an essential property that is unfortunately missing in the current Internet. In this dissertation, we present the design of ResTP, including the services that it supports and the set of algorithms that implement each service. We also discuss our modular implementation of ResTP in the open-source network simulator ns-3. Finally, the protocol is simulated under various network scenarios, and the results are analyzed in comparison with conventional protocols such as TCP, UDP, and MPTCP to demonstrate that ResTP is a promising new transport-layer protocol providing resilience in the Future Internet (FI).


Dinesh Mukharji Dandamudi

Analyzing the short squeeze caused by Reddit community by Using Machine learning

When & Where:


Zoom defense, please email jgrisafe@ku.edu for the meeting information

Committee Members:

Matthew Moore, Chair
Drew Davidson
Cuncong Zhong


Abstract

Algorithmic trading (sometimes termed automated trading, black-box trading, or algo-trading) is a computerized trading method in which a program follows a set of specified instructions to make transactions. In theory, this allows traders to make profits at a rate and frequency that a human trader cannot attain.

 

Traders have a tough time keeping track of the many handles that originate data. NLP (Natural Language Processing) can be used to rapidly scan various news sources, identifying opportunities to gain an advantage before other traders do. 

 

Based on this background, this project aims to select and implement an NLP and machine learning pipeline that produces an algorithm that can detect or predict future values from scraped data. This algorithm builds the basic structure for an approach to evaluating such documents.


Lyndon Meadow

Remote Lensing

When & Where:


2001B Eaton Hall

Committee Members:

Matthew Moore, Chair
Perry Alexander
Prasad Kulkarni


Abstract

The problem of manipulating remote data is typically solved using complex methods to guarantee consistency. This is an instance of the remote bidirectional transformation problem. Inspired by the fact that several versions of this problem have been addressed using lenses, we extend the technique of lenses to the Remote Procedure Call setting and provide a few simple example implementations.

Taking the host side to be a strongly-typed language with lensing properties, and the client side to be a weakly-typed language with minimal lensing properties, this work contributes to the existing body of research that has brought lenses from the realm of mathematics to the space of computer science. It gives a formal account of type-safe remote editing of data with Remote Monads and their local variants.
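A lens packages a `get` (extract a view from a source) with a `put` (write a modified view back), subject to round-trip laws. A minimal, hypothetical sketch of the idea in Python, which does not capture the typed remote setting of the actual work:

```python
class Lens:
    """A well-behaved lens: get extracts a view from a source;
    put writes a new view back, producing a new source."""

    def __init__(self, get, put):
        self.get = get
        self.put = put

def key_lens(k):
    # Lens focusing on one key of an immutable-style dict.
    return Lens(get=lambda s: s[k],
                put=lambda s, v: {**s, k: v})

def lens_laws_hold(lens, source, view):
    get_put = lens.put(source, lens.get(source)) == source  # GetPut law
    put_get = lens.get(lens.put(source, view)) == view      # PutGet law
    return get_put and put_get
```

In the remote setting, `put` becomes a remote procedure call, and the lens laws constrain which edits the weakly-typed client may safely push back to the strongly-typed host.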


Chanaka Samarathungage

NextG Wireless Networks: Applications of the Millimeter Wave Networking and Integration of UAVs with Cellular Systems

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Morteza Hashemi, Chair
Taejoon Kim
Erik Perrins


Abstract

Considering the growth of wireless and cellular devices and applications, the spectrum-rich millimeter wave (mmWave) frequencies have the potential to alleviate the spectrum crunch that wireless and cellular operators are already experiencing. However, there are several challenges to overcome when using mmWave bands. Since mmWave frequencies have small wavelengths compared to sub-6 GHz bands, most objects, such as the human body, cause significant additional path loss, which can entirely break the link. Highly directional mmWave links are susceptible to frequent link failures in such environments. Limited communication range is another challenge in mmWave communications. In this research, we demonstrate the benefits of multi-hop routing in mitigating blockage and extending communication range in the mmWave band. We develop a hop-by-hop multi-path routing protocol that finds one primary and one backup next-hop per destination to guarantee reliable and robust communication under extreme stress conditions. We also extend our solution by proposing a proactive route refinement scheme for AODV and Backpressure routing protocols under dynamic scenarios.
In the second part, the integration of Unmanned Aerial Vehicles (UAVs) into NextG cellular systems is considered for various applications such as commercial package delivery, public health and safety, surveying, and inspection, to name a few. We present network simulation results based on 4G and 5G technologies using ray-tracing software. Based on the results, we propose several network adjustments to optimize 5G network operation for ground users as well as UAV users.
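The primary/backup next-hop idea in the first part can be sketched with shortest-path routing: compute the primary next hop toward the destination, then recompute with that hop removed to obtain a backup. This is a generic illustration, not the dissertation's protocol, which operates hop-by-hop under mmWave-specific stress conditions:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path tree from src over adj: {node: [(neighbor, weight)]}.
    Returns the predecessor map."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return prev

def next_hop(prev, src, dst):
    """First hop on the shortest path src -> dst, or None if unreachable."""
    node = dst
    while node in prev and prev[node] != src:
        node = prev[node]
    return node if prev.get(node) == src else None

def primary_and_backup(adj, src, dst):
    """Primary next hop, plus a backup computed with the primary hop pruned."""
    primary = next_hop(dijkstra(adj, src), src, dst)
    if primary is None:
        return None, None
    pruned = dict(adj)
    pruned[src] = [(v, w) for v, w in adj[src] if v != primary]
    backup = next_hop(dijkstra(pruned, src), src, dst)
    return primary, backup
```

Keeping a precomputed backup lets a node fail over immediately when a directional mmWave link is blocked, instead of waiting for a full route rediscovery.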


Wenchi Ma

Object Detection and Classification based on Hierarchical Semantic Features and Deep Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Bo Luo, Chair
Taejoon Kim
Prasad Kulkarni
Cuncong Zhong
Guanghui Wang

Abstract

The abilities of feature learning, semantic understanding, cognitive reasoning, and model generalization are consistent pursuits of current deep learning-based computer vision. A variety of network structures and algorithms have been proposed to learn effective features, extract contextual and semantic information, deduce the relationships between objects and scenes, and achieve robust and generalized models. Nevertheless, these challenges are still not well addressed. One issue lies in inefficient feature learning and propagation and static, single-dimensional semantic memorization, which lead to difficulty in handling challenging situations such as small objects, occlusion, and illumination changes. The other issue is robustness and generalization, especially when the data source has a diversified feature distribution.

The study explores classification and detection models based on hierarchical semantic features ("transverse semantic" and "longitudinal semantic"), network architectures, and regularization algorithms, so that the above issues can be improved or solved. (1) A detector model is proposed to make full use of the "transverse semantic", the semantic information in the spatial scene, emphasizing the effectiveness of deep features produced in high-level layers for better detection of small and occluded objects. (2) We also explore anchor-based detector algorithms and propose location-aware reasoning (LAAR), in which both the location and classification confidences are considered in the bounding box quality criterion, so that the best-qualified boxes can be picked in Non-Maximum Suppression (NMS). (3) A semantic clustering-based deduction learning is proposed, which explores the "longitudinal semantic", realizing high-level clustering in the semantic space and enabling the model to deduce the relations among classes, so that better classification performance can be expected. (4) We propose near-orthogonality regularization, introducing an implicit self-regularization that pushes the mean and variance of filter angles in a network toward 90° and 0° simultaneously; we show that it helps stabilize the training process, speed up convergence, and improve robustness. (5) Inspired by research showing that self-attention networks possess a strong inductive bias that leads to a loss of feature expression power, a transformer architecture with a mitigatory attention mechanism is proposed and applied with state-of-the-art detectors, demonstrating improved detection performance.
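To make the NMS modification in (2) concrete, here is a sketch of greedy NMS in which each box is ranked by the product of its classification and localization confidences; this is an illustration of the general idea, and the exact LAAR criterion in the dissertation may differ:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def location_aware_nms(boxes, cls_conf, loc_conf, iou_thresh=0.5):
    """Greedy NMS ranked by cls_conf * loc_conf; returns kept indices."""
    scores = [c * l for c, l in zip(cls_conf, loc_conf)]
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        # Keep a box only if it does not overlap any already-kept box.
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```

The point of folding in the localization confidence is that a box with a high class score but poor localization no longer suppresses a better-localized neighbor.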


Sai Krishna Teja Damaraju

Strabospot 2

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Drew Davidson, Chair
Prasad Kulkarni
Douglas Walker


Abstract

Geology is a data-intensive field, but much of its current tooling is inefficient, labor-intensive, and tedious. While software is a natural solution to these issues, careful consideration of domain-specific needs is required to make such a solution useful. Geology involves field work, collaboration, and management of a complex hierarchical data structure to organize the data being captured.

 

Strabospot was designed to address the above considerations. Strabospot is an application that helps earth scientists capture data, digitize it, and make it available over the web for further research and development. It is a highly portable, effective, and efficient solution that can transform the field of geology, affecting not only how data is captured but also how that data can be further processed and analyzed. The initial implementation of Strabospot, while an important step forward in the field, has several limitations that necessitate a complete rewrite in the form of a second version, Strabospot 2.

 

Strabospot 2 is a major software undertaking being developed at the University of Kansas through a collaboration between the Department of Geology and the Department of Electrical Engineering and Computer Science. This project report elaborates on how Strabospot 2 helps geologists in the field, what challenges geologists had with the original Strabospot, and how Strabospot 2 fills in its deficits. Strabospot 2 is a large, multi-developer project; this report focuses on the features implemented by the report author.


Patrick McNamee

Machine Learning for Aerospace Applications using the Blackbird Dataset

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Michael Branicky, Chair
Prasad Kulkarni
Ronald Barrett


Abstract

There is currently much interest in using machine learning (ML) models for vision-based object detection and navigation in autonomous vehicles. For unmanned aerial vehicles (UAVs), and particularly small multi-rotor vehicles such as quadcopters, these models are trained on either unpublished data or in simulated environments, which leads to two issues: the inability to reliably reproduce results, and behavioral discrepancies on physical deployment resulting from dynamics unmodeled in the simulation environment. To overcome these issues, this project uses the Blackbird Dataset to explore the integration of ML models for UAVs. The Blackbird Dataset is first overviewed to illustrate its features and issues before possible ML applications are investigated. Unsupervised learning models are used to determine flight-test partitions for training supervised deep neural network (DNN) models for nonlinear dynamic inversion. The DNN models are then used to determine appropriate choices for several network parameters, including network layer depth, activation functions, training epochs, and neural network regularization.