Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Md Mashfiq Rizvee

Hierarchical Probabilistic Architectures for Scalable Biometric and Electronic Authentication in Secure Surveillance Ecosystems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
David Johnson
Hongyang Sun
Alexandra Kondyli

Abstract

Secure and scalable authentication has become a primary requirement in modern digital ecosystems, where both human biometrics and electronic identities must be verified under noise, large population growth, and resource constraints. Existing approaches often struggle to simultaneously provide storage efficiency, dynamic updates, and strong authentication reliability. The proposed work advances a unified probabilistic framework based on Hierarchical Bloom Filter (HBF) architectures to address these limitations across biometric and hardware domains.

The first contribution establishes the Dynamic Hierarchical Bloom Filter (DHBF) as a noise-tolerant and dynamically updatable authentication structure for large-scale biometrics. Unlike static Bloom-based systems that require reconstruction upon updates, DHBF supports enrollment, querying, insertion, and deletion without a structural rebuild. Experimental evaluation on 30,000 facial biometric templates demonstrates 100% enrollment and query accuracy, including robust acceptance of noisy biometric inputs while maintaining correct rejection of non-enrolled identities. These results validate that hierarchical probabilistic encoding can preserve both scalability and authentication reliability in practical deployments.

Building on this foundation, Bio-BloomChain integrates DHBF into a blockchain-based smart contract framework to provide tamper-evident, privacy-preserving biometric lifecycle management. The system stores only hashed, non-invertible commitments on-chain while maintaining probabilistic verification logic within the contract layer. Large-scale evaluation again reports 100% enrollment, insertion, query, and deletion accuracy across 30,000 templates, thereby solving the longstanding problem of authenticating noisy data on a blockchain.
Moreover, deployment analysis shows that execution on Polygon zkEVM reduces operational costs by several orders of magnitude compared to Ethereum, bringing enrollment and deletion costs below $0.001 per operation and demonstrating the feasibility of scalable blockchain biometric authentication in practice. Finally, the hierarchical probabilistic paradigm is extended to electronic hardware authentication through the Persistent Hierarchical Bloom Filter (PHBF). Applied to electronic fingerprints derived from physical unclonable functions (PUFs), PHBF demonstrates robust authentication under environmental variations such as temperature-induced noise. Experimental results show zero-error operation at the selected decision threshold and substantial system-level improvements, including over 10^5× faster query processing and significantly reduced storage requirements compared to large-scale tracking.
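The hierarchical designs above build on the classic Bloom-filter primitive. As a rough illustration of that primitive only (not the DHBF/PHBF themselves, whose hierarchy, noise tolerance, and deletion support go well beyond this sketch), a minimal Bloom filter looks like:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash probes into a bit array of size m."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _probes(self, item):
        # Derive k independent probe positions by salting the hash with i.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def insert(self, item):
        for p in self._probes(item):
            self.bits[p] = 1

    def query(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[p] for p in self._probes(item))

bf = BloomFilter()
bf.insert("template-0001")
print(bf.query("template-0001"))  # → True
print(bf.query("template-9999"))  # almost certainly False
```

The trade-off this structure buys, constant-size storage and fast membership tests at the cost of a tunable false-positive rate, is what the hierarchical variants exploit at scale.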


Fatima Al-Shaikhli

Optical Measurements Leveraging Coherent Fiber Optics Transceivers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad
Alessandro Salandrino
Judy Wu

Abstract

Recent advancements in optical technology are invaluable in a variety of fields, extending far beyond high-speed communications. These innovations enable optical sensing, which plays a critical role across diverse applications, from medical diagnostics to infrastructure monitoring and automotive systems. This research focuses on leveraging commercially available coherent optical transceivers to develop novel measurement techniques that extract detailed information about optical fiber characteristics as well as target information. Through this approach, we aim to enable accurate and fast assessments of fiber performance and integrity, while exploring the potential for utilizing existing optical communication networks to enhance fiber characterization capabilities. This goal is investigated through three distinct projects: (1) fiber type characterization based on the intensity-modulated electrostriction response, (2) a coherent Light Detection and Ranging (LiDAR) system for target range and velocity detection through different waveform designs, including experimental validation of frequency-modulated continuous-wave (FMCW) implementations and theoretical analysis of orthogonal frequency division multiplexing (OFDM) based approaches, and (3) birefringence measurements using a coherent Polarization-sensitive Optical Frequency Domain Reflectometer (P-OFDR) system.

Electrostriction in an optical fiber is introduced by the interaction between the forward-propagating optical signal and acoustic standing waves resonating in the radial direction between the center of the core and the cladding circumference of the fiber. The electrostriction response depends on fiber parameters, especially the mode field radius. We demonstrate a novel technique for identifying fiber types through measurement of the intensity-modulation-induced electrostriction response. Because the spectral envelope of the electrostriction-induced propagation loss is anti-symmetric, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. We show that if the field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape is specified by a single parameter determined by the mode field radius.

We also present a self-homodyne FMCW LiDAR system based on a coherent receiver. By using the same linearly chirped waveform for both the LiDAR signal and the local oscillator, the self-homodyne coherent receiver performs frequency de-chirping directly in the photodiodes, significantly simplifying signal processing. As a result, the required receiver bandwidth is much lower than the chirping bandwidth of the signal. Simultaneous range and velocity detection of multiple targets is demonstrated experimentally. Furthermore, we explore the use of commercially available coherent transceivers for joint communication and sensing using OFDM waveforms.
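The de-chirping arithmetic behind any FMCW receiver can be illustrated with a short sketch; the chirp parameters and target range below are purely illustrative, not those of the actual system:

```python
# FMCW de-chirp arithmetic: a target at range R delays the echo by tau = 2R/c,
# so mixing with the local chirp yields a beat frequency f_b = S * tau,
# where S = B/T is the chirp slope.
C = 3.0e8  # speed of light, m/s

def beat_frequency(R, B, T):
    """Beat frequency (Hz) for range R (m), chirp bandwidth B (Hz), duration T (s)."""
    slope = B / T
    return slope * 2 * R / C

def range_from_beat(f_b, B, T):
    """Invert the de-chirp relation to recover range from the beat frequency."""
    slope = B / T
    return f_b * C / (2 * slope)

f_b = beat_frequency(R=150.0, B=1e9, T=1e-3)  # 1 GHz chirp over 1 ms
print(f_b)                                    # → 1000000.0 (1 MHz beat for 150 m)
print(range_from_beat(f_b, B=1e9, T=1e-3))    # → 150.0
```

Note that the 1 MHz beat is far below the 1 GHz chirp bandwidth, which is exactly why the de-chirped receiver can use a much narrower bandwidth than the transmitted signal.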

In addition, we demonstrate a P-OFDR system utilizing a digital coherent optical transceiver to generate a linear frequency chirp via carrier-suppressed single-sideband modulation. This method ensures linearity in chirping and phase continuity of the optical carrier. The coherent homodyne receiver, incorporating both polarization and phase diversity, recovers the state of polarization (SOP) of the backscattered optical signal along the fiber by mixing it with an identically chirped local oscillator. With a spatial resolution of approximately 5 mm, a 26 GHz chirping bandwidth, and a 200 μs measurement time, this system enables precise birefringence measurements. By employing three mutually orthogonal SOPs of the launched optical signal, we measure relative birefringence vectors along the fiber.


Past Defense Notices


Zeus Gannon

Designing a SODAR Testbed for Radar Applications

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Christopher Allen, Chair
Shannon Blunt
James Stiles


Abstract

In research there is a constant need to test and develop systems. Testing a radar system requires costly resources in terms of equipment and spectrum. These challenges relegate most testing to simulations, which are a poor approximation of reality. An alternative to over-the-air radar testing is presented here in the form of an ultrasonic detection and ranging (SODAR) system. This system takes advantage of the similar wave-like propagation properties of acoustic and electromagnetic waves. With a SODAR testbed, radar waveform design can quickly move out of simulation and into the real world with minimal overhead. In this thesis, basic and advanced radar sensing techniques are demonstrated with a SODAR setup. Range detection, Doppler sensing, and pulse compression are shown as examples of basic radar concepts. For advanced sensing applications, array-based direction finding and synthetic aperture radar (SAR) imaging are shown.
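As a rough illustration of one of the basic concepts mentioned, pulse compression, a matched filter applied to a Barker-13 coded echo can be sketched as follows; the actual SODAR waveforms and processing may differ:

```python
# Pulse compression via matched filtering (a toy sketch with a Barker-13 code;
# the real testbed would transmit and sample acoustic waveforms).
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def matched_filter(rx, code):
    """Cross-correlate the received samples with the transmitted code."""
    n, m = len(rx), len(code)
    return [sum(rx[i + j] * code[j] for j in range(m)) for i in range(n - m + 1)]

# Simulate an echo delayed by 20 samples in an otherwise silent channel.
delay = 20
rx = [0.0] * delay + [float(c) for c in BARKER13] + [0.0] * 20
out = matched_filter(rx, BARKER13)
peak = max(range(len(out)), key=lambda i: out[i])
print(peak, out[peak])  # → 20 13.0 (peak at the true delay, height = code length)
```

The compressed peak localizes the echo to one sample while the Barker code keeps sidelobes at magnitude 1, which is the range-resolution benefit pulse compression provides.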


Usman Sajid

Effective Uni-modal to Multi-modal Crowd Estimation based on Deep Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Taejoon Kim, Chair
Fengjun Li
Bo Luo
Cuncong Zhong
Guanghui Wang

Abstract

Crowd estimation is a vital component of crowd analysis. It finds many applications in real-world scenarios, e.g., the management of huge gatherings such as Hajj, sporting and musical events, or political rallies. Automated crowd counting facilitates better and more effective management of such events and consequently helps prevent undesired situations. This is a very challenging problem in practice, since crowd counts differ significantly within and across images, and images exhibit varying resolution, large perspective distortion, severe occlusions, and cluttered background regions that resemble dense crowds. Current approaches do not handle this crowd diversity well and thus perform poorly in cases ranging from extremely low to high crowd density, yielding large under- or overestimates. Manual crowd counting, meanwhile, proves infeasible because it is slow and inaccurate. To address these major crowd counting issues and challenges, we investigate two different types of input data: uni-modal (image) and multi-modal (image and audio).

In the uni-modal setting, we propose and analyze four novel end-to-end crowd counting networks, ranging from multi-scale fusion-based models to uni-scale one-pass and two-pass multi-task networks. The multi-scale networks employ an attention mechanism to enhance model efficacy. On the other hand, the uni-scale models are equipped with a novel and simple-yet-effective patch re-scaling module (PRM) that functions identically to, but is more lightweight than, the multi-scale approaches. Experimental evaluation demonstrates that the proposed networks outperform the state-of-the-art in the majority of cases on four different benchmark datasets, with up to 12.6% improvement in the RMSE evaluation metric. Better cross-dataset performance also validates the stronger generalization ability of our schemes. For multi-modal input, effective feature extraction (FE) and strong information fusion between the two modalities remain a big challenge. Thus, the novel multi-modal network design focuses on investigating different feature-fusion techniques while improving the FE. Based on comprehensive experimental evaluation, the proposed multi-modal network improves performance under all standard evaluation criteria, with up to 33.8% improvement over the state-of-the-art. The multi-scale uni-modal attention networks also prove effective in other deep learning domains, as demonstrated on seven different scene-text recognition datasets with better performance.
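The patch re-scaling idea can be caricatured with a toy decision rule; the thresholds and factors below are entirely hypothetical, and the actual PRM is a learned module inside the network rather than a hand-written rule:

```python
# Toy sketch of the patch re-scaling intuition (hypothetical thresholds/factors):
# sparse patches are shrunk so the counter sees broader context, dense patches
# are enlarged so individual heads become resolvable.
def rescale_factor(density_estimate, low=0.1, high=0.6):
    """Pick an up/down-scaling factor from a coarse patch-density estimate."""
    if density_estimate < low:
        return 0.5   # sparse patch: down-scale
    if density_estimate > high:
        return 2.0   # dense patch: up-scale
    return 1.0       # moderate patch: keep as-is

print([rescale_factor(d) for d in (0.05, 0.3, 0.9)])  # → [0.5, 1.0, 2.0]
```

The point of the sketch is only that per-patch re-scaling adapts resolution to local density with a single shared counting backbone, which is why it can stay lighter than full multi-scale fusion.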


Giordanno Castro Garcia

pyCatalstReader: Extracting Text and Tokenization of Technical

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Michael Branicky, Chair
Fengjun Li
Bo Luo
Kevin Leonard

Abstract

Catalysts are an essential and ubiquitous component of modern life, from empowering our agriculture to reducing toxic emissions, and there is a constant need for more and better catalysts. The catalysis research literature is immense, growing, and scattered. Natural Language Processing (NLP), a sub-field of Machine Learning (ML), offers a potential way to automatically make full use of all this valuable information and speed innovation. Even though NLP has made much progress in the analysis of everyday text, its application to more technical text has not been as successful. Specifically, there is even a dearth of tools that can appropriately extract text from the PDF files of research articles, which are the most common format used in the catalysis field. Therefore, this project aims to build a tool that can extract text out of PDF files of catalysis science articles, which is a prerequisite to applying NLP and ML tools. We also explore the first stage of the NLP pipeline, tokenization, by objectively comparing different tokenizers on catalysis science articles.
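As a toy example of why tokenizer choice matters for technical text, compare a plain whitespace split with a pattern that keeps chemical formulas intact; the sentence and pattern are illustrative only, not the project's actual tokenizers:

```python
import re

# Technical text breaks naive tokenizers: "Pt/Al2O3" should survive as one
# token, while trailing punctuation should split off.
sentence = "Pt/Al2O3 catalysts convert CO at 250 degrees C."

def whitespace_tokenize(text):
    return text.split()

def technical_tokenize(text):
    # Keep runs of letters/digits/slashes (formulas) together; emit other
    # non-space characters (punctuation) as separate tokens.
    return re.findall(r"[A-Za-z0-9/]+|[^\sA-Za-z0-9]", text)

print(whitespace_tokenize(sentence))  # "C." stays glued to its period
print(technical_tokenize(sentence))   # formula intact, "." split off
```

An objective comparison like the one proposed would score such tokenizers against annotated catalysis text rather than eyeballing single sentences.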


Sai Manudeep Gadde

Landmark Classification and Tagging using Convolutional Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
Michael Branicky
Esam Al-Araby


Abstract

Photo sharing and photo storage services like to have location data for each photo that is uploaded. With the location data, these services can build advanced features, such as automatic suggestion of relevant tags or automatic photo organization, which help provide a compelling user experience. Although a photo's location can often be obtained by looking at the photo's metadata, many photos uploaded to these services will not have location metadata available. This can happen when, for example, the camera capturing the picture does not have GPS or if a photo's metadata is scrubbed due to privacy concerns.

If no location metadata for an image is available, one way to infer the location is to detect and classify a discernible landmark in the image. Given the large number of landmarks across the world and the immense volume of images uploaded to photo sharing services, using human judgment to classify these landmarks would not be feasible. In this project, we aim to address this problem by building models to automatically predict the location of an image based on any landmarks it depicts. We will go through the machine learning design process end-to-end: performing data preprocessing, designing and training CNNs, comparing the accuracy of different CNNs, and using some of our own images to heuristically evaluate the best CNN.


Dalton Brucker-Hahn

Anvil: Flexible and Dynamic Service Mesh Security Design for Microservice Architectures and Future Network Security Research

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Alexandru Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Huazhen Fang

Abstract

Modern cloud computing environments are evolving with a focus on speed of deployment, frequency of change, and greater adoption of microservice architectures.  To meet these high-level business goals, an emerging set of tools and methodologies referred to as DevOps has been adopted to handle the dynamic and flexible environments employed in enterprise software.  A popular class of tools within the DevOps toolset is the service mesh, which manages and connects swarms of microservices.  Service meshes are also responsible for providing service discovery and security for the requests and responses passing between microservices in a deployment.

Previous work has demonstrated several shortcomings and design limitations in existing, state-of-the-art service meshes.  Studies focusing on improving security and providing dynamic solutions to these challenges have been proposed but fall short of addressing the issue.  This work proposes a novel design to better address the existing challenges and security needs within this domain.  Anvil, a novel proof-of-concept service mesh, will be designed, implemented, and evaluated with the trade-off between security and performance in mind.  The goal of Anvil is to provide a security-focused service mesh that can be extended and modified as needed for future research efforts involving service meshes and service mesh design.  With flexibility and extensibility as primary design considerations, future research efforts within the domain of zero-trust networking and distributed system security will be explored and evaluated using Anvil as the underlying service mesh infrastructure.  The potential design and security benefits to the domain of microservice architectures from utilizing Anvil as a testbed and platform for security research are immense.


Laurynas Lialys

Near-Infrared Coherent Raman Spectroscopy and Microscopy

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Shima Fardad, Chair
Rongqing Hui
Alessandro Salandrino


Abstract

Coherent Raman Scattering (CRS) spectroscopy and microscopy are widely used techniques in biology, chemistry, and physics to determine chemical structure and provide label-free images of a sample. The system uses two coherent laser beams, one of which is continuously tuned in wavelength; a tunable laser source or optical parametric oscillator (OPO) is commonly used to meet this requirement. However, these devices are extremely expensive and work only over a specific wavelength range. In this study, we replace the OPO with a photonic crystal fiber (PCF) in order to significantly reduce the cost and increase the flexibility of our microscopy system. By exploiting a nonlinear phenomenon in the fiber called the soliton self-frequency shift (SSFS), we are able to shift the pulse's central frequency while preserving its shape. Also, by switching to a near-infrared (NIR) source, undesired fluorescence is reduced while the penetration depth increases. Moreover, an NIR laser source is more biologically friendly, as each photon carries less energy than in the visible counterpart, reducing the probability of photodamage. Based on this system, we designed and implemented CRS microscopy and spectroscopy using the Coherent anti-Stokes Raman Scattering (CARS) and Stimulated Raman Scattering (SRS) spectroscopy techniques.


Lazarus Sandhagala Francis

Sentiment Analysis for Detecting Depression through Social Media Posts

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Michael Branicky


Abstract

Depression is a common and serious medical condition that negatively affects how one thinks, feels, and acts. Emotional symptoms of depression include loss of interest and/or sad mood. Lack of hope, a sense of guilt or worthlessness, and recurring thoughts of death or suicide are also reported in some cases. Since the recent pandemic, depression rates have increased dramatically. Although depression is a major burden for the healthcare system worldwide, it is treatable. Only 47.3% of mental health cases are detected accurately by professionals, as the Patient Health Questionnaire used as a screening tool depends heavily on what the patient can remember from the past few weeks. Considering the challenges healthcare professionals face, we can supply helpful resources to users who show depressive symptoms in their social media posts. Social media platforms have altered our world: most people are now more connected than ever and present a digital persona, and we can use this user-generated content to help them. Sentiment analysis, also called opinion mining, is the process of detecting the emotional tone behind a piece of text. It is widely used to analyze news articles, user-generated content, and the text of research papers. This project aims to create a dataset by scraping tweets and detecting probably depressed Twitter users based on their tweets using Natural Language Processing techniques. Social media platforms like Twitter already have A.I. systems that flag tweets containing misinformation, misleading content, or violations of the site's terms and conditions. Similarly, a depression detection system could supply users who are probably exhibiting depressive emotions with helpful articles, images, or videos.
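A lexicon-based scorer illustrates the simplest form of sentiment analysis; the word lists below are hypothetical toy lexicons, and the project itself applies richer NLP techniques to real tweet data:

```python
# Toy lexicon-based polarity scorer (hypothetical word lists; real systems use
# trained models and much larger, validated lexicons).
NEGATIVE = {"hopeless", "worthless", "sad", "guilt", "alone", "tired"}
POSITIVE = {"happy", "hope", "joy", "excited", "grateful"}

def polarity(text):
    """Return (#negative - #positive) word matches; > 0 flags a negative tone."""
    words = text.lower().split()
    return sum(w in NEGATIVE for w in words) - sum(w in POSITIVE for w in words)

print(polarity("feeling hopeless and worthless again"))  # → 2
print(polarity("so happy and grateful today"))           # → -2
```

Such a keyword score is far too crude for clinical use, which is precisely why the project turns to NLP models trained on labeled tweet datasets.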


Ashwin Rathore

Wireless Communications for Unmanned Vehicles in the Sky and on the Ground

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Morteza Hashemi, Chair
David Johnson
Prasad Kulkarni


Abstract

Given the ever-increasing use of unmanned aerial vehicles (UAVs), there is great potential as well as strict requirements for their safe operation in beyond-visual-line-of-sight (BVLOS) environments. Commercial package delivery, emergency services, tracking, and inspection are just some of those applications. To support these applications under BVLOS scenarios, a reliable command and control (C2) communication channel with an extended range is needed. To investigate the performance of different communication technologies, we use an open-source simulator that integrates the flight simulator ArduPilot with the network simulator NS-3. We implement several flight missions and compare the performance of a 4G cellular network with Wi-Fi for establishing the connection between the UAV and the ground control station (GCS). Our simulation results demonstrate the benefits of using 4G to satisfy the C2 requirements. Our simulated flight mission consists of multiple UAVs on the same network and also introduces external interference to observe network performance in terms of average delay, communication range, and received signal strength. In the second part of this project, we explore wireless connectivity between unmanned (autonomous) vehicles on the ground. To this end, we use Amazon's DeepRacer autonomous car, which is primarily used for developing and testing machine learning algorithms for multi-vehicle racing, track completion, and obstacle avoidance. We leverage DeepRacer cars to establish peer-to-peer wireless connections between multiple vehicles operating in the same environment. This will enable autonomous vehicles to share crucial information such as position, velocity, obstacles, and accidents along the way to enhance road safety.


Gordon Ariho

Multipass SAR Processing for Ice Sheet Vertical Velocity and Tomography Measurements

When & Where:


Nichols Hall, Room 317

Committee Members:

James Stiles, Chair
John Paden
Christopher Allen
Shannon Blunt
Carl Leuschen

Abstract

Ice dynamics are a major factor in ice sheet mass balance and play a huge role in sea-level rise and future sea-level rise projections. Ice velocity measures the direction and rate at which ice is redistributed from the accumulation to the ablation regions of glaciers and ice sheets. We propose to apply multipass differential interferometric synthetic aperture radar (DInSAR) techniques to data from the Multichannel Coherent Radar Depth Sounder (MCoRDS) to measure the vertical displacement of englacial layers within an ice sheet. DInSAR's accuracy in monitoring ground displacement along the radar line of sight (LOS) is usually a small fraction of the wavelength (millimeter to centimeter precision is common). Unlike ground-based Autonomous phase-sensitive Radio-Echo Sounder (ApRES) units, which can be precisely positioned and used to produce vertical velocity fields, airborne systems suffer from unknown baseline errors. In the case of ice sheet internal layers, vertical displacement is estimated by compensating for the spatial baseline using precise trajectory information and estimates of the cross-track layer slope from direction-of-arrival analysis. The current DInSAR algorithm is applied to radar depth sounder data to produce results for Summit Camp in central Greenland and a high-accumulation region near Camp Century in northwest Greenland using the CReSIS toolbox. This approach has a drawback: the baseline error due to GPS is estimated after direction-of-arrival (DOA) estimation, yet DOA estimation itself depends on an accurate baseline. We propose to extend this work by implementing a maximum likelihood estimator that jointly estimates the vertical velocity, the cross-track internal layer slope, and the unknown baseline error due to GPS and INS (Inertial Navigation System) errors.
The multipass algorithm will be applied to additional flights from the decade-long NASA Operation IceBridge airborne mission, which flew MCoRDS on many repeated flight tracks. We also propose to improve the accuracy of tomographic swaths produced from multipass measurements and investigate the possibility of using focusing matrices to improve wideband tomographic processing.
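The phase-to-displacement relation underlying DInSAR can be sketched directly; the 195 MHz center frequency below is MCoRDS-like but used here purely for illustration:

```python
import math

# Standard two-way interferometric relation: a phase change delta_phi between
# passes corresponds to a line-of-sight displacement of lambda * delta_phi / (4*pi).
C = 3.0e8  # speed of light, m/s

def los_displacement(delta_phi, f_c=195e6):
    """Phase change delta_phi (rad) → LOS displacement (m) at carrier f_c (Hz)."""
    wavelength = C / f_c
    return wavelength * delta_phi / (4 * math.pi)

# A full 2*pi phase cycle corresponds to half a wavelength of displacement,
# ~0.77 m at 195 MHz; sub-degree phase precision is what yields the
# millimeter-to-centimeter sensitivity quoted above.
print(los_displacement(2 * math.pi))
```

This relation also makes the baseline problem concrete: any uncompensated baseline error enters the interferometric phase directly and is indistinguishable from layer displacement, motivating the proposed joint estimator.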


Madhu Peduri

Training a Smart-cab Agent Using Reinforcement Q-Learning

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Bo Luo


Abstract

Reinforcement learning is a method for mapping situations to actions so as to maximize a numerical reward signal. Unlike most forms of machine learning, where the model is told which actions to take, in reinforcement learning the model must discover which actions yield the most reward by trying them. These actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards. This type of learning differs from supervised learning, where domain knowledge comes from an external supervisor. Supervised learning is an important kind of learning, but alone it is not adequate for learning from interaction: in interactive problems it is often impractical to obtain examples of desired behavior that are both correct and representative of all the situations in which the agent must act, so the agent must be able to learn from its own experience. As part of this project, we train a smart-cab agent that navigates through its environment toward a goal. Our reinforcement learning model has the following elements. Agent: a car that interacts with the environment; its goal is to reach the destination with the maximum value. Environment: a grid-like structure with pathways representing roads, along which cars (5, excluding the agent) move stochastically. Policy: a set of actions and constraints within which states and actions are mapped; the agent must perform the action that yields the maximum Q-value. We use the Pygame tool to build the environment and visualize the agent's interaction with it, and Q-learning to find the optimal policy that determines the best action to take while keeping all constraints under purview.
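The Q-learning update at the heart of such an agent can be sketched on a toy one-dimensional "corridor" (a stand-in for the actual grid environment; all states, rewards, and hyperparameters here are illustrative):

```python
import random

# Minimal tabular Q-learning: states 0..4 on a line, goal at state 4,
# actions move left (-1) or right (+1); -0.1 per step, +1 for reaching the goal.
random.seed(0)
N, GOAL = 5, 4
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.1
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        best_next = max(Q[(s2, x)] for x in ACTIONS) if s2 != GOAL else 0.0
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should point right (toward the goal) everywhere.
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

The smart-cab version is the same update over a richer state (position, traffic, constraints) and action set, with Pygame providing the environment rather than this hand-coded corridor.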