Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) has been proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models, where the path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques, which assume a small maximum expected channel delay, since data transmission is paused to sound the channel for a duration equal to twice the maximum channel delay. The resulting dropout in data reduces spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large-delay-spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. The second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression (GPR), to take advantage of this sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
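As a rough illustration of the second research thrust, the sketch below fits a GPR surrogate to noisy pilot observations on a delay-Doppler grid using scikit-learn. The kernel, grid sizes, tap parameters, and pilot placement are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of GPR-based channel interpolation in the delay-Doppler
# domain, assuming noisy estimates at a few pilot grid points. Kernel choice
# and pilot design here are illustrative, not from the thesis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical sparse delay-Doppler channel: a few dominant taps,
# with fractional (non-integer) delay and Doppler bins.
delays = np.array([2.0, 7.5])
dopplers = np.array([0.3, -1.2])
gains = np.array([1.0, 0.4])

def channel_gain(d, v):
    """Magnitude response at (delay d, Doppler v) from the sparse taps."""
    return sum(g * np.exp(-((d - dt) ** 2 + (v - vt) ** 2))
               for g, dt, vt in zip(gains, delays, dopplers))

# Noisy pilot observations on a coarse grid.
D, V = np.meshgrid(np.linspace(0, 10, 8), np.linspace(-2, 2, 8))
X_pilot = np.column_stack([D.ravel(), V.ravel()])
y_pilot = np.array([channel_gain(d, v) for d, v in X_pilot])
y_pilot += 0.02 * rng.standard_normal(y_pilot.shape)

# GPR learns a smooth surrogate of the quasi-stationary channel surface,
# interpolating between integer bins to capture fractional Doppler.
gpr = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True).fit(X_pilot, y_pilot)

X_fine = np.column_stack([np.full(50, 7.5), np.linspace(-2, 2, 50)])
est, std = gpr.predict(X_fine, return_std=True)  # estimate plus uncertainty
```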


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while the LSTM-based models offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, balancing accuracy requirements against operational constraints.
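As one concrete illustration of the threshold adjustment strategy, the sketch below tunes a separate decision threshold per label to maximize validation F1, a common remedy for minority-class detection in multi-label settings. The helper name and data shapes are assumptions; the abstract does not specify the exact procedure used.

```python
# Illustrative per-label threshold tuning for multi-label classification.
# Choosing one threshold per label, rather than a global 0.5, typically
# improves recall on rare (minority) toxicity classes.
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(y_true, y_prob, grid=np.linspace(0.05, 0.95, 19)):
    """y_true: (n, labels) binary matrix; y_prob: predicted probabilities,
    same shape. Returns the F1-maximizing threshold per label."""
    n_labels = y_true.shape[1]
    best = np.full(n_labels, 0.5)
    for j in range(n_labels):
        scores = [f1_score(y_true[:, j], y_prob[:, j] >= t, zero_division=0)
                  for t in grid]
        best[j] = grid[int(np.argmax(scores))]
    return best

# Usage (hypothetical validation/test splits):
# thresholds = tune_thresholds(y_val, p_val)
# y_pred = (p_test >= thresholds).astype(int)
```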


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations, including low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise-mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
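For orientation, the sketch below shows the classical finite-difference discretization of a 2D Poisson equation, the FDM starting point shared by both variants. The quantum C2Q encoding and decomposition steps themselves are not reproduced here, and the grid size and source term are illustrative.

```python
# Classical FDM discretization of the 2D Poisson equation
# (laplacian(u) = f with zero Dirichlet boundaries), solved as a
# reference on a small grid. Grid size and f are illustrative only.
import numpy as np

n = 16                                     # interior points per dimension
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x)
f = np.sin(np.pi * X) * np.sin(np.pi * Y)  # example source term

# 1D second-difference matrix, then the 2D Laplacian via Kronecker sums.
T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
I = np.eye(n)
A = np.kron(I, T) + np.kron(T, I)          # (n^2 x n^2) Laplacian

u = np.linalg.solve(A, f.ravel())          # classical reference solution

# Analytic solution is -sin(pi x) sin(pi y) / (2 pi^2); check the error:
u_exact = -f / (2 * np.pi**2)
print(np.max(np.abs(u.reshape(n, n) - u_exact)))  # small discretization error
```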


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.
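As a purely hypothetical flavor of what a Python-based declarative design-space specification might look like (DSDL's actual syntax is not shown in this notice, and none of the names below come from the tool), consider:

```python
# Hypothetical design-space declaration and constrained enumeration;
# parameter names and the constraint are invented for illustration.
from itertools import product

design_space = {
    "l2_size_kb":  [256, 512, 1024],
    "l2_assoc":    [4, 8, 16],
    "rob_entries": [96, 128, 192],
}
constraints = [lambda p: p["l2_size_kb"] // p["l2_assoc"] >= 32]  # example rule

def candidates(space, constraints):
    """Enumerate parameter points satisfying all constraints; a DSE loop
    (gem5 simulation, McPAT power/area modeling, LLM-guided ranking)
    would consume these points."""
    keys = list(space)
    for values in product(*space.values()):
        point = dict(zip(keys, values))
        if all(c(point) for c in constraints):
            yield point

for point in candidates(design_space, constraints):
    print(point)
```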

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices

Ashish Adhikari

Towards Assessing the Security of Program Binaries

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Fengjun Li
Sumaiya Shomaji


Abstract

Software vulnerabilities, stemming from coding weaknesses and poor development practices, have become increasingly prevalent. These vulnerabilities could be exploited by attackers to pose risks to the confidentiality, integrity, and availability of software. To protect themselves, end-users of software may have an interest in knowing if the software they buy and use is secure from such attacks. Our work is motivated by this need to automatically assess and rate the security properties of binary software.

To increase user trust in third-party software, researchers have devised several techniques and tools to identify and mitigate coding weaknesses in binary software. Therefore, our first task in this work is to assess the current landscape and comprehend the capabilities and challenges faced by binary-level techniques aimed at detecting critical coding weaknesses in software binaries. We categorize the most important coding weaknesses in compiled programming languages, and conduct a comprehensive survey, exploration, and comparison of static techniques designed to locate these weaknesses in software binaries. Furthermore, we perform independent assessments of the efficacy of open-source tools using standard benchmarks.

Next, we develop techniques to assess whether secure coding principles were adopted during the generation of the software binary. Towards this goal, we first develop techniques to determine the high-level source language used to produce the binary. Then, we check the feasibility of detecting the use of secure coding best practices during code development. Finally, we check the feasibility of detecting the vulnerable regions of code in any binary executable. Our ultimate goal is to employ all of our developed techniques to rate the security quality of a given binary software.


Hunter Glass

MeshMapper: Creating a Bluetooth Mesh Communication Network

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li


Abstract

With threat actors ever evolving, the need for secure communications continues to grow. By using non-traditional means to build a communication network, it is possible to communicate securely within a region using the Bluetooth mesh protocol. The goal is to automatically place mesh devices in a defined region so as to ensure the integrity and reliability of the network while placing the fewest devices possible. Once a provisioner node is placed, the rest of the specified region is populated with mesh nodes that act as relays, creating a network within which users can communicate. Dijkstra's algorithm is used to calculate the Time to Live (TTL) between two given nodes in the network, an important metric because it directly affects how far apart two users can be within the region. When placing the nodes, the radio range of the devices is specified and accounted for, which determines the number of nodes needed within the region. Results show that when nodes are placed at the coordinate points given by the generated map, users are able to communicate effectively across the specified region. In this project, a web interface allows a user to specify the TTL, range, and number of nodes to use, and then places each device within the region drawn by the user.
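For illustration, a minimal sketch of this use of Dijkstra's algorithm follows, with unit edge weights so that the shortest-path distance is the hop count a TTL must cover; the node positions and radio range are placeholders.

```python
# Hop-count computation between mesh nodes via Dijkstra's algorithm.
# Edges connect nodes within radio range; each relay hop costs 1,
# so the shortest path length bounds the TTL needed.
import heapq, math

def build_graph(nodes, radio_range):
    """nodes: {name: (x, y)} coordinate map from the placement step."""
    g = {u: [] for u in nodes}
    for u, pu in nodes.items():
        for v, pv in nodes.items():
            if u != v and math.dist(pu, pv) <= radio_range:
                g[u].append(v)
    return g

def min_hops(graph, src, dst):
    """Dijkstra with unit weights: fewest relay hops from src to dst."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v in graph[u]:
            if d + 1 < dist.get(v, math.inf):
                dist[v] = d + 1
                heapq.heappush(heap, (d + 1, v))
    return math.inf

nodes = {"provisioner": (0, 0), "relay1": (40, 0),
         "relay2": (80, 10), "user": (120, 10)}      # placeholder layout
graph = build_graph(nodes, radio_range=50)
print(min_hops(graph, "provisioner", "user"))        # TTL must cover this
```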


Abdul Baseer Mohammed

Enhancing Parameter-Efficient Fine-Tuning of Large Language Models with Alignment Adapters and LoRA

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Large Language Models (LLMs) have become integral to natural language processing, involving initial broad pretraining on generic data followed by fine-tuning for specific tasks or domains. While advancements in Parameter-Efficient Fine-Tuning (PEFT) techniques have made strides in reducing resource demands for LLM fine-tuning, each technique has its own constraints. This project addresses the challenges posed by PEFT in the context of the transformer architecture for sequence-to-sequence tasks by integrating two pivotal techniques: Low-Rank Adaptation (LoRA) for computational efficiency and adaptive layers for task-specific customization. To overcome the limitations of LoRA, we introduce a simple yet effective hyper alignment adapter that leverages a hypernetwork to generate decoder inputs based on encoder outputs, thereby serving as a crucial bridge that improves alignment between the encoder and the decoder. This fusion strikes a balance between fine-tuning complexity and task performance, mitigating the individual drawbacks while improving encoder-decoder alignment. As a result, we achieve more precise and contextually relevant sequence generation. The proposed solution improves the overall efficiency and effectiveness of LLMs in sequence-to-sequence tasks, leading to better alignment and more accurate output generation.
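For context, the sketch below shows the standard LoRA mechanism the project builds on: a frozen pretrained linear layer plus a trainable low-rank update. The hyper alignment adapter itself is the project's contribution and is not reproduced here; shapes and rank are illustrative.

```python
# Minimal LoRA layer: the pretrained weight stays frozen and only the
# low-rank factors A and B receive gradients during fine-tuning.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():       # freeze pretrained parameters
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 10, 768))           # (batch, seq, features)
```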


Laurynas Lialys

Engineering Laser Beams for Particle Trapping, Lattice Formation and Microscopy

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Shima Fardad, Chair
Morteza Hashemi
Rongqing Hui
Alessandro Salandrino
Xinmai Yang

Abstract

Having control over the position of nano- and micro-sized objects inside a suspension is crucial in many applications, such as trapping and manipulating microscopic objects, sorting particles and living microorganisms, and building microscopic 3D crystal structures and lattices. This control can be achieved by judiciously engineering optical forces and light-matter interactions inside colloidal suspensions that result in optical trapping. However, in current techniques, confining and transporting particles in 3D requires high-NA (Numerical Aperture) optics. This in turn leads to several disadvantages, such as alignment complications, a narrow field of view, low stability values, and undesirable thermal effects. Hence, here we study a novel optical trapping method that we have named asymmetric counter-propagating beams, in which optical forces are engineered to overcome the aforementioned limitations of existing methods. This novel system is significantly easier to align due to its use of much lower-NA optics in combination with engineered beams, which together create a very flexible manipulation system. This new approach allows the trapping and manipulation of objects of different shapes, ranging in size from tens of nanometers to hundreds of micrometers, by exploiting asymmetrical optical fields with high stability. In addition, this technique allows for significantly larger particle-trapping volumes. As a result, we can apply this method to trapping much larger particles and microorganisms that have never been trapped optically before, as well as to building 3D lattices and crystal structures of microscopic-sized particles. Finally, this novel approach allows for the integration of a variety of spectroscopy and microscopy techniques, such as light-sheet fluorescence microscopy, to extract time-sensitive information and acquire images with detailed features from trapped entities.


Elise McEllhiney

Self-Training Autonomous Driving System Using An Advantage-Actor-Critic Model

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Victor Frost, Chair
Prasad Kulkarni
Bo Luo


Abstract

We describe an autonomous driving system that uses reinforcement learning to train a car to drive without the need to collect training input from human drivers. We achieve this with an Advantage Actor-Critic (A2C) system that continuously adapts the model to minimize the penalty received by the car. This penalty is incurred when the car crosses the borders of the track on which it is driving. We show the resilience of the proposed autonomously trained system to noisy sensor inputs and variations in the shape of the track.
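For illustration, a compact sketch of a single A2C update step follows, where the track-boundary penalty would enter as negative reward; network sizes and the environment interface are assumptions, not the system's actual implementation.

```python
# Minimal A2C update: the critic estimates state value, the advantage
# (return minus value estimate) weights the policy-gradient step.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, n_obs, n_actions):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(n_obs, 64), nn.Tanh())
        self.policy = nn.Linear(64, n_actions)   # actor head: action logits
        self.value = nn.Linear(64, 1)             # critic head: state value

    def forward(self, obs):
        h = self.shared(obs)
        dist = torch.distributions.Categorical(logits=self.policy(h))
        return dist, self.value(h).squeeze(-1)

def a2c_loss(model, obs, action, reward, next_obs, gamma=0.99):
    """obs/next_obs: (batch, n_obs); action/reward: (batch,).
    A penalty, e.g. for crossing the track border, is a negative reward."""
    dist, v = model(obs)
    with torch.no_grad():
        _, v_next = model(next_obs)
        target = reward + gamma * v_next          # one-step bootstrap target
    advantage = target - v.detach()               # better/worse than expected
    actor_loss = -(dist.log_prob(action) * advantage)
    critic_loss = (target - v).pow(2)
    return (actor_loss + 0.5 * critic_loss).mean()
```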


Shravan Kaundinya

Design, development, and calibration of a high-power UHF radar with a large multichannel antenna array

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Chris Allen
John Paden
James Stiles
Richard Hale

Abstract

The Center for Oldest Ice Exploration (COLDEX) is an NSF-funded multi-institution collaboration to explore Antarctica for the oldest possible continuous ice record. It comprises exploration and modeling teams that use instruments such as radars, lidars, gravimeters, and magnetometers to select candidate locations for collecting a continuous 1.5-million-year ice core. To assist in this search for old ice, the Center for Remote Sensing and Integrated Systems (CReSIS) at the University of Kansas developed a new airborne, higher-power version of the 600-900 MHz Accumulation Radar with a much larger multichannel cross-track antenna array. The fuselage portion of the antenna array is a 64-element, 0.9 m by 3.8 m array with 4 elements in along-track and 16 elements in cross-track. Each element is a dual-polarized microstrip antenna, and each column of 4 elements is power-combined into a single channel, resulting in 16 cross-track channels. Power is transmitted across 4 cross-track channels alternately on either side of the fuselage array to produce a total peak power of 6.4 kW (before losses). Three additional antennas are integrated on each wing to lengthen the antenna aperture. A novel receiver concept is developed using limiters to compress the dynamic range so that the strong ice-surface and weak ice-bottom returns can be captured simultaneously. This system was flown on a Basler aircraft at the South Pole during the 2022-2023 Austral Summer season and will be flown again during the upcoming 2023-2024 season for repeat interferometry. This work describes the current radar system design and proposes to develop improvements to the compact, high-power divider and the large multichannel polarimetric array used by the radar. It then proposes to develop and implement a systems-engineering perspective on the calibration of this multi-pass imaging radar.


Bahozhoni White

Alternative “Bases” for Gradient Based Optimization of Parameterized FM Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles

Abstract

Even for a fixed time-bandwidth product there are infinitely many possible spectrally-shaped random FM (RFM) waveforms one could generate, owing to their phase continuity. Moreover, certain RFM classes rely on an imposed basis-like structure scaled by underlying parameters that can be optimized (e.g., gradient descent and greedy search have been demonstrated). Because these structures must include oversampling with respect to the 3-dB bandwidth to account for sufficient spectral roll-off (necessary to be physically realizable in hardware), they are not true bases (i.e., not square). Therefore, any individual structure cannot represent all possible waveforms, and the waveforms generated by a given structure tend to possess similar attributes. An exception is over-coded polyphase-coded FM (PCFM), which increases the number of elements in the parameter vector while maintaining the relationship between waveform samples and the time-bandwidth product. This presents the potential for a true basis, provided there is a constraint, either explicit or implicit, on the spectrum. Here we examine waveforms possessing different attributes, as well as the potential for a true basis, which may inform their selection for given radar applications.


Michael Talaga

A Computer Vision Application for Vehicle Collision and Damage Detection

When & Where:


Zoom Meeting, please email jgrisafe@ku.edu for defense link.

Committee Members:

Hongyang Sun, Chair
David Johnson, Co-Chair
Zijun Yao


Abstract

During the car insurance claims process after an accident, a vehicle must be assessed manually by a claims adjuster. This process takes time and often results in discrepancies between what a customer is paid and what the damage actually costs. Separately, companies like KBB and Carfax rely on previous claims records or untrustworthy user input to determine a car’s damage and valuation. Part of this process can be automated by determining where exterior damage exists on a vehicle.

In this project, a deep-learning approach is taken using the Mask R-CNN model, trained on a dataset for instance segmentation. The model can then outline and label instances in images where vehicles have dents, scratches, cracks, broken glass, broken lamps, and flat tires. The results show that broken glass, flat tires, and broken lamps are much easier to locate than the remaining categories, which tend to be smaller in size. These predictions are ultimately intended as input for damage cost prediction.
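For context, a minimal sketch of Mask R-CNN inference with torchvision's pretrained COCO model follows; in the project the detection and mask heads would instead be fine-tuned on the damage classes, and the image path and score threshold below are placeholders.

```python
# Mask R-CNN instance segmentation with torchvision; each kept prediction
# carries a class label, a bounding box, a confidence score, and a mask.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

# Pretrained COCO weights; a vehicle-damage model would fine-tune the heads
# on classes like dent, scratch, crack, broken glass, broken lamp, flat tire.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = convert_image_dtype(read_image("car.jpg"), torch.float)  # placeholder path
with torch.no_grad():
    pred = model([img])[0]          # dict with boxes, labels, scores, masks

keep = pred["scores"] > 0.5         # placeholder confidence threshold
for box, label, mask in zip(pred["boxes"][keep], pred["labels"][keep],
                            pred["masks"][keep]):
    print(int(label), [round(c, 1) for c in box.tolist()])  # mask: 1xHxW float
```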

