Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Shailesh Pandey

Vision-Based Motor Assessment in Autism: Deep Learning Methods for Detection, Classification, and Tracking

When & Where:


Zoom defense; please email jgrisafe@ku.edu for defense information

Committee Members:

Sumaiya Shomaji, Chair
Shima Fardad
Zijun Yao
Cuncong Zhong
Lisa Dieker

Abstract

Motor difficulties show up in as many as 90% of people with autism, but surprisingly few, somewhere between 13% and 32%, ever get motor-focused help. A big part of the problem is that the tools we have for measuring motor skills either rely on a clinician's subjective judgment or require expensive lab equipment that most families will never have access to. This dissertation tries to close that gap with three projects, all built around the idea that a regular webcam and some well-designed deep learning models can do much of what costly motion-capture labs do today.

The first project asks a straightforward question: can a computer tell the difference between how someone with autism moves and how a typically developing person moves, just by watching a short video? The answer, it turns out, is yes. We built an ensemble of three neural networks, each one tuned to notice something different. One focuses on how joints coordinate with each other spatially, another zeroes in on the timing of movements, and the third learns which body-part relationships matter most for a given clip. We tested the system on 582 videos from 118 people (69 with ASD and 49 without) performing simple everyday actions like stirring or hammering. The ensemble correctly classifies 95.65% of cases. The timing-focused model on its own hits 92%, which is nearly 10 points better than a standard recurrent network baseline. And when all three models agree, accuracy climbs above 98%.
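
The soft-voting logic behind that agreement statistic is simple to state in code. Below is a minimal sketch, assuming each network outputs per-clip class probabilities; the function and model names are illustrative, not the dissertation's actual implementation.

import numpy as np

# Hypothetical per-clip class probabilities (ASD vs. TD) from the three
# networks described above: spatial-coordination, timing-focused, and
# relationship-weighting models.
def ensemble_predict(p_spatial, p_temporal, p_attention):
    probs = np.stack([p_spatial, p_temporal, p_attention])
    avg = probs.mean(axis=0)                 # soft vote: average probabilities
    votes = probs.argmax(axis=-1)            # each model's hard prediction
    unanimous = (votes[0] == votes[1]) & (votes[1] == votes[2])
    return avg.argmax(axis=-1), unanimous    # prediction + agreement flag

# One clip where all three models lean toward class 0:
pred, agree = ensemble_predict(np.array([[0.9, 0.1]]),
                               np.array([[0.8, 0.2]]),
                               np.array([[0.7, 0.3]]))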

The second project deals with stimming, the repetitive behaviors like arm flapping, head banging, and spinning that are common in autism. Working with 302 publicly available videos, we trained a skeleton-based model that reaches 91% accuracy using body pose alone. That is more than double the 47% that previous work managed on the same benchmark. When we combine the pose information with what the raw video shows through a late fusion approach, accuracy jumps to 99.9%. Across the entire test set, only a single video was misclassified.

The third project is E-MotionSpec, a web platform designed for clinicians and researchers who want to track motor development over time. It runs in any browser, uses MediaPipe to estimate body pose in real time, and extracts 44 movement features grouped into seven domains covering things like how smoothly someone moves, how quickly they initiate actions, and how coordinated their limbs are. We validated the platform on the same 118-participant dataset and found 36 features with statistically significant differences between the ASD and typically developing groups. Smoothness and initiation timing stood out as the strongest discriminators. The platform also includes tools for comparing sessions over time using frequency analysis and dynamic time warping, so a clinician can actually see whether someone's motor patterns are changing across weeks or months.
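
Session-over-session comparison with dynamic time warping can be illustrated compactly. The sketch below is a plain-Python DTW over a single movement feature; the feature values are invented for illustration, and this is not E-MotionSpec's actual code.

import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D feature series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Illustrative per-frame smoothness values from two sessions weeks apart;
# a lower distance means the motor pattern has changed less.
session_1 = np.array([0.82, 0.79, 0.75, 0.80, 0.84])
session_2 = np.array([0.88, 0.86, 0.85, 0.90])
print(dtw_distance(session_1, session_2))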

Taken together, these three projects offer a practical path toward earlier identification and better ongoing monitoring of motor difficulties in autism. Everything runs on a webcam and a web browser. No motion-capture suits, no force plates, no specialized labs. That matters most for the families, schools, and clinics that need these tools the most and can least afford the alternatives.


Past Defense Notices


Rajmal Shaik

A Human-Guided Approach to Context-Aware SQL Generation in Multi-Agent Frameworks

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Dongjie Wang, Chair
Rachel Jarvis
David Johnson


Abstract

Querying information from relational databases often requires proficiency in SQL, creating a steep learning curve for users who lack programming or database management experience. Text-to-SQL systems aim to bridge this gap by automatically converting natural language questions into executable SQL statements. In recent years, multi-agent frameworks have gained traction for this task, as they enable complex query generation to be decomposed into specialized subtasks such as schema selection based on user intent, SQL synthesis, and refinement of SQL queries through execution-based error correction. This work explores the integration of a human feedback component within a multi-agent Text-to-SQL framework. Human input is introduced after the selector agent identifies relevant schemas and tables, offering targeted guidance before SQL generation. The objective is to examine how such feedback can improve the system’s accuracy and contextual understanding of queries. The implementation leverages OpenAI’s GPT-4.1 mini and GPT-4.1 nano models as the underlying language components. The evaluation is carried out using a standard Text-to-SQL benchmark dataset, focusing on key performance metrics such as execution accuracy and the valid efficiency score (VES).
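
The placement of the human feedback step can be sketched as follows. This is a hedged outline of the pipeline described above, not the actual framework: call_llm() is a hypothetical placeholder for the GPT-4.1 mini/nano calls, and the prompts are illustrative.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up an LLM client here")  # placeholder

def selector_agent(question: str, schema: str) -> str:
    return call_llm(f"Select the tables/columns relevant to: {question}\n{schema}")

def human_review(selection: str) -> str:
    # Human feedback is injected here: after schema selection, before SQL generation.
    print(f"Selector chose:\n{selection}")
    correction = input("Edit the selection, or press Enter to accept: ")
    return correction or selection

def generator_agent(question: str, selection: str) -> str:
    return call_llm(f"Write SQL for: {question}\nUsing: {selection}")

def refiner_agent(sql: str, error: str) -> str:
    return call_llm(f"Fix this SQL given the execution error:\n{sql}\n{error}")

def text_to_sql(question: str, schema: str) -> str:
    selection = human_review(selector_agent(question, schema))
    return generator_agent(question, selection)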


Ashish Adhikari

Towards assessing the security of program binaries

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Alex Bardas
Fengjun Li
Bo Luo

Abstract

Software vulnerabilities are widespread, often resulting from coding weaknesses and poor development practices. These vulnerabilities can be exploited by attackers, posing risks to confidentiality, integrity, and availability. To protect themselves, end-users of software may have an interest in knowing whether the software they purchase and use is secure from potential attacks. Our work is motivated by this need to automatically assess and rate the security properties of binary software.

While many researchers focus on developing techniques and tools to detect and mitigate vulnerabilities in binaries, our approach is different. We aim to determine whether the software has been developed with proper care. Our hypothesis is that software created with meticulous attention to security is less likely to contain exploitable vulnerabilities. As a first step, we examined the current landscape of binary-level vulnerability detection. We categorized critical coding weaknesses in compiled programming languages and conducted a detailed survey comparing static analysis techniques and tools designed to detect these weaknesses. Additionally, we evaluated the effectiveness of open-source CWE detection tools and analyzed their challenges. To further understand their efficacy, we conducted independent assessments using standard benchmarks.

To determine whether software is carefully and securely developed, we propose several techniques. So far, we have used machine learning and deep learning methods to identify the programming language of a binary at the function level, enabling us to handle complex cases like mixed-language binaries, and to assess whether vulnerable regions in the binary are protected with appropriate security mechanisms. Additionally, we explored the feasibility of detecting secure coding practices by examining adherence to SonarQube’s security-related coding conventions.
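
As one concrete illustration of the function-level language identification step, a classifier can be trained on byte n-grams. The sketch below is an assumption-laden stand-in (hex-token trigrams and logistic regression), not the authors' actual features or models.

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def bytes_to_tokens(fn_bytes: bytes) -> str:
    # Render each function's raw bytes as hex "words" so word n-grams apply.
    return " ".join(f"{b:02x}" for b in fn_bytes)

clf = make_pipeline(
    HashingVectorizer(analyzer="word", ngram_range=(1, 3), n_features=2**18),
    LogisticRegression(max_iter=1000),
)
# train_funcs: list of function byte strings extracted from binaries;
# train_langs: labels such as "C", "C++", "Rust" (hypothetical data).
# clf.fit([bytes_to_tokens(f) for f in train_funcs], train_langs)
# clf.predict([bytes_to_tokens(unknown_fn)])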

Next, we investigate whether compiler warnings generated during binary creation are properly addressed. Furthermore, we also aim to optimize the array bounds detection in the program binary. This enhanced array bounds detection will also increase the effectiveness of detecting secure coding conventions that are related to memory safety and buffer overflow vulnerabilities.

Our ultimate goal is to combine these techniques to rate the overall security quality of a given binary software.


Bayn Schrader

Implementation and Analysis of an Efficient Dual-Beam Radar-Communications Technique

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Fully digital arrays enable realization of dual-function radar-communications systems which generate multiple simultaneous transmit beams with different modulation structures in different spatial directions. These spatially diverse transmissions are produced by designing the individual waveforms transmitted at each antenna element so that they combine in the far field to synthesize the desired modulations in the specified directions. This thesis derives a look-up table (LUT) implementation of the existing Far-Field Radiated Emissions Design (FFRED) optimization framework. The LUT implementation requires a single optimization routine for a set of desired signals, rather than the pulse-to-pulse optimization required by the previous implementation, making the LUT approach more efficient. The LUT is generated by representing the waveforms transmitted by each element in the array as a sequence of beamformers, where the LUT contains beamformers indexed by the phase difference between the desired signal modulations. The globally optimal beamformers, in terms of power efficiency, can be realized via the Lagrange dual problem for most beam locations and powers. The Phase-Attached Radar-Communications (PARC) waveform is selected for the communications waveform alongside a linear frequency modulated (LFM) waveform for the radar signal. A set of FFRED LUTs are then used to simulate a radar transmission to verify the utility of the radar system. The same LUTs are then used to estimate the communications performance of a system with varying levels of array knowledge uncertainty.
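
The LUT concept can be illustrated with a simplified numpy sketch. Here a minimum-norm (pseudoinverse) beamformer stands in for the thesis's power-efficiency-optimal Lagrange-dual solution, and the array geometry, directions, and bin count are invented for illustration.

import numpy as np

M, N_BINS = 16, 256                          # elements, LUT size (illustrative)
d = np.arange(M)                             # half-wavelength ULA element index

def steering(theta_deg):
    return np.exp(1j * np.pi * d * np.sin(np.deg2rad(theta_deg)))

# Far-field constraints in the radar and communications directions.
A = np.vstack([steering(-20.0), steering(35.0)])
A_pinv = A.conj().T @ np.linalg.inv(A @ A.conj().T)  # minimum-norm solution

# One transmit vector per quantized phase difference between desired signals.
dphi = 2 * np.pi * np.arange(N_BINS) / N_BINS
lut = np.stack([A_pinv @ np.array([1.0, np.exp(1j * p)]) for p in dphi])

def element_signals(s_radar, s_comms):
    """Per-sample LUT lookup producing the M element waveforms."""
    delta = np.angle(s_comms) - np.angle(s_radar)
    bins = np.round(delta * N_BINS / (2 * np.pi)).astype(int) % N_BINS
    return lut[bins] * s_radar[:, None]      # restore the radar signal's phase

Because constant-modulus (FM) signals differ only in phase, the lookup key reduces to the per-sample phase difference, which is what lets a single precomputed table replace pulse-to-pulse optimization.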


Will Thomas

Static Analysis and Synthesis of Layered Attestation Protocols

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Trust is a fundamental issue in computer security. Frequently, systems implicitly trust other systems, especially if configured by the same administrator. This fallacious reasoning stems from the belief that systems starting from a known, presumably good, state can be trusted. However, this statement only holds for boot-time behavior; most non-trivial systems change state over time, and thus runtime behavior is an important, oft-overlooked aspect of implicit trust in system security.

To address this, attestation was developed, allowing a system to provide evidence of its runtime behavior to a verifier. This evidence allows a verifier to make an explicit, informed decision about the system’s trustworthiness. As systems grow more complex, scalable attestation mechanisms become increasingly important. To apply attestation to non-trivial systems, layered attestation was introduced, allowing attestation of individual components or layers, combined into a unified report about overall system behavior. This approach enables more granular trust assessments and facilitates attestation in complex, multi-layered architectures. With the complexity of layered attestation, discerning whether a given protocol sufficiently measures a system, is executable, or properly reports all measurements becomes increasingly challenging.

In this work, we will develop a framework for the static analysis and synthesis of layered attestation protocols, enabling more robust and adaptable attestation mechanisms for dynamic systems. A key focus will be the static verification of protocol correctness, ensuring the protocol behaves as intended and provides reliable evidence of the underlying system state. A type system will be added to the Copland layered attestation protocol description language to allow basic static checks, and extended static analysis techniques will be developed to verify more complex properties of protocols for a specific target system. Further, protocol synthesis will be explored, enabling the automatic generation of correct-by-construction protocols tailored to system requirements.
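
To make the flavor of such static checks concrete, here is a toy Python rendering of one property (every required place is actually measured). The term constructors are invented for illustration and are not Copland's actual grammar or the planned type system.

from dataclasses import dataclass

@dataclass
class Measure:                # an ASP measuring a target at some place
    asp: str
    place: str

@dataclass
class Seq:                    # sequential composition of two protocol phrases
    first: object
    second: object

def places_measured(term) -> set:
    """Statically collect every place the protocol measures."""
    if isinstance(term, Measure):
        return {term.place}
    if isinstance(term, Seq):
        return places_measured(term.first) | places_measured(term.second)
    raise TypeError(f"unknown term: {term!r}")

protocol = Seq(Measure("hash_kernel", "p0"), Measure("appraise_vm", "p1"))
required = {"p0", "p1", "p2"}
missing = required - places_measured(protocol)   # {'p2'}: under-measured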


David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

In high dynamic-range environments, matched-filter radar performance is often sidelobe-limited, with correlation error being fundamentally constrained by the time-bandwidth product (TB) of the collective emission. To contend with the regulatory necessity of spectral containment, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining responses from distinct pulses within a pulse-agile emission. In contrast to most complementary subsets, which were discovered via brute force under the notion of phase-coding, these comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM. Although comp-FM addressed a primary limitation of complementary signals (i.e., hardware distortion), CSC hinges on the exact reconstruction of autocorrelation terms to suppress sidelobes, so optimality is broken for Doppler-shifted signals. This work introduces a Doppler-generalized comp-FM (DG-comp-FM) framework that extends the cancellation condition to account for the anticipated unambiguous Doppler span after post-summing. While this framework is developed for use within a combine-before-Doppler processing manner, it can likewise be employed to design an entire coherent processing interval (CPI) to minimize range-sidelobe modulation (RSM) within the radar point-spread-function (PSF), thereby introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
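
For readers unfamiliar with CSC, the cancellation condition can be stated in simplified two-pulse form. With pulse autocorrelations $r_1(\tau)$ and $r_2(\tau)$, complementarity requires

$$r_1(\tau) + r_2(\tau) = 0 \quad \text{for } \tau \neq 0,$$

so sidelobes cancel after coherent combination. A fast-time Doppler shift $f_D$ replaces each $r_m(\tau)$ with the ambiguity slice $\chi_m(\tau, f_D) = \int s_m(t)\, s_m^*(t-\tau)\, e^{j 2\pi f_D t}\, dt$, and the sum no longer vanishes exactly; restoring approximate cancellation over the anticipated Doppler span is what the DG-comp-FM extension targets.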

Some radar systems operate with multiple emitters, as in the case of multiple-input multiple-output (MIMO) radar. Whereas a single emitter must contend with the self-inflicted autocorrelation sidelobes, MIMO systems must likewise contend with the cross-correlation of coincident (in time and spectrum) emissions from other emitters. As such, the determination of "orthogonal waveforms" comprises a large portion of research within the MIMO space, with a small majority now recognizing that true orthogonality is not possible for band-limited signals (albeit with the exclusion of TDMA). The notion of complementary-FM is proposed for exploration within a MIMO context, whereby coherently combining responses can achieve CSC as well as cross-correlation cancellation over a wide Doppler space. By effectively minimizing cross-correlation terms, this enables improved channel separation on receive as well as improved estimation capability due to reduced correlation error. Proposal items include further exploration/characterization of the space, incorporating an explicit spectral


Jigyas Sharma

SEDPD: Sampling-Enhanced Differentially Private Defense against Backdoor Poisoning Attacks of Image Classification

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Han Wang, Chair
Drew Davidson
Dongjie Wang


Abstract

Recent advancements in explainable artificial intelligence (XAI) have brought significant transparency to machine learning by providing interpretable explanations alongside model predictions. However, this transparency has also introduced vulnerabilities, enhancing adversaries’ ability to probe model decision processes through explanation-guided attacks. In this paper, we propose a robust, model-agnostic defense framework that mitigates these vulnerabilities by perturbing explanations while preserving the utility of XAI. Our framework employs a multinomial sampling approach that perturbs explanation values generated by techniques such as SHAP and LIME. These perturbations ensure differential privacy (DP) bounds, disrupting adversarial attempts to embed malicious triggers while maintaining explanation quality for legitimate users. To validate our defense, we introduce a threat model tailored to image classification tasks. By applying our defense framework, we train models with pixel-sampling strategies that integrate DP guarantees, enhancing robustness against explanation-guided backdoor poisoning attacks. Extensive experiments on widely used datasets, such as CIFAR-10, MNIST, CIFAR-100 and Imagenette, and models, including ConvMixer and ResNet-50, show that our approach effectively mitigates explanation-guided attacks without compromising model accuracy. We also evaluated our defense against other backdoor attacks, showing that our framework detects other types of backdoor triggers well. This work highlights the potential of DP in securing XAI systems and ensures safer deployment of machine learning models in real-world applications.
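
The core sampling mechanism can be sketched briefly. This is a hedged illustration of multinomial sampling over explanation values; the epsilon-to-probability mapping below is invented for illustration and is not the paper's calibrated DP mechanism.

import numpy as np

def sample_explanation(shap_values, k=100, epsilon=1.0, rng=None):
    """Keep k attribution entries, sampled with privacy-controlled sharpness."""
    rng = rng or np.random.default_rng()
    flat = np.abs(shap_values).ravel()
    # Larger epsilon -> sampling tracks the true explanation more closely;
    # smaller epsilon -> closer to uniform, hiding fine-grained structure.
    logits = epsilon * flat / (flat.max() + 1e-12)
    probs = np.exp(logits) / np.exp(logits).sum()
    keep = rng.choice(flat.size, size=k, replace=False, p=probs)
    masked = np.zeros_like(shap_values).ravel()
    masked[keep] = shap_values.ravel()[keep]
    return masked.reshape(shap_values.shape)

# e.g. for a 32x32 attribution map: sample_explanation(values, k=50, epsilon=0.5)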


Dimple Galla

Intelligent Application for Cold Email Generation: Business Outreach

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Cold emailing remains an effective strategy for software service companies to improve organizational reach by acquiring clients. Generic emails, however, often fail to get a response.
This project leverages Generative AI to automate cold email generation. It is built with the Llama-3.1 model and a Chroma vector database that supports semantic search, matching keywords in a job description to the project portfolio links of software service companies. The application automatically extracts technology-related job openings at Fortune 500 companies. Users can either select from these extracted job postings or manually enter the URL of a job posting, after which the system generates an email and sends it upon approval. Advanced techniques like Chain-of-Thought Prompting and Few-Shot Learning were applied to improve relevance and make the emails more likely to elicit a response. This AI-driven approach improves engagement and simplifies the business development process for software service companies.
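
The portfolio-matching step maps naturally onto a small Chroma collection. The sketch below is illustrative (collection contents and link metadata are invented), showing only the semantic-search call, not the full application.

import chromadb

client = chromadb.Client()
portfolio = client.create_collection("portfolio")
portfolio.add(
    documents=["Python, Django, PostgreSQL backend services",
               "React, TypeScript analytics dashboards"],
    metadatas=[{"link": "https://example.com/backend"},
               {"link": "https://example.com/frontend"}],
    ids=["p1", "p2"],
)

# Skills pulled from a job description become the semantic query.
hits = portfolio.query(query_texts=["Senior Python engineer, Django, SQL"],
                       n_results=1)
links = [m["link"] for m in hits["metadatas"][0]]  # cited in the drafted email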


Shahima Kalluvettu Kuzhikkal

Machine Learning Based Predictive Maintenance for Automotive Systems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni
Hongyang Sun

Abstract

Predictive maintenance plays a central role in reducing vehicle downtime and improving operational efficiency by using data-driven methods to classify the condition of automotive engines. Rather than relying on fixed service schedules or reacting to unexpected breakdowns, this approach leverages machine learning to distinguish between healthy and failed engines based on operational data.

In this project, engine telemetry data capturing key parameters such as engine speed, fuel pressure, and coolant temperature was used to train and evaluate several machine learning models, including logistic regression, random forest, k-nearest neighbors, and a neural network. To further enhance predictive performance, ensemble strategies such as soft voting and stacking were applied. The stacking ensemble, which combines the strengths of multiple classifiers through a meta-learning approach, demonstrated particularly effective results.
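
That stacking arrangement corresponds closely to scikit-learn's StackingClassifier; below is a minimal sketch, with hyperparameters and feature names assumed rather than taken from the project.

from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

stack = StackingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
    ],
    final_estimator=LogisticRegression(),   # meta-learner over base predictions
    cv=5,                                   # out-of-fold predictions for training
)
# X: engine speed, fuel pressure, coolant temperature, ...; y: healthy/failed
# stack.fit(X_train, y_train); stack.score(X_test, y_test)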

This classification-based framework demonstrates how data-driven fault detection can enhance automotive maintenance operations. By identifying engine failures more reliably, machine learning enables safer transportation, reduces maintenance costs, and enhances overall vehicle dependability. Beyond individual vehicles, such approaches have broader applications in fleet management, where proactive decision-making can improve service continuity, reduce operational risks, and increase customer satisfaction.


Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 
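
One standard lens for such a measure is the narrowband ambiguity function

$$\chi(\tau, f_D) = \int_{-\infty}^{\infty} s(t)\, s^*(t - \tau)\, e^{j 2\pi f_D t}\, dt,$$

whose zero-Doppler cut is the matched-filter response. A waveform is informally Doppler tolerant when $|\chi(\tau, f_D)|$ retains a strong (possibly range-shifted) mainlobe across the $f_D$ of interest, as with the LFM's sheared ridge; comparing $\max_\tau |\chi(\tau, f_D)|$ to the peak $|\chi(0, 0)|$ over the anticipated Doppler span is one natural ingredient for a graded measure, though the specific metric developed in this work may differ.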

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” sidelobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements.

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of the future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on the current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of the differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
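
The greedy framework described above can be sketched compactly. The simulator below is a hedged illustration, with a simple speedup model and a pluggable allocation strategy standing in for the three strategies evaluated in the thesis.

import heapq

def exec_time(work, p, alpha=0.9):
    # Illustrative speedup model: an alpha fraction of the work parallelizes.
    return work / (alpha * p + (1 - alpha))

def list_schedule(tasks, succ, preds, P, allocate):
    """tasks: {id: work}; succ/preds: dependency maps; P: processor count."""
    indeg = {t: len(preds[t]) for t in tasks}
    ready = [t for t in tasks if indeg[t] == 0]
    events, free, now = [], P, 0.0           # events: (finish_time, task, procs)
    while ready or events:
        while ready and free >= 1:
            t = ready.pop()
            p = min(allocate(tasks[t], P), free)   # strategy picks the allocation
            free -= p
            heapq.heappush(events, (now + exec_time(tasks[t], p), t, p))
        now, t, p = heapq.heappop(events)    # advance to the next completion
        free += p
        for s in succ[t]:                    # successors are discovered online,
            indeg[s] -= 1                    # once all predecessors finish
            if indeg[s] == 0:
                ready.append(s)
    return now                               # makespan

# e.g. allocate = lambda work, P: max(1, P // 4)   # a fixed-fraction strategy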