Defense Notices


All students and faculty are welcome to attend the final defenses of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Zhaohui Wang

Detection and Mitigation of Cross-App Privacy Leakage and Interaction Threats in IoT Automation

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to everyday life, enabling users to deploy automation rules and develop IoT apps tailored to their specific needs. However, modern IoT ecosystems consist of numerous devices, applications, and platforms that interact continuously. As a result, users are increasingly exposed to complex and subtle security and privacy risks that are difficult to fully comprehend. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats. In addition, violations of memory integrity can undermine the security guarantees on which IoT apps rely.

To address these challenges, this dissertation pursues several complementary directions. The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app interaction chains formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores that evaluate the risk level of each inference. In addition, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks.
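The chain-level scoring idea can be pictured with a small sketch. The Python snippet below is purely illustrative (the chain, attribute labels, and per-step probabilities are hypothetical, not taken from the dissertation): it combines per-step inference probabilities into the probability that at least one step in a cross-app chain leaks a given attribute.

```python
# Illustrative sketch (not the dissertation's implementation): scoring a
# cross-app interaction chain by the probability that chained events let
# an observer infer a sensitive attribute.

# Hypothetical chain: each step maps an observable event to the probability
# that it reveals the labeled attribute.
chain = [
    ("presence_sensor -> lights_on",        "occupancy", 0.6),
    ("lights_on -> thermostat_home_mode",   "occupancy", 0.8),
    ("thermostat_home_mode -> location_update", "location", 0.5),
]

def chain_risk(chain, attribute):
    """P(attribute leaked by at least one step) = 1 - prod(1 - p_i)."""
    miss = 1.0
    for _, attr, p in chain:
        if attr == attribute:
            miss *= 1.0 - p
    return 1.0 - miss

for attr in ("occupancy", "location"):
    print(f"{attr}: risk = {chain_risk(chain, attr):.2f}")
# occupancy: risk = 0.92  (two corroborating steps compound the leak)
# location:  risk = 0.50
```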

The second approach addresses cross-app interaction threats in IoT automation systems by leveraging a logic-based analysis model grounded in event relations. We formalize event relationships, detect event interferences, and classify rule conflicts, then generate risk scores and conflict rankings to enable comprehensive conflict detection and risk assessment. To mitigate the identified interaction threats, an optimization-based approach is employed to reduce risks while preserving system functionality. This approach ensures comprehensive coverage of cross-app interaction threats and provides a robust solution for detecting and resolving rule conflicts in IoT environments.
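As a concrete illustration of one conflict class such a logic-based model can catch, consider two rules fired by the same event that issue contradictory commands to the same device. The sketch below is a toy stand-in with a made-up rule format, not the dissertation's formalism:

```python
# Toy detector for one rule-conflict class: same trigger, same device,
# contradictory actions. Rule schema here is hypothetical.
from itertools import combinations

rules = [
    {"id": "R1", "trigger": "motion_detected", "device": "lock",  "action": "unlock"},
    {"id": "R2", "trigger": "motion_detected", "device": "lock",  "action": "lock"},
    {"id": "R3", "trigger": "door_opened",     "device": "light", "action": "on"},
]

CONTRADICTORY = {frozenset({"lock", "unlock"}), frozenset({"on", "off"})}

def find_conflicts(rules):
    for a, b in combinations(rules, 2):
        if (a["trigger"] == b["trigger"] and a["device"] == b["device"]
                and frozenset({a["action"], b["action"]}) in CONTRADICTORY):
            yield (a["id"], b["id"])

print(list(find_conflicts(rules)))   # [('R1', 'R2')]
```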

To support the development and rigorous evaluation of these security analyses, we further construct a large-scale, manually verified dataset of real-world IoT apps. This clean and diverse benchmark supports the development and validation of IoT security and privacy solutions. All proposed approaches are evaluated on this dataset, collectively offering valuable insights and practical tools for defending IoT systems against cross-app threats. Furthermore, we examine the integrity of the execution environment that supports IoT apps. We show that, even under non-privileged execution, carefully crafted memory access patterns can induce bit flips in physical memory, allowing attackers to corrupt data and compromise system integrity without elevated privileges.


Shawn Robertson

A Low-Power Low-Throughput Communications Solution for At-Risk Populations in Resource Constrained Contested Environments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

In resource‑constrained contested environments (RCCEs), communications are routinely censored, surveilled, or disrupted by nation‑state adversaries, leaving at‑risk populations—including protesters, dissidents, disaster‑affected communities, and military units—without secure connectivity. This dissertation introduces MeshBLanket, a Bluetooth Mesh‑based framework designed for low‑power, low‑throughput messaging with minimal electromagnetic spectrum (EMS) exposure. Built on commercial off‑the‑shelf hardware, MeshBLanket extends the Bluetooth Mesh specification with automated provisioning and network‑wide key refresh to enhance scalability and resilience.

We evaluated MeshBLanket through field experimentation (range, throughput, battery life, and security enhancements) and qualitative interviews with ten senior U.S. Army communications experts. Thematic analysis revealed priorities of availability, EMS footprint reduction, and simplicity of use, alongside adoption challenges and institutional skepticism. Results demonstrate that MeshBLanket maintains secure messaging under load, supports autonomous key refresh, and offers operational relevance at the forward edge of battlefields.

Beyond military contexts, parallels with protest environments highlight MeshBLanket’s broader applicability for civilian populations facing censorship and surveillance. By unifying technical experimentation with expert perspectives, this work contributes a proof‑of‑concept communications architecture that advances secure, resilient, and user‑centric connectivity in environments where traditional infrastructure is compromised or weaponized.


Past Defense Notices

Theodore Harbison

Posting Passwords: How social media information can be leveraged in password guessing attacks

When & Where:


Zoom defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Hossein Saiedian, Chair
Fengjun Li
Heechul Yun


Abstract

The explosion of social media, while fostering connection, inadvertently exposes personal details that heighten password vulnerability. This thesis tackles this critical link, aiming to raise public awareness of the dangers of weak passwords and excessive online sharing. We introduce a novel password guessing algorithm, SocGuess, which capitalizes on the rich trove of information in social media profiles. SocGuess leverages Named Entity Recognition (NER) to identify key data points within this information, such as dates, locations, and names. To further enhance its accuracy, SocGuess is trained on the RockYou dataset, a large collection of leaked passwords. By identifying the kinds of entities that occur within these leaked passwords, SocGuess can estimate the probability of each entity type appearing in a password. Armed with this knowledge, SocGuess dynamically generates password guesses in order of probability, filling the entity placeholders with the corresponding data points harvested from the target’s social media profile. In our experiments, this targeted approach enabled SocGuess to crack 33% more passwords than existing algorithms, demonstrably surpassing traditional methods.
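To make the pipeline concrete, here is a minimal, self-contained sketch of the NER-then-fill idea. The entity tagger, templates, and probabilities below are toy stand-ins (a real system would run an actual NER model over profile text and mine templates from a leaked-password corpus such as RockYou); this is not the SocGuess implementation:

```python
# Toy sketch: tag entities in profile text, then fill probability-ranked
# password templates with them.
import re
from itertools import product

def tag_entities(text):
    """Toy stand-in for an NER model: years as DATE, capitalized words as NAME."""
    return {
        "DATE": re.findall(r"\b(?:19|20)\d{2}\b", text),
        "NAME": re.findall(r"\b[A-Z][a-z]+\b", text),
    }

# Hypothetical templates with probabilities, as if mined from leaked passwords.
TEMPLATES = [(("NAME", "DATE"), 0.12), (("NAME",), 0.07), (("DATE",), 0.04)]

def guesses(profile_text):
    ents = tag_entities(profile_text)
    for slots, p in sorted(TEMPLATES, key=lambda t: -t[1]):  # most likely first
        pools = [ents[s] for s in slots]
        if all(pools):
            for combo in product(*pools):
                yield "".join(combo), p

profile = "Alice from Lawrence, born 1994, dog named Rex"
for guess, p in guesses(profile):
    print(f"{guess}  (template prob {p:.2f})")   # e.g., Alice1994, Rex1994, ...
```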


Ethan Grantz

Swarm: A Backend-Agnostic Language for Simple Distributed Programming

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Drew Davidson, Chair
Perry Alexander
Prasad Kulkarni


Abstract

Writing algorithms for a parallel or distributed environment has always been plagued with a variety of challenges, from supervising synchronous reads and writes, to managing job queues and avoiding deadlock. While many languages have libraries or language constructs to mitigate these obstacles, very few attempt to remove those challenges entirely, and even fewer do so while divorcing the means of handling those problems from the means of parallelization or distribution. This project introduces a language called Swarm, which attempts to do just that.

Swarm is a first-class parallel/distributed programming language with modular, swappable parallel drivers. It is intended for everything from multi-threaded local computation on a single machine to large scientific computations split across many nodes in a cluster.

Swarm contains almost no explicit syntax for typical parallel logic: its only parallel keywords declare which variables should reside in shared memory and which code should be parallelized. The remainder of the logic (such as waiting for the results of distributed jobs or locking shared accesses) is added in when compiling to a custom bytecode called Swarm Virtual Instructions (SVI). SVI is then executed by a virtual machine whose parallelization logic is abstracted out, such that the same SVI bytecode can be executed in any parallel/distributed environment.
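For contrast, the following Python sketch shows the kind of explicit synchronization boilerplate (locks around shared state, manual joins) that Swarm's compiler is designed to insert automatically; per the description above, a Swarm programmer would only mark the counter as shared and the loop as parallel:

```python
# The manual bookkeeping Swarm aims to eliminate: explicit locking of
# shared memory and explicit joins before reading results.
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:              # manual shared-memory protection
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # manual barrier before reading the result
print(counter)                  # 40000
```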


Johnson Umeike

Optimizing gem5 Simulator Performance: Profiling Insights and Userspace Networking Enhancements

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Mohammad Alian, Chair
Prasad Kulkarni
Heechul Yun


Abstract

Full-system simulation of computer systems is critical for capturing the complex interplay between various hardware and software components in future systems. Modeling the network subsystem is indispensable for the fidelity of full-system simulations due to the increasing importance of scale-out systems. Over the last decade, the network software stack has undergone major changes, with userspace networking stacks and data-plane networks rapidly replacing the conventional kernel network stack. Nevertheless, the current state-of-the-art architectural simulator, gem5, still employs kernel networking, which precludes realistic network application scenarios.

First, we perform a comprehensive profiling study to identify and propose architectural optimizations that accelerate a state-of-the-art architectural simulator. We choose gem5 as the representative simulator: we run several simulations with various configurations, perform a detailed architectural analysis of the gem5 source code on different server platforms, tune both system and architectural settings for running simulations, and discuss future opportunities for accelerating gem5 as an important application. Our detailed profiling of gem5 reveals that its performance is extremely sensitive to the size of the L1 cache. Our experimental results show that a RISC-V core with 32KB data and instruction caches improves gem5’s simulation speed by 31%-61% compared with a baseline core with 8KB L1 caches. Second, this work extends gem5’s networking capabilities by integrating kernel-bypass/user-space networking based on the DPDK framework, significantly enhancing network throughput and reducing latency. By enabling user-space networking, the simulator achieves a substantial 6.3× improvement in network bandwidth compared to traditional Linux software stacks, and our hardware packet generator model (EtherLoadGen) provides up to a 2.1× speedup in simulation time. Additionally, we develop a suite of networking micro-benchmarks for stress testing the host network stack, allowing for efficient evaluation of gem5’s performance. Through detailed experimental analysis, we characterize the performance differences between running the DPDK network stack on real systems and on gem5, highlighting the sensitivity of DPDK performance to various system and microarchitecture parameters.


Adam Sarhage

Design of Multi-Section Coupled Line Coupler

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Chris Allen
Glenn Prescott


Abstract

Coupled line couplers are used as directional couplers to enable measurement of forward and reverse power in RF transmitters. These measurements provide valuable feedback to the control loops regulating transmitter power output levels. This project seeks to synthesize, simulate, build, and test a broadband, five-stage coupled line coupler with a 20 dB coupling factor. The coupler synthesis is evaluated against ideal coupler components in Keysight ADS. Fabrication of coupled line couplers is typically accomplished with a stripline topology, but a microstrip topology is also evaluated. Measurements from the fabricated couplers are then compared with the Keysight ADS EM simulations, and explanations for the differences are provided. Additionally, measurements from a commercially available broadband directional coupler are included to show what can be accomplished with the right budget.
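For readers unfamiliar with the terminology, the coupling factor quantifies how much of the input power is diverted to the coupled port; the standard definition makes the 20 dB target concrete:

```latex
% Coupling factor: ratio of input power to coupled-port power, in dB.
\[
  C_{\mathrm{dB}} \;=\; 10\,\log_{10}\frac{P_{\mathrm{in}}}{P_{\mathrm{coupled}}}
  \qquad\Longrightarrow\qquad
  C = 20~\mathrm{dB} \;\;\Rightarrow\;\;
  \frac{P_{\mathrm{coupled}}}{P_{\mathrm{in}}} = 10^{-20/10} = \frac{1}{100}.
\]
```

That is, a 20 dB coupler diverts one hundredth of the input power to the coupled port, leaving the through path essentially undisturbed.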


Mohsen Nayebi Kerdabadi

Contrastive Learning of Temporal Distinctiveness for Survival Analysis in Electronic Health Records

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Zijun Yao, Chair
Fengjun Li
Cuncong Zhong


Abstract

Survival analysis plays a crucial role in many healthcare decisions, where the risk prediction for the events of interest can support an informative outlook for a patient’s medical journey. Given the existence of data censoring, an effective way of survival analysis is to enforce pairwise temporal concordance between censored and observed data, aiming to utilize the time interval before censoring as partially observed time-to-event labels for supervised learning. Although existing studies have mostly employed ranking methods to pursue an ordering objective, contrastive methods, which learn a discriminative embedding by having data contrast against each other, have not been explored thoroughly for survival analysis. Therefore, we propose a novel Ontology-aware Temporality-based Contrastive Survival (OTCSurv) analysis framework that utilizes survival durations from both censored and observed data to define temporal distinctiveness and construct negative sample pairs with adjustable hardness for contrastive learning. Specifically, we first use an ontological encoder and a sequential self-attention encoder to represent the longitudinal EHR data with rich contexts. Second, we design a temporal contrastive loss to capture varying survival durations in a supervised setting through a hardness-aware negative sampling mechanism. Last, we incorporate the contrastive task into the time-to-event predictive task with multiple loss components. We conduct extensive experiments using a large EHR dataset to forecast the risk of hospitalized patients who are in danger of developing acute kidney injury (AKI), a critical and urgent medical condition. The effectiveness and explainability of the proposed model are validated through comprehensive quantitative and qualitative studies.
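As a rough illustration of what a duration-aware contrastive objective can look like (a generic InfoNCE-style form for intuition, not OTCSurv's exact loss), negatives can be weighted by how far their survival durations sit from the anchor's, so harder, more temporally distinct pairs contribute more:

```latex
% Generic duration-weighted contrastive loss (illustrative only):
\[
  \mathcal{L}_i \;=\; -\log
  \frac{\exp\!\big(\operatorname{sim}(z_i, z_{i^+})/\tau\big)}
       {\exp\!\big(\operatorname{sim}(z_i, z_{i^+})/\tau\big)
        \;+\; \sum_{k \in \mathcal{N}(i)} w_{ik}\,
        \exp\!\big(\operatorname{sim}(z_i, z_k)/\tau\big)},
  \qquad w_{ik} \propto |t_i - t_k|.
\]
```

Here z_i is the anchor patient's embedding, z_{i+} a positive with a similar survival duration, N(i) the negative set drawn by the sampling mechanism, τ a temperature, and w_ik a hardness weight that grows with the duration gap.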


Jarrett Zeliff

An Analysis of Bluetooth Mesh Security Features in the Context of Secure Communications

When & Where:


Eaton Hall, Room 1

Committee Members:

Alexandru Bardas, Chair
Drew Davidson
Fengjun Li


Abstract

Development of communication methods to support at-risk populations has accelerated significantly over the last 10 years. We view at-risk populations as groups of people present in environments where the use of infrastructure or electricity, including telecommunications, is censored and/or dangerous. Security features that accompany these communication mechanisms are essential to protect the confidentiality of the user base and the integrity and availability of the communication network.

In this work, we look at the feasibility of using Bluetooth Mesh as a communication network and analyze the security features that are inherent to the protocol. Through this analysis, we determine the strengths and weaknesses of Bluetooth Mesh security features when used as a messaging medium for at-risk populations and propose improvements to current shortcomings. Our analysis covers the Bluetooth Mesh networking security fundamentals as described by the Bluetooth SIG: Encryption and Authentication, Separation of Concerns, Area Isolation, Key Refresh, Message Obfuscation, Replay Attack Protection, Trashcan Attack Protection, and Secure Device Provisioning. We examine how each security feature is implemented and determine whether these implementations are sufficient to protect users from various attack vectors. For example, we examined the Blue Mirror attack, a reflection attack during the provisioning process that leads to the compromise of network keys, while also assessing the under-researched key refresh mechanism. We propose a mechanism to address Blue-Mirror-oriented attacks with the goal of creating a more secure provisioning process. To analyze the key refresh mechanism, we built our own full-fledged Bluetooth Mesh network and implemented a key refresh mechanism within it. Through this we assess the throughput, range, and impact of a key refresh in both lab and field environments, demonstrating the suitability of our solution as a secure communication method.
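The key refresh procedure in the Bluetooth Mesh specification proceeds in phases so that old and new keys briefly coexist and no node is cut off mid-transition. The sketch below models those phases in Python; the node model is simplified for illustration and is not our experimental implementation:

```python
# Simplified model of the Bluetooth Mesh Key Refresh Procedure's phases.
from enum import Enum

class Phase(Enum):
    NORMAL = 0        # only the current key in use
    DISTRIBUTION = 1  # new key delivered; still transmit with the old key
    SWITCH = 2        # transmit with the new key; still accept the old one
    # Revocation returns the network to NORMAL with the new key only.

class Node:
    def __init__(self, key):
        self.keys = [key]
        self.phase = Phase.NORMAL

    def distribute(self, new_key):
        self.keys.append(new_key)   # hold old + new during the transition
        self.phase = Phase.DISTRIBUTION

    def switch(self):
        self.phase = Phase.SWITCH

    def revoke_old(self):
        self.keys = self.keys[-1:]  # drop the old (possibly compromised) key
        self.phase = Phase.NORMAL

    def tx_key(self):
        return self.keys[0] if self.phase == Phase.DISTRIBUTION else self.keys[-1]

    def accepts(self, key):
        return key in self.keys

n = Node("old")
n.distribute("new"); assert n.tx_key() == "old" and n.accepts("new")
n.switch();          assert n.tx_key() == "new" and n.accepts("old")
n.revoke_old();      assert not n.accepts("old")
```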


Daniel Johnson

Probability-Aware Selective Protection for Sparse Iterative Solvers

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Hongyang Sun, Chair
Perry Alexander
Zijun Yao


Abstract

With the increasing scale of high-performance computing (HPC) systems, transient bit-flip errors are now more likely than ever, posing a threat to long-running scientific applications. A substantial portion of these applications involve the simulation of partial differential equations (PDEs) modeling physical processes over discretized spatial and temporal domains, with some requiring the solving of sparse linear systems. While these applications are often paired with system-level application-agnostic resilience techniques such as checkpointing and replication, the utilization of these techniques imposes significant overhead. In this work, we present a probability-aware framework that produces low-overhead selective protection schemes for the widely used Preconditioned Conjugate Gradient (PCG) method, whose performance can heavily degrade due to error propagation through the sparse matrix-vector multiplication (SpMV) operation. Through the use of a straightforward mathematical model and an optimized machine learning model, our selective protection schemes incorporate error probability to protect only certain crucial operations. An experimental evaluation using 15 matrices from the SuiteSparse Matrix Collection demonstrates that our protection schemes effectively reduce resilience overheads, often outperforming or matching both baseline and established protection schemes across all error probabilities.
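To illustrate the kind of per-operation check a selective protection scheme can toggle on or off, here is a classic checksum test for SpMV (a standard algorithm-based fault tolerance idea, shown for intuition rather than as the paper's exact mechanism). It relies on the identity 1^T (Ax) = (1^T A) x, where the row vector 1^T A is precomputed once:

```python
# Checksum-protected SpMV: verify sum(y) against a precomputed column-sum
# vector; a mismatch flags a possible bit flip.
import numpy as np
from scipy.sparse import random as sparse_random

A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)
x = np.random.default_rng(0).standard_normal(1000)
colsum = np.asarray(A.sum(axis=0)).ravel()   # 1^T A, computed once up front

def check(y, x, tol=1e-8):
    """Invariant: 1^T (A x) == (1^T A) x, up to floating-point tolerance."""
    return abs(y.sum() - colsum @ x) <= tol * max(1.0, abs(y.sum()))

y = A @ x
print(check(y, x))        # True: clean result passes

y[42] += 1e6              # simulate a high-order bit flip in the output
print(check(y, x))        # False: corruption detected
```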


Javaria Ahmad

Discovering Privacy Compliance Issues in IoT Apps and Alexa Skills Using AI and Presenting a Mechanism for Enforcing Privacy Compliance

When & Where:


LEEP2, Room 2425

Committee Members:

Bo Luo, Chair
Alex Bardas
Tamzidul Hoque
Fengjun Li
Michael Zhuo Wang

Abstract

The growth of IoT and voice assistant (VA) apps poses increasing concerns about sensitive data leaks. While privacy policies are required to describe how these apps use private user data (i.e., data practices), problems such as missing, inaccurate, and inconsistent policies have been repeatedly reported. Therefore, it is important to assess the actual data practices of apps and identify potential gaps between the actual and declared data usage. We find that app stores do little to regulate compliance between app practices and their declarations, so we use AI to discover the compliance issues in these apps to assist regulators and developers. For VA apps, we also develop a mechanism to enforce compliance using AI.

In this work, we conduct a measurement study using our framework, IoTPrivComp, which applies automated analysis of IoT apps’ code and privacy policies to identify compliance gaps. We collect 1,489 IoT apps with English privacy policies from the Play Store. IoTPrivComp detects 532 apps with sensitive external data flows, among which 408 (76.7%) apps have undisclosed data leaks. Moreover, 63.4% of the data flows that involve health and wellness data are inconsistent with the practices disclosed in the apps’ privacy policies.

Next, we focus on the compliance issues in skills. VAs, such as Amazon Alexa, are integrated with numerous devices in homes and cars to process user requests using apps called skills. With their growing popularity, VAs also pose serious privacy concerns: sensitive user data captured by VAs may be transmitted to third-party skills without users’ consent or knowledge of how their data is processed. Privacy policies are a standard medium to inform users of the data practices performed by skills. However, privacy policy compliance verification of such skills is challenging, since the source code is controlled by the skill developers, who can make arbitrary changes to the behavior of a skill without being audited; hence, conventional defense mechanisms using static/dynamic code analysis can be easily escaped. We present Eunomia, the first real-time privacy compliance firewall for Alexa skills. As the skills interact with the users, Eunomia monitors their actions by hijacking and examining the communications from the skills to the users, and validates them against the published privacy policies, which are parsed using a BERT-based policy analysis module. When non-compliant skill behaviors are detected, Eunomia stops the interaction and warns the user. We evaluate Eunomia with 55,898 skills on the Amazon skills store to demonstrate its effectiveness and to provide a privacy compliance landscape of Alexa skills.
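Stripped of the analysis machinery, the compliance check at the heart of both systems reduces to comparing observed data practices against disclosed ones. The sketch below uses a hypothetical (data type, recipient) tuple schema purely for illustration:

```python
# Toy compliance audit: flag observed data flows absent from the set of
# practices disclosed in the privacy policy. Schema and examples are
# hypothetical, not IoTPrivComp's or Eunomia's internal representation.

disclosed = {("location", "advertising_network"), ("email", "first_party")}

observed = [
    ("location", "advertising_network"),   # disclosed: compliant
    ("health",   "analytics_provider"),    # undisclosed: flag it
]

def audit(observed, disclosed):
    return [flow for flow in observed if flow not in disclosed]

for data_type, recipient in audit(observed, disclosed):
    print(f"non-compliant flow: {data_type} -> {recipient}")
```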


Xiangyu Chen

Toward Efficient Deep Learning for Computer Vision Applications

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Fengjun Li
Hongguo Xu

Abstract

Deep learning leads the performance in many areas of computer vision. However, after a decade of research, it tends to require larger datasets and more complex models, leading to heightened resource consumption across all fronts. Regrettably, meeting these requirements proves challenging in many real-life scenarios. First, both the data collection and labeling processes entail substantial labor and time investments. This challenge becomes especially pronounced in domains such as medicine, where identifying rare diseases demands meticulous data curation. Second, the large size of state-of-the-art models, such as ViT, Stable Diffusion, and ConvNeXt, hinders their deployment on resource-constrained platforms like mobile devices; research indicates pervasive redundancies within current neural network structures, exacerbating the issue. Finally, even with ample datasets and optimized models, the time required for training and inference remains prohibitive in certain contexts. Consequently, there is a burgeoning interest among researchers in exploring avenues for efficient artificial intelligence.

This study delves into various facets of efficiency within computer vision, including data efficiency, model efficiency, and training and inference efficiency. Data efficiency is improved by increasing the information extracted from given image inputs and reducing the redundancy of the RGB image format. To achieve this, we propose integrating both spatial and frequency representations to finetune the classifier, and we propose explicitly increasing the input information density in the frequency domain by deleting unimportant frequency channels. For model efficiency, we scrutinize the redundancies present in widely used vision transformers. Our investigation reveals that, because of its sheer volume, trivial attention in their attention modules drowns out useful non-trivial attention; we propose mitigating the impact of these accumulated trivial attention weights. To increase training efficiency, we propose SuperLoRA, a generalization of the LoRA adapter, to fine-tune pretrained models in few iterations and with extremely few parameters. Finally, a model simplification pipeline is proposed to further reduce inference time on mobile devices. By addressing these challenges, we aim to advance the practicality and performance of computer vision systems in real-world applications.
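The frequency-domain intuition can be demonstrated in a few lines: transform an image to the DCT domain, keep only a block of coefficients, and discard the rest. The naive low-pass selection below is a stand-in for the importance-based channel selection used in the work:

```python
# Toy frequency-channel pruning: keep a low-frequency DCT block and
# measure how much signal energy survives.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
img = rng.random((224, 224))           # stand-in for one image channel

coef = dctn(img, norm="ortho")
k = 56                                 # keep a 56x56 low-frequency block
mask = np.zeros_like(coef)
mask[:k, :k] = 1.0

kept = coef * mask
print(f"coefficients kept: {mask.mean():.1%}")                     # 6.2%
print(f"energy retained:   {(kept**2).sum() / (coef**2).sum():.1%}")

recon = idctn(kept, norm="ortho")      # image rebuilt from the kept channels
print(f"reconstruction MSE: {((img - recon)**2).mean():.4f}")
```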


Krushi Patel

Image Classification & Segmentation based on Enhanced CNN and Transformer Networks

When & Where:


Zoom defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Xinmai Yang

Abstract

Convolutional Neural Networks (CNNs) have significantly enhanced performance across various computer vision tasks such as image recognition and segmentation, owing to their robust representation capabilities. To further boost CNN performance, a self-attention module can be integrated after each network layer. Transformer-based models, which leverage a multi-head self-attention module as their core component, have recently demonstrated outstanding performance. However, several challenges persist: the limitation to class-specific channels in CNNs, the constrained receptive field in local transformers, and, in U-Net-type segmentation architectures, the incorporation of redundant features and the absence of multi-scale features.

In our study, we propose new strategies to tackle these challenges. (1) We propose a novel channel-based self-attention module to diversify the focus more on the discriminative and significant channels, and the module can be embedded at the end of any backbone network for image classification. (2) To mitigate noise introduced by shallow encoder layers in U-Net architectures, we substitute skip connections with an Adaptive Global Context Module (AGCM). Additionally, we introduce the Semantic Feature Enhancement Module (SFEM) to enhance multi-scale features in polyp segmentation. (3) We introduce a Multi-scaled Overlapped Attention (MOA) mechanism within local transformer-based networks for image classification, facilitating the establishment of long-range dependencies and initiation of neighborhood window communication. (4) We propose a pioneering Fuzzy Attention Module designed to prioritize challenging pixels, thereby augmenting polyp segmentation performance. (5) We develop a novel dense attention gate module that aggregates features from all preceding layers to compute attention scores, refining global features in polyp segmentation tasks. Moreover, we design a new multi-layer horizontally extended decoder architecture to enhance local feature refinement in polyp segmentation.