Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Zhaohui Wang

Detection and Mitigation of Cross-App Privacy Leakage and Interaction Threats in IoT Automation

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to everyday life, enabling users to deploy automation rules and develop IoT apps tailored to their specific needs. However, modern IoT ecosystems consist of numerous devices, applications, and platforms that interact continuously. As a result, users are increasingly exposed to complex and subtle security and privacy risks that are difficult to fully comprehend. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats. In addition, violations of memory integrity can undermine the security guarantees on which IoT apps rely. This dissertation addresses these challenges through several complementary approaches.

The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app interaction chains formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores that reflect the risk level of each inference. In addition, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks.
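A minimal sketch of one way such probability scores could be combined along an interaction chain (a noisy-OR over per-link inference probabilities); the combination rule and thresholds here are illustrative assumptions, not the dissertation's scoring model.

    # Illustrative only: combine per-link inference probabilities along a
    # cross-app interaction chain with a noisy-OR rule, then bucket the result
    # into an alert level. Probabilities and thresholds are placeholders.
    def chain_risk(link_probs):
        """Probability that at least one hop in the chain leaks the attribute."""
        no_leak = 1.0
        for p in link_probs:
            no_leak *= (1.0 - p)
        return 1.0 - no_leak

    def alert_level(score):
        return "high" if score >= 0.7 else "medium" if score >= 0.3 else "low"

    # e.g., a location attribute inferable through a three-app chain
    score = chain_risk([0.4, 0.5, 0.2])
    print(round(score, 2), alert_level(score))   # 0.76 high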

The second approach addresses cross-app interaction threats in IoT automation systems by leveraging a logic-based analysis model grounded in event relations. We formalize event relationships, detect event interferences, and classify rule conflicts, then generate risk scores and conflict rankings to enable comprehensive conflict detection and risk assessment. To mitigate the identified interaction threats, an optimization-based approach is employed to reduce risks while preserving system functionality. This approach ensures comprehensive coverage of cross-app interaction threats and provides a robust solution for detecting and resolving rule conflicts in IoT environments.
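For intuition, a toy sketch of the simplest kind of conflict check described above (two rules that share a trigger but command opposing actions on the same device); the rule fields and action vocabulary are hypothetical, and the dissertation's logic-based model is far more general.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        app: str        # owning IoT app (hypothetical field names)
        trigger: str    # event that fires the rule, e.g. "presence.away"
        device: str     # actuated device
        action: str     # commanded state

    OPPOSING = {("on", "off"), ("off", "on"), ("lock", "unlock"), ("unlock", "lock")}

    def find_conflicts(rules):
        """Return rule pairs sharing a trigger and device but issuing opposing actions."""
        conflicts = []
        for i, a in enumerate(rules):
            for b in rules[i + 1:]:
                if a.trigger == b.trigger and a.device == b.device \
                        and (a.action, b.action) in OPPOSING:
                    conflicts.append((a, b))
        return conflicts

    rules = [
        Rule("AwayMode", "presence.away", "front_door", "lock"),
        Rule("PetDoor",  "presence.away", "front_door", "unlock"),
    ]
    print(find_conflicts(rules))   # flags the AwayMode/PetDoor pair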

To support the development and rigorous evaluation of these security analyses, we further develop a large-scale, manually verified, and comprehensive dataset of real-world IoT apps. This clean and diverse benchmark supports the development and validation of IoT security and privacy solutions. All proposed approaches are evaluated using this dataset, collectively offering valuable insights and practical tools for enhancing IoT security and privacy against cross-app threats. Furthermore, we examine the integrity of the execution environment that supports IoT apps. We show that, even under non-privileged execution, carefully crafted memory access patterns can induce bit flips in physical memory, allowing attackers to corrupt data and compromise system integrity.


Shawn Robertson

A Low-Power Low-Throughput Communications Solution for At-Risk Populations in Resource Constrained Contested Environments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

In resource‑constrained contested environments (RCCEs), communications are routinely censored, surveilled, or disrupted by nation‑state adversaries, leaving at‑risk populations—including protesters, dissidents, disaster‑affected communities, and military units—without secure connectivity. This dissertation introduces MeshBLanket, a Bluetooth Mesh‑based framework designed for low‑power, low‑throughput messaging with minimal electromagnetic spectrum (EMS) exposure. Built on commercial off‑the‑shelf hardware, MeshBLanket extends the Bluetooth Mesh specification with automated provisioning and network‑wide key refresh to enhance scalability and resilience.

We evaluated MeshBLanket through field experimentation (range, throughput, battery life, and security enhancements) and qualitative interviews with ten senior U.S. Army communications experts. Thematic analysis revealed priorities of availability, EMS footprint reduction, and simplicity of use, alongside adoption challenges and institutional skepticism. Results demonstrate that MeshBLanket maintains secure messaging under load, supports autonomous key refresh, and offers operational relevance at the forward edge of battlefields.

Beyond military contexts, parallels with protest environments highlight MeshBLanket’s broader applicability for civilian populations facing censorship and surveillance. By unifying technical experimentation with expert perspectives, this work contributes a proof‑of‑concept communications architecture that advances secure, resilient, and user‑centric connectivity in environments where traditional infrastructure is compromised or weaponized.


Past Defense Notices


Will Thomas

Static Analysis and Synthesis of Layered Attestation Protocols

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Trust is a fundamental issue in computer security. Frequently, systems implicitly trust other systems, especially when configured by the same administrator. This fallacious reasoning stems from the belief that systems starting from a known, presumably good, state can be trusted. However, this only holds for boot-time behavior; most non-trivial systems change state over time, and thus runtime behavior is an important, oft-overlooked aspect of implicit trust in system security.

To address this, attestation was developed, allowing a system to provide evidence of its runtime behavior to a verifier. This evidence allows the verifier to make an explicit, informed decision about the system's trustworthiness. As systems grow more complex, scalable attestation mechanisms become increasingly important. To apply attestation to non-trivial systems, layered attestation was introduced, allowing attestation of individual components or layers to be combined into a unified report about overall system behavior. This approach enables more granular trust assessments and facilitates attestation in complex, multi-layered architectures. With the complexity of layered attestation, discerning whether a given protocol sufficiently measures a system, is executable, or properly reports all measurements becomes increasingly challenging.

In this work, we will develop a framework for the static analysis and synthesis of layered attestation protocols, enabling more robust and adaptable attestation mechanisms for dynamic systems. A key focus will be the static verification of protocol correctness, ensuring the protocol behaves as intended and provides reliable evidence of the underlying system state. A type system will be added to the Copland layered attestation protocol description language to allow basic static checks, and extended static analysis techniques will be developed to verify more complex properties of protocols for a specific target system. Further, protocol synthesis will be explored, enabling the automatic generation of correct-by-construction protocols tailored to system requirements.


David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

In high dynamic-range environments, matched-filter radar performance is often sidelobe-limited, with correlation error fundamentally constrained by the time-bandwidth product (TB) of the collective emission. To contend with the regulatory necessity of spectral containment, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining the responses from distinct pulses within a pulse-agile emission. In contrast to most complementary subsets, which were discovered via brute force under the notion of phase coding, these comp-FM waveform subsets achieve CSC while preserving hardware compatibility because they are FM. Although comp-FM addresses a primary limitation of complementary signals (i.e., hardware distortion), CSC hinges on the exact reconstruction of autocorrelation terms to suppress sidelobes, and this optimality is broken for Doppler-shifted signals. This work introduces a Doppler-generalized comp-FM (DG-comp-FM) framework that extends the cancellation condition to account for the anticipated unambiguous Doppler span after post-summing. While this framework is developed for a combine-before-Doppler-processing scheme, it can likewise be employed to design an entire coherent processing interval (CPI) to minimize range-sidelobe modulation (RSM) within the radar point-spread function (PSF), thereby introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
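For reference, the idealized complementary (sidelobe-cancellation) condition that a Doppler shift breaks can be written as follows; the notation is standard rather than quoted from the dissertation.

    % Idealized CSC condition for M pulses s_1(t),...,s_M(t) with
    % autocorrelations A_m(tau): the combined sidelobes cancel for all
    % nonzero delays.
    \[
      \sum_{m=1}^{M} A_m(\tau)
      \;=\; \sum_{m=1}^{M} \int s_m(t)\, s_m^{*}(t-\tau)\, dt
      \;=\; 0 \quad \text{for } \tau \neq 0 .
    \]
    % Under a fast-time Doppler shift f_D, each A_m(tau) is replaced by
    % \chi_m(\tau, f_D) = \int s_m(t) s_m^{*}(t-\tau) e^{j 2\pi f_D t} dt,
    % so the exact cancellation above no longer holds, which is what the
    % Doppler-generalized (DG-comp-FM) design accounts for.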

Some radar systems operate with multiple emitters, as in the case of multiple-input multiple-output (MIMO) radar. Whereas a single emitter must contend only with its self-inflicted autocorrelation sidelobes, MIMO systems must likewise contend with cross-correlation against coincident (in time and spectrum) emissions from other emitters. As such, the determination of "orthogonal waveforms" comprises a large portion of research within the MIMO space, with a small majority now recognizing that true orthogonality is not possible for band-limited signals (with the exclusion of TDMA). The notion of complementary-FM is proposed for exploration within a MIMO context, whereby coherently combining responses can achieve CSC as well as cross-correlation cancellation over a wide Doppler space. By effectively minimizing cross-correlation terms, this enables improved channel separation on receive as well as improved estimation capability due to reduced correlation error. Proposal items include further exploration/characterization of the space, incorporating an explicit spectral


Jigyas Sharma

SEDPD: Sampling-Enhanced Differentially Private Defense against Backdoor Poisoning Attacks of Image Classification

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Han Wang, Chair
Drew Davidson
Dongjie Wang


Abstract

Recent advancements in explainable artificial intelligence (XAI) have brought significant transparency to machine learning by providing interpretable explanations alongside model predictions. However, this transparency has also introduced vulnerabilities, enhancing adversaries' ability to exploit model decision processes through explanation-guided attacks. In this paper, we propose a robust, model-agnostic defense framework to mitigate the vulnerabilities introduced by explanations while preserving the utility of XAI. Our framework employs a multinomial sampling approach that perturbs explanation values generated by techniques such as SHAP and LIME. These perturbations satisfy differential privacy (DP) bounds, disrupting adversarial attempts to embed malicious triggers while maintaining explanation quality for legitimate users. To validate our defense, we introduce a threat model tailored to image classification tasks. By applying our defense framework, we train models with pixel-sampling strategies that integrate DP guarantees, enhancing robustness against backdoor poisoning attacks that exploit XAI. Extensive experiments on widely used datasets (CIFAR-10, MNIST, CIFAR-100, and Imagenette) and models (including ConvMixer and ResNet-50) show that our approach effectively mitigates explanation-guided attacks without compromising model accuracy. We also evaluate the defense against other backdoor attacks and find that it detects other types of backdoor triggers well. This work highlights the potential of DP in securing XAI systems and enables safer deployment of machine learning models in real-world applications.
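A simplified sketch of the multinomial-sampling idea (not the SEDPD implementation, and with the formal DP accounting omitted): attribution mass is resampled with a fixed budget, so the released explanation is a noisy, normalized version of the original.

    import numpy as np

    def sample_explanation(scores, n_draws=1000, rng=None):
        """Perturb nonnegative attribution values (e.g., SHAP/LIME) by multinomial resampling.

        Smaller n_draws gives a noisier (more private) released explanation.
        """
        rng = rng or np.random.default_rng()
        p = np.asarray(scores, dtype=float)
        p = p / p.sum()                       # normalize to a probability vector
        counts = rng.multinomial(n_draws, p)  # resample the attribution mass
        return counts / n_draws               # noisy explanation on the same scale

    original = np.array([0.50, 0.30, 0.15, 0.05])
    print(sample_explanation(original, n_draws=200))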


Dimple Galla

Intelligent Application for Cold Email Generation: Business Outreach

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Cold emailing remains an effective strategy for software service companies to improve organizational reach by acquiring clients, but generic emails often fail to get a response.
This project leverages Generative AI to automate cold email generation. It is built with the Llama-3.1 model and a Chroma vector database that supports semantic search, matching keywords in a job description to the project portfolio links of a software service company. The application automatically extracts technology-related job openings from Fortune 500 companies. Users can either select from these extracted job postings or manually enter the URL of a job posting, after which the system generates an email and sends it upon approval. Advanced techniques such as Chain-of-Thought prompting and few-shot learning were applied to improve relevance and make the emails more likely to elicit a response. This AI-driven approach improves engagement and simplifies the business development process for software service companies.
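As a rough sketch of the semantic-matching step (the portfolio entries, links, and query below are made-up placeholders), the Chroma client can index portfolio descriptions and return the closest links for the skills extracted from a job posting:

    import chromadb

    client = chromadb.Client()                      # in-memory instance
    portfolio = client.create_collection(name="portfolio")
    portfolio.add(
        documents=["React and Node.js web applications",
                   "Python machine learning pipelines"],
        metadatas=[{"link": "https://example.com/react-portfolio"},
                   {"link": "https://example.com/ml-portfolio"}],
        ids=["p1", "p2"],
    )

    job_skills = "Frontend engineer with React and TypeScript experience"
    match = portfolio.query(query_texts=[job_skills], n_results=1)
    print(match["metadatas"][0][0]["link"])          # link to cite in the generated email

The returned links can then be placed into the LLM prompt so the drafted email references relevant past work.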


Shahima Kalluvettu Kuzhikkal

Machine Learning Based Predictive Maintenance for Automotive Systems

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni
Hongyang Sun

Abstract

Predictive maintenance plays a central role in reducing vehicle downtime and improving operational efficiency by using data-driven methods to classify the condition of automotive engines. Rather than relying on fixed service schedules or reacting to unexpected breakdowns, this approach leverages machine learning to distinguish between healthy and failed engines based on operational data.

In this project, engine telemetry data capturing key parameters such as engine speed, fuel pressure, and coolant temperature was used to train and evaluate several machine learning models, including logistic regression, random forest, k-nearest neighbors, and a neural network. To further enhance predictive performance, ensemble strategies such as soft voting and stacking were applied. The stacking ensemble, which combines the strengths of multiple classifiers through a meta-learning approach, demonstrated particularly effective results.
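A minimal sketch of the stacking setup described above, using scikit-learn; the synthetic data here merely stands in for the engine telemetry features (speed, fuel pressure, coolant temperature, and so on).

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    # Stand-in for labeled engine telemetry (healthy vs. failed)
    X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        final_estimator=LogisticRegression(),   # meta-learner over base predictions
    )
    stack.fit(X_train, y_train)
    print("held-out accuracy:", stack.score(X_test, y_test))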

This classification-based framework demonstrates how data-driven fault detection can enhance automotive maintenance operations. By identifying engine failures more reliably, machine learning enables safer transportation, reduces maintenance costs, and enhances overall vehicle dependability. Beyond individual vehicles, such approaches have broader applications in fleet management, where proactive decision-making can improve service continuity, reduce operational risks, and increase customer satisfaction.


Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 
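For context, the fast-time Doppler behavior in question is captured by the standard narrowband ambiguity function (the notation below is standard, not quoted from the dissertation); a Doppler-tolerant waveform such as LFM retains a strong, merely delay-shifted mainlobe as the Doppler shift grows, and a degree-of-tolerance measure grades how gracefully that response degrades.

    % Standard narrowband ambiguity function for a waveform s(t), with delay tau
    % and fast-time Doppler shift f_D.
    \[
      \chi(\tau, f_D) \;=\; \int_{-\infty}^{\infty} s(t)\, s^{*}(t-\tau)\, e^{j 2\pi f_D t}\, dt .
    \]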

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” sidelobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements.

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of the future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on the current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of the differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
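A toy sketch of the overall framework (the Amdahl-style speedup model, the allocation cap, and the example graph are assumptions for illustration, not the evaluated algorithms):

    import heapq

    def exec_time(work, p, alpha=0.1):
        """Amdahl-style speedup model: alpha is the sequential fraction."""
        return work * (alpha + (1 - alpha) / p)

    def list_schedule(tasks, deps, total_procs, cap_fraction=0.5):
        """tasks: {name: work}; deps: {name: set of predecessors}; returns makespan."""
        indeg = {t: len(deps.get(t, set())) for t in tasks}
        ready = [t for t in tasks if indeg[t] == 0]
        running, free, clock = [], total_procs, 0.0
        while ready or running:
            # Greedily start ready tasks while processors remain.
            while ready and free > 0:
                t = ready.pop(0)
                p = max(1, min(free, int(cap_fraction * total_procs)))  # tunable allocation rule
                free -= p
                heapq.heappush(running, (clock + exec_time(tasks[t], p), t, p))
            finish, t, p = heapq.heappop(running)     # advance to the next completion
            clock, free = finish, free + p
            for succ, preds in deps.items():          # successors are discovered online
                if t in preds:
                    indeg[succ] -= 1
                    if indeg[succ] == 0:
                        ready.append(succ)
        return clock

    tasks = {"A": 100, "B": 80, "C": 60, "D": 40}
    deps = {"C": {"A", "B"}, "D": {"C"}}
    print(list_schedule(tasks, deps, total_procs=8, cap_fraction=0.5))

Sweeping cap_fraction (or the analogous parameter of each allocation strategy) and recording the best makespan mirrors the parameter-tuning methodology described above.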


Aidan Schmelzle

Exploration of Human Design with Genetic Algorithms as Artistic Medium for Color Images

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Arvin Agah, Chair
David Johnson
Jennifer Lohoefener


Abstract

Genetic Algorithms (GAs), a subclass of evolutionary algorithms, seek to apply the concept of natural selection to promote the optimization and furtherance of “something” designated by the user. GAs generate a population of chromosomes represented as value strings, score each chromosome with a “fitness function” on a defined set of criteria, and mutate future generations depending on the scores ascribed to each chromosome. In this project, each chromosome is a bitstring representing one canvased color artwork. Artworks are scored with a variety of design fundamentals and user preference. The artworks are then evolved through thousands of generations and the final piece is computationally drawn for analysis. While the rise of gradient-based optimization has resulted in more limited use-cases of GAs, genetic algorithms still have applications in various settings such as hyperparameter tuning, mathematical optimization, reinforcement learning, and black box scenarios. Neural networks are favored presently in image generation due to their pattern recognition and ability to produce new content; however, in cases where a user is seeking to implement their own vision through careful algorithmic refinement, genetic algorithms still find a place in visual computing.
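A minimal genetic-algorithm loop in the spirit described above; the bitstring length, operators, and the fitness criterion (which simply rewards a balanced bit count) are placeholders for the project's design-fundamentals scoring and user-preference terms.

    import random

    GENES, POP, GENERATIONS, MUT_RATE = 64, 50, 200, 0.01

    def fitness(chrom):
        # Placeholder criterion: reward chromosomes whose bit balance is near 50/50.
        return -abs(sum(chrom) - GENES // 2)

    def mutate(chrom):
        return [1 - b if random.random() < MUT_RATE else b for b in chrom]

    def crossover(a, b):
        cut = random.randrange(1, GENES)          # single-point crossover
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    best = max(population, key=fitness)
    print("best fitness:", fitness(best))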


Zara Safaeipour

Task-Aware Communication Computation Co-Design for Wireless Edge AI Systems

When & Where:


Nichols Hall, Room 246

Committee Members:

Morteza Hashemi, Chair
Van Ly Nguyen
Dongjie Wang


Abstract

Wireless edge systems typically need to complete timely computation and inference tasks under strict power, bandwidth, latency, and processing constraints. As AI models and datasets grow in size and complexity, the traditional model of sending all data to a remote cloud or running full inference on edge devices becomes impractical. This creates a need for communication-computation co-design to enable efficient AI task processing at the wireless edge. To address this problem, we investigate task-aware communication-computation optimization for two specific problem settings.

First, we explore semantic communication that transmits only the information essential for the receiver’s computation tasks. We propose a semantic-aware and goal-oriented communication method for object detection. Our proposed approach is built upon an autoencoder architecture, with the encoder and decoder implemented at the transmitter and receiver, respectively, to extract semantic information for the specific computation goal (e.g., object detection). Numerical results show that transmitting only the necessary semantic features significantly improves the overall system efficiency.
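A conceptual PyTorch sketch of this split (layer sizes, the additive-noise channel, and the decoder head are placeholders, not the evaluated architecture): the encoder runs at the transmitter and emits a compact feature vector, and the decoder at the receiver recovers features for the downstream detector.

    import torch
    import torch.nn as nn

    class SemanticEncoder(nn.Module):            # transmitter side
        def __init__(self, latent=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Flatten(), nn.Linear(32 * 56 * 56, latent))

        def forward(self, x):
            return self.net(x)

    class SemanticDecoder(nn.Module):            # receiver side, feeds the detector
        def __init__(self, latent=256, out_features=1024):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent, out_features), nn.ReLU())

        def forward(self, z):
            return self.net(z)

    enc, dec = SemanticEncoder(), SemanticDecoder()
    img = torch.randn(1, 3, 224, 224)                 # dummy camera frame
    z = enc(img)                                      # compact semantic features to transmit
    z_noisy = z + 0.01 * torch.randn_like(z)          # idealized noisy channel
    features = dec(z_noisy)                           # receiver-side features for detection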

Second, we study collaborative inference in wireless edge networks, where energy-constrained devices aim to complete delay-sensitive inference tasks. The inference computation is split between the device and an edge server, thereby achieving collaborative inference. We formulate a utility maximization problem under energy and delay constraints and propose Bayes-Split-Edge, which uses Bayesian optimization to determine the optimal transmission power and neural network split point. The proposed framework introduces a hybrid acquisition function that balances inference task utility, sample efficiency, and constraint violation penalties. We evaluate our approach using the VGG19 model, the ImageNet-Mini dataset, and real-world mMobile wireless channel datasets.
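A hedged sketch of the joint search over transmit power and split point using Gaussian-process Bayesian optimization via scikit-optimize; the utility, delay, and energy expressions below are made-up stand-ins for the Bayes-Split-Edge objective and constraints.

    from skopt import gp_minimize
    from skopt.space import Integer, Real

    LAYERS, DELAY_BUDGET, ENERGY_BUDGET = 19, 0.5, 1.0     # hypothetical constraints

    def objective(params):
        power, split = params
        device_energy = 0.04 * split + 0.6 * power          # toy on-device energy cost
        delay = 0.02 * split + 0.3 / (power + 0.1)           # toy compute + uplink delay
        utility = 1.0 - 0.01 * split                         # toy inference utility
        penalty = 10.0 * (max(0.0, delay - DELAY_BUDGET) +   # constraint-violation penalty
                          max(0.0, device_energy - ENERGY_BUDGET))
        return -utility + penalty                            # gp_minimize minimizes

    result = gp_minimize(objective,
                         [Real(0.1, 1.0, name="tx_power"),
                          Integer(1, LAYERS, name="split_layer")],
                         n_calls=30, random_state=0)
    print("best (power, split):", result.x, "objective:", result.fun)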

Overall, this research aims to develop efficient edge AI systems by incorporating the underlying wireless communication limitations and challenges into AI task processing.


Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because the web languages are ergonomic for programmers and have a cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, are running a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular as compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
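A small sketch of the core measurement behind an smaps-based profiler: summing the proportional set size (Pss) over every mapping in /proc/<pid>/smaps. The field name comes from the kernel's smaps format; the real smaps-profiler aggregates considerably more than this.

    import sys

    def pss_kb(pid):
        total = 0
        with open(f"/proc/{pid}/smaps") as f:
            for line in f:
                if line.startswith("Pss:"):       # one Pss line per mapping, in kB
                    total += int(line.split()[1])
        return total

    if __name__ == "__main__":
        pid = sys.argv[1] if len(sys.argv) > 1 else "self"
        print(f"total PSS for {pid}: {pss_kb(pid)} kB")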