Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command-line directive, developers are able to quickly fetch and install packages from vast open-source repositories. npm--the largest such repository--alone hosts millions of unique packages and serves billions of package downloads each week.

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to simultaneously perform radar and communications functions. DFRC is ultimately a compromise between radar sensing performance and communications data throughput, due to the conflicting requirements of the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via the phase modulation between the two signals. Here, the PARC framework is used in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the full DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, mitigating the radar performance degradation that would otherwise result from spectral growth due to the communications signal.
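One plausible way to formalize this spectral shaping, in our notation rather than necessarily the thesis's exact cost function, is a least-squares template match over the expected PSD:

    \min_{s} \int \left( \mathbb{E}\!\left[ |S(f)|^{2} \right] - T(f) \right)^{2} \, df

where S(f) is the spectrum of the transmitted PARC waveform (random due to the embedded communications symbols) and T(f) is the desired spectral template.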

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques, which assume a small maximum expected channel delay, since data transmission is paused to sound the channel for a duration equal to twice the maximum channel delay. The resulting dropout in data reduces spectral efficiency.
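As a rough illustration of that overhead (our notation, not the thesis's): if useful transmission of duration T_d must be paused for twice the maximum channel delay \tau_max to sound the channel, spectral efficiency scales by approximately

    \eta \approx \frac{T_d}{T_d + 2\tau_{\max}}

so the large delay spreads typical of AMT channels directly erode throughput under conventional sounding.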

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency; a successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian process regression, to take advantage of this sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.


Mohammad Ful Hossain Seikh

AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino


Abstract

This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
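As a flavor of the kind of computation the toolkit automates, here is a minimal sketch (not AAFIYA itself) that loads a Touchstone file and derives input impedance and VSWR from S11; it assumes the scikit-rf package and a hypothetical file name:

    # Minimal sketch of Touchstone ingestion and impedance extraction.
    # Assumes scikit-rf is installed; 'antenna.s1p' is a hypothetical file.
    import numpy as np
    import skrf as rf

    ant = rf.Network('antenna.s1p')       # parse the Touchstone measurement
    s11 = ant.s[:, 0, 0]                  # complex reflection coefficient vs. frequency
    z0 = ant.z0[:, 0]                     # port reference impedance (typically 50 ohms)
    z_in = z0 * (1 + s11) / (1 - s11)     # input impedance from S11
    vswr = (1 + np.abs(s11)) / (1 - np.abs(s11))
    print(ant.f[np.argmin(np.abs(s11))])  # best-matched frequency, in Hz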

Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11- and S21-related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.

AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.
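For concreteness, a minimal multi-label baseline in the spirit of the traditional models above (a sketch, not the project's code, assuming scikit-learn and the dataset's six standard labels):

    # One-vs-rest logistic regression over TF-IDF features; class_weight
    # partially addresses the class imbalance noted above, and per-label
    # probability thresholds can then be tuned for minority classes.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline

    LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
    clf = make_pipeline(
        TfidfVectorizer(max_features=50000),
        OneVsRestClassifier(LogisticRegression(max_iter=1000, class_weight="balanced")),
    )
    # clf.fit(train_texts, train_labels)        # train_labels: (n_samples, 6) binary matrix
    # probs = clf.predict_proba(test_texts)     # threshold each of the 6 columns separately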

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures, with transformer models achieving superior contextual understanding at the cost of computational complexity, while deep learning approaches (LSTM models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant in the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
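To illustrate the classical finite-difference ingredient shared by both variants (the C2Q encoding and CCD stages are quantum-specific and not shown), a minimal 1D Poisson sketch:

    # FDM discretization of -u'' = f on (0,1) with u(0)=u(1)=0, solved
    # classically for reference; a quantum solver would load h^2*f via
    # an encoding step and solve the same linear system.
    import numpy as np

    n = 8                                   # interior grid points
    h = 1.0 / (n + 1)
    A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiagonal stencil
    x = np.linspace(h, 1 - h, n)
    f = np.sin(np.pi * x)
    u = np.linalg.solve(A, h**2 * f)        # classical reference solution
    # exact solution is sin(pi x)/pi^2; the gap shrinks as O(h^2)
    print(np.max(np.abs(u - np.sin(np.pi * x) / np.pi**2)))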


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.
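For context, the boilerplate SimScholar hides is gem5's script-driven configuration. A heavily abbreviated fragment in the style of gem5's documented Python API (details vary by gem5 version and ISA, and such scripts run only under gem5's own interpreter) looks like:

    # Fragment of a minimal gem5 system configuration (abbreviated sketch).
    import m5
    from m5.objects import *  # gem5's SimObject namespace

    system = System()
    system.clk_domain = SrcClockDomain(clock="1GHz",
                                       voltage_domain=VoltageDomain())
    system.mem_mode = "timing"
    system.mem_ranges = [AddrRange("512MB")]
    system.cpu = TimingSimpleCPU()
    system.membus = SystemXBar()
    system.cpu.icache_port = system.membus.cpu_side_ports
    system.cpu.dcache_port = system.membus.cpu_side_ports
    # ...memory controller, workload, and Root setup omitted here;
    # m5.instantiate() and m5.simulate() would then run the simulation.

Every line of this is a parameter a student must discover by reading gem5 source and documentation, which is precisely the learning curve the GUI targets.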

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices


Alice Chen

Dynamic Selective Protection for Sparse Iterative Solvers

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Sumaiya Shomaji
Suzanne Shontz


Abstract

Soft errors are frequent occurrences within extensive computing platforms, primarily attributed to the growing size and intricacy of high-performance computing (HPC) systems. To safeguard scientific applications against such errors, diverse resilience approaches have been introduced, encompassing techniques like checkpointing, Algorithm-Based Fault Tolerance (ABFT), and replication, each operating at distinct tiers of defense. Notably, system-level replication often necessitates the duplication or triplication of the entire computational process, yielding substantial resilience-associated costs. This project introduces a method for dynamic selective safeguarding of sparse iterative solvers, with a focus on the Preconditioned Conjugate Gradient (PCG) solver, aiming to mitigate system-level resilience overhead. For this method, we leverage machine learning (ML) to predict the impact of soft errors that strike different elements of a key computation (i.e., sparse matrix-vector multiplication) at different iterations of the solver. Based on the result of the prediction, we design a dynamic strategy to selectively protect those elements that would result in a large performance degradation if struck by soft errors. Experimental assessment validates the efficacy of our dynamic protection strategy in curbing resilience overhead compared to prevailing algorithms.
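An illustrative sketch of the selective-protection idea (not the project's code; the impact score here is a stand-in for the ML predictor):

    # Selectively recompute only the high-impact rows of a sparse
    # matrix-vector product; a mismatch on re-execution would flag a soft error.
    import numpy as np
    from scipy.sparse import random as sprandom

    A = sprandom(100, 100, density=0.05, format="csr", random_state=0)
    x = np.ones(100)
    impact = np.abs(A).sum(axis=1).A1              # stand-in for predicted error impact
    protect = impact > np.percentile(impact, 90)   # protect only the top 10% of rows
    y = A @ x                                      # full SpMV
    y_check = A[protect] @ x                       # duplicated work: protected rows only
    assert np.allclose(y[protect], y_check)        # disagreement signals a struck element

Compared to full duplication, only a small, prediction-guided fraction of the SpMV is re-executed, which is the source of the overhead savings.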


Grace Young

A Quantum Polynomial-Time Reduction for the Dihedral Hidden Subgroup Problem

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Esam El-Araby
Matthew Moore
Cuncong Zhong
KC Kong

Abstract

The last century has seen incredible growth in the field of quantum computing. Quantum computation offers the opportunity to find efficient solutions to certain computational problems which are intractable on classical computers. One class of problems that seems to benefit from quantum computing is the Hidden Subgroup Problem (HSP). The HSP includes, as special cases, the problems of integer factoring, discrete logarithm, shortest vector, and subset sum, making the HSP incredibly important in various fields of research.

The presented research examines the HSP for Dihedral groups with order 2^n and proves a quantum polynomial-time reduction to the so-called Codomain Fiber Intersection Problem (CFIP). The usual approach to the HSP relies on harmonic analysis in the domain of the problem and the best-known algorithm using this approach is sub-exponential, but still super-polynomial. The algorithm we will present deviates from the usual approach by focusing on the structure encoded in the codomain and uses this structure to direct a “walk” down the subgroup lattice terminating at the hidden subgroup.                               

Though the algorithm presented here is specifically designed for the DHSP, it has potential applications to many other types of the HSP. It is hypothesized that any group with a sufficiently structured subgroup lattice could benefit from the analysis developed here. As this approach diverges from the standard approach to the HSP it could be a promising step in finding an efficient solution to this problem.


Daniel Herr

Information Theoretic Physical Waveform Design with Application to Waveform-Diverse Adaptive-on-Transmit Radar

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

James Stiles, Chair
Chris Allen
Shannon Blunt
Carl Leuschen
Chris Depcik

Abstract

Information theory provides methods for quantifying the information content of observed signals and has found application in the radar sensing space for many years. Here, we examine a type of information derived from Fisher information known as Marginal Fisher Information (MFI) and investigate its use to design pulse-agile waveforms. By maximizing this form of information, the expected error covariance about an estimation parameter space may be minimized. First, a novel method for designing MFI optimal waveforms given an arbitrary waveform model is proposed and analyzed. Next, a transformed domain approach is proposed in which the estimation problem is redefined such that information is maximized about a linear transform of the original estimation parameters. Finally, informationally optimal waveform design is paired with informationally optimal estimation (receive processing) and are combined into a cognitive radar concept. Initial experimental results are shown and a proposal for continued research is presented.
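For grounding, the classical Fisher information matrix and the Cramér-Rao bound it induces are standard estimation theory (the thesis's marginal variant builds on this):

    J(\theta) = \mathbb{E}\!\left[ \left( \frac{\partial \ln p(y;\theta)}{\partial \theta} \right) \left( \frac{\partial \ln p(y;\theta)}{\partial \theta} \right)^{T} \right], \qquad \operatorname{cov}(\hat{\theta}) \succeq J^{-1}(\theta)

so maximizing (marginal) Fisher information about the parameters of interest minimizes the achievable error covariance referenced above.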


Rachel Chang

Designing Pseudo-Random Staggered PRI Sequences

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Chris Allen
James Stiles


Abstract

In uniform pulse-Doppler radar, there is a well-known trade-off between unambiguous Doppler and unambiguous range. Pulse repetition interval (PRI) staggering, a technique that involves modulating the interpulse times, addresses this trade-space, allowing for expansion of the unambiguous Doppler domain with little range swath incursion. Random PRI staggering provides additional diversity, but comes at the cost of increased Doppler sidelobes. Thus, careful PRI sequence design is required to avoid spurious sidelobe peaks that could result in false alarms.
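For reference, the uniform-PRI relations behind this trade-off are standard (with c the speed of light and T the PRI):

    R_{ua} = \frac{cT}{2}, \qquad f_{D,ua} = \frac{1}{T}

so shortening T to widen the unambiguous Doppler extent directly shrinks the unambiguous range; staggering the PRI relaxes this coupling.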

In this thesis, two random PRI stagger models are defined and compared, and sidelobe peak mitigation is discussed. First, the co-array concept (borrowed from the intuitively related field of sparse array design in the spatial domain) is utilized to examine the effect of redundancy on sidelobe peaks for random PRI sequences. Then, a sidelobe peak suppression technique is introduced that involves a gradient-based optimization of the random PRI sequences, producing pseudo-random sequences that are shown to significantly reduce spurious Doppler sidelobes in both simulation and experiment.


Fatima Al-Shaikhli

Fiber Property Characterization based on Electrostriction

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Shima Fardad


Abstract

Electrostriction in an optical fiber is introduced by the interaction between the forward propagated optical signal and the acoustic standing waves in the radial direction resonating between the center of the core and the cladding circumference of the fiber. The response of electrostriction is dependent on fiber parameters, especially the mode field radius. A novel technique is demonstrated to characterize fiber properties by means of measuring their electrostriction response under intensity modulation. As the spectral envelope of electrostriction-induced propagation loss is anti-symmetrical, the signal-to-noise ratio can be significantly increased by subtracting the measured spectrum from its complex conjugate. It is shown that if the transversal field distribution of the fiber propagation mode is Gaussian, the envelope of the electrostriction-induced loss spectrum closely follows a Maxwellian distribution whose shape can be specified by a single parameter determined by the mode field radius. 


Sohaib Kiani

Exploring Trustworthy Machine Learning from a Broader Perspective: Advancements and Insights

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Bo Luo, Chair
Alexandru Bardas
Fengjun Li
Cuncong Zhong
Xuemin Tu

Abstract

Machine learning (ML) has transformed numerous domains, demonstrating exceptional performance in autonomous driving, medical diagnosis, and decision-making tasks. Nevertheless, ensuring the trustworthiness of ML models remains a persistent challenge, particularly with the emergence of new applications. The primary challenges in this context are the selection of an appropriate solution from a multitude of options, mitigating adversarial attacks, and advancing towards a unified solution that can be applied universally.

The thesis comprises three interconnected parts, all contributing to the overarching goal of improving trustworthiness in machine learning. Firstly, it introduces an automated machine learning (AutoML) framework that streamlines the training process, achieves optimal performance, and incorporates existing solutions for handling trustworthiness concerns. Secondly, it focuses on enhancing the robustness of machine learning models, particularly against adversarial attacks. A robust detector named "Argos" is introduced as a defense mechanism, leveraging the concept of two "souls" within adversarial instances to ensure robustness against unknown attacks. It incorporates the visually unchanged content representing the true label and the added invisible perturbation corresponding to the misclassified label. Thirdly, the thesis explores the realm of causal ML, which plays a fundamental role in assisting decision-makers and addressing challenges such as interpretability and fairness in traditional ML. By overcoming the difficulties posed by selective confounding in real-world scenarios, the proposed scheme utilizes dual-treatment samples and two-step procedures with counterfactual predictors to learn causal relationships from observed data. The effectiveness of the proposed scheme is supported by theoretical error bounds and empirical evidence using synthetic and real-world child placement data. By reducing the requirement for observed confounders, the applicability of causal ML is enhanced, contributing to the overall trustworthiness of machine learning systems.


Prashanthi Mallojula

On the Security of Mobile and Auto Companion Apps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Hongyang Sun
Huazhen Fang

Abstract

Today’s smartphone platforms have millions of applications, which not only access users’ private data but also information from the connected external services and IoT/CPS devices. Mobile application security involves protecting sensitive information and securing communication between the application and external services or devices. We focus on these two key aspects of mobile application security.

In the first part of this dissertation, we aim to ensure the security of user information collected by mobile apps. Mobile apps seek consent from users to approve various permissions to access sensitive information such as location and personal information. However, users often blindly accept permission requests, and apps have started to abuse this mechanism: as long as a permission is requested, state-of-the-art security mechanisms treat it as legitimate. We ask whether these permission requests are valid, and we attempt to validate them using statistical analysis on permission sets extracted from groups of functionally similar apps. We detect mobile applications with abusive permission access and measure the risk of information leaks through each mobile application.
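An illustrative sketch of the group-based statistical idea (not the dissertation's tool; app names and permission sets are hypothetical):

    # Flag permissions that are rare within a group of functionally similar
    # apps; a permission requested by few peers is a candidate for abuse.
    from collections import Counter

    group = {
        "app_a": {"CAMERA", "INTERNET"},
        "app_b": {"INTERNET"},
        "app_c": {"INTERNET", "ACCESS_FINE_LOCATION"},
    }
    freq = Counter(p for perms in group.values() for p in perms)
    n = len(group)
    for app, perms in group.items():
        rare = {p for p in perms if freq[p] / n < 0.5}  # requested by <50% of peers
        if rare:
            print(app, "requests unusual permissions:", rare)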

Second, we propose to investigate the security of auto companion apps. Auto companion apps are mobile apps designed to remotely connect with cars to provide features such as diagnostics, navigation, entertainment, and safety alerts. However, this can lead to several security threats; for instance, onboard information of vehicles can be tracked or altered through a malicious app. We design a comprehensive security analysis framework for automotive companion apps covering all stages of communication and collaboration between vehicles and companion apps, such as connection establishment, authentication, encryption, information storage, and vehicle diagnostic and control command access. By conducting static and network traffic analysis of Android OBD apps, we identify a series of vulnerability scenarios. We further evaluate these vulnerabilities with vehicle-based testing and identify potential security threats associated with auto companion apps.


Michael Neises

Trustworthy Measurements of a Linux Kernel and Layered Attestation via a Verified Microkernel

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Drew Davidson
Matthew Moore
Cuncong Zhong
Corey Maley

Abstract

Layered attestation is a process by which one can establish trust in a remote party. It is a special case of attestation in which different layers of the attesting system are handled distinctly. This type of trust is desirable because a vast and growing number of people depend on networked devices to go about their daily lives. Current architectures for remote attestation are lacking in process isolation, as evidenced by the existence of virtual machine escape exploits. This implies a deficiency of trustworthy ways to determine whether a networked Linux system has been exploited. The seL4 microkernel is unique in having machine-checked proofs concerning process confidentiality and integrity, and it is leveraged here to provide a verified level of software-based process isolation. When complemented with a comprehensive collection of measurements, this architecture can be trusted to report its own corruption. The architecture is described, implemented, and tested against a variety of exploits, which are detected using introspective measurement techniques.


Blake Douglas Bryant

Building Better with Blocks – A Novel Secure Multi-Channel Internet Memory Information Control (S-MIMIC) Protocol for Complex Latency Sensitive Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Bo Luo
Reza Barati

Abstract

Multimedia networking is the area of study associated with the delivery of heterogeneous data including, but not limited to, imagery, video, audio, and interactive content. Multimedia and communication network researchers have continually struggled to devise solutions for addressing the three core challenges in multimedia delivery: security, reliability, and performance. Solutions to these challenges typically exist in a spectrum of compromises achieving gains in one aspect at the cost of one or more of the others. Networked videogames represent the pinnacle of multimedia presented in a real-time interactive format. Continual improvements to multimedia delivery have led to tools such as buffering, redundant coupling of low-resolution alternative data streams, congestion avoidance, and forced in-order delivery of best-effort service; however, videogames cannot afford to pay the latency tax of these solutions in their current state.

I developed the Secure Multi-Channel Internet Memory Information Control (S-MIMIC) protocol as a novel solution to address these challenges. The S-MIMIC protocol leverages recent developments in blockchain and distributed ledger technology, coupled with creative enhancements to data representation and a novel data model. The S-MIMIC protocol also implements various novel algorithms for create, read, update, and delete (CRUD) interactions with distributed ledger and blockchain technologies. For validation, the S-MIMIC protocol was integrated with an open-source First-Person Shooter (FPS) videogame to demonstrate its ability to transfer complex data structures under extreme network latency demands. The S-MIMIC protocol demonstrated improvements in confidentiality, integrity, availability, and data read operations under all test conditions. Data write performance of S-MIMIC is slightly below that of traditional TCP-based networking in unconstrained networks, but matches it in networks exhibiting 150 milliseconds of delay or more.

Though the S-MIMIC protocol was evaluated for use in networked videogames, its potential uses are far-reaching, with promising applicability to medical information, legal documents, financial transactions, information security threat feeds, and many other use cases that require security, reliability, and performance guarantees.


Zeyan Liu

Towards Robust Deep Learning Systems against Stealthy Attacks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Zijun Yao
John Symons

Abstract

Deep neural network (DNN) models are the core components of machine learning solutions. However, their wide adoption in real-world applications raises increasing security concerns. Various attacks have been proposed against DNN models, such as evasion and backdoor attacks. Attackers utilize adversarially altered samples, which are supposed to be stealthy and imperceptible to human eyes, to fool the targeted model into misbehaviors. This could result in severe consequences, such as self-driving cars ignoring traffic signs or colliding with pedestrians.

In this work, we aim to investigate the security and robustness of deep learning systems against stealthy attacks. To do this, we start by reevaluating the stealthiness assumptions made by the state-of-the-art attacks through a comprehensive study. We implement 20 representative attacks on six benchmark datasets. We evaluate the visual stealthiness of the attack samples using 24 metrics for image similarity or quality and over 30,000 annotations in a user study. Our results show that the majority of the existing attacks introduce non-negligible perturbations that are not stealthy. Next, we propose a novel model-poisoning neural Trojan, namely LoneNeuron, which introduces only minimal modification to the host neural network: a single neuron after the first convolution layer. LoneNeuron responds to feature-domain patterns that transform into invisible, sample-specific, and polymorphic pixel-domain watermarks. With high attack specificity, LoneNeuron achieves a 100% attack success rate and does not compromise the primary task performance. Additionally, its unique watermark polymorphism further improves watermark randomness, stealth, and resistance to Trojan detection.
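A conceptual PyTorch sketch of the attack surface described above (illustrative only; the network, shapes, and coupling are hypothetical, not LoneNeuron's actual construction):

    # Toy host network with one extra neuron grafted in after the first
    # convolution layer, watching pooled feature-domain activations.
    import torch
    import torch.nn as nn

    class HostWithAddedNeuron(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(3, 16, 3, padding=1)  # host's first conv layer
            self.extra = nn.Linear(16, 1)                # the single added neuron
            self.rest = nn.Sequential(nn.Flatten(), nn.LazyLinear(10))

        def forward(self, x):
            h = torch.relu(self.conv1(x))
            # a crafted feature-domain pattern drives t toward 1 and perturbs
            # the features; benign inputs leave the host nearly unchanged
            t = torch.sigmoid(self.extra(h.mean(dim=(2, 3))))
            h = h * (1.0 + t.view(-1, 1, 1, 1))
            return self.rest(h)

    logits = HostWithAddedNeuron()(torch.randn(1, 3, 32, 32))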