Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
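
The install-time attack surface noted in (ii) comes from npm lifecycle scripts (preinstall, install, postinstall) that run automatically when a package is installed. Purely as an illustration of that concept, and not the tooling developed in this work, the sketch below queries the public npm registry for a package's manifest and flags any such scripts; the example package name is arbitrary.

```python
# Minimal sketch (not the author's tooling): flag npm packages whose manifests
# declare install-time lifecycle scripts, one of the attack surfaces discussed above.
# Assumes network access to the public npm registry.
import json
import urllib.request

RISKY_SCRIPTS = {"preinstall", "install", "postinstall"}

def installtime_scripts(package: str, version: str = "latest") -> dict:
    """Return any install-time scripts declared in a package's manifest."""
    url = f"https://registry.npmjs.org/{package}/{version}"
    with urllib.request.urlopen(url) as resp:
        manifest = json.load(resp)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_SCRIPTS}

if __name__ == "__main__":
    # Prints any install-time scripts found for an example package (possibly none).
    print(installtime_scripts("left-pad"))
```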


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of this sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
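
As a loose illustration of the Gaussian Process Regression idea, and not the dissertation's actual OTFS channel estimator, the sketch below fits a GP to noisy samples of a synthetic, sparse channel response along the Doppler axis and interpolates it on a finer grid; the kernel choice and synthetic data are assumptions.

```python
# Toy Gaussian Process Regression over a synthetic Doppler profile.
# This only sketches the GPR idea, not the delay-Doppler channel estimator itself.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic "true" channel magnitude along a normalized Doppler axis:
# two dominant paths plus noise (purely illustrative).
doppler = np.linspace(-0.5, 0.5, 200)
true_gain = np.exp(-((doppler - 0.1) / 0.02) ** 2) + 0.4 * np.exp(-((doppler + 0.2) / 0.03) ** 2)

# Noisy pilot-based observations at a few Doppler bins.
idx = rng.choice(len(doppler), size=25, replace=False)
obs = true_gain[idx] + 0.05 * rng.standard_normal(25)

kernel = RBF(length_scale=0.05) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel).fit(doppler[idx, None], obs)

est, std = gpr.predict(doppler[:, None], return_std=True)  # mean and uncertainty
print("max abs error:", np.abs(est - true_gain).max())
```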


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle.  The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while the recurrent deep learning approaches (the LSTM-based models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
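
As a minimal, generic sketch of the per-label threshold-adjustment idea (not the project's actual pipeline or dataset handling), the following tunes a decision threshold for each toxicity label on validation data; the TF-IDF features and logistic baseline are placeholders.

```python
# Per-label threshold tuning for multi-label toxicity classification (sketch only).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier

def fit_and_tune(train_texts, Y_train, val_texts, Y_val):
    vec = TfidfVectorizer(max_features=50_000)
    X_train, X_val = vec.fit_transform(train_texts), vec.transform(val_texts)

    # class_weight="balanced" is one simple way to counter the severe label imbalance.
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000, class_weight="balanced"))
    clf.fit(X_train, Y_train)

    probs = clf.predict_proba(X_val)
    thresholds = []
    for j in range(Y_train.shape[1]):          # pick the F1-maximizing threshold per label
        cand = np.linspace(0.05, 0.95, 19)
        f1s = [f1_score(Y_val[:, j], probs[:, j] >= t, zero_division=0) for t in cand]
        thresholds.append(cand[int(np.argmax(f1s))])
    return vec, clf, np.array(thresholds)
```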


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equations for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
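
For reference, the classical finite-difference step that both variants build on reduces a PDE to a linear system of equations; the sketch below does this for a one-dimensional Poisson equation and solves it classically. The C2Q encoding and CCD steps that make the system quantum-amenable are the contribution of this work and are not shown.

```python
# Finite-difference discretization of -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0.
# Classical reference only; the proposed algorithms encode such systems on a quantum device.
import numpy as np

n = 127                         # number of interior grid points (assumed size)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)           # example right-hand side

# Standard tridiagonal second-difference matrix.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)

# Analytic solution of -u'' = sin(pi x) is sin(pi x) / pi^2.
print("max error:", np.abs(u - np.sin(np.pi * x) / np.pi**2).max())
```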


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Prasad Kulkarni
Mohammad Alian

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools.

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.
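
Purely as a hypothetical illustration of what declaring a design space with parameters and constraints can look like, written in ordinary Python rather than the actual DSDL syntax described in the thesis:

```python
# Hypothetical, illustrative design-space declaration; this is NOT DSDL.
from itertools import product

design_space = {
    "l2_size_kib": [256, 512, 1024],
    "l2_assoc":    [4, 8, 16],
    "core_count":  [2, 4, 8],
}

# Example constraint: keep total L2 capacity under 4 MiB.
def feasible(cfg):
    return cfg["l2_size_kib"] * cfg["core_count"] <= 4096

configs = [dict(zip(design_space, values)) for values in product(*design_space.values())]
candidates = [c for c in configs if feasible(c)]
print(len(candidates), "feasible configurations to explore")
```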

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Past Defense Notices


Sameera Katamaneni

Revolutionizing Forensic Identification: A Dual-Method Facial Recognition Paradigm for Enhanced Criminal Identification

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Hongyang Sun



Abstract

In response to the challenges posed by increasingly sophisticated criminal behaviour that strategically evades conventional identification methods, this research advocates for a paradigm shift in forensic practices. Departing from reliance on traditional biometric techniques such as DNA matching, eyewitness accounts, and fingerprint analysis, the study introduces a pioneering biometric approach centered on facial recognition systems. Addressing the limitations of established methods, the proposed methodology integrates two key components. Firstly, facial features are meticulously extracted using the Histogram of Oriented Gradients (HOG) methodology, providing a robust representation of individualized facial characteristics. Subsequently, a face recognition system is implemented, harnessing the power of the K-Nearest Neighbours machine learning classifier. This innovative dual-method approach aims to significantly enhance the accuracy and reliability of criminal identification, particularly in scenarios where conventional methods prove inadequate. By capitalizing on the inherent uniqueness of facial features, this research strives to introduce a formidable tool for forensic practitioners, offering a more effective means of addressing the evolving landscape of criminal tactics and safeguarding the integrity of justice systems. 
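
A minimal sketch of the two-stage pipeline described above, HOG feature extraction followed by a K-Nearest Neighbours classifier, is shown below; the image size, HOG parameters, and choice of k are assumptions, and face detection and alignment are omitted.

```python
# HOG feature extraction + KNN classification (illustrative pipeline only).
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def hog_features(gray_face):
    # Assumes faces are already detected, aligned, and resized to 128x128 grayscale.
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train(faces, labels, k=3):
    X = np.array([hog_features(f) for f in faces])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)

def identify(knn, face):
    return knn.predict(hog_features(face).reshape(1, -1))[0]
```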


Thomas Atkins

Secure and Auditable Academic Collections Storage via Hyperledger Fabric-Based Smart Contracts

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Fengjun Li
Bo Luo


Abstract

This paper introduces a novel approach to managing collections of artifacts through smart contract access control, rooted in on-chain, role-based, property-level access control. The smart contract facilitates the lifecycle of these artifacts, allowing for the creation, modification, removal, and historical auditing of the artifacts through both direct and suggested actions. This method introduces a collection object designed to store role privileges concerning state object properties. User roles are defined within an on-chain entity that maps users' signed identities to roles across different collections, enabling a single user to assume varying roles in distinct collections. Unlike existing key-level endorsement mechanisms, this approach offers finer-grained privileges by defining them on a per-property basis rather than at the key level. The outcome is a more flexible and fine-grained access control system seamlessly integrated into the smart contract itself, empowering administrators to manage access with precision and adaptability across diverse organizational contexts. This has the added benefit of allowing auditing not only of the history of the artifacts but also of the permissions granted to users.
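
To make the property-level idea concrete, here is a plain-Python illustration of checking a role's privileges per property before applying an update; it is not Hyperledger Fabric chaincode, and the collection, role, and property names are invented for the example.

```python
# Illustrative property-level, role-based access check (not Fabric chaincode).
ROLE_PRIVILEGES = {
    # collection -> role -> set of artifact properties the role may modify
    "rare_books": {
        "curator":  {"location", "condition", "notes"},
        "research": {"notes"},
    },
}

def can_modify(collection: str, role: str, prop: str) -> bool:
    return prop in ROLE_PRIVILEGES.get(collection, {}).get(role, set())

def apply_update(artifact: dict, updates: dict, collection: str, role: str, history: list):
    for prop, value in updates.items():
        if not can_modify(collection, role, prop):
            raise PermissionError(f"role '{role}' may not modify '{prop}'")
        artifact[prop] = value
    history.append({"role": role, "updates": updates})   # audit trail of granted actions
    return artifact
```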


Christian Jones

Robust and Efficient Structure-Based Radar Receive Processing

When & Where:


Nichols Hall, Room 129 (Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Chris Allen
Suzanne Shontz
James Stiles
Zsolt Talata

Abstract

Legacy radar systems largely rely on repeated emission of a linear frequency modulated (LFM) or chirp waveform to ascertain scattering information from an environment. The prevalence of these chirp waveforms largely stems from their simplicity to generate and process, and from the general robustness they provide towards hardware effects. However, this traditional design philosophy often lacks the flexibility and dimensionality needed to address the dynamic “complexification” of the modern radio frequency (RF) environment or achieve current operational requirements where unprecedented degrees of sensitivity, maneuverability, and adaptability are necessary.

Over the last couple of decades, analog-to-digital and digital-to-analog technologies have advanced exponentially, resulting in tremendous design degrees of freedom and arbitrary waveform generation (AWG) capabilities that enable sophisticated design of emissions to better suit operational requirements. However, radar systems typically require high-power amplifiers (HPAs) to contend with the two-way propagation. Thus, transmitter-amenable waveforms are effectively constrained to be both spectrally contained and constant amplitude, resulting in a non-convex NP-hard design problem.

While determining the globally optimal waveform can be intractable for even modest time-bandwidth products (TB), locally optimal transmitter-amenable solutions that are “good enough” are often readily available. However, traditional matched filtering may not satisfy operational requirements for these sub-optimal emissions. Using knowledge of the transmitter-receiver chain, a discrete linear model can be formed to express the relationship between observed measurements and the complex scattering of the environment. This structured representation then enables more sophisticated least-squares and adaptive estimation techniques to better satisfy operational needs, improve estimate fidelity, and extend dynamic range.
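
In its simplest form, such a structured model relates the measurements y to the environment's complex scattering x through a known matrix A built from the transmitted waveform. The toy sketch below contrasts a matched-filter estimate with a structured least-squares estimate; the dimensions and waveform are chosen arbitrarily and do not reflect the emission designs considered in this work.

```python
# Toy structured least-squares radar estimate vs. matched filtering (sketch only).
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 200                                  # measurement and range-profile sizes (assumed)

s = np.exp(1j * np.pi * np.linspace(0, 1, 64) ** 2)   # arbitrary chirp-like pulse
A = np.zeros((N, M), dtype=complex)
for m in range(M):                               # convolution-structured model y = A x + noise
    A[m:m + len(s), m] = s[:min(len(s), N - m)]

x_true = np.zeros(M, dtype=complex)
x_true[[20, 90, 91]] = [1.0, 0.05, 0.04]         # strong scatterer plus two weak neighbors
y = A @ x_true + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

x_mf = A.conj().T @ y                            # matched filter (correlation) estimate
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)     # structured least-squares estimate
print(np.abs(x_ls[[20, 90, 91]]))
```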

However, radar dimensionality can be enormous and brute force implementations of these techniques may have unwieldy computational burden on even cutting-edge hardware. Additionally, a discrete linear representation is fundamentally an approximation of the dynamic continuous physical reality and model errors may induce bias, create false detections, and limit dynamic range. As such, these structure-based approaches must be both computationally efficient and robust to reality.

Here several generalized discrete radar receive models and structure-based estimation schemes are introduced. Modifications and alternative solutions are then proposed to improve estimate fidelity, reduce computational complexity, and provide further robustness to model uncertainty.


Shawn Robertson

A secure framework for at risk populations in austere environments utilizing Bluetooth Mesh communications

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Huazhen Fang

Abstract

Austere environments are defined by the US Military as those that regularly experience significant environmental hazards, have limited access to reliable electricity, or require prolonged use of body armor or chemical protection equipment.  We propose that in modern society this definition can also extend to telecommunications infrastructure: areas where an active adversary controls the telecommunications infrastructure and works against the population, such as protest areas in Iran, Russia, and China, or areas experiencing conflict and war, such as eastern Ukraine.  People in these austere environments need basic text communications and the ability to share simple media like low-resolution pictures.  This communication is complicated by the adversary’s capabilities as a potential nation-state actor. To address this, Low Earth Orbit satellite clusters, like Starlink, can be used to exfiltrate communications completely independent of local infrastructure.  This, however, creates another issue, as these satellite ground terminals are not inherently designed to support many users over a large area.  Traditional means of extending this connectivity create both power and security concerns.  We propose that Bluetooth Mesh can be used to extend connectivity and provide communications. 

Bluetooth Mesh provides a low signal footprint to reduce the risk of detection, blends into existing signals within the 2.4 GHz spectrum, includes security aspects in its specification, and allows devices to run on small batteries that maintain a covert form factor.  To realize this, security enhancements must be made to the provisioning process of the Bluetooth Mesh network, and a key management scheme must be implemented that ensures keys are changed regularly and securely, either in response to an adversary’s action or to preempt one.  We propose a provisioning process that uses whitelists on both the provisioner and the device and attestation for passwords, allowing devices to be provisioned on deployment to protect the at-risk population and prevent BlueMirror attacks.  We also propose, implement, and measure the impact of an automated key exchange that meets the Bluetooth Mesh three-phase specification.  Our experimentation in a field environment shows that Bluetooth Mesh has the throughput, reliability, and security to meet the requirements of at-risk populations in austere environments. 


Venkata Mounika Keerthi

Evaluating Dynamic Resource Management for Bulk Synchronous Parallel Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Bulk Synchronous Parallel (BSP) applications comprise distributed tasks that synchronize at periodic intervals, known as supersteps. Efficient resource management is critical for the performance of BSP applications, especially when deployed on multi-tenant cloud platforms. This project evaluates and extends existing resource management algorithms for BSP applications, focusing on dynamic schedulers to mitigate stragglers under variable workloads. In particular, a Dynamic Window algorithm is implemented to compute resource configurations optimized over a customizable timeframe by considering workload variability. The algorithm applies a discount factor prioritizing improvements in earlier supersteps to account for increasing prediction errors in future supersteps. It represents a more flexible approach compared to the Static Window algorithm, which recomputes the resource configuration after a fixed number of supersteps. A comparative evaluation of the Dynamic Window algorithm against existing techniques, including the Static Window algorithm, a Dynamic Model Predictive Control (MPC) algorithm, and a Reinforcement Learning (RL) based algorithm, is performed to quantify potential reductions in application duration resulting from enhanced superstep-level customization. Further evaluations also show the impacts of window size and checkpoint (reconfiguration) cost on these algorithms, providing insights into their dynamics and performance trade-offs.
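
As a schematic sketch of the discounted-window idea only (not the implemented algorithm or its cost model), configuration selection over a window of supersteps might look like:

```python
# Schematic sketch of discounted window-based configuration selection (not the actual algorithm).
def choose_config(configs, predict_duration, window, gamma=0.9, checkpoint_cost=0.0):
    """Pick the configuration minimizing discounted predicted superstep durations.

    predict_duration(config, k) -> predicted duration of the k-th superstep in the window;
    gamma < 1 discounts later supersteps, whose predictions are less reliable.
    """
    def discounted_cost(cfg):
        return checkpoint_cost + sum(gamma**k * predict_duration(cfg, k) for k in range(window))
    return min(configs, key=discounted_cost)

# Hypothetical usage with a made-up duration model:
configs = [{"workers": w} for w in (4, 8, 16)]
model = lambda cfg, k: 100.0 / cfg["workers"] + 2.0 * cfg["workers"] + 0.5 * k
print(choose_config(configs, model, window=5, checkpoint_cost=10.0))
```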

Degree: MS Project Defense (CS)


Sohan Chandra

Predicting inorganic nitrogen content in the soil using Machine Learning

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Taejoon Kim, Chair
Prasad Kulkarni
Cuncong Zhong


Abstract

This project addresses a critical issue in crop production: precisely determining plant-available inorganic nitrogen (IN) in soil to optimize fertilization strategies. Current methodologies frequently struggle with the complexities of determining a soil's nitrogen content, resorting to approximations and labor-intensive soil testing procedures that can lead to the pitfalls of under- or over-fertilization, endangering agricultural productivity. Recognizing the scarcity of historical IN data, this solution takes a novel approach that employs Generative Adversarial Networks (GANs) to generate statistically similar IN data. 

 

This synthetic data set works in tandem with data from the Decision Support System for Agrotechnology Transfer (DSSAT). To address the data's inherent time-series nature, we use Long Short-Term Memory (LSTM) neural networks in our predictive model. The resulting model is a sophisticated and accurate tool that can provide reliable estimates without extensive soil testing. This not only ensures precision in nutrient management but also provides a cost-effective and dependable solution for crop production optimization. 
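
As a generic sketch of the LSTM regression component (shapes and hyperparameters are assumptions, and the GAN-based data augmentation is not shown), the predictive model could be structured as follows:

```python
# Generic LSTM time-series regressor (sketch; hyperparameters are illustrative).
import numpy as np
import tensorflow as tf

timesteps, n_features = 30, 8          # e.g., 30 time steps of weather/soil/management features (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),           # predicted inorganic nitrogen content
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Placeholder arrays standing in for DSSAT-derived and GAN-augmented training data.
X = np.random.rand(512, timesteps, n_features).astype("float32")
y = np.random.rand(512, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```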


Thomas Woodruff

Model Predictive Control of Nonlinear Latent Force Models

When & Where:


M2SEC, Room G535

Committee Members:

Jim Stiles, Chair
Michael Branicky
Heechul Yun


Abstract

Model Predictive Control (MPC) has emerged as a potent approach for controlling nonlinear systems in the robotics field and various other engineering domains. Its efficacy lies in its capacity to predictively optimize system behavior while accommodating state and input constraints. Although MPC typically relies on precise dynamic models to be effective, real-world dynamic systems often harbor uncertainties. Ignoring these uncertainties can lead to performance degradation or even failure in MPC.

Nonlinear latent force models, integrating latent uncertainties characterized as Gaussian processes, hold promise for effectively representing nonlinear uncertain systems. Specifically, these models incorporate the state-space representation of a Gaussian process into known nonlinear dynamics, providing the ability to simultaneously predict future states and uncertainties.

This thesis delves into the application of MPC to nonlinear latent force models, aiming to control nonlinear uncertain systems. We formulate a stochastic MPC problem and, to address the ensuing receding-horizon stochastic optimization problem, introduce a scenario-based approach that yields a deterministic approximation. The resulting controller is assessed through simulation studies centered on the motion planning of an autonomous vehicle. The simulations demonstrate the controller's adeptness at managing constraints and consistently mitigating the effects of disturbances. The proposed approach holds promise for various robotics applications and beyond.
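
A toy sketch of the scenario-based deterministic approximation is shown below, using a simple double-integrator with sampled disturbances rather than the vehicle model, latent force model, or solver used in the thesis:

```python
# Toy scenario-based MPC step: minimize the average cost over sampled disturbance scenarios.
# Double-integrator stand-in; not the autonomous-vehicle formulation from the thesis.
import numpy as np
from scipy.optimize import minimize

H, S = 10, 20                               # horizon length and number of scenarios (assumed)
dt = 0.1
rng = np.random.default_rng(0)
w = 0.05 * rng.standard_normal((S, H))      # sampled disturbance scenarios

def rollout_cost(u, w_s, x0=np.array([1.0, 0.0])):
    x = x0.copy()
    cost = 0.0
    for k in range(H):
        # x = [position, velocity]; the disturbance enters the velocity update.
        x = np.array([x[0] + dt * x[1], x[1] + dt * (u[k] + w_s[k])])
        cost += x[0] ** 2 + 0.1 * x[1] ** 2 + 0.01 * u[k] ** 2
    return cost

def scenario_objective(u):
    return np.mean([rollout_cost(u, w[s]) for s in range(S)])

res = minimize(scenario_objective, np.zeros(H), bounds=[(-2.0, 2.0)] * H)
print("first control input to apply:", res.x[0])
```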


Sai Soujanya Ambati

BERT-NEXT: Exploring Contextual Sentence Understanding

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Hongyang Sun



Abstract

The advent of advanced natural language processing (NLP) techniques has revolutionized the way we handle textual data. This project explores contextual sentence understanding on the Quora Insincere Questions dataset using the pretrained BERT architecture, a bidirectional transformer model that represents the state of the art in language representation and has shown strong performance on a variety of NLP tasks. The goal is to classify whether a question contains hateful, disrespectful, or toxic content. Being able to automatically detect such content is important for maintaining healthy online discussions, since insincere questions may contain offensive language, hate speech, or misinformation; however, the nuances of human language make this a challenging natural language processing problem. In this project, the pretrained BERT base model is fine-tuned on a sample of the Quora dataset for next sentence prediction. Results show that with just 1% of the data (around 13,000 examples), the fine-tuned model achieves over 90% validation accuracy in identifying insincere questions after 4 epochs of training, demonstrating the effectiveness of leveraging BERT for text classification tasks with minimal labeled data.
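
For orientation, a generic BERT fine-tuning loop with the Hugging Face transformers library is sketched below; it frames the task as plain binary sequence classification rather than the project's exact next-sentence-prediction setup, and the Quora data loading is replaced by a tiny placeholder sample.

```python
# Generic BERT fine-tuning for binary question classification (sketch only).
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class QuestionDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Tiny placeholder sample standing in for the 1% Quora subset mentioned above.
texts = ["Why is the sky blue?", "Are people from group X all liars?"]
labels = [0, 1]   # 0 = sincere, 1 = insincere

args = TrainingArguments(output_dir="bert_out", num_train_epochs=4,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, train_dataset=QuestionDataset(texts, labels))
trainer.train()
```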


Swathi Koyada

Feature balancing of demographic data using SMOTE

When & Where:


Zoom Meeting, please email jgrisafe@ku.edu for defense link.

Committee Members:

Prasad Kulkarni, Chair
Cuncong Zhong



Abstract

The research investigates the utilization of the Synthetic Minority Oversampling Technique (SMOTE) in the context of machine learning models applied to biomedical datasets, particularly focusing on mitigating demographic data disparities. The study is most relevant to underrepresented demographic data. The primary objective is to enhance the SMOTE methodology, traditionally designed for addressing class imbalances, to specifically tackle ethnic imbalances within feature representation. In contrast to conventional approaches that merely exclude race as a fundamental or additive factor without rectifying misrepresentation, this work advocates an innovative modification of the original SMOTE framework, emphasizing dataset augmentation based on participants' demographic backgrounds. The predominant aim of the project is to enhance and reshape the distribution to optimize model performance for unspecified demographic subgroups during training. However, the outcomes indicate that despite the application of feature balancing in this adapted SMOTE method, no statistically significant enhancement in accuracy was discerned. This observation implies that while rectifying imbalances is crucial, it may not independently suffice to overcome challenges associated with heterogeneity in subgroup representation within machine learning models applied to biomedical databases. Consequently, further research endeavors are necessary to identify novel methodologies aimed at enhancing sampling accuracy and fairness within diverse populations.
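
For orientation, standard class-label SMOTE oversampling with imbalanced-learn looks like the sketch below; the thesis's contribution modifies this idea to balance demographic feature representation rather than only the class label, which is not shown here.

```python
# Standard SMOTE oversampling (baseline sketch; the demographic-aware variant is not shown).
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))
```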


Jessica Jeng

Exploiting Data Locality for Improving Multidimensional Variational Quantum Classification

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Drew Davidson
Prasad Kulkarni


Abstract

Quantum computing presents an opportunity to accelerate machine learning (ML) tasks on quantum processors in a similar vein to existing classical accelerators, such as graphics processing units (GPUs). In the classical domain, convolutional neural networks (CNNs) effectively exploit data locality using the convolution operation to reduce the number of fully-connected operations in multi-layer perceptrons (MLPs). Preserving data locality enables the pruning of training parameters, which results in reduced memory requirements and shorter training time without compromising classification accuracy. However, contemporary quantum machine learning (QML) algorithms do not leverage the data locality of input features in classification workloads, particularly for multidimensional data. This work presents a multidimensional quantum convolutional classifier (MQCC) that adapts the CNN structure to a variational quantum algorithm (VQA). The proposed MQCC uses quantum implementations of multidimensional convolution, pooling based on the quantum Haar transform (QHT) and partial measurement, and fully-connected operations. Time-complexity analysis will be presented to demonstrate the speedup of the proposed techniques in comparison to classical convolution and pooling operations on modern CPUs and/or GPUs. Experimental work is conducted on state-of-the-art quantum simulators from IBM Quantum and Xanadu, modeling noise-free and noisy quantum devices. High-resolution multidimensional images are used to demonstrate the correctness and scalability of the convolution and pooling operations. Furthermore, the proposed MQCC model is tested on a variety of common datasets against multiple configurations of related ML and QML techniques. Based on standard metrics such as log loss, classification accuracy, number of training parameters, circuit depth, and gate count, it will be shown that MQCC can deliver a faithful implementation of CNNs on quantum machines. Additionally, it will be shown that by exploiting data locality, MQCC can achieve improved classification over contemporary QML methods.
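
To make the data-locality argument concrete in the classical setting the abstract refers to (this is not the quantum MQCC itself), compare the parameter counts of a fully-connected layer and a convolutional layer applied to the same input:

```python
# Parameter-count comparison: dense (fully-connected) vs. convolutional layer (classical analogy only).
H, W, C = 28, 28, 1            # e.g., a small grayscale image
hidden = 128                   # fully-connected hidden units
k, filters = 3, 16             # 3x3 convolution kernels

dense_params = (H * W * C) * hidden + hidden                 # weights + biases
conv_params = (k * k * C) * filters + filters                # shared local kernels
print(f"dense: {dense_params:,} parameters, conv: {conv_params:,} parameters")
```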