Defense Notices
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.
Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.
Upcoming Defense Notices
David Felton
Optimization and Evaluation of Physical Complementary Radar Waveforms
When & Where:
Nichols Hall, Room 129 (Apollo Auditorium)
Committee Members:
Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata
Abstract
The RF spectrum is a precious, finite resource with ever-increasing demand. Consequently, the mandate to be a "good spectral neighbor" is in direct conflict with the requirements for high-performance sensing where correlation error is fundamentally limited. As such, matched-filter radar performance is often sidelobe-limited with estimation error being constrained by the time-bandwidth (TB) of the collective emission. The methods developed here seek to bridge this gap between idealized radar performance and practical utility via waveform design.
Estimation error becomes more complex when employing pulse agility, since range-sidelobe modulation (RSM) spreads energy across Doppler, rendering traditional methods ineffective. To address this, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining subsets within a pulse-agile emission. In contrast to the majority of complementary signals, which have been explored via phase-coding, these Comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM (though design distortion is never completely avoided). Although Comp-FM addressed practicality via hardware amenability, CSC was localized to zero Doppler. This work expands the Comp-FM notion to a Doppler-generalized (DG) framework, extending the cancellation condition to an arbitrary Doppler span. The same framework can likewise be employed to jointly optimize an entire coherent processing interval (CPI) to minimize RSM within the radar point spread function (PSF), thereby generalizing the notion of complementarity and introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
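The complementary property at issue here can be illustrated with the classic phase-coded case that the abstract contrasts against: a minimal Python sketch using a length-4 Golay pair, whose individual autocorrelations have nonzero sidelobes that cancel when coherently combined. (Illustrative only; the Comp-FM waveform subsets designed in this work are FM, not phase-coded.)

```python
# Toy illustration of complementary sidelobe cancellation (CSC) using a
# classic phase-coded Golay pair -- a simple stand-in for the FM waveform
# subsets optimized in this work.

def autocorr(x):
    """Aperiodic autocorrelation at nonnegative lags 0..N-1
    (symmetric for real sequences)."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(n)]

# Length-4 binary Golay complementary pair.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]

ra = autocorr(a)   # [4, 1, 0, -1]: nonzero sidelobes individually
rb = autocorr(b)   # [4, -1, 0, 1]
combined = [x + y for x, y in zip(ra, rb)]
print(combined)    # [8, 0, 0, 0]: sidelobes cancel after coherent combination
```

Each waveform alone is sidelobe-limited, but the pair's summed autocorrelation is an ideal impulse, which is the cancellation condition the DG framework extends across Doppler.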
Sensing with a single emitter is limited by self-inflicted error alone (e.g., clutter, sidelobes), while MIMO systems must additionally contend with the cross-responses of emitters operating concurrently (e.g., simultaneously, spatially proximate, in a shared spectrum), further degrading radar sensitivity. Total correlation error is then dictated by the overlapping TB (i.e., how coincident the signals are) and the number of operating emitters, compounding estimation difficulty if left unaddressed. As such, the determination of "orthogonal waveforms" comprises a large portion of the MIMO literature, though it remains a phenomenological misnomer for pulsed emissions. Here, the notion of complementary-FM is applied to a multi-emitter context in which transmitter-amenable, quasi-orthogonal subsets occupying the same spectral band are produced via a similar gradient-based approach. To further improve the practicality of these MIMO-Comp-FM waveform subsets, the same "DG" approach described above, which addresses the otherwise-default Doppler-induced degradation of complementary signals, is applied. In doing so, Doppler-independent separability and complementarity greatly improve estimation sensitivity for multi-emitter systems.
This MIMO-Comp-FM framework is developed for standard matched filter processing. Coupling this framework with a "DG" form of the previously explored MIMO-MiCRFt is also investigated, illustrating the added benefit of pairing optimized subsets with similarly calibrated processing.
Each of these methods is developed to address unique and increasingly complex sources of estimation error. All approaches are initially developed and evaluated via simulated analysis where ground-truth is known. Then, despite hardware-induced distortion being unavoidable, the MIMO-Comp-FM framework is confirmed via loopback measurements to preserve the majority of CSC that was observed in simulation. Finally, open-air demonstration of each approach validates practical utility on a radar system.
Hao Xuan
Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu
Abstract
Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.
These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.
First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
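As a toy sketch of the seed-lookup stage that alignment tools of this kind build on, the following Python builds a simple k-mer index over a reference and reports seed hits for chaining. This is an illustrative stand-in under simplified assumptions, not VAT's asymmetric multi-view indexing or LCP-based seed-length inference.

```python
# Minimal k-mer seed index: the seed-and-extend lookup step that aligners
# build on (illustrative only; VAT's actual scheme is far richer).
from collections import defaultdict

def build_index(reference, k):
    """Map every k-mer in `reference` to its list of start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def seed_hits(read, index, k):
    """Return (read_offset, ref_position) seed pairs, ready for chaining."""
    hits = []
    for j in range(len(read) - k + 1):
        for pos in index.get(read[j:j + k], []):
            hits.append((j, pos))
    return hits

ref = "ACGTACGTGACC"
idx = build_index(ref, k=4)
print(seed_hits("TACGTG", idx, k=4))  # [(0, 3), (1, 0), (1, 4), (2, 5)]
```

A chaining step would then keep the collinear subset of hits, e.g. (0, 3), (1, 4), (2, 5), discarding the spurious match at reference position 0.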
Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.
Pramil Paudel
Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao
Abstract
Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference.
We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks.
Sharmila Raisa
Digital Coherent Optical System: Investigation and Monitoring
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Rongqing Hui, Chair
Morteza Hashemi
Erik Perrins
Alessandro Salandrino
Jie Han
Abstract
Coherent wavelength-division multiplexed (WDM) optical fiber systems have become the primary transmission technology for high-capacity data networks, driven by the explosive bandwidth demand of cloud computing, streaming services, and large-scale artificial intelligence training infrastructure. This dissertation investigates two fundamental aspects of digital coherent fiber optic systems under the unifying theme of source and monitoring: the design of multi-wavelength optical sources compatible with high-order coherent detection, and the leveraging of fiber Kerr-effect nonlinearity at the coherent receiver to perform physical-layer link health monitoring and to assess inherent security vulnerabilities — both achieved through digital signal processing of the received complex optical field without dedicated hardware.
We begin by addressing the multi-wavelength transmitter challenge in WDM coherent systems. Existing quantum-dot, quantum-dash, and quantum-well based optical frequency comb (OFC) sources share a common limitation: individual comb line linewidths in the tens of MHz range caused by low output power levels of 1–20 mW, making them incompatible with high-order coherent detection. We demonstrate coherent system application of a single-section InGaAsP QW Fabry-Perot laser diode with greater than 120 mW optical power at the fiber pigtail and 36.14 GHz mode spacing. The high optical power per mode produces Lorentzian equivalent linewidths below 100 kHz — compatible with 16-QAM carrier phase recovery without optical phase locking. Experimental results obtained using a commercial Ciena WaveLogic-Ai coherent transceiver demonstrate 20-channel WDM transmission over 78.3 km of standard single-mode fiber with all channels below the HD-FEC threshold of 3.8 × 10⁻³ at 30 GBaud differential-coded 16-QAM, corresponding to an aggregate capacity of 2.15 Tb/s from a single laser device.
After investigating the QW Fabry-Perot laser as a multi-wavelength source for coherent WDM transmission, we leverage the coherent receiver DSP to exploit fiber Kerr-effect nonlinearity for longitudinal power profile estimation, enabling reconstruction of the signal power distribution P(z) along the full multi-span link without dedicated hardware or traffic interruption. We propose a modified enhanced regular perturbation (ERP) method that corrects two independent physical error sources of the standard RP1 least-squares baseline: the accumulated nonlinear phase rotation, and the dispersion-mediated phase-to-intensity conversion — a second bias source not addressed by prior methods. The RP1 method produces mean absolute error (MAE) that scales quadratically with span count, growing to 1.656 dB at 10 spans and 3 dBm. The modified ERP reduces this to 0.608 dB — an improvement that grows consistently with link length, confirming increasing advantage in the long-haul regime. Extension to WDM through an XPM-aware per-channel formulation achieves MAE of 0.113–0.419 dB across 150–500 km link lengths.
In addition to its role in enabling DSP-based longitudinal power profile estimation, the fiber Kerr-effect nonlinearity is shown to give rise to an inherent physical-layer security vulnerability in coherent WDM systems. We show that an eavesdropper co-tenanting a shared fiber — transmitting a continuous-wave probe at a wavelength adjacent to the legitimate signal — can capture the XPM-induced waveform at the fiber output and apply a bidirectional gated recurrent unit neural network, trained on split-step Fourier method simulation data, to reconstruct the transmitted symbol sequence without physical fiber access and without perturbing the legitimate signal. This eavesdropping mechanism is experimentally validated using a commercial Ciena WaveLogic-Ai coherent transceiver for ASK, BPSK, QPSK, and 16-QAM modulation formats at 4.26 GBaud and 8.56 GBaud over one- and two-span 75 km fiber systems, achieving zero symbol errors under high-OSNR conditions. Noise-aware training over OSNR from 20 to 60 dB maintains symbol error rate below 10⁻² for OSNR above 25–30 dB.
Together, these three contributions demonstrate that the coherent fiber optic system is a versatile physical instrument extending well beyond its role as a data transmission medium. The coherent receiver infrastructure — deployed for high-order modulation and data recovery — simultaneously enables the high-power OFC laser to serve as a practical multi-wavelength transmitter source, and provides the complex field measurement capability through which fiber Kerr-effect nonlinearity can be exploited constructively for distributed link monitoring and, as a direct consequence, reveals an inherent physical-layer security exposure in shared fiber infrastructure. This unified perspective on the coherent system as both a transmission platform and a general-purpose measurement instrument has direct relevance to the design of spectrally efficient, self-monitoring, and physically secure optical interconnects for next-generation AI computing networks.
Arman Ghasemi
Task-Oriented Data Communication and Compression for Timely Forecasting and Control in Smart Grids
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Alexandru Bardas
Prasad Kulkarni
Taejoon Kim
Zsolt Talata
Abstract
Advances in sensing, communication, and intelligent control have transformed power systems into data-driven smart grids, where forecasting and intelligent decision-making are essential components. Modern smart grids include distributed energy resources (DERs), renewable generation, battery energy storage systems, and large numbers of grid-edge devices that continuously generate time-series data. At the same time, increasing renewable penetration introduces substantial uncertainty in generation, net load, and market operations, while communication networks impose bandwidth, latency, and reliability constraints on timely data delivery. This dissertation addresses how time-series forecasting, data compression, and task-oriented wireless communication can be jointly designed for smart grid applications.
First, we study weather-aware distributed energy management in prosumer-centric microgrids and show that incorporating day-ahead weather information into decision-making improves battery dispatch and reduces the impact of renewable uncertainty. Second, we introduce forecasting-aware energy management in both wholesale and retail electricity markets, highlighting how renewable generation forecasting affects pricing, scheduling, and uncertainty mitigation. Third, we develop and evaluate deep learning methods for renewable generation forecasting, showing that Transformer-based models outperform recurrent baselines such as RNN and LSTM for wind and solar prediction tasks.
Building on this forecasting foundation, we develop a communication-efficient forecasting framework in which high-dimensional smart grid measurements are compressed into low-dimensional latent representations before transmission. This framework is extended into a task-oriented communication system that jointly optimizes data relevance and information timeliness, so that the receiver obtains compressed updates that remain useful for downstream forecasting tasks. Finally, we extend this framework to a distributed multi-node uplink setting, where multiple grid sensors share a bandwidth-limited channel, and develop a scheduling policy that improves both the timeliness and task-relevance of received updates.
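The timeliness half of such a scheduling objective can be sketched with a toy Age-of-Information (AoI) policy: in each slot the stalest sensor transmits and its age resets, while all other ages grow. (Illustrative only; the dissertation's policy also weighs task relevance, which is omitted here.)

```python
# Toy AoI-greedy scheduler for a shared uplink: in each slot, the sensor
# with the largest Age of Information transmits and its age resets to 1.
# (Illustrative of the "timeliness" objective alone.)

ages = [3, 1, 5]                  # current AoI of three grid sensors
history = []
for _ in range(4):                # four transmission slots
    k = max(range(len(ages)), key=lambda i: ages[i])  # schedule stalest sensor
    history.append(k)
    ages = [1 if i == k else a + 1 for i, a in enumerate(ages)]

print(history, ages)              # [2, 0, 1, 2] [3, 2, 1]
```

A task-oriented variant would replace the `max` key with a weighted score combining AoI and the update's predicted value for the downstream forecast.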
Pardaz Banu Mohammad
Towards Early Detection of Alzheimer’s Disease based on Speech using Reinforcement Learning Feature Selection
When & Where:
Eaton Hall, Room 2001B
Committee Members:
Arvin Agah, Chair
David Johnson
Sumaiya Shomaji
Dongjie Wang
Sara Wilson
Abstract
Alzheimer’s Disease (AD) is a progressive, irreversible neurodegenerative disorder and the leading cause of dementia worldwide, affecting an estimated 55 million people globally. The window of opportunity for intervention is demonstrably narrow, making reliable early-stage detection a clinical and scientific imperative. While current diagnostic techniques such as neuroimaging and cerebrospinal fluid (CSF) biomarkers carry well-defined limitations in scalability, cost, and access equity, speech has emerged as a compelling non-invasive proxy for cognitive function evaluation.
This work presents a novel approach that frames acoustic feature selection as a sequential decision-making problem and implements it using deep reinforcement learning. Specifically, we use a Deep Q-Network (DQN) agent to navigate a high-dimensional space of over 6,000 acoustic features extracted with the openSMILE toolkit, dynamically constructing maximally discriminative and non-redundant feature subsets. To capture the latent structural dependencies among acoustic features, which classifier and wrapper methods have difficulty modeling, we introduce a Graph Convolutional Network (GCN)-based, correlation-aware feature representation layer that operates as an auxiliary input to the DQN state encoder. Post-selection interpretability is reinforced through TF-IDF weighting and K-means clustering, which together yield both feature-level and cluster-level explanations that are clinically actionable. The framework is evaluated across five classifiers: support vector machines (SVM), logistic regression, XGBoost, random forest, and a feedforward neural network. We use 10-fold stratified cross-validation on established benchmark datasets, including the DementiaBank Pitt Corpus, Ivanova, and ADReSS challenge data. The proposed approach is benchmarked against state-of-the-art feature selection methods such as LASSO, recursive feature selection, and mutual information selectors. This research contributes three primary intellectual advances: (1) a graph-augmented state representation that encodes inter-feature relational structure within a reinforcement learning agent, (2) a clinically interpretable pipeline that bridges the gap between algorithmic performance and translational utility, and (3) a multilingual data approach for the reinforcement learning agent framework. This study has direct implications for equitable, low-cost, and scalable AD screening in both clinical and community settings.
Zhou Ni
Bridging Federated Learning and Wireless Networks: From Adaptive Learning to FL-driven System Optimization
When & Where:
Nichols Hall, Room 246 (Executive Conference Room)
Committee Members:
Morteza Hashemi, Chair
Fengjun Li
Van Ly Nguyen
Han Wang
Shawn Keshmiri
Abstract
Federated learning (FL) has emerged as a promising distributed machine learning framework that enables multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized data collection. However, deploying FL in practical wireless environments introduces two major challenges. First, the data generated across distributed devices are often heterogeneous and non-IID, which makes a single global model insufficient for many users. Second, learning performance in wireless systems is strongly affected by communication constraints such as interference, unreliable channels, and dynamic resource availability. This PhD research aims to address these challenges by bridging FL methods and wireless networks.
In the first thrust, we develop personalized and adaptive FL methods given the underlying wireless link conditions. To this end, we propose channel-aware neighbor selection and similarity-aware aggregation in wireless device-to-device (D2D) learning environments. We further investigate the impacts of partial model update reception on FL performance. The overarching goal of the first thrust is to enhance FL performance under wireless constraints.
Next, we investigate the opposite direction and raise the question: How can FL-based distributed optimization be used for the design of next-generation wireless systems? To this end, we investigate communication-aware participation optimization in vehicular networks, where wireless resource allocation affects the number of clients that can successfully contribute to FL. We further extend this direction to integrated sensing and communication (ISAC) systems, where personalized FL (PFL) is used to support distributed beamforming optimization with joint sensing and communication objectives.
Overall, this research establishes a unified framework for bridging FL and wireless networks. As a future direction, this work will be extended to more realistic ISAC settings with dynamic spectrum access, where communication, sensing, scheduling, and learning performance must be considered jointly.
Past Defense Notices
Guojun Xiong
Distributed Filter Design and Power Allocation for Small-Cell MIMO Network
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Taejoon Kim, Chair
Morteza Hashemi
Erik Perrins
Abstract
The deluge of wireless data traffic catalyzed by the growing number of data-intensive devices has motivated the deployment of small cells in fifth-generation (5G) networks. A primary challenge in deploying dense small-cell networks is the lack of practical techniques that efficiently handle the increased network interference at low cost. This has aroused considerable interest in distributed precoder/combiner coordination techniques that leverage channel reciprocity while relying on the local channel state information (CSI) available at each communication end. In this project, a distributed approach is proposed for the problem of signal-to-interference-plus-noise-ratio (SINR)-guaranteed power minimization (SGPM) in multicell multiuser (MCMU) multiple-input multiple-output (MIMO) systems. Unlike prior SGPM approaches, the technique is based on solving necessary and sufficient optimality conditions, which are derived by decomposing the original problem into forward and backward (FB) subproblems while ensuring the strong duality of each subproblem. The proposed distributed SGPM algorithm makes use of FB adaptation and Jacobi recursion for iterative filter design and power allocation, respectively, which guarantees target SINR performance as well as convergence to a stationary point. Simulation results illustrate the enhanced power efficiency and performance guarantees of the proposed method compared to existing distributed techniques.
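The flavor of a Jacobi-style power-allocation recursion can be sketched with the classic SINR-targeted fixed-point update on a toy two-user scalar channel, where each user scales its power by the ratio of its SINR target to its achieved SINR. (A hedged illustration of iterative power control in general, not the MCMU-MIMO algorithm above, which also designs precoders/combiners; channel gains below are made up.)

```python
# Classic SINR-targeted power control: p_k <- (target_k / sinr_k) * p_k,
# iterated Jacobi-style on a toy 2-user scalar interference channel.
# (Illustrative sketch only, with assumed gains and noise.)

def sinr(p, g, noise, k):
    """SINR of user k: direct-link gain over interference plus noise."""
    interference = sum(g[k][j] * p[j] for j in range(len(p)) if j != k)
    return g[k][k] * p[k] / (interference + noise)

g = [[1.0, 0.1],   # g[k][j]: gain from transmitter j to receiver k
     [0.2, 0.8]]
noise = 0.1
target = [2.0, 2.0]          # per-user SINR targets
p = [1.0, 1.0]               # initial powers

for _ in range(50):          # Jacobi-style fixed-point iteration
    p = [target[k] / sinr(p, g, noise, k) * p[k] for k in range(2)]

print([round(sinr(p, g, noise, k), 3) for k in range(2)])  # -> [2.0, 2.0]
```

When the targets are feasible, this iteration converges to the minimum-power allocation meeting every SINR target, which is the fixed-point behavior the SGPM algorithm generalizes to the MIMO filter-design setting.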
Jason Baxter
An FPGA Implementation of Carrier Phase and Symbol Timing Synchronization for 16-APSK
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Erik Perrins, Chair
Taejoon Kim
Carl Leuschen
Abstract
Proper synchronization between a transmitter and receiver, in terms of carrier phase and symbol timing, is critical for reliable communication. Carrier phase synchronization is related to the frequency translation hardware, where perfect synchronization means that the local oscillators of the transmitter’s upconverter and receiver’s downconverter are aligned in phase and frequency. Timing synchronization is related to the analog-to-digital converter in the receiver, where perfect synchronization means that samples of the received signal are taken at transmitted symbol times. Perfect synchronization is unlikely in practical systems for a number of reasons, including hardware limitations and the independence of the transmitter and receiver. This thesis explores an FPGA implementation of a PLL-based carrier phase and symbol timing synchronization subsystem as part of a 16-APSK aeronautical telemetry receiver. The theory behind this subsystem is presented, and the hardware implementation of each component is described. Results demonstrate successful demodulation of a test signal, and system performance is shown to be comparable to double-precision floating point simulations in terms of error vector magnitude, synchronization lock time, and BER.
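The feedback principle behind PLL-based carrier phase synchronization can be sketched with a minimal data-aided first-order loop in Python: de-rotate each received symbol by the current estimate, measure the residual phase error against the known symbol, and nudge the estimate. (Illustrative only; the receiver in this thesis uses 16-APSK with fixed-point FPGA loops, and the loop gain and pilot sequence below are assumptions.)

```python
# Minimal data-aided first-order PLL for carrier phase recovery.
# (A toy sketch of the feedback loop, not the thesis implementation.)
import cmath

true_phase = 0.7             # unknown static carrier phase offset (rad)
symbols = [1+1j, -1+1j, -1-1j, 1-1j] * 25   # known QPSK pilot sequence
received = [s * cmath.exp(1j * true_phase) for s in symbols]

est = 0.0                    # loop's running phase estimate
gain = 0.1                   # proportional loop gain (assumed)
for s, r in zip(symbols, received):
    derot = r * cmath.exp(-1j * est)         # de-rotate by current estimate
    err = cmath.phase(derot * s.conjugate()) # phase error vs. known symbol
    est += gain * err                        # first-order loop update

print(round(est, 4))  # -> 0.7: estimate locks onto the true offset
```

A second-order loop adds an integrator so that a frequency offset (a phase ramp) can also be tracked, which is why practical receivers, including the one described here, use higher-order loops.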
Adam Petz
An Infrastructure for Faithful Execution of Remote Attestation Protocols
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Perry Alexander, Chair
Drew Davidson
Andy Gill
Prasad Kulkarni
Emily Witt
Abstract
Security decisions often rely on trust. An emerging technology for gaining trust in a remote computing system is remote attestation: the activity of making a claim about properties of a target by supplying evidence to an appraiser over a network. Although many existing approaches to remote attestation wisely adopt a layered architecture, where the bottom layers measure the layers above, the dependencies between components remain static and the measurement orderings fixed. Further, these approaches are often restricted to a specialized embedded platform, or only perform shallow measurements on a component of interest without considering the trustworthiness of its context or of the attestation mechanism itself. For modern computing environments with diverse topologies, we can no longer fix a target architecture any more than we can fix a protocol to measure that architecture.
Copland is a domain-specific language and formal framework that provides a vocabulary for specifying the goals of layered attestation protocols. It remains configurable by measurement capability, participants, and platform topology, yet provides a precise reference semantics that characterizes system measurement events and evidence handling; a foundation for comparing protocol alternatives. The aim of this work is to refine the Copland semantics to a more fine-grained notion of attestation manager execution: a high-privilege thread of control responsible for invoking attestation services and bundling evidence results. This refinement consists of two cooperating components called the Copland Compiler and the Attestation Virtual Machine (AVM). The Copland Compiler translates a Copland specification into a sequence of primitive attestation instructions to be executed in the AVM. These components are implemented as functional programs in the Coq proof assistant and proved correct with respect to the Copland reference semantics. This formal connection is critical in order to trust that protocol specifications are faithfully carried out by the attestation manager implementation. We also explore synthesis of appraisal routines that leverage the same formally verified infrastructure to interpret evidence generated by Copland protocols and make trust decisions.
Xiaohan Zhang
Golf Ball Detection and Tracking Based on Convolutional Neural Networks
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Richard Wang, Chair
Bo Luo
Cuncong Zhong
Abstract
With the rapid growth of artificial intelligence (AI), AI technologies have changed many aspects of daily life. In sports especially, AI now plays a role in auxiliary training, data management, and systems that analyze training performance for athletes. Golf, one of the most popular sports in the world, frequently utilizes video analysis during training. Video analysis falls into the computer vision category, and computer vision is the field that benefited most from the AI revolution, especially with the emergence of deep learning.
This thesis focuses on the problem of real-time detection and tracking of a golf ball from video sequences. We introduce an efficient and effective solution by integrating object detection and a discrete Kalman model. For ball detection, five classical convolutional neural network based detection models are implemented, including Faster R-CNN, SSD, RefineDet, YOLOv3, and its lite version, YOLOv3 tiny. At the tracking stage, a discrete Kalman filter is employed to predict the location of the golf ball based on its previous observations. As a trade-off between detection accuracy and detection time, we take advantage of image patches rather than entire images for detection. In order to train the detection models and test the tracking algorithm, we collected and annotated a golf ball dataset. Extensive experiments are conducted to demonstrate the effectiveness of the proposed technique and to compare the performance of the different neural network models.
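The predict/update cycle of the Kalman tracking stage can be sketched in one dimension with a constant-velocity motion model. (The thesis tracks 2-D image coordinates; the noise settings and measurements below are illustrative assumptions, not values from the thesis.)

```python
# One-dimensional discrete Kalman filter with a constant-velocity model:
# each step predicts the state forward, then blends in a noisy position
# measurement via the Kalman gain. (Illustrative sketch only.)

def kalman_step(x, v, p, z, dt=1.0, q=0.01, r=1.0):
    """One predict+update for state (position x, velocity v).

    p is a 2x2 covariance [[pxx, pxv], [pvx, pvv]]; z is the measured
    position; q and r are assumed process and measurement noise levels.
    """
    # Predict with the constant-velocity motion model.
    x, v = x + v * dt, v
    pxx = p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q
    pxv = p[0][1] + dt * p[1][1]
    pvv = p[1][1] + q
    # Update: blend the prediction with the measurement.
    kx = pxx / (pxx + r)          # Kalman gain for position
    kv = pxv / (pxx + r)          # Kalman gain for velocity
    x, v = x + kx * (z - x), v + kv * (z - x)
    p = [[(1 - kx) * pxx, (1 - kx) * pxv],
         [pxv - kv * pxx, pvv - kv * pxv]]
    return x, v, p

x, v, p = 0.0, 0.0, [[10.0, 0.0], [0.0, 10.0]]
for z in [1.0, 2.1, 2.9, 4.2, 5.0]:   # noisy positions, ball moving ~1/frame
    x, v, p = kalman_step(x, v, p, z)
print(round(x, 2), round(v, 2))        # near position 5, velocity 1
```

Between detections, the predict step alone supplies the expected ball location, which is also what lets the tracker restrict detection to a small image patch around the prediction.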
Ronald Moore
AIDA: An Assistant for Workers with Intellectual and Developmental Disabilities
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Andrew Williams, Chair
Arvin Agah
Michael Branicky
Richard Wang
Abstract
Roughly 1 in 5 people in the United States have an intellectual or developmental disability (IDD), a substantial share of the population. In the realm of human-robot interaction, there have been many attempts to help these individuals lead more productive and independent lives. However, many of these solutions focus on helping individuals with IDD develop social skills. Among the solutions that do focus on helping people with IDD increase their work productivity, many involve giving the user control over a robot that augments the worker’s capabilities. In this thesis, it is posited that an autonomous agent could effectively assist workers with IDD, thereby increasing their productivity. The artificially intelligent disability assistant (AIDA) is an autonomous agent that uses social scaffolding techniques to assist workers with IDD. Before designing the system, data was gathered by observing workers with IDD perform tasks in a light manufacturing facility.
To test the hypothesis, an initial Wizard-of-Oz (WoZ) experiment was conducted where subjects had to assemble a box using only either their dominant or non-dominant hand. During the experiment, subjects could ask the robot for assistance, but a human operator controlled whether the robot provided a response. After the experiment, subjects were required to complete a feedback survey. Additionally, this feedback was used to refine and build the autonomous system for AIDA.
The autonomous system is composed of data collection and processing modules, a scaffolding algorithm module, and robot action output modules. This system was tested in a simulated experiment using video recordings from the initial experiment. The results of the simulated experiment provide support for the hypothesis that an autonomous agent using social scaffolding techniques can increase the productivity of workers with IDD. In the future, it is desired to test the current system in a real-time experiment before using it on workers with IDD.
Sairath Bhattacharjya
A Novel Zero-Trust Framework to Secure IoT CommunicationsWhen & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Hossein Saiedian, ChairAlex Bardas
Fengjun Li
Abstract
The phenomenal growth of the Internet of Things (IoT) has highlighted the security and privacy concerns associated with these devices. The research literature on IoT security architectures makes it evident that we need to define and formalize a framework to secure communications among these devices. Such a framework should be zero-trust, operating on the premise of "trust no one, verify everyone" for every request and response.
In this thesis, we emphasize the immediate need for such a framework and propose a zero-trust communication model for IoT that addresses these security and privacy concerns. We employ existing cryptographic techniques to implement the framework so that it can be easily integrated into current network infrastructures. The framework provides end-to-end security, allowing users and devices to communicate with each other privately. It is often stated that high-grade encryption algorithms are difficult to implement within the limited resources of an IoT device. To test this, we built a temperature and humidity sensor using a NodeMCU V3, implemented the framework on it, and successfully evaluated and documented its efficient operation. We defined four areas for evaluation and validation: security of communications, memory utilization of the device, response time of operations, and implementation cost. For each aspect we defined a threshold against which to validate our findings; the results are satisfactory and are documented. Our framework provides an easy-to-use solution in which the security infrastructure acts as a backbone for every communication to and from the IoT devices.
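The "trust no one, verify everyone" premise means every request and response must be authenticated before it is acted on. One minimal way to sketch that behavior, assuming a pre-shared symmetric key and using only standard HMAC primitives (the thesis's actual protocol and key management are not reproduced here):

```python
import hashlib
import hmac
import json
import os
import time

# Illustrative zero-trust message check: every request carries an HMAC tag
# and a fresh nonce, and the receiver verifies both before acting. The
# pre-shared key below is an assumption made for illustration only.
KEY = os.urandom(32)          # pre-shared device key (assumption)
seen_nonces = set()           # replay protection

def sign_request(payload: dict) -> dict:
    """Attach a nonce, timestamp, and HMAC-SHA256 tag to a request."""
    msg = dict(payload, nonce=os.urandom(8).hex(), ts=int(time.time()))
    body = json.dumps(msg, sort_keys=True).encode()
    msg["tag"] = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return msg

def verify_request(msg: dict) -> bool:
    """Verify the tag and reject replays; every request is checked anew."""
    tag = msg.pop("tag", "")
    body = json.dumps(msg, sort_keys=True).encode()
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False          # tampered message or wrong key
    if msg["nonce"] in seen_nonces:
        return False          # replayed request
    seen_nonces.add(msg["nonce"])
    return True
```

Note the design point this illustrates: verification happens on every message, with no session-level trust carried over, which is the essential difference from perimeter-based models.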
Royce Bohnert
Experiments with mmWave RadarWhen & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Christopher Allen, ChairErik Perrins
James Stiles
Abstract
The IWR6843 mmWave radar device from Texas Instruments (TI) is a complete FMCW radar system-on-chip operating in the 60 to 64 GHz frequency range. The IWR6843ISK is an evaluation platform comprising the IWR6843 connected to patch antennas on a PCB. In this project, the viability of using the IWR6843 sensor for short-range detection of small, high-velocity targets is investigated. Some of the device's limitations are explored and a specific radar configuration is proposed. To confirm the applicability of the proposed configuration, a similar configuration is used with the IWR6843ISK-ODS evaluation platform to observe the launch of a foil-wrapped dart. The evaluation platform collects raw data, which is then post-processed in a Python program to generate a range-Doppler heatmap visualization of the data.
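The post-processing step described, turning raw FMCW samples into a range-Doppler heatmap, conventionally consists of two FFTs: a range FFT across the fast-time samples of each chirp, then a Doppler FFT across chirps. A minimal sketch of that chain (frame dimensions, windowing, and the synthetic input are illustrative assumptions, not the IWR6843's actual frame format):

```python
import numpy as np

def range_doppler_map(frame: np.ndarray) -> np.ndarray:
    """Compute a range-Doppler magnitude map from one FMCW frame.

    frame: complex baseband samples shaped (num_chirps, samples_per_chirp).
    Returns magnitudes shaped (num_chirps, samples_per_chirp // 2), with
    the Doppler axis shifted so zero velocity sits in the middle row.
    """
    n_chirps, n_samples = frame.shape
    window = np.hanning(n_samples)                 # reduce range sidelobes
    r_fft = np.fft.fft(frame * window, axis=1)     # range FFT (fast time)
    r_fft = r_fft[:, : n_samples // 2]             # keep positive range bins
    d_fft = np.fft.fft(r_fft, axis=0)              # Doppler FFT (slow time)
    return np.abs(np.fft.fftshift(d_fft, axes=0))
```

A point target appears as a single bright cell whose column indexes range and whose row offset from the center indexes radial velocity; mapping bins to meters and m/s requires the chirp slope, sample rate, and PRF from the chosen radar configuration.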
Matthew Taylor
Defending Against Typosquatting Attacks In Programming Language-Based Package RepositoriesWhen & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Drew Davidson, ChairAlex Bardas
Bo Luo
Abstract
Program size and complexity have dramatically increased over time. To reduce their workload, developers began to utilize package managers, which allow third-party functionality, contained in units called packages, to be quickly imported into a project. Due to their utility, packages have become remarkably popular. The largest package repository, npm, has more than 1.2 million publicly available packages and serves more than 80 billion package downloads per month. In recent years, this popularity has attracted the attention of malicious users, who can upload packages containing malware. To increase the number of victims, attackers regularly leverage a tactic called typosquatting: giving the malicious package a name very similar to that of a popular package. Users who make a typo when trying to install the popular package fall victim to the attack and are instead served the malicious payload. The consequences of typosquatting attacks can be catastrophic. Historical typosquatting attacks have exfiltrated passwords, stolen cryptocurrency, and opened reverse shells.
This thesis focuses on typosquatting attacks in package repositories. It explores the extent to which typosquatting exists in npm and PyPI (the de facto standard package repositories for Node.js and Python, respectively), proposes a practical defense against typosquatting attacks, and quantifies the efficacy of that defense. The presented solution incurs an acceptable temporal overhead of 2.5% on the standard package installation process and is expected to affect approximately 0.5% of all weekly package downloads. Furthermore, it has been used to discover a particularly high-profile typosquatting perpetrator, which was reported and has since been deprecated by npm. Typosquatting is an important yet preventable problem.
This thesis recommends that package creators protect their own packages with a technique called defensive typosquatting, and that repository maintainers protect all users through augmentations to their package managers or automated monitoring of the package namespace.
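Although the abstract does not detail the proposed defense, a typical install-time typosquatting check compares a requested package name against a list of popular names under a small edit distance. A sketch of that idea follows; the popular-package list and the distance threshold of 1 are illustrative stand-ins, not the thesis's actual dataset or algorithm.

```python
from typing import Optional

# Stand-in list of popular package names (illustrative only).
POPULAR = {"express", "lodash", "react", "requests", "numpy"}

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def possible_typosquat(name: str) -> Optional[str]:
    """Return the popular package `name` may be squatting on, if any."""
    if name in POPULAR:
        return None             # exact match: the real package
    for pop in POPULAR:
        if edit_distance(name, pop) == 1:
            return pop          # one typo away from a popular name
    return None
```

An installer using such a check would pause and warn before fetching `expres` instead of `express`, while leaving installs of unrelated names untouched; the measured 2.5% overhead suggests the real defense is similarly lightweight.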
Jacob Fustos
Attacks and Defenses against Speculative Execution Based Side ChannelsWhen & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Heechul Yun, ChairAlex Bardas
Drew Davidson
Abstract
Modern high-performance processors utilize techniques such as speculation and out-of-order execution to improve performance. Unfortunately, the recent Spectre and Meltdown exploits take advantage of these techniques to circumvent the security of the system. Because speculation and out-of-order execution are complex features meant to enhance performance, full mitigation of these exploits often incurs high overhead, and partial defenses need careful consideration to ensure that no attack surface is left vulnerable. In this work, we explore these attacks in greater depth: both how they are executed and how to defend against them.
We first propose a novel micro-architectural extension, SpectreGuard, that takes a data-centric approach to the problem. SpectreGuard attempts to reduce the performance penalty common to Spectre defenses by letting software and hardware work together: software tags secrets at page granularity, and the underlying hardware then protects secret data for security while optimizing all other data for performance. Our research shows that such a combined approach allows for the creation of processors that achieve a high level of security while maintaining high performance.
We then propose SpectreRewind, a novel strategy for executing speculative execution attacks. SpectreRewind reverses the flow of traditional speculative execution attacks, creating new covert channels that transmit secret data to instructions that appear to execute logically before the attack takes place. We find that this attack vector can bypass some state-of-the-art proposed hardware defenses and increases the attack surface of certain Meltdown-type attacks on existing machines. Our research in this area helps complete the understanding of speculative execution attacks so that defenses can be designed with knowledge of all attack vectors.
Venkata Siva Pavan Kumar Nelakurthi
When & Where:
Zoom Meeting, please contact jgrisafe@ku.edu for link
Committee Members:
Jerzy Grzymala-Busse, ChairPrasad Kulkarni
Guanghui Wang
Abstract
In data mining, rule induction is the process of extracting formal rules from decision tables, where the latter are tabulated observations that typically consist of a few attributes (independent variables) and a decision (a dependent variable). Each tuple in the table is considered a case, and a table may contain any number of cases, one per observation. The efficiency of rule induction depends on how many cases are successfully characterized by the generated set of rules, i.e., the ruleset. There are different rule induction algorithms, such as LEM1, LEM2, and MLEM2. Real-world datasets are imperfect, inconsistent, and incomplete. MLEM2 is an efficient algorithm for such data, but the quality of rule induction largely depends on the chosen classification strategy. We compare 16 classification strategies of rule induction using MLEM2 on incomplete data. To this end, we implemented MLEM2 to induce rulesets based on the selected type of approximation (singleton, subset, or concept) and the value of alpha used for calculating probabilistic approximations. A program called a rule checker calculates the error rate for the specified classification strategy. To reduce anomalies, we used ten-fold cross-validation to measure the error rate for each classification strategy. Error rates for the above strategies are calculated for different datasets, compared, and presented.
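The ten-fold cross-validation used to estimate each strategy's error rate can be sketched as follows. `induce_rules` and `classify` below are trivial majority-class stand-ins for MLEM2 and the rule checker, included only so the loop is runnable; they are not the thesis's algorithms.

```python
import random

def induce_rules(train):
    """Placeholder for MLEM2: 'learn' the majority decision of the cases."""
    decisions = [d for _, d in train]
    return max(set(decisions), key=decisions.count)

def classify(ruleset, case):
    """Placeholder for the rule checker: apply the 'ruleset' to one case."""
    return ruleset

def ten_fold_error_rate(dataset, seed=0):
    """Estimate error rate: train on 9 folds, test on the held-out fold."""
    data = dataset[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::10] for i in range(10)]       # ten disjoint folds
    errors = total = 0
    for k in range(10):
        test = folds[k]
        train = [c for j, f in enumerate(folds) if j != k for c in f]
        ruleset = induce_rules(train)
        errors += sum(classify(ruleset, x) != d for x, d in test)
        total += len(test)
    return errors / total
```

Because every case is tested exactly once, the resulting error rate uses the whole dataset while never testing a rule on a case it was induced from, which is what makes the comparison across the 16 strategies fair.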