Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 129 (Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

The RF spectrum is a precious, finite resource with ever-increasing demand. Consequently, the mandate to be a "good spectral neighbor" is in direct conflict with the requirements for high-performance sensing where correlation error is fundamentally limited. As such, matched-filter radar performance is often sidelobe-limited with estimation error being constrained by the time-bandwidth (TB) of the collective emission. The methods developed here seek to bridge this gap between idealized radar performance and practical utility via waveform design.    

Estimation error becomes more complex when employing pulse agility. In doing so, range-sidelobe modulation (RSM) spreads energy across Doppler, rendering traditional methods ineffective. To address this, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining subsets within a pulse-agile emission. In contrast to the majority of complementary signals, which have been explored via phase coding, these Comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM (though design distortion is never completely avoided). Although Comp-FM addressed practicality via hardware amenability, CSC was localized to zero Doppler. This work expands the Comp-FM notion to a Doppler-generalized (DG) framework, extending the cancellation condition to an arbitrary span. The same framework can likewise be employed to jointly optimize an entire coherent processing interval (CPI) to minimize RSM within the radar point-spread function (PSF), thereby generalizing the notion of complementarity and introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.

Sensing with a single emitter is limited by self-inflicted error alone (e.g., clutter, sidelobes), while MIMO systems must additionally contend with the cross-responses from emitters operating concurrently (e.g., simultaneously, spatially proximate, in a shared spectrum), further degrading radar sensitivity. Now, total correlation error is dictated by the overlapping TB (i.e., how coincident the signals are) and the number of operating emitters, compounding estimation difficulty if left unaddressed. As such, the determination of "orthogonal waveforms" comprises a large portion of the MIMO literature, though it remains a phenomenological misnomer for pulsed emissions. Here, the notion of complementary-FM is applied to a multi-emitter context in which transmitter-amenable quasi-orthogonal subsets, occupying the same spectral band, are produced via a similar gradient-based approach. To make these MIMO-Comp-FM waveform subsets more practical, the same "DG" approach described above, addressing the otherwise-default Doppler-induced degradation of complementary signals, is applied. In doing so, Doppler-independent separability and complementarity greatly improve estimation sensitivity for multi-emitter systems.

This MIMO-Comp-FM framework is developed for standard matched filter processing. Coupling this framework with a "DG" form of the previously explored MIMO-MiCRFt is also investigated, illustrating the added benefit of pairing optimized subsets with similarly calibrated processing. 

Each of these methods is developed to address unique and increasingly complex sources of estimation error. All approaches are initially developed and evaluated via simulated analysis where ground-truth is known. Then, despite hardware-induced distortion being unavoidable, the MIMO-Comp-FM framework is confirmed via loopback measurements to preserve the majority of CSC that was observed in simulation. Finally, open-air demonstration of each approach validates practical utility on a radar system.
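The complementary sidelobe cancellation idea at the heart of this work can be illustrated with a classical Golay complementary pair. This is a textbook phase-coded example, not the dissertation's optimized FM subsets: the autocorrelation sidelobes of the two codes are equal and opposite, so summing the two responses leaves only the mainlobe.

```python
import numpy as np

# Classical length-2 Golay complementary pair; the dissertation's Comp-FM
# subsets generalize this cancellation property to FM waveforms.
a = np.array([1.0, 1.0])
b = np.array([1.0, -1.0])

# Aperiodic autocorrelations of each code
ra = np.correlate(a, a, mode="full")   # sidelobes +1 at lags +/-1
rb = np.correlate(b, b, mode="full")   # sidelobes -1 at lags +/-1

# Coherently combining the pair cancels the sidelobes entirely
csc = ra + rb
print(ra, rb, csc)
```

The sum response is zero at every nonzero lag, which is exactly the zero-Doppler cancellation condition that the Doppler-generalized framework extends to an arbitrary Doppler span.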


Hao Xuan

Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.

These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.

First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
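The seeding stage that VAT builds on can be sketched minimally as a single-view k-mer index: map every k-mer of the reference to its positions, then look up each k-mer of a read to produce seed hits for chaining. This toy sketch omits everything that makes VAT novel (the asymmetric multi-view scheme, LCP-based seed-length adjustment, and the chaining mechanism); the function names are illustrative only.

```python
from collections import defaultdict

def build_kmer_index(reference: str, k: int) -> dict:
    """Map each k-mer to the list of positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def seed_hits(read: str, index: dict, k: int):
    """Yield (read_offset, reference_position) seed pairs for chaining."""
    for j in range(len(read) - k + 1):
        for pos in index.get(read[j:j + k], []):
            yield (j, pos)

ref = "ACGTACGTGA"
idx = build_kmer_index(ref, 4)
hits = list(seed_hits("TACGTG", idx, 4))  # [(0, 3), (1, 0), (1, 4), (2, 5)]
```

A chaining step would then keep the subset of hits that lie on a consistent diagonal; VAT's flexible chaining additionally supports rearranged and split alignments.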

Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.


Pramil Paudel

Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao

Abstract

Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference. 

We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks. 


Sharmila Raisa

Digital Coherent Optical System: Investigation and Monitoring

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Erik Perrins
Alessandro Salandrino
Jie Han

Abstract

Coherent wavelength-division multiplexed (WDM) optical fiber systems have become the primary transmission technology for high-capacity data networks, driven by the explosive bandwidth demand of cloud computing, streaming services, and large-scale artificial intelligence training infrastructure. This dissertation investigates two fundamental aspects of digital coherent fiber optic systems under the unifying theme of source and monitoring: the design of multi-wavelength optical sources compatible with high-order coherent detection, and the leveraging of fiber Kerr-effect nonlinearity at the coherent receiver to perform physical-layer link health monitoring and to assess inherent security vulnerabilities — both achieved through digital signal processing of the received complex optical field without dedicated hardware.

We begin by addressing the multi-wavelength transmitter challenge in WDM coherent systems. Existing quantum-dot, quantum-dash, and quantum-well based optical frequency comb (OFC) sources share a common limitation: individual comb line linewidths in the tens of MHz range caused by low output power levels of 1–20 mW, making them incompatible with high-order coherent detection. We demonstrate coherent system application of a single-section InGaAsP QW Fabry-Perot laser diode with greater than 120 mW optical power at the fiber pigtail and 36.14 GHz mode spacing. The high optical power per mode produces Lorentzian equivalent linewidths below 100 kHz — compatible with 16-QAM carrier phase recovery without optical phase locking. Experimental results obtained using a commercial Ciena WaveLogic-Ai coherent transceiver demonstrate 20-channel WDM transmission over 78.3 km of standard single-mode fiber with all channels below the HD-FEC threshold of 3.8 × 10⁻³ at 30 GBaud differential-coded 16-QAM, corresponding to an aggregate capacity of 2.15 Tb/s from a single laser device.

After investigating the QW Fabry-Perot laser as a multi-wavelength source for coherent WDM transmission, we leverage the coherent receiver DSP to exploit fiber Kerr-effect nonlinearity for longitudinal power profile estimation, enabling reconstruction of the signal power distribution P(z) along the full multi-span link without dedicated hardware or traffic interruption. We propose a modified enhanced regular perturbation (ERP) method that corrects two independent physical error sources of the standard RP1 least-squares baseline: the accumulated nonlinear phase rotation, and the dispersion-mediated phase-to-intensity conversion — a second bias source not addressed by prior methods. The RP1 method produces mean absolute error (MAE) that scales quadratically with span count, growing to 1.656 dB at 10 spans and 3 dBm. The modified ERP reduces this to 0.608 dB — an improvement that grows consistently with link length, confirming increasing advantage in the long-haul regime. Extension to WDM through an XPM-aware per-channel formulation achieves MAE of 0.113–0.419 dB across 150–500 km link lengths.

In addition to its role in enabling DSP-based longitudinal power profile estimation, the fiber Kerr-effect nonlinearity is shown to give rise to an inherent physical-layer security vulnerability in coherent WDM systems. We show that an eavesdropper co-tenanting a shared fiber — transmitting a continuous-wave probe at a wavelength adjacent to the legitimate signal — can capture the XPM-induced waveform at the fiber output and apply a bidirectional gated recurrent unit neural network, trained on split-step Fourier method simulation data, to reconstruct the transmitted symbol sequence without physical fiber access and without perturbing the legitimate signal. This eavesdropping mechanism is experimentally validated using a commercial Ciena WaveLogic-Ai coherent transceiver for ASK, BPSK, QPSK, and 16-QAM modulation formats at 4.26 GBaud and 8.56 GBaud over one- and two-span 75 km fiber systems, achieving zero symbol errors under high-OSNR conditions. Noise-aware training over OSNR from 20 to 60 dB maintains symbol error rate below 10⁻² for OSNR above 25–30 dB.

Together, these three contributions demonstrate that the coherent fiber optic system is a versatile physical instrument extending well beyond its role as a data transmission medium. The coherent receiver infrastructure — deployed for high-order modulation and data recovery — simultaneously enables the high-power OFC laser to serve as a practical multi-wavelength transmitter source, and provides the complex field measurement capability through which fiber Kerr-effect nonlinearity can be exploited constructively for distributed link monitoring and, as a direct consequence, reveals an inherent physical-layer security exposure in shared fiber infrastructure. This unified perspective on the coherent system as both a transmission platform and a general-purpose measurement instrument has direct relevance to the design of spectrally efficient, self-monitoring, and physically secure optical interconnects for next-generation AI computing networks.


Arman Ghasemi

Task-Oriented Data Communication and Compression for Timely Forecasting and Control in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alexandru Bardas
Prasad Kulkarni
Taejoon Kim
Zsolt Talata

Abstract

Advances in sensing, communication, and intelligent control have transformed power systems into data-driven smart grids, where forecasting and intelligent decision-making are essential components. Modern smart grids include distributed energy resources (DERs), renewable generation, battery energy storage systems, and large numbers of grid-edge devices that continuously generate time-series data. At the same time, increasing renewable penetration introduces substantial uncertainty in generation, net load, and market operations, while communication networks impose bandwidth, latency, and reliability constraints on timely data delivery. This dissertation addresses how time-series forecasting, data compression, and task-oriented wireless communication can be jointly designed for smart grid applications.

First, we study weather-aware distributed energy management in prosumer-centric microgrids and show that incorporating day-ahead weather information into decision-making improves battery dispatch and reduces the impact of renewable uncertainty. Second, we introduce forecasting-aware energy management in both wholesale and retail electricity markets, highlighting how renewable generation forecasting affects pricing, scheduling, and uncertainty mitigation. Third, we develop and evaluate deep learning methods for renewable generation forecasting, showing that Transformer-based models outperform recurrent baselines such as RNN and LSTM for wind and solar prediction tasks.

Building on this forecasting foundation, we develop a communication-efficient forecasting framework in which high-dimensional smart grid measurements are compressed into low-dimensional latent representations before transmission. This framework is extended into a task-oriented communication system that jointly optimizes data relevance and information timeliness, so that the receiver obtains compressed updates that remain useful for downstream forecasting tasks. Finally, we extend this framework to a distributed multi-node uplink setting, where multiple grid sensors share a bandwidth-limited channel, and develop a scheduling policy that improves both the timeliness and task-relevance of received updates.
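The compress-then-transmit idea can be illustrated with a minimal linear sketch, using PCA as a stand-in for the learned encoder the dissertation actually employs; the data here are synthetic and the dimensions are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "sensor snapshots": 200 measurements of 32 readings that
# actually lie near a 4-dimensional subspace plus small noise.
basis = rng.normal(size=(4, 32))
X = rng.normal(size=(200, 4)) @ basis + 0.01 * rng.normal(size=(200, 32))

# Encoder: project onto the top-4 principal components, so each update
# transmits 4 numbers instead of 32.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
W = Vt[:4]                      # 4 x 32 projection matrix
latent = (X - mean) @ W.T       # shape (200, 4): what goes over the channel

# Decoder at the receiver reconstructs an approximation for the task
X_hat = latent @ W + mean
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

A task-oriented system would replace the generic reconstruction objective with the downstream forecasting loss, and weight updates by their timeliness rather than treating all snapshots equally.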


Pardaz Banu Mohammad

Towards Early Detection of Alzheimer’s Disease based on Speech using Reinforcement Learning Feature Selection

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Arvin Agah, Chair
David Johnson
Sumaiya Shomaji
Dongjie Wang
Sara Wilson

Abstract

Alzheimer’s Disease (AD) is a progressive, irreversible neurodegenerative disorder and the leading cause of dementia worldwide, affecting an estimated 55 million people globally. The window of opportunity for intervention is demonstrably narrow, making reliable early-stage detection a clinical and scientific imperative. While current diagnostic techniques such as neuroimaging and cerebrospinal fluid (CSF) biomarkers carry well-defined limitations in scalability, cost, and access equity, speech has emerged as a compelling non-invasive proxy for cognitive function evaluation.

This work presents a novel approach that frames acoustic feature selection as a sequential decision-making problem and implements it using deep reinforcement learning. Specifically, we use a Deep Q-Network (DQN) agent to navigate a high-dimensional feature space of over 6,000 acoustic features extracted using the openSMILE toolkit, dynamically constructing maximally discriminative and non-redundant feature subsets. To capture the latent structural dependencies among acoustic features, which filter and wrapper methods struggle to model, we introduce a Graph Convolutional Network (GCN)-based correlation-aware feature representation layer that operates as an auxiliary input to the DQN state encoder. Post-selection interpretability is reinforced through TF-IDF weighting and K-means clustering, which together yield both feature-level and cluster-level explanations that are clinically actionable. The framework is evaluated across five classifiers: support vector machines (SVM), logistic regression, XGBoost, random forest, and a feedforward neural network. We use 10-fold stratified cross-validation on established benchmark datasets, including the DementiaBank Pitt Corpus, Ivanova, and ADReSS challenge data. The proposed approach is benchmarked against state-of-the-art feature selection methods such as LASSO, recursive feature elimination, and mutual information selectors. This research contributes three primary intellectual advances: (1) a graph-augmented state representation that encodes inter-feature relational structure within a reinforcement learning agent, (2) a clinically interpretable pipeline that bridges the gap between algorithmic performance and translational utility, and (3) a multilingual data approach for the reinforcement learning agent framework. This study has direct implications for equitable, low-cost, and scalable AD screening in both clinical and community settings.
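Sequential feature selection as decision-making can be sketched with a simple greedy loop, a deliberately simplified stand-in for the DQN agent (state = the current subset, action = add one feature, reward = score gain); the scoring function below is a toy correlation measure, not the dissertation's classifier-based reward.

```python
import numpy as np

def greedy_feature_selection(X, y, score, k):
    """Sequentially add the feature that most improves `score`."""
    selected = []
    for _ in range(k):
        base = score(X[:, selected], y) if selected else 0.0
        best_gain, best_f = -np.inf, None
        for f in range(X.shape[1]):
            if f in selected:
                continue
            gain = score(X[:, selected + [f]], y) - base
            if gain > best_gain:
                best_gain, best_f = gain, f
        selected.append(best_f)
    return selected

# Toy reward: absolute correlation of the subset's mean with the label
def corr_score(Xs, y):
    if Xs.shape[1] == 0:
        return 0.0
    return abs(np.corrcoef(Xs.mean(axis=1), y)[0, 1])

rng = np.random.default_rng(1)
y = rng.normal(size=100)
X = rng.normal(size=(100, 8))
X[:, 3] = y + 0.1 * rng.normal(size=100)   # feature 3 is informative
picked = greedy_feature_selection(X, y, corr_score, 2)
```

The DQN replaces the exhaustive inner loop with a learned Q-function over subset states, which is what makes a 6,000-dimensional feature space tractable.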


Zhou Ni

Bridging Federated Learning and Wireless Networks: From Adaptive Learning to FL-Driven System Optimization

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Fengjun Li
Van Ly Nguyen
Han Wang
Shawn Keshmiri

Abstract

Federated learning (FL) has emerged as a promising distributed machine learning framework that enables multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized data collection. However, deploying FL in practical wireless environments introduces two major challenges. First, the data generated across distributed devices are often heterogeneous and non-IID, which makes a single global model insufficient for many users. Second, learning performance in wireless systems is strongly affected by communication constraints such as interference, unreliable channels, and dynamic resource availability. This PhD research aims to address these challenges by bridging FL methods and wireless networks.

In the first thrust, we develop personalized and adaptive FL methods given the underlying wireless link conditions. To this end, we propose channel-aware neighbor selection and similarity-aware aggregation in wireless device-to-device (D2D) learning environments. We further investigate the impacts of partial model update reception on FL performance. The overarching goal of the first thrust is to enhance FL performance under wireless constraints.

Next, we investigate the opposite direction and raise the question: How can FL-based distributed optimization be used for the design of next-generation wireless systems? To this end, we investigate communication-aware participation optimization in vehicular networks, where wireless resource allocation affects the number of clients that can successfully contribute to FL. We further extend this direction to integrated sensing and communication (ISAC) systems, where personalized FL (PFL) is used to support distributed beamforming optimization with joint sensing and communication objectives.

Overall, this research establishes a unified framework for bridging FL and wireless networks. As a future direction, this work will be extended to more realistic ISAC settings with dynamic spectrum access, where communication, sensing, scheduling, and learning performance must be considered jointly.


Past Defense Notices


Wenchi Ma

Object Detection and Classification based on Hierarchical Semantic Features and Deep Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Bo Luo, Chair
Taejoon Kim
Prasad Kulkarni
Cuncong Zhong
Guanghui Wang

Abstract

Feature learning, semantic understanding, cognitive reasoning, and model generalization are consistent pursuits of current deep learning-based computer vision tasks. A variety of network structures and algorithms have been proposed to learn effective features, extract contextual and semantic information, deduce the relationships between objects and scenes, and achieve robust and generalized models. Nevertheless, these challenges are still not well addressed. One issue lies in inefficient feature learning and propagation and static, single-dimension semantic memorizing, which make it difficult to handle challenging situations such as small objects, occlusion, and illumination. The other issue is robustness and generalization, especially when the data source has a diversified feature distribution.

The study aims to explore classification and detection models based on hierarchical semantic features ("transverse semantic" and "longitudinal semantic"), network architectures, and regularization algorithms, so that the above issues can be improved or solved. (1) A detector model is proposed to make full use of the "transverse semantic", the semantic information in the spatial scene, which emphasizes the effectiveness of deep features produced in high-level layers for better detection of small and occluded objects. (2) We also explore the anchor-based detector algorithm and propose location-aware reasoning (LAAR), in which both the location and classification confidences are considered in the bounding box quality criterion, so that the best-qualified boxes can be picked up in Non-Maximum Suppression (NMS). (3) A semantic clustering-based deduction learning is proposed, which explores the "longitudinal semantic", realizing high-level clustering in the semantic space and enabling the model to deduce the relations among various classes so that better classification performance can be expected. (4) We propose near-orthogonality regularization by introducing an implicit self-regularization that pushes the mean and variance of filter angles in a network towards 90° and 0° simultaneously, revealing that it helps stabilize the training process, speed up convergence, and improve robustness. (5) Inspired by the finding that self-attention networks possess a strong inductive bias which leads to a loss of feature expression power, a transformer architecture with a mitigatory attention mechanism is proposed and applied with state-of-the-art detectors, verifying the superiority of the detection enhancement.
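Standard NMS, the procedure that LAAR refines by folding location confidence into the box-quality score, keeps the highest-scoring box and suppresses heavily overlapping ones. A generic sketch (not the dissertation's exact criterion) follows:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Return indices of kept boxes, highest score first."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # the second box overlaps the first and is suppressed
```

LAAR's change is to rank by a combined location-and-classification confidence rather than the classification score alone, so a well-localized box can survive suppression even with a slightly lower class score.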


Sai Krishna Teja Damaraju

Strabospot 2

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Drew Davidson, Chair
Prasad Kulkarni
Douglas Walker


Abstract

Geology is a data-intensive field, but much of its current tooling is inefficient, labor-intensive, and tedious. While software solutions are a natural fit for these issues, careful consideration of domain-specific needs is required to make such a solution useful. Geology involves field work, collaboration, and the management of a complex hierarchical data structure to organize the data being captured.

 

Strabospot was designed to address the above considerations. Strabospot is an application that helps earth scientists capture data, digitize it, and make it available over the world wide web for further research and development. Strabospot is a highly portable, effective, and efficient solution that can transform the field of Geology, affecting not only how the data is captured but also how that data can be further processed and analyzed. The initial implementation of Strabospot, while an important step forward in the field, has several limitations that necessitate a complete rewrite in the form of a second version, Strabospot 2.

 

Strabospot 2 is a major software undertaking being developed at the University of Kansas through a collaboration between the Department of Geology and the Department of Electrical Engineering and Computer Sciences. This project elaborates on how Strabospot 2 helps geologists in the field, what challenges geologists had with the original Strabospot, and how Strabospot 2 fills in the deficits of Strabospot 1. Strabospot 2 is a large, multi-developer project; this project report focuses on the features implemented by the report author.


Patrick McNamee

Machine Learning for Aerospace Applications using the Blackbird Dataset

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Michael Branicky, Chair
Prasad Kulkarni
Ronald Barrett


Abstract

There is currently much interest in using machine learning (ML) models for vision-based object detection and navigation tasks in autonomous vehicles. For unmanned aerial vehicles (UAVs), and particularly small multi-rotor vehicles such as quadcopters, these models are trained either on unpublished data or within simulated environments, which leads to two issues: the inability to reliably reproduce results, and behavioral discrepancies on physical deployments resulting from unmodeled dynamics in the simulation environment. To overcome these issues, this project uses the Blackbird Dataset to explore the integration of ML models for UAVs. The Blackbird Dataset is overviewed to illustrate its features and issues before investigating possible ML applications. Unsupervised learning models are used to determine flight-test partitions for training supervised deep neural network (DNN) models for nonlinear dynamic inversion. The DNN models are used to determine appropriate model choices over several network parameters, including network layer depth, activation functions, training epochs, and neural network regularization.


Charles Mohr

Design and Evaluation of Stochastic Processes as Physical Radar Waveforms

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Recent advances in waveform generation and in computational power have enabled the design and implementation of new complex radar waveforms. Still, despite these advances, in a waveform-agile mode, where the radar transmits a unique waveform for every pulse or a nonrepeating signal continuously, effective operation can be difficult due to the waveform design requirements. In general, for radar waveforms to be both useful and physically robust, they must achieve good autocorrelation sidelobes, be spectrally contained, and possess a constant amplitude envelope for high-power operation. Meeting these design goals represents a tremendous computational overhead that can easily impede real-time operation and the overall effectiveness of the radar. This work addresses this concern in the context of random FM (RFM) waveforms, which have been demonstrated in recent years, in both simulation and experiments, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a waveform-agile mode. However, while they are effective, the approaches to design these waveforms require optimization of each individual waveform, making them subject to costly computational requirements.

 

This dissertation takes a different approach. Since RFM waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as the sample functions of an underlying stochastic process called a waveform generating function (WGF). This approach enables the convenient generation of spectrally contained RFM waveforms for little more computational cost than pulling numbers from a random number generator (RNG). To do so, this work translates the traditional mathematical treatment of random variables and random processes to a more radar-centric perspective such that the WGFs can be analytically evaluated as a function of the usefulness of the radar waveforms that they produce, via metrics such as the expected matched filter response and the expected power spectral density (PSD). Further, two WGF models, denoted pulsed stochastic waveform generation (Pulsed StoWGe) and continuous-wave stochastic waveform generation (CW-StoWGe), are devised as means to optimize WGFs that produce RFM waveforms with good spectral containment and design flexibility between the degree of spectral containment and autocorrelation sidelobe levels, for both pulsed and CW modes. This goal is achieved by leveraging gradient descent optimization methods to reduce the expected frequency template error (EFTE) cost function. The EFTE optimization is shown, analytically using the metrics above as well as others defined in this work and through simulation, to produce WGFs whose sample functions achieve these goals and thus produce useful random FM waveforms. To complete the theory-modeling-experimentation design life cycle, the resultant StoWGe waveforms are implemented in a loop-back configuration and are shown to be amenable to physical implementation.
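The basic random FM construction, drawing a constant-modulus waveform from an RNG by integrating filtered noise into phase, can be sketched as follows. This is a generic RFM sample under simple assumptions (Gaussian frequency noise, a moving-average filter), not an optimized StoWGe generating function; the filter length and sample count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1024

# Draw white Gaussian frequency samples and low-pass filter them
# (here a simple moving average) to impose spectral containment.
freq = np.convolve(rng.normal(size=N), np.ones(32) / 32, mode="same")

# Integrate frequency into phase; the constant amplitude envelope keeps
# the waveform compatible with saturated high-power amplification.
phase = 2 * np.pi * np.cumsum(freq) / N
s = np.exp(1j * phase)    # one sample function of the underlying process
```

Each call with a fresh RNG draw yields a new sample function, which is what makes per-pulse waveform agility essentially free once the generating function itself has been optimized.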

 


David Menager

Event Memory for Intelligent Agents

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Arvin Agah, Chair
Michael Branicky
Prasad Kulkarni
Andrew Williams
Sarah Robins

Abstract

This dissertation presents a novel theory of event memory, along with an associated computational model that embodies the claims of the view and is integrated within a cognitive architecture. Event memory is a general-purpose store for personal past experience. The literature on event memory reveals that people can remember events both by successfully retrieving specific representations from memory and by reconstructing events via schematic representations. Prominent philosophical theories of event memory, namely the causal and simulationist theories, fail to capture both capabilities because of their reliance on a single representational format. Consequently, they also struggle to account for the full range of human event memory phenomena. In response, we propose a novel view that remedies these issues by unifying the representational commitments of the causal and simulationist theories, making it a hybrid theory. We also describe an associated computational implementation of the proposed theory and conduct experiments showing the remembering capabilities of our system and its coverage of event memory phenomena. Lastly, we discuss our initial efforts to integrate our implemented event memory system into a cognitive architecture, and we situate a tool-building agent with this extended architecture in the Minecraft domain in preparation for future event memory research.


Yiju Yang

Image Classification Based on Unsupervised Domain Adaptation Methods

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Taejoon Kim, Chair
Andrew Williams
Cuncong Zhong


Abstract

Convolutional Neural Networks (CNNs) have achieved great success in a broad range of computer vision tasks. However, due to the lack of labeled data, many available CNN models cannot be widely used in real scenarios or suffer a significant performance drop. To address the lack of correctly labeled data, we explored the capability of existing unsupervised domain adaptation (UDA) methods for image classification and proposed two new methods to improve performance.

1. An unsupervised domain adaptation model based on dual-module adversarial training: we proposed a dual-module network architecture that employs a domain-discriminative feature module to encourage the domain-invariant feature module to learn more domain-invariant features. The proposed architecture can be applied to any model that utilizes domain-invariant features for UDA, improving its ability to extract such features. Through adversarial training, maximizing the discrepancy between the two modules' feature distributions while minimizing the discrepancy between their prediction results, the two modules are encouraged to learn more domain-discriminative and more domain-invariant features, respectively. Extensive comparative evaluations show that the proposed approach significantly outperforms the baseline method on all image classification tasks.

2. Exploiting maximum classifier discrepancy on multiple classifiers for unsupervised domain adaptation: adversarial training based on the maximum discrepancy between two classifiers has been applied to the unsupervised domain adaptation task of image classification. This method is straightforward and achieves very good results. However, based on our observations, the two-classifier structure, though simple, may not exploit the full power of the algorithm. Thus, we propose adding more classifiers to the model. In the proposed method, we construct a discrepancy loss function over multiple classifiers, following the principle that the classifiers should differ from one another. With this loss function, any number of classifiers can be added to the original framework. Extensive experiments show that the proposed method achieves significant improvements over the baseline method.
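The N-classifier discrepancy term described above can be sketched as follows. The pairwise mean-absolute-difference form and the function names here are illustrative assumptions, not the exact loss from the dissertation; in practice each head would be a neural network classifier trained adversarially against a shared feature extractor:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pairwise_discrepancy(logits_list):
    """Discrepancy over N classifier heads: the mean L1 distance
    between the softmax outputs of every distinct pair, generalizing
    the two-classifier maximum-discrepancy idea to N heads."""
    probs = [softmax(l) for l in logits_list]
    n, total, pairs = len(probs), 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.mean(np.abs(probs[i] - probs[j]))
            pairs += 1
    return total / pairs

# Three hypothetical classifier heads on a batch of 4 samples, 5 classes.
rng = np.random.default_rng(1)
heads = [rng.standard_normal((4, 5)) for _ in range(3)]
loss = pairwise_discrepancy(heads)
print(f"mean pairwise discrepancy: {loss:.3f}")
```

Because the sum runs over all distinct pairs, the same construction accepts any number of heads, which is what allows classifiers to be added to the framework freely.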


Idhaya Elango

Detection of COVID-19 cases from chest X-ray images using COVID-Net, a deep convolutional neural network

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
Bo Luo
Heechul Yun


Abstract

COVID-19 is caused by the contagious SARS-CoV-2 virus. It has a devastating effect on human health, leading to high morbidity and mortality worldwide. Infected patients should be screened effectively to fight the virus. Chest X-ray (CXR) imaging is an important adjunct in detecting the visual responses associated with SARS-CoV-2 infection, as abnormalities in chest X-ray images can be identified in COVID-19 patients. COVID-Net, a deep convolutional neural network, is used here to detect COVID-19 cases from chest X-ray images. The COVIDx dataset used in this project is generated from five different open-access data repositories. COVID-Net's predictions are examined with an explainability method to gain insight into the critical factors associated with COVID cases. We also perform quantitative and qualitative analyses to understand its decision-making behavior.


Blake Bryant

A Secure and Reliable Network Latency Reduction Protocol for Real-Time Multimedia Applications

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Bo Luo
Reza Barati

Abstract

Multimedia networking is the area of study associated with the delivery of heterogeneous data including, but not limited to, imagery, video, audio, and interactive content. Multimedia and communication network researchers have continually struggled to devise solutions that address the three core challenges in multimedia delivery: security, reliability, and performance. Solutions to these challenges typically exist on a spectrum of compromises, achieving gains in one aspect at the cost of one or more of the others. Networked videogames represent the pinnacle of multimedia presented in a real-time interactive format. Continual improvements to multimedia delivery have led to tools such as buffering, redundant coupling of low-resolution alternative data streams, congestion avoidance, and forced in-order delivery of best-effort service; however, videogames cannot afford to pay the latency tax of these solutions in their current state.

This dissertation aims to address these challenges through the application of a novel networking protocol, leveraging emerging technology such as blockchain-enabled smart contracts, to provide security, reliability, and performance gains to distributed network applications. This work provides a comprehensive overview of contemporary networking approaches used in delivering videogame multimedia content and their associated shortcomings. Additionally, key elements of blockchain technology are identified as focal points for prospective solution development, notably the application of distributed ledger technology, consensus mechanisms, and smart contracts. Preliminary results from empirical evaluation of contemporary videogame networking applications have confirmed security and performance flaws in well-funded AAA videogame titles. Ultimately, this work aims to solve challenges that the videogame industry has struggled with for over a decade.

The broader impact of this research is to improve the real-time delivery of interactive multimedia content. Positive results in the area will have wide reaching effects in the future of content delivery, entertainment streaming, virtual conferencing, and videogame performance.


Alaa Daffalla

Security & Privacy Practices and Threat Models of Activists during a Political Revolution

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Alexandru Bardas, Chair
Fengjun Li
Bo Luo


Abstract

Activism is a universal concept that has often played a major role in ending injustices and human rights abuses globally. Political activism in particular is a modern-day term for a form of activism in which a group of people comes into conflict with a far more powerful adversary, national or international governments, who often have purview and control over the very telecommunications infrastructure that activists need in order to organize and operate. As technology and social media use have become vital to the success of activism movements in the twenty-first century, our study focuses on surfacing the technical challenges and the defensive strategies that activists employ during a political revolution. We find that security and privacy behavior and app adoption are influenced by the specific societal and political context in which activists operate. In addition, a social media blockade or an internet blackout can trigger a series of anti-censorship approaches at scale and cripple activists' technology use. To a large extent, the combination of low-tech defensive strategies employed by activists was sufficient against the threats of surveillance, arrests, and device confiscation. Throughout our results we surface a number of design principles, but also some design tensions that can arise between the security and usability needs of different populations. We thus present a set of observations that can help guide technology designers and policy makers.


Chiranjeevi Pippalla

Autonomous Driving Using Deep Learning Techniques

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson, Co-Chair
Suzanne Shontz


Abstract

Recent advances in machine learning (ML), known as deep neural networks (DNNs) or deep learning, have greatly improved the state of the art for many ML tasks, such as image classification (He, Zhang, Ren, & Sun, 2016; Krizhevsky, Sutskever, & Hinton, 2012; LeCun, Bottou, Bengio, & Haffner, 1998; Szegedy et al., 2015; Zeiler & Fergus, 2014), speech recognition (Graves, Mohamed, & Hinton, 2013; Hannun et al., 2014; Hinton et al., 2012), complex games and learning from simple reward signals (Goodfellow et al., 2014; Mnih et al., 2015; Silver et al., 2016), and many other areas. Neural network and ML methods have been applied to the task of autonomously controlling a vehicle, with only a camera image as input, to successfully navigate on roads (Bojarski et al., 2016). However, advances in deep learning have not yet been applied systematically to this task. In this work I used a simulated environment to implement and compare several methods for controlling autonomous navigation behavior using a standard camera input device to sense environmental state. The simulator contained a simulated car with a roof-mounted camera that gathered visual data while the car was operated by a human controller in a virtual driving environment. The gathered data was used for supervised training of an autonomous controller that drives the same vehicle remotely over a local connection. I reproduced past results that used simple neural networks and other ML techniques to guide similar test vehicles with a camera, and I compared these results with more complex deep neural network controllers to see whether they improve on past methods in terms of speed, distance, and other performance metrics on unseen simulated road-driving tasks.