Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 129 (Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

The RF spectrum is a precious, finite resource with ever-increasing demand. Consequently, the mandate to be a "good spectral neighbor" is in direct conflict with the requirements for high-performance sensing where correlation error is fundamentally limited. As such, matched-filter radar performance is often sidelobe-limited with estimation error being constrained by the time-bandwidth (TB) of the collective emission. The methods developed here seek to bridge this gap between idealized radar performance and practical utility via waveform design.    

Estimation error becomes more complex when employing pulse agility: range-sidelobe modulation (RSM) spreads energy across Doppler, rendering traditional methods ineffective. To address this, a gradient-based complementary-FM (Comp-FM) framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining subsets within a pulse-agile emission. In contrast to the majority of complementary signals, which have been explored via phase coding, these Comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM (though design distortion is never completely avoided). Although Comp-FM addressed practicality via hardware amenability, CSC was localized to zero Doppler. This work expands the Comp-FM notion to a Doppler-generalized (DG) framework, extending the cancellation condition to an arbitrary Doppler span. The same framework can likewise be employed to jointly optimize an entire coherent processing interval (CPI) to minimize RSM within the radar point-spread function (PSF), thereby generalizing the notion of complementarity and introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
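As a rough illustration of the complementary property the abstract builds on, the toy sketch below uses a binary phase-coded Golay pair rather than the FM waveforms developed in this work: each sequence alone has nonzero autocorrelation (range) sidelobes, but coherently combining the pair cancels them exactly.

```python
import numpy as np

def autocorr(x):
    """Full aperiodic autocorrelation of a real sequence."""
    return np.correlate(x, x, mode="full")

# A length-8 binary Golay complementary pair (phase-coded stand-in for
# the FM waveform subsets discussed in the abstract).
a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

ra, rb = autocorr(a), autocorr(b)
combined = ra + rb

# Individually, each sequence has nonzero range sidelobes...
print("sidelobes of a:", ra)
# ...but the coherently combined pair cancels every sidelobe, leaving a
# single mainlobe of height 2N (complementary sidelobe cancellation):
print("combined:", combined)
```

This cancellation is exact only at zero Doppler, which is precisely the limitation the Doppler-generalized framework in the abstract is designed to relax.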

Sensing with a single emitter is limited by self-inflicted error alone (e.g., clutter, sidelobes), while MIMO systems must additionally contend with the cross-responses from emitters operating concurrently (e.g., simultaneously, spatially proximate, in a shared spectrum), further degrading radar sensitivity. Total correlation error is then dictated by the overlapping TB (i.e., how coincident the signals are) and the number of operating emitters, compounding estimation difficulty if left unaddressed. As such, the determination of "orthogonal waveforms" comprises a large portion of the MIMO literature, though it remains a phenomenological misnomer for pulsed emissions. Here, the notion of complementary-FM is applied to a multi-emitter context in which transmitter-amenable quasi-orthogonal subsets, occupying the same spectral band, are produced via a similar gradient-based approach. To make these MIMO-Comp-FM waveform subsets more practical, the same "DG" approach described above, which addresses the otherwise-default Doppler-induced degradation of complementary signals, is applied. In doing so, Doppler-independent separability and complementarity greatly improve estimation sensitivity for multi-emitter systems.

This MIMO-Comp-FM framework is developed for standard matched filter processing. Coupling this framework with a "DG" form of the previously explored MIMO-MiCRFt is also investigated, illustrating the added benefit of pairing optimized subsets with similarly calibrated processing. 

Each of these methods is developed to address unique and increasingly complex sources of estimation error. All approaches are initially developed and evaluated via simulated analysis where ground-truth is known. Then, despite hardware-induced distortion being unavoidable, the MIMO-Comp-FM framework is confirmed via loopback measurements to preserve the majority of CSC that was observed in simulation. Finally, open-air demonstration of each approach validates practical utility on a radar system.


Hao Xuan

Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.

These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.

First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
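The alignment machinery described above is far richer than anything sketchable here, but the basic k-mer seeding idea that VAT generalizes can be illustrated with a toy exact-match index (the function names are illustrative, not VAT's API):

```python
from collections import defaultdict

def build_kmer_index(reference, k):
    """Map every k-mer in the reference to its start positions."""
    index = defaultdict(list)
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    return index

def find_seeds(index, read, k):
    """Exact k-mer seed hits as (read offset, reference offset) pairs."""
    seeds = []
    for j in range(len(read) - k + 1):
        for i in index.get(read[j:j + k], []):
            seeds.append((j, i))
    return seeds

reference = "ACGTACGGTAGCTAGCACGT"
index = build_kmer_index(reference, k=5)
# A read sampled from position 5 of the reference, with no errors:
read = reference[5:14]
seeds = find_seeds(index, read, k=5)
print(seeds)  # every hit satisfies ref_pos - read_pos == 5
```

Chaining then keeps the seeds that agree on a common diagonal (here, offset 5); VAT's contributions, per the abstract, lie in doing this with multiple seeding strategies, LCP-based dynamic seed lengths, and hardware-efficient sorting, none of which this sketch attempts.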

Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.


Pramil Paudel

Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao

Abstract

Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference. 
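A minimal sketch of this forward model, assuming an idealized shift-invariant diffuser so that the measurement is a circular 2-D convolution of the scene with the PSF (real systems add cropping, noise, and only approximate shift invariance):

```python
import numpy as np

rng = np.random.default_rng(0)

def lensless_capture(scene, psf):
    """Toy forward model: circular 2-D convolution of the scene with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

n = 32
psf = rng.random((n, n))        # stand-in for a diffuser's caustic pattern

scene = np.zeros((n, n))
scene[8, 8] = 1.0               # a single point source

y = lensless_capture(scene, psf)

# A point source smears into a shifted copy of the whole PSF, so every
# sensor pixel carries information about it (non-local, unintelligible
# to a human viewer):
assert np.allclose(y, np.roll(psf, (8, 8), axis=(0, 1)))

# The model is linear, so a compound scene measures as the superposition
# of the per-source patterns:
scene2 = np.zeros((n, n))
scene2[20, 5] = 1.0
assert np.allclose(lensless_capture(scene + scene2, psf),
                   y + lensless_capture(scene2, psf))
print("forward-model checks passed")
```

The non-locality shown by the first assertion is what makes raw measurements privacy-preserving by default, and the linearity shown by the second is what makes learned inversion attacks, studied in the second part of this work, possible in principle.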

We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks. 


Sharmila Raisa

Digital Coherent Optical System: Investigation and Monitoring

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Erik Perrins
Alessandro Salandrino
Jie Han

Abstract

Coherent wavelength-division multiplexed (WDM) optical fiber systems have become the primary transmission technology for high-capacity data networks, driven by the explosive bandwidth demand of cloud computing, streaming services, and large-scale artificial intelligence training infrastructure. This dissertation investigates two fundamental aspects of digital coherent fiber optic systems under the unifying theme of source and monitoring: the design of multi-wavelength optical sources compatible with high-order coherent detection, and the leveraging of fiber Kerr-effect nonlinearity at the coherent receiver to perform physical-layer link health monitoring and to assess inherent security vulnerabilities — both achieved through digital signal processing of the received complex optical field without dedicated hardware.

We begin by addressing the multi-wavelength transmitter challenge in WDM coherent systems. Existing quantum-dot, quantum-dash, and quantum-well based optical frequency comb (OFC) sources share a common limitation: individual comb line linewidths in the tens of MHz range caused by low output power levels of 1–20 mW, making them incompatible with high-order coherent detection. We demonstrate coherent system application of a single-section InGaAsP QW Fabry-Perot laser diode with greater than 120 mW optical power at the fiber pigtail and 36.14 GHz mode spacing. The high optical power per mode produces Lorentzian equivalent linewidths below 100 kHz — compatible with 16-QAM carrier phase recovery without optical phase locking. Experimental results obtained using a commercial Ciena WaveLogic-Ai coherent transceiver demonstrate 20-channel WDM transmission over 78.3 km of standard single-mode fiber with all channels below the HD-FEC threshold of 3.8 × 10⁻³ at 30 GBaud differential-coded 16-QAM, corresponding to an aggregate capacity of 2.15 Tb/s from a single laser device.

After investigating the QW Fabry-Perot laser as a multi-wavelength source for coherent WDM transmission, we leverage the coherent receiver DSP to exploit fiber Kerr-effect nonlinearity for longitudinal power profile estimation, enabling reconstruction of the signal power distribution P(z) along the full multi-span link without dedicated hardware or traffic interruption. We propose a modified enhanced regular perturbation (ERP) method that corrects two independent physical error sources of the standard RP1 least-squares baseline: the accumulated nonlinear phase rotation, and the dispersion-mediated phase-to-intensity conversion — a second bias source not addressed by prior methods. The RP1 method produces mean absolute error (MAE) that scales quadratically with span count, growing to 1.656 dB at 10 spans and 3 dBm. The modified ERP reduces this to 0.608 dB — an improvement that grows consistently with link length, confirming increasing advantage in the long-haul regime. Extension to WDM through an XPM-aware per-channel formulation achieves MAE of 0.113–0.419 dB across 150–500 km link lengths.

In addition to its role in enabling DSP-based longitudinal power profile estimation, the fiber Kerr-effect nonlinearity is shown to give rise to an inherent physical-layer security vulnerability in coherent WDM systems. We show that an eavesdropper co-tenanting a shared fiber — transmitting a continuous-wave probe at a wavelength adjacent to the legitimate signal — can capture the XPM-induced waveform at the fiber output and apply a bidirectional gated recurrent unit neural network, trained on split-step Fourier method simulation data, to reconstruct the transmitted symbol sequence without physical fiber access and without perturbing the legitimate signal. This eavesdropping mechanism is experimentally validated using a commercial Ciena WaveLogic-Ai coherent transceiver for ASK, BPSK, QPSK, and 16-QAM modulation formats at 4.26 GBaud and 8.56 GBaud over one- and two-span 75 km fiber systems, achieving zero symbol errors under high-OSNR conditions. Noise-aware training over OSNR from 20 to 60 dB maintains symbol error rate below 10⁻² for OSNR above 25–30 dB.

Together, these three contributions demonstrate that the coherent fiber optic system is a versatile physical instrument extending well beyond its role as a data transmission medium. The coherent receiver infrastructure — deployed for high-order modulation and data recovery — simultaneously enables the high-power OFC laser to serve as a practical multi-wavelength transmitter source, and provides the complex field measurement capability through which fiber Kerr-effect nonlinearity can be exploited constructively for distributed link monitoring and, as a direct consequence, reveals an inherent physical-layer security exposure in shared fiber infrastructure. This unified perspective on the coherent system as both a transmission platform and a general-purpose measurement instrument has direct relevance to the design of spectrally efficient, self-monitoring, and physically secure optical interconnects for next-generation AI computing networks.


Arman Ghasemi

Task-Oriented Data Communication and Compression for Timely Forecasting and Control in Smart Grids

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Alexandru Bardas
Prasad Kulkarni
Taejoon Kim
Zsolt Talata

Abstract

Advances in sensing, communication, and intelligent control have transformed power systems into data-driven smart grids, where forecasting and intelligent decision-making are essential components. Modern smart grids include distributed energy resources (DERs), renewable generation, battery energy storage systems, and large numbers of grid-edge devices that continuously generate time-series data. At the same time, increasing renewable penetration introduces substantial uncertainty in generation, net load, and market operations, while communication networks impose bandwidth, latency, and reliability constraints on timely data delivery. This dissertation addresses how time-series forecasting, data compression, and task-oriented wireless communication can be jointly designed for smart grid applications.

First, we study weather-aware distributed energy management in prosumer-centric microgrids and show that incorporating day-ahead weather information into decision-making improves battery dispatch and reduces the impact of renewable uncertainty. Second, we introduce forecasting-aware energy management in both wholesale and retail electricity markets, highlighting how renewable generation forecasting affects pricing, scheduling, and uncertainty mitigation. Third, we develop and evaluate deep learning methods for renewable generation forecasting, showing that Transformer-based models outperform recurrent baselines such as RNN and LSTM for wind and solar prediction tasks.

Building on this forecasting foundation, we develop a communication-efficient forecasting framework in which high-dimensional smart grid measurements are compressed into low-dimensional latent representations before transmission. This framework is extended into a task-oriented communication system that jointly optimizes data relevance and information timeliness, so that the receiver obtains compressed updates that remain useful for downstream forecasting tasks. Finally, we extend this framework to a distributed multi-node uplink setting, where multiple grid sensors share a bandwidth-limited channel, and develop a scheduling policy that improves both the timeliness and task-relevance of received updates.
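As a hedged stand-in for the learned encoder described above, a purely linear latent compressor (PCA via the SVD) already shows the transmit-latents/decode-at-receiver pattern; all dimensions and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "grid sensor" data: 500 snapshots of 64 measurements that really
# live on a 4-dimensional latent subspace plus small noise.
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 64))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 64))

k = 4
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)

def encode(x):
    """Sender side: project onto the top-k principal directions."""
    return (x - mean) @ Vt[:k].T      # 64 values -> k values per snapshot

def decode(z):
    """Receiver side: reconstruct the full measurement from the latent."""
    return z @ Vt[:k] + mean

Z = encode(X)                          # only this gets transmitted
X_hat = decode(Z)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"compression 64 -> {k} values, relative error {rel_err:.4f}")
```

A task-oriented system, as in the abstract, would additionally weight the encoder toward whatever the downstream forecaster needs and schedule transmissions for freshness, rather than minimizing plain reconstruction error as this sketch does.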


Pardaz Banu Mohammad

Towards Early Detection of Alzheimer’s Disease based on Speech using Reinforcement Learning Feature Selection

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Arvin Agah, Chair
David Johnson
Sumaiya Shomaji
Dongjie Wang
Sara Wilson

Abstract

Alzheimer’s Disease (AD) is a progressive, irreversible neurodegenerative disorder and the leading cause of dementia worldwide, affecting an estimated 55 million people globally. The window of opportunity for intervention is demonstrably narrow, making reliable early-stage detection a clinical and scientific imperative. While current diagnostic techniques such as neuroimaging and cerebrospinal fluid (CSF) biomarkers carry well-defined limitations in scalability, cost, and access equity, speech has emerged as a compelling non-invasive proxy for cognitive function evaluation.

This work presents a novel approach that frames acoustic feature selection as a sequential decision-making problem and implements it using deep reinforcement learning. Specifically, we use a Deep Q-Network (DQN) agent to navigate a high-dimensional space of over 6,000 acoustic features extracted using the openSMILE toolkit, dynamically constructing maximally discriminative and non-redundant feature subsets. To capture the latent structural dependencies among acoustic features, which classifier- and wrapper-based methods have difficulty modeling, we introduce a Graph Convolutional Network (GCN)-based correlation-aware feature representation layer that operates as an auxiliary input to the DQN state encoder. Post-selection interpretability is reinforced through TF-IDF weighting and K-means clustering, which together yield both feature-level and cluster-level explanations that are clinically actionable. The framework is evaluated across five classifiers: support vector machines (SVM), logistic regression, XGBoost, random forest, and a feedforward neural network. We use 10-fold stratified cross-validation on established benchmark datasets, including the DementiaBank Pitt Corpus, Ivanova, and ADReSS challenge data. The proposed approach is benchmarked against state-of-the-art feature selection methods such as LASSO, recursive feature elimination, and mutual-information selectors. This research contributes three primary intellectual advances: (1) a graph-augmented state representation that encodes inter-feature relational structure within a reinforcement learning agent, (2) a clinically interpretable pipeline that bridges the gap between algorithmic performance and translational utility, and (3) a multilingual data approach for the reinforcement learning agent framework. This study has direct implications for equitable, low-cost, and scalable AD screening in both clinical and community settings.
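As a loose, self-contained illustration of reinforcement-learning-driven feature selection (tabular Q-learning standing in for the DQN, and a 6-feature toy problem standing in for the 6,000-dimensional openSMILE space), the following sketch learns to pick the two informative features:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Toy data: six acoustic "features", of which only 0 and 1 predict the label.
X = rng.normal(size=(200, 6))
y = X[:, 0] + X[:, 1]

def score(subset):
    """Fit quality (R^2-like) of a least-squares model on the chosen features."""
    if not subset:
        return 0.0
    A = X[:, sorted(subset)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - (resid @ resid) / (y @ y)

# Tabular Q-learning: states are feature subsets, actions add one feature,
# reward is the marginal gain in fit quality. (The thesis replaces this
# table with a neural network and a GCN-augmented state.)
Q = defaultdict(float)
k, alpha, gamma, eps = 2, 0.5, 0.9, 0.3

for _ in range(500):
    state = frozenset()
    for _ in range(k):
        actions = [f for f in range(6) if f not in state]
        if rng.random() < eps:
            action = int(rng.choice(actions))       # explore
        else:
            action = max(actions, key=lambda f: Q[(state, f)])  # exploit
        nxt = state | {action}
        reward = score(nxt) - score(state)
        future = 0.0 if len(nxt) == k else max(
            Q[(nxt, f)] for f in range(6) if f not in nxt)
        Q[(state, action)] += alpha * (reward + gamma * future
                                       - Q[(state, action)])
        state = nxt

# Greedy rollout with the learned values recovers the informative pair.
state = frozenset()
for _ in range(k):
    best = max((f for f in range(6) if f not in state),
               key=lambda f: Q[(state, f)])
    state = state | {best}
print(sorted(state))
```

The sketch deliberately omits everything that makes the actual contribution interesting (the function approximator, the graph-based state encoding, and redundancy penalties); it only shows the subset-as-state, add-feature-as-action framing.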


Zhou Ni

Bridging Federated Learning and Wireless Networks: From Adaptive Learning to FL-driven System Optimization

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Fengjun Li
Van Ly Nguyen
Han Wang
Shawn Keshmiri

Abstract

Federated learning (FL) has emerged as a promising distributed machine learning framework that enables multiple devices to collaboratively train models without sharing raw data, thereby preserving privacy and reducing the need for centralized data collection. However, deploying FL in practical wireless environments introduces two major challenges. First, the data generated across distributed devices are often heterogeneous and non-IID, which makes a single global model insufficient for many users. Second, learning performance in wireless systems is strongly affected by communication constraints such as interference, unreliable channels, and dynamic resource availability. This PhD research aims to address these challenges by bridging FL methods and wireless networks.

In the first thrust, we develop personalized and adaptive FL methods given the underlying wireless link conditions. To this end, we propose channel-aware neighbor selection and similarity-aware aggregation in wireless device-to-device (D2D) learning environments. We further investigate the impacts of partial model update reception on FL performance. The overarching goal of the first thrust is to enhance FL performance under wireless constraints.

Next, we investigate the opposite direction and raise the question: how can FL-based distributed optimization be used for the design of next-generation wireless systems? To this end, we investigate communication-aware participation optimization in vehicular networks, where wireless resource allocation affects the number of clients that can successfully contribute to FL. We further extend this direction to integrated sensing and communication (ISAC) systems, where personalized FL (PFL) is used to support distributed beamforming optimization with joint sensing and communication objectives.

Overall, this research establishes a unified framework for bridging FL and wireless networks. As a future direction, this work will be extended to more realistic ISAC settings with dynamic spectrum access, where communication, sensing, scheduling, and learning performance must be considered jointly.


Past Defense Notices

Brian McClannahan

Classification of Noncoding RNA Families using Deep Convolutional Neural Network

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Richard Wang

Abstract

In the last decade, the discovery of noncoding RNA (ncRNA) has exploded. Classifying these ncRNA is critical to determining their function. This thesis proposes a new method employing deep convolutional neural networks (CNNs) to classify ncRNA sequences. To this end, this thesis first proposes an efficient approach to convert the RNA sequences into images characterizing their base-pairing probability. As a result, classifying RNA sequences is converted to an image classification problem that can be efficiently solved by available CNN-based classification models. This thesis also considers the folding potential of the ncRNAs in addition to their primary sequence. Based on the proposed approach, a benchmark image classification dataset is generated from the RFAM database of ncRNA sequences. In addition, three classical CNN models and three Siamese network models have been implemented and compared to demonstrate the superior performance and efficiency of the proposed approach. Extensive experimental results show the great potential of using deep learning approaches for RNA classification.
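A crude sketch of the sequence-to-image idea, substituting a binary could-pair matrix for the base-pairing probabilities used in the thesis (which would come from a partition-function folding model rather than this simple complementarity check):

```python
import numpy as np

# Watson-Crick pairs plus the G-U wobble pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pairing_image(seq, min_loop=3):
    """Crude stand-in for a base-pairing probability matrix: a binary
    N x N image marking which positions *could* pair, separated by at
    least a minimal hairpin loop, ignoring thermodynamics entirely."""
    n = len(seq)
    img = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(i + min_loop + 1, n):
            if (seq[i], seq[j]) in PAIRS:
                img[i, j] = img[j, i] = 1.0
    return img

img = pairing_image("GGGAAAUCCC")
print(img.shape)                 # a square 2-D array, ready as CNN input
assert np.allclose(img, img.T)   # symmetric, like a true pairing matrix
```

The key point from the abstract survives even in this toy: once the sequence and its folding potential are encoded as a 2-D image, ncRNA family classification reduces to ordinary image classification.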


Waqar Ali

Deterministic Scheduling of Real-Time Tasks on Heterogeneous Multicore Platforms

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Heechul Yun, Chair
Esam Eldin Mohamed Aly
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri

Abstract

In recent years, the problem of real-time scheduling has become both more important and more complicated. The former is due to the proliferation of safety-critical systems into our day-to-day lives, and the latter is caused by the escalating demand for high performance, which is driving multicore architectures toward consolidating various kinds of heterogeneous computing resources into smaller and smaller SoCs. Motivated by these trends, this dissertation tackles the following fundamental question: how can we guarantee predictable real-time execution while preserving high utilization on heterogeneous multicore SoCs?

 

This dissertation presents new real-time scheduling techniques for predictable and efficient scheduling of mixed-criticality workloads on heterogeneous SoCs. The contributions of this dissertation include the following: (1) a novel CPU-GPU scheduling framework, called BWLOCK++, that ensures predictable execution of critical GPU kernels on integrated CPU-GPU platforms; (2) a novel gang scheduling framework, called RT-Gang, that guarantees deterministic execution of parallel real-time tasks on the multicore CPU cluster of a heterogeneous SoC; (3) optimal and heuristic algorithms for gang formation that increase real-time schedulability under the RT-Gang framework, and their extension to incorporate scheduling on accelerators in a heterogeneous SoC; and (4) a case-study evaluation using an open-source autonomous driving application that demonstrates the analytical and practical benefits of the proposed scheduling techniques.


Josiah Gray

Implementing TPM Commands in the Copland Remote Attestation Language

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Perry Alexander, Chair
Andy Gill
Bo Luo


Abstract

So much of what we do on a daily basis is dependent on computers: email, social media, online gaming, banking, online shopping, virtual conference calls, and general web browsing to name a few. Most of the devices we depend on for these services are computers or servers that we do not own, nor do we have direct physical access to. We trust the underlying network to provide access to these devices remotely. But how do we know which computers/servers are safe to access, or verify that they are who they claim to be? How do we know that a distant server has not been hacked and compromised in some way?

Remote attestation is a method for establishing trust between remote systems. An "appraiser" can request information from a "target" system. The target responds with "evidence" consisting of run-time measurements, configuration information, and/or cryptographic information (i.e. hashes, keys, nonces, or other shared secrets). The appraiser can then evaluate the returned evidence to confirm the identity of the remote target, as well as determine some information about the operational state of the target, to decide whether or not the target is trustworthy.
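A minimal nonce-challenge sketch of this exchange, using an HMAC over a software key as a stand-in for TPM-backed signing (all names are illustrative; a real TPM never releases its attestation key):

```python
import hashlib
import hmac
import secrets

# Pre-shared key standing in for the TPM-protected attestation key.
ATTESTATION_KEY = secrets.token_bytes(32)

def appraiser_request():
    """Appraiser issues a fresh nonce to prevent replay of old evidence."""
    return secrets.token_bytes(16)

def target_attest(nonce, measurement):
    """Target returns evidence: a measurement plus a keyed digest binding
    that measurement to this particular nonce."""
    digest = hmac.new(ATTESTATION_KEY, nonce + measurement, hashlib.sha256)
    return measurement, digest.digest()

def appraise(nonce, evidence, expected_measurement):
    """Appraiser checks the digest and compares against the known-good value."""
    measurement, digest = evidence
    good = hmac.new(ATTESTATION_KEY, nonce + expected_measurement,
                    hashlib.sha256).digest()
    return hmac.compare_digest(digest, good) and measurement == expected_measurement

golden = hashlib.sha256(b"trusted boot image v1").digest()
nonce = appraiser_request()
print(appraise(nonce, target_attest(nonce, golden), golden))        # trusted
print(appraise(nonce, target_attest(nonce, b"tampered"), golden))   # rejected
```

The thesis's contribution sits a level above this: Copland phrases describe *which* measurements to take and in what order, with the TPM providing the root of trust that makes the returned evidence believable.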

A tool that may prove useful in remote attestation is the TPM, or "Trusted Platform Module". The TPM is a dedicated microcontroller that comes built-in to nearly all PC and laptop systems produced today. The TPM is used as a root of trust for storage and reporting, primarily through integrated cryptographic keys. This root of trust can then be used to assure the integrity of stored data or the state of the system itself. In this thesis, I will explore the various functions of the TPM and how they may be utilized in the development of the remote attestation language, "Copland".


Gordon Ariho

Multipass SAR Processing for Ice Sheet Vertical Velocity and Tomography Measurements and Application of Reduced Rank MMSE to Spectrally Efficient Radar Design

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Jim Stiles, Chair
John Paden (Co-Chair)
Shannon Blunt
Carl Leuschen
Emily Arnold

Abstract

First Topic: Ice sheets impact sea-level change, and hence their response to climatic variations needs to be continually monitored and studied. We propose to apply multipass differential interferometric synthetic aperture radar (DInSAR) techniques to data from the Multichannel Coherent Radar Depth Sounder (MCoRDS) to measure the vertical displacement of englacial layers within an ice sheet. DInSAR’s accuracy in monitoring ground displacement along the radar line of sight (LOS) is usually on the order of a small fraction of the wavelength (e.g., millimeter to centimeter precision is common). In the case of ice sheet internal layers, vertical displacement is estimated by compensating for the spatial baseline using precise trajectory information and estimates of the cross-track layer slope from direction-of-arrival analysis. Preliminary results from a high-accumulation region near Camp Century in northwest Greenland and from Summit Station in central Greenland are presented here. We propose to extend this work by implementing a maximum likelihood estimator that jointly estimates the vertical velocity, the cross-track internal layer slope, and the unknown baseline error due to GPS and INS errors. The multipass algorithm will be applied to additional flights from the decade-long NASA Operation IceBridge airborne mission, which flew MCoRDS on many repeated flight tracks. We also propose to improve the accuracy of tomographic swaths produced from multipass measurements and to investigate the possibility of using focusing matrices to improve wideband tomographic processing.
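The phase-to-displacement relation behind the sub-wavelength precision claim can be sketched numerically; the ~195 MHz center frequency and free-space wavelength below are assumptions for illustration (propagation in ice shortens the wavelength by the refractive index, and the sign convention varies by processor):

```python
import numpy as np

c = 299_792_458.0       # speed of light, m/s
f0 = 195e6              # assumed VHF center frequency for illustration
wavelength = c / f0     # ~1.54 m in free space

def los_displacement(dphi):
    """LOS displacement from a repeat-pass interferometric phase change.
    Two-way propagation gives one 4*pi phase cycle per wavelength of motion."""
    return wavelength * dphi / (4 * np.pi)

# A layer sinking 1 cm between passes produces a small but measurable
# phase change, a tiny fraction of a full fringe:
dphi = 4 * np.pi * 0.01 / wavelength
print(f"{los_displacement(dphi) * 100:.2f} cm from {dphi:.4f} rad")
```

This is why the hard part, per the abstract, is not the phase-to-displacement conversion itself but separating the displacement phase from baseline, trajectory, and cross-track slope contributions.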

Second Topic: With the increased demand for bandwidth-hungry applications in the telecommunications industry, radar applications can no longer enjoy generous frequency allocations within the UHF band. Spectral efficiency, if achievable, frees portions of the radar bandwidth to facilitate spectrum sharing between radar and other wireless systems. A decrease in bandwidth, however, degrades radar resolution. In certain scenarios, reduced resolution is acceptable, and bandwidth may be traded for spectral efficiency. An iterative reduced-rank MMSE algorithm based on marginal Fisher information is proposed and investigated to minimize the loss of resolution, with the tradeoff of degraded sidelobe performance. The algorithm is applied to the radar measurement model with simulated range profiles, and the performance results are discussed.


Kishanram Kaje

Complex Field Modulation in Direct Detection Systems

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Rongqing Hui, Chair
Christopher Allen
Victor Frost
Erik Perrins
Jie Han

Abstract

Even though fiber optic communication provides a high-bandwidth channel for high-speed data transmission, ever-increasing data and media traffic still demands higher spectral efficiency and faster data processing with reduced resource requirements. Various multilevel modulation and demodulation techniques are used to improve spectral efficiency. Although spectral efficiency is improved, other challenges arise in doing so, such as the requirement for high-speed electronics, receiver sensitivity, chromatic dispersion, and operational flexibility. Here, we investigate high-speed complex field modulation techniques in direct detection systems to improve spectral efficiency while focusing on reducing the resources required for implementation and on compensating for linear and nonlinear impairments in fiber optic communication systems.

We first demonstrated a digital-analog hybrid subcarrier multiplexing (SCM) technique that reduces the need for high-speed electronics such as ADCs and DACs while providing wideband capability, high spectral efficiency, operational flexibility, and controllable data-rate granularity.

With conventional Quadrature Phase Shift Keying (QPSK), achieving maximum spectral efficiency requires highly spectrally efficient Nyquist filters, which consume substantial FPGA resources for digital signal processing (DSP). Hence, we investigated Quadrature Duobinary (QDB) modulation as a way to reduce the FPGA resources required for DSP while achieving a spectral efficiency of 2 bits/s/Hz. Currently, we are investigating an all-analog single-sideband (SSB) complex field-modulated direct detection system. Here, we aim to achieve higher spectral efficiency by using the QDB modulation scheme in place of QPSK while avoiding signal-signal beat interference (SSBI) through a guard-band-based approach.
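One rail of the QDB idea can be sketched with the textbook precoded duobinary encoder and its memoryless symbol-by-symbol decision (an illustrative digital form, applied separately to each quadrature; not the analog implementation investigated here):

```python
import numpy as np

def duobinary_encode(bits):
    # Precoder avoids error propagation: p[k] = bits[k] XOR p[k-1]
    p = np.zeros(len(bits) + 1, dtype=int)
    for k, b in enumerate(bits):
        p[k + 1] = b ^ p[k]
    # The (1 + D) duobinary filter produces 3-level symbols in {0, 1, 2},
    # halving the required bandwidth relative to binary signaling.
    return p[1:] + p[:-1]

def duobinary_decode(symbols):
    # Thanks to the precoder, a symbol-by-symbol decision suffices:
    # even level -> 0, odd level -> 1
    return symbols % 2

bits = np.array([1, 0, 1, 1, 0, 0, 1])
sym = duobinary_encode(bits)
print(sym)                     # 3-level duobinary symbols
print(duobinary_decode(sym))   # recovers the original bits
```

The controlled intersymbol interference introduced by the (1 + D) filter is what compresses the spectrum, and the precoder is what makes the simple modulo-2 decision possible without error propagation.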

Another topic we investigated, both through simulation and experiments, is a way to compensate for nonlinearities generated by semiconductor optical amplifiers (SOAs) operated in gain saturation in field-modulated direct detection systems. We successfully compensated for the SOA nonlinearities in the presence of fiber chromatic dispersion, which was post-compensated using electronic dispersion compensation after restoring the phase information of the received signal with a Kramers-Kronig receiver.
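The Kramers-Kronig phase-recovery step can be sketched numerically (a simplified illustration assuming a strong carrier that guarantees the minimum-phase condition; practical receivers add SSB filtering, upsampling, and resampling):

```python
import numpy as np

def hilbert_imag(x):
    """Imaginary part of the analytic signal of real x (FFT-based Hilbert)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:N // 2] = 2
    h[N // 2] = 1          # even-length convention, as in scipy.signal.hilbert
    return np.imag(np.fft.ifft(X * h))

# For a minimum-phase signal, phase = Hilbert transform of log-amplitude,
# so the full complex field is recoverable from intensity alone.
N = 1024
t = np.arange(N) / N
s = 1.0 + 0.3 * np.exp(2j * np.pi * 8 * t)   # strong carrier + one signal tone
intensity = np.abs(s)**2                      # what a photodiode detects
phase_kk = hilbert_imag(0.5 * np.log(intensity))
err = np.max(np.abs(phase_kk - np.angle(s)))
print(err)  # small reconstruction error
```

Recovering the phase this way is what allows chromatic dispersion (a linear, phase-domain impairment) to be undone electronically after direct detection.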


Theresa Moore

Array Manifold Calibration for Multichannel SAR Sounders

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

James Stiles, Chair
John Paden (Co-Chair)
Shannon Blunt
Carl Leuschen
Leigh Stearns

Abstract

Multichannel synthetic aperture radar (SAR) sounders with cross-track antenna arrays map ice sheet basal morphology in three dimensions with a single pass using tomography.  The tomographic ice-sheet imaging method leverages parametric direction-finding techniques like the Maximum Likelihood Estimator and the Multiple Signal Classification algorithm to resolve scattering interfaces in elevation.  These techniques have received considerable attention because of their potential to exceed the Rayleigh resolution limit of the receive array under certain conditions.  This performance is predicated on having perfect knowledge of the frequency-dependent response of the array to directional sources, referred to as the array manifold.  Even modest amounts of mismatch between the assumed and actual manifold model degrade the accuracy of parametric angle estimators and erode their sought-after superresolution potential.
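The subspace idea behind MUSIC can be sketched for an idealized uniform linear array (a textbook illustration with a perfectly known manifold; the sounder's actual array is wideband and its manifold must be calibrated, which is the point of this work):

```python
import numpy as np

# Minimal MUSIC sketch: one source at 12 degrees, 8-element half-wavelength ULA.
M, spacing = 8, 0.5               # sensors, spacing in wavelengths
theta_true = np.deg2rad(12.0)

def steering(theta):
    return np.exp(2j * np.pi * spacing * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(1)
snaps = 200
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(steering(theta_true), s)
X += 0.05 * (rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))

R = X @ X.conj().T / snaps            # sample covariance
w, V = np.linalg.eigh(R)              # eigenvalues ascending
En = V[:, :-1]                        # noise subspace (one source assumed)

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal
# to the noise subspace -- this is where manifold errors hurt most.
grid = np.deg2rad(np.linspace(-60, 60, 2401))
P = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t))**2 for t in grid])
theta_hat = np.rad2deg(grid[np.argmax(P)])
print(theta_hat)  # close to 12 degrees
```

If the assumed steering vectors in `steering` are perturbed relative to the true array response, the orthogonality condition degrades and the peak biases or broadens, which is exactly the mismatch sensitivity described above.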

 

Array manifold calibration refers to the step in the array processor of refining our representation of the directional array-response vectors by accounting for factors such as mutual coupling, geometric uncertainties, and channel-to-channel gain imbalances.  Pilot calibration requires measuring the in-situ array over its field of view and storing the manifold in a look-up table.  Alternatively, the array transfer function may be modeled parametrically to leverage an estimation framework for characterizing mismatch.  Parametric calibration theory for sensor position perturbations has been established for several decades.  However, there remains a marked disconnect between the signal processing and antennas communities regarding how to include mutual coupling within the parametric framework.  To date, the literature lacks validated studies that address parameterization of the embedded element patterns for direction-finding arrays.

 

A manifold calibration methodology is proposed for an airborne, multichannel ice-penetrating SAR.  The methodology departs from conventional approaches by extracting calibration targets from SAR imagery of well-understood terrain to empirically characterize the directional responses of the integrated array's embedded element patterns.  This work presents a Maximum Likelihood Estimator for nonlinear parameters common across disjoint calibration sets that has the potential to improve the accuracy of our estimated geometric uncertainties by increasing the total Fisher information in our observations.  The investigation contributes to specific gaps in array signal processing and remote sensing literature by treating the unique challenge of calibrating in-situ arrays used in direction-finding applications.


Dung Viet Nguyen

Particle Swarm Deep Reinforcement Learning for Base Station Optimization in Urban Areas

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Taejoon Kim, Chair
Morteza Hashemi
Heechul Yun


Abstract

Densifying the network by deploying many small cells has attracted significant interest from the wireless industry for its potential to facilitate the many data-intensive use cases proposed for fifth-generation (5G) networks. While such efforts are essential, there are gaps between fundamental research and practical deployment of small cells. Increased interference from adjacent cells, called intercell interference, is the major limiting factor. To address this issue, each base station's parameters should be properly controlled to mitigate intercell interference. We call the task of designing these parameters the base station optimization (BSO) problem. Due to the large number of small cells and mobile users distributed over the network, solving BSO by precisely modeling the network conditions is almost infeasible. One popular approach that has recently attracted many researchers is a data-driven framework: machine learning (ML). While supervised ML is prevalent, it requires pre-labeled offline data that are not available in many wireless scenarios. Unlike supervised ML, reinforcement learning (RL) can handle this situation because it learns a policy that balances exploration and exploitation without a pre-labeled training dataset. Thus, in this work, we present a new approach to BSO based on deep reinforcement learning (DRL) to enhance the quality of service (QoS) experienced by mobile users. To speed up the exploration of DRL, we employ particle swarm optimization (PSO), which yields improved QoS and convergence compared to conventional DRL.
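The PSO component can be illustrated with a generic textbook swarm update on a toy objective (a hypothetical surrogate for the network QoS cost; the parameter names, dimensions, and coefficients are illustrative, not taken from the work itself):

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(p):
    # Toy surrogate: quadratic bowl with optimum at (3, -1), standing in for
    # an interference/QoS cost over two base-station parameters.
    return np.sum((p - np.array([3.0, -1.0]))**2, axis=-1)

n_particles, dim, iters = 20, 2, 60
w_in, c1, c2 = 0.7, 1.5, 1.5        # inertia, cognitive, social weights
x = rng.uniform(-10, 10, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_cost = cost(x)
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity update: pull toward each particle's best and the swarm's best.
    v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    c = cost(x)
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = x[better], c[better]
    gbest = pbest[np.argmin(pbest_cost)]

print(gbest)  # near [3, -1]
```

In the hybrid scheme described above, swarm candidates like these would steer the DRL agent's exploration toward promising regions of the parameter space instead of uniformly random actions.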


Dalton Hahn

Delving Into DevOps: Examining the Security Posture of State-of-Art Service Mesh Tools

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li


Abstract

The explosion in the use of containers and a shift in software engineering from monolithic applications to a microservice model have driven a need for software solutions that can manage the deployment and networking of microservices at enterprise scale. Service meshes have emerged as a promising solution to the microservice eruption that enterprise software is currently experiencing. This work examines service meshes from the perspective of the security solutions offered within these tools and how the available security mechanisms impact the original goals of service meshes. As part of this study, we propose a threat model relevant to the service mesh domain and consider two different configuration levels for these tools. The first is the “idealized” configuration, in which a system administrator has deep knowledge and the ability to properly configure and enable all available security mechanisms within a service mesh. The second is the default configuration deployment of service meshes. Through this work, we consider a range of adversarial approaches and scenarios that comprehensively cover the available attack surface of service meshes. Our experimental results show a distinct lack of security support in service meshes deployed under default configurations; additionally, in many of the idealized scenarios studied, attackers can still achieve some of their adversarial goals, leaving tempting targets in place.


Calen Carabajal

Development of Compact UWB Transmit Receive Modules and Filters on Liquid Crystal Polymer for Radar

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Carl Leuschen, Chair
Fernando Rodriguez-Morales (Co-Chair)
Christopher Allen


Abstract

Microwave and mm-wave radars have been used extensively for remote sensing applications, and ultra-wideband (UWB) radars have provided particular utility in geophysical research due to their ability to resolve narrowly spaced targets or media interface levels. With the increased availability of unmanned aircraft systems (UAS) and the expanded application of microwave radars into other realms requiring portability, miniaturization of radar systems is a crucial goal. This thesis presents the design and development of various microwave components for a compact, airborne snow-probing radar with multi-gigahertz bandwidth and cm-scale vertical resolution.

 
First, a set of UWB, compact transmit and receive modules with custom power sequencing circuits is presented. These modules were rapid-prototyped as an initial step toward the miniaturization of the radar’s front-end, using a combination of custom and COTS circuits. The transmitter and receiver modules operate in the 2–18 GHz range. Laboratory and field tests are discussed, demonstrating performance that is comparable to previous, connectorized implementations of the system, while accomplishing a 5:1 size reduction.

 

Next, a set of miniaturized band-pass and low-pass filters is developed and demonstrated. This work addressed the lack of COTS circuits with adequate performance in a sufficiently small form factor that is compatible with the planar integration required in a multi-chip module.

 

The filters presented here were designed for manufacture on a multi-layer liquid crystal polymer (LCP) substrate. A detailed trade study to assess the effects of potential manufacturing tolerances is presented. A framework for the automated creation of panelized design variations was developed using CAD tools. Thirty-two design variations with two different types of launches (microstrip and grounded co-planar waveguide) were successfully simulated, fabricated, and tested, showing good electrical performance both as individual filters and when cascaded to offer outstanding out-of-band rejection. The size of the new filters is 1 cm × 1 cm × 150 µm, a vertical reduction of over 90% and a reduction of total cascaded length by over 80%.
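A flavor of such a tolerance trade study can be sketched with a lumped low-pass prototype (a generic textbook Butterworth ladder at an assumed 18 GHz cutoff; the actual thesis filters are distributed LCP structures, so this is only an analogy for how element tolerances shift the response):

```python
import numpy as np

def butterworth_elements(n, f_c, R0=50.0):
    k = np.arange(1, n + 1)
    g = 2 * np.sin((2 * k - 1) * np.pi / (2 * n))   # prototype g-values
    w_c = 2 * np.pi * f_c
    # Ladder starting with a series L: odd positions -> L, even -> C
    return [('L', gk * R0 / w_c) if i % 2 == 0 else ('C', gk / (R0 * w_c))
            for i, gk in enumerate(g)]

def s21_db(elements, f, R0=50.0):
    # Cascade ABCD matrices of the ladder, then convert to |S21| in dB.
    w = 2 * np.pi * f
    A = np.eye(2, dtype=complex)
    for kind, val in elements:
        M = (np.array([[1, 1j * w * val], [0, 1]]) if kind == 'L'
             else np.array([[1, 0], [1j * w * val, 1]]))
        A = A @ M
    s21 = 2 / (A[0, 0] + A[0, 1] / R0 + A[1, 0] * R0 + A[1, 1])
    return 20 * np.log10(abs(s21))

elems = butterworth_elements(5, 18e9)
nominal = s21_db(elems, 18e9)          # about -3 dB at the design cutoff
# A uniform +5% element error shifts the cutoff down, raising loss at 18 GHz:
worse = [(k, 1.05 * v) for k, v in elems]
print(nominal, s21_db(worse, 18e9))
```

Sweeping such perturbations over the expected fabrication tolerances is the lumped-element analogue of the dimensional trade study described above.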


Kunal Karnik

Augment drone GPS telemetry data onto its Optical Flow lines

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Andy Gill, Chair
Drew Davidson
Prasad Kulkarni


Abstract

Optical flow is the apparent displacement of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and the scene. This apparent displacement (parallax) is used to render optical flow lines for such objects, which hold valuable information about their motion. In this research, we apply this technique to study a video file. We locate pixels belonging to objects with strong optical-flow displacements, which enables us to identify an aerial multirotor craft (drone) among the candidate object pixels. Further, we not only mark the drone’s path using optical flow lines, but also add value to the video file by augmenting the drone’s 3D telemetry data onto its optical flow lines.
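The displacement estimate at the heart of this pipeline can be sketched with a minimal global Lucas-Kanade solve (a simplified stand-in: it recovers one dominant motion vector from image gradients, whereas a real pipeline computes per-pixel flow, typically with a library such as OpenCV):

```python
import numpy as np

def lucas_kanade(f0, f1):
    # Brightness-constancy constraint: Ix*dx + Iy*dy + It = 0 at every pixel,
    # solved in least squares over the whole frame for one (dx, dy).
    Ix = np.gradient(f0, axis=1)
    Iy = np.gradient(f0, axis=0)
    It = f1 - f0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (dx, dy) in pixels

# Synthetic frames: a smooth blob shifted by one pixel to the right.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx)**2 + (y - cy)**2) / 50.0)
f0, f1 = blob(30, 32), blob(31, 32)
flow = lucas_kanade(f0, f1)
print(flow)  # roughly (1, 0)
```

Pixels whose local flow magnitude stands out against the background motion are the candidates from which the drone is identified, and chaining these per-frame vectors yields the flow lines onto which the telemetry is rendered.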