Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, run a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments that suggest most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
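The kind of measurement involved can be sketched briefly. Linux exposes per-mapping memory fields (Pss, Rss, Swap, etc.) in /proc/[pid]/smaps; summing a field across all mappings gives a whole-process figure. This is an illustrative sketch only, not the thesis's smaps-profiler tool:

```python
import re

def total_kb(smaps_text, field="Pss"):
    """Sum one per-mapping field (e.g. Pss, Rss, Swap) across an smaps dump."""
    pattern = re.compile(r"^%s:\s+(\d+) kB" % re.escape(field), re.MULTILINE)
    return sum(int(kb) for kb in pattern.findall(smaps_text))

# Two mappings excerpted in the /proc/<pid>/smaps format
sample = """00400000-00452000 r-xp 00000000 08:02 173521 /usr/bin/example
Size:                328 kB
Rss:                 300 kB
Pss:                 100 kB
7f0000000000-7f0000021000 rw-p 00000000 00:00 0
Size:                132 kB
Rss:                  40 kB
Pss:                  40 kB
"""
print(total_kb(sample, "Pss"))  # 140
```

In practice the text would come from reading /proc/&lt;pid&gt;/smaps directly. Pss (proportional set size) splits shared pages among the processes mapping them, which is one of the measurements that plain RSS-based profilers miss.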


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
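Install-time attacks like those in (ii) hinge on npm's lifecycle scripts (preinstall, install, postinstall), which npm runs automatically during installation. A minimal, hypothetical checker that flags such scripts in a package manifest might look like this (a sketch, not the tooling presented in the dissertation):

```python
import json

# npm runs these lifecycle hooks automatically during `npm install`,
# making them the vector for install-time attacks.
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def install_scripts(package_json_text):
    """Return any install-time lifecycle scripts declared in a package.json."""
    manifest = json.loads(package_json_text)
    return {name: cmd
            for name, cmd in manifest.get("scripts", {}).items()
            if name in INSTALL_HOOKS}

manifest = '{"name": "demo-pkg", "scripts": {"preinstall": "node setup.js", "test": "jest"}}'
print(install_scripts(manifest))  # {'preinstall': 'node setup.js'}
```

npm's --ignore-scripts flag disables these hooks at install time, which is one blunt mitigation.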


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that designs the radar component of a PARC signal to match the expected power spectral density (PSD) of the combined PARC DFRC waveform to a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth from the communications signal.
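The phase-attachment idea can be illustrated with a short numpy sketch. The chirp rate, constellation, and sample counts below are arbitrary stand-ins, not the optimized waveforms of the thesis; the point is that the radar and communications phases add, so the combined signal keeps a constant envelope:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                                    # samples per pulse (illustrative)
t = np.arange(N) / N

theta_radar = np.pi * 200.0 * t**2          # radar component: LFM (chirp) phase
symbols = rng.integers(0, 4, size=N // 16)  # communications component: QPSK phases
theta_comm = np.repeat(symbols * np.pi / 2, 16)

# Phase attachment: the phases add, so |s| stays constant (amplifier-friendly)
s = np.exp(1j * (theta_radar + theta_comm))
print(np.allclose(np.abs(s), 1.0))  # True
```

The constant envelope is what makes the modulation compatible with high-power, saturated radar amplifiers.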

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Qua Nguyen

Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for link.

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong

Abstract

This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.

In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while fixing the PS values and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.

The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially-private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges, because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We aim to improve the learning utility of privacy-preserving FL that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
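As a generic sketch of the kind of mechanism involved, here is the standard clip-then-add-Gaussian-noise step for a model update, with a round-dependent noise scale echoing the time-varying variance idea. The schedule and constants are illustrative, not the dissertation's jointly optimized design:

```python
import numpy as np

def privatize_update(update, clip_norm, sigma, rng):
    """Clip an update to L2 norm <= clip_norm, then add Gaussian noise."""
    scale = min(1.0, clip_norm / max(np.linalg.norm(update), 1e-12))
    clipped = update * scale
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

rng = np.random.default_rng(1)
update = rng.normal(size=100)   # one client's model update

# Time-varying noise scale across FL rounds (illustrative schedule)
noisy_rounds = [privatize_update(update, clip_norm=1.0,
                                 sigma=1.0 / np.sqrt(t + 1), rng=rng)
                for t in range(3)]
print(noisy_rounds[0].shape)  # (100,)
```

Clipping bounds each client's influence (the sensitivity), which is what ties the noise variance to a DP guarantee.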

This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effect on the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with relatively low impact of RIN transfer, our research is focused on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st and 2nd order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st order incoherent pump together with a 2nd order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd order pump has a much higher impact than that of the 1st order pump, there should be a more stringent requirement on the RIN of the 2nd order pump laser when a dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication.

Also, the system performance analysis reveals that higher baud rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

As machine learning (ML), artificial intelligence (AI), and deep learning continue to advance, their applications become more diverse; one such application is synthetic aperture radar (SAR) automatic target recognition (ATR). These SAR ATR networks use different forms of deep learning, such as convolutional neural networks (CNN), to classify targets in SAR imagery. An emerging research area of SAR is dual function radar communication (DFRC), which performs both radar and communications functions using a single co-designed modulation. The utilization of DFRC emissions for SAR imaging impacts image quality, thereby influencing SAR ATR network training. Here, using the Civilian Vehicle Data Dome dataset from the AFRL, SAR ATR networks are trained and evaluated with simulated data generated using Gaussian Minimum Shift Keying (GMSK) and Linear Frequency Modulation (LFM) waveforms. The networks are used to compare how the target classification accuracy of the ATR network differs between DFRC (i.e., GMSK) and baseline (i.e., LFM) emissions. Furthermore, as is common in pulse-agile transmission structures, an effect known as 'range sidelobe modulation' is examined, along with its impact on SAR ATR. Finally, it is shown that a SAR ATR network can be trained for GMSK emissions using existing LFM datasets via two types of data augmentation.


Past Defense Notices


Naveed Mahmud

Towards Complete Emulation of Quantum Algorithms using High-Performance Reconfigurable Computing

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Prasad Kulkarni
Heechul Yun
Tyrone Duncan

Abstract

Quantum computing is a promising technology that can potentially demonstrate supremacy over classical computing in solving specific problems. At present, two critical challenges for quantum computing are quantum state decoherence and the low scalability of current quantum devices. Decoherence places constraints on the realistic applicability of quantum algorithms, as real-life applications usually require complex equivalent quantum circuits to be realized. For example, encoding classical data on quantum computers for solving I/O- and data-intensive applications generally requires quantum circuits that violate decoherence constraints. In addition, current quantum devices are small-scale, having low quantum bit (qubit) counts, and often produce inaccurate or noisy measurements, which also impacts the realistic applicability of real-world quantum algorithms. Consequently, benchmarking of existing quantum algorithms and investigation of new applications are heavily dependent on classical simulations that use costly, resource-intensive computing platforms. Hardware-based emulation has been proposed as a more cost-effective and power-efficient alternative. This work proposes a hardware-based emulation methodology for quantum algorithms using cost-effective Field-Programmable Gate Array (FPGA) technology. The proposed methodology consists of three components that are required for complete emulation of quantum algorithms: the first component models classical-to-quantum (C2Q) data encoding, the second emulates the behavior of quantum algorithms, and the third models the process of measuring the quantum state and extracting classical information, i.e., quantum-to-classical (Q2C) data decoding. The proposed emulation methodology is used to investigate and optimize methods for C2Q/Q2C data encoding/decoding, as well as several important quantum algorithms such as the Quantum Fourier Transform (QFT), the Quantum Haar Transform (QHT), and Quantum Grover's Search (QGS).

This work delivers contributions in terms of reducing the complexity of quantum circuits, extending and optimizing quantum algorithms, and developing new quantum applications. For higher emulation performance and scalability of the framework, hardware design techniques and architectural optimizations are investigated and proposed. The emulation architectures are designed and implemented on a high-performance reconfigurable computer (HPRC), and the proposed quantum circuits are implemented on a state-of-the-art quantum processor. Experimental results show that the proposed hardware architectures enable emulation of quantum algorithms with higher scalability, higher accuracy, and higher throughput compared to existing hardware-based emulators. As a case study, quantum image processing using multi-spectral images is considered for the experimental evaluations.
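Classically emulating an algorithm like the QFT amounts to applying its unitary to a state vector. A dense textbook sketch in software (illustrative only, not the FPGA architecture of the dissertation):

```python
import numpy as np

def qft_matrix(n_qubits):
    """Dense unitary matrix of the Quantum Fourier Transform on n_qubits."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

n = 3
F = qft_matrix(n)
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                               # basis state |000>

out = F @ state                              # QFT maps |000> to a uniform superposition
print(np.allclose(np.abs(out), 1 / np.sqrt(2 ** n)))  # True
```

The dense matrix has 4^n entries, which is one reason classical simulation is so resource-intensive and why hardware emulators exploit circuit structure instead.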


Cecelia Horan

Open-Source Intelligence Investigations: Development and Application of Efficient Tools

When & Where:


2001B Eaton Hall

Committee Members:

Hossein Saiedian, Chair
Drew Davidson
Fengjun Li


Abstract

Open-source intelligence is a branch within cybercrime investigation that focuses on information collection and aggregation. Through this aggregation, investigators and analysts can analyze the data for connections relevant to the investigation. There are many tools that assist with information collection and aggregation. However, these often require enterprise licensing. A solution to enterprise licensed tools is using open-source tools to collect information, often by scraping websites. These tools provide useful information, but they provide a large number of disjointed reports. The framework we developed automates information collection, aggregates these reports, and generates one single graphical report. By using a graphical report, the time required for analysis is also reduced. This framework can be used for different investigations. We performed a case study regarding the performance of the framework with missing person case information. It showed a significant improvement in the time required for information collection and report analysis. 


Ishrak Haye

Invernet: An Adversarial Attack Framework to Infer Downstream Context Distribution Through Word Embedding Inversion

When & Where:


Nichols Hall, Room 246

Committee Members:

Bo Luo, Chair
Zijun Yao, Co-Chair
Alex Bardas
Fengjun Li

Abstract

Word embedding has become a popular form of data representation that is used to train deep neural networks in many natural language processing tasks, such as Machine Translation, Question Answer Generation, Named Entity Recognition, and Next Word/Sentence Prediction. With embedding, each word is represented as a dense vector which captures its semantic relationship with other words and can better empower machine learning models to achieve state-of-the-art performance.

However, due to the memory- and time-intensive nature of learning such word embeddings, transfer learning has emerged as a common practice to warm-start the training process. As a result, an efficient approach is to initialize with pretrained word vectors and then fine-tune those on smaller, domain-specific downstream datasets. This study aims to find whether we can infer the contextual distribution (i.e., how words co-occur in a sentence, driven by syntactic regularities) of the downstream datasets, given that we have access to the embeddings from both the pre-training and fine-tuning processes.

In this work, we propose a focused sampling method along with a novel model inversion architecture, “Invernet”, to invert word embeddings into the word-to-word context information of the fine-tuned dataset. We consider the popular word2Vec (CBOW and SkipGram) and GloVe based algorithms with various unsupervised settings. We conduct an extensive experimental study on two real-world news datasets, Antonio Gulli's News Dataset from the Hugging Face repository and a New York Times dataset, from both quantitative and qualitative perspectives. Results show that “Invernet” achieves an average F1 score of 0.75 and an average AUC score of 0.85 in an attack scenario.

A concerning pattern from our experiments reveals that embedding models generally considered superior in different tasks tend to be more vulnerable to model inversion. Our results suggest that a significant amount of context distribution information from the downstream dataset can potentially leak if an attacker gains access to the pretrained and fine-tuned word embeddings. As a result, attacks using “Invernet” can jeopardize the privacy of the users whose data might have been used to fine-tune the word embedding model.
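The "word-to-word context information" targeted by the inversion is essentially a windowed co-occurrence distribution. A minimal sketch of computing that ground truth from a corpus (illustrative; this is the quantity an attacker tries to recover, not the “Invernet” architecture itself):

```python
from collections import Counter

def cooccurrence(sentences, window=2):
    """Count word pairs that co-occur within `window` positions of each other."""
    counts = Counter()
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                counts[tuple(sorted((w, words[j])))] += 1
    return counts

corpus = ["the cat sat", "the cat ran"]
counts = cooccurrence(corpus)
print(counts[("cat", "the")])  # 2
```

Word2Vec-style training objectives are driven by exactly these windowed pairs, which is why fine-tuned embeddings can leak them.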


Sohaib Kiani

Designing Secure and Robust Machine Learning Models

When & Where:


Nichols Hall, Room 250, Gemini Room

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Cuncong Zhong
Xuemin Tu

Abstract

With the growing computational power and the enormous data available from many sectors, applications with machine learning (ML) components are widely adopted in our everyday lives. One major drawback of ML models is that it is hard to guarantee the same performance in a changing environment, since ML models are not traditional software that can be tested end-to-end. ML models are vulnerable to distributional shifts and cyber-attacks. Various cyber-attacks against deep neural networks (DNN) have been proposed in the literature, such as poisoning, evasion, backdoor, and model inversion attacks. In evasion attacks against a DNN, the attacker generates adversarial instances that are visually indistinguishable from benign samples and sends them to the target DNN to trigger misclassifications.

In our work, we proposed a novel multi-view adversarial image detector, namely ‘Argos’, based on a novel observation: there exist two “souls” in an adversarial instance, i.e., the visually unchanged content, which corresponds to the true label, and the added invisible perturbation, which corresponds to the misclassified label. Such inconsistencies could be further amplified through an autoregressive generative approach that generates images with seed pixels selected from the original image, a selected label, and pixel distributions learned from the training data. The generated images (i.e., the “views”) will deviate significantly from the original one if the label is adversarial, demonstrating inconsistencies that ‘Argos’ expects to detect. To this end, ‘Argos’ first amplifies the discrepancies between the visual content of an image and its attack-induced misclassified label using a set of regeneration mechanisms, and then identifies an image as adversarial if the reproduced views deviate to a preset degree. Our experimental results show that ‘Argos’ significantly outperforms two representative adversarial detectors in both detection accuracy and robustness against six well-known adversarial attacks.
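The final decision step, flagging an input when the reproduced views deviate from it beyond a preset degree, can be sketched as follows. The views here are random stand-ins for the autoregressively regenerated images, and the threshold is arbitrary; the actual regeneration mechanisms are the core of ‘Argos’ and are not reproduced:

```python
import numpy as np

def is_adversarial(image, views, threshold):
    """Flag the input if regenerated views deviate from it beyond a preset degree."""
    deviations = [np.mean((v - image) ** 2) for v in views]
    return float(np.mean(deviations)) > threshold

rng = np.random.default_rng(0)
image = rng.random((8, 8))
# Consistent views (label matches content) stay close to the original;
# divergent views (adversarial label) drift far from it.
consistent_views = [image + rng.normal(0, 0.01, image.shape) for _ in range(3)]
divergent_views = [rng.random((8, 8)) for _ in range(3)]

print(is_adversarial(image, consistent_views, threshold=0.05))  # False
print(is_adversarial(image, divergent_views, threshold=0.05))   # True
```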


Timothy Barclay

Proof-Producing Synthesis of CakeML from Coq

When & Where:


Nichols Hall, Room 246

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Matthew Moore
Eileen Nutting

Abstract

Coq's extraction plugin is used to produce code in a general-purpose programming language from a specification written in the Calculus of Inductive Constructions (CIC). Currently, this mechanism is trusted, since there is no formal connection between the synthesized code and the CIC terms it originated from. This comes from a lack of formal specifications for the target languages: OCaml, Haskell, and Scheme. We intend to use the formally specified CakeML language as an extraction target, and to generate a theorem in Coq that relates the generated CakeML abstract syntax to the CIC terms it is generated from. This work expands on the techniques used in the HOL4 translator from Higher Order Logic to CakeML. The HOL4 translator also allows for the generation of stateful code from the state and exception monad. We expand on their techniques by extracting terms with dependent types, and by generating stateful code for other kinds of monads, like the reader monad, depending on what kind of computation the monad intends to represent.


Grant Jurgensen

A Verified Architecture for Trustworthy Remote Attestation

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Perry Alexander, Chair
Drew Davidson
Matthew Moore


Abstract

Remote attestation is a process where one digital system gathers and provides evidence of its state and identity to an external system. For this process to be successful, the external system must find the evidence convincingly trustworthy within that context. Remote attestation is difficult to make trustworthy due to the external system’s limited access to the attestation target. In contrast to local attestation, the appraising system is unable to directly observe and oversee the attestation target. In this work, we present a system architecture design and prototype implementation that we claim enables trustworthy remote attestation. Furthermore, we formally model the system within a temporal logic embedded in the Coq theorem prover and present key theorems that strengthen this trust argument.


Kaidong Li

Accurate and Robust Object Detection and Classification Based on Deep Neural Networks

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Cuncong Zhong, Chair
Taejoon Kim
Fengjun Li
Bo Luo
Haiyang Chao

Abstract

Recent years have seen tremendous developments in the field of computer vision and its extensive applications. The fundamental task, image classification, benefiting from the extraordinary ability of deep convolutional neural networks (CNN) to extract deep semantic information from input data, has become the backbone for many other computer vision tasks, like object detection and segmentation. A modern detector usually performs bounding-box regression and class prediction with a pre-trained classification model as the backbone. The architecture is proven to produce good results; however, improvements can be made upon closer inspection. A detector takes a pre-trained CNN from the classification task and selects the final bounding boxes from multiple proposed regional candidates by a process called non-maximum suppression (NMS), which picks the best candidates by ranking their classification confidence scores. Localization evaluation is absent from the entire process. Another issue is that classification uses one-hot encoding to label the ground truth, resulting in an equal penalty for misclassification between any two classes without considering the inherent relations between the classes.

My research aims to address the following issues. (1) We proposed the first location-aware detection framework for single-shot detectors, which can be integrated into any single-shot detector. It boosts detection performance by calibrating the ranking process in NMS with localization scores. (2) To more effectively back-propagate gradients, we designed a super-class guided architecture that consists of a super-class branch (SCB) and a finer class branch (FCB). To further increase effectiveness, the features from the SCB, with high-level information, are fed to the FCB to guide finer class predictions. (3) Recent works have shown that 3D point cloud models are extremely vulnerable to adversarial attacks, which poses a serious threat to many critical applications like autonomous driving and robotic controls. To increase the robustness of CNN models on 3D point clouds, we propose a family of robust structured declarative classifiers for point cloud classification, where the internal constrained optimization mechanism can effectively defend against adversarial attacks through implicit gradients.
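Point (1) changes the NMS ranking key from the classification score alone to a product with a localization score. A minimal sketch of that idea (the boxes and the predicted-IoU-style localization scores below are made up for illustration, not the framework's actual scoring head):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def location_aware_nms(boxes, cls_scores, loc_scores, iou_thresh=0.5):
    """Rank candidates by cls_score * loc_score instead of cls_score alone."""
    order = np.argsort(-np.asarray(cls_scores) * np.asarray(loc_scores)).tolist()
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
cls_scores = [0.90, 0.85, 0.70]
loc_scores = [0.50, 0.95, 0.90]   # second candidate localizes the object better
print(location_aware_nms(boxes, cls_scores, loc_scores))  # [1, 2]
```

With a classification-only ranking (all loc_scores equal), box 0 would win the first cluster despite its poorer localization.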


Christian Daniel

Dynamic Metasurface Grouping for IRS Optimization in Massive MIMO Communications

When & Where:


246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Taejoon Kim, Co-Chair
Morteza Hashemi


Abstract

Intelligent Reflecting Surfaces (IRSs) grant the ability to control what was once considered the uncontrollable part of wireless communications: the channel. These smart signal mirrors show promise to significantly improve the effective signal-to-noise ratio (SNR) of cell users when the line-of-sight (LOS) channel between the base station (BS) and the user is blocked. IRSs use implementable, optimized phase shifts that beamform a reflected signal around channel blockages, and because they are passive devices, they have the benefit of low cost and low power consumption. Previous works have concluded that IRSs need several hundred elements to outperform relays. Unfortunately, the overhead and complexity costs of optimizing these devices limit their scope to single-input single-output (SISO) systems. With multiple-input multiple-output (MIMO) and Massive MIMO becoming crucial components of modern 5G and beyond networks, a way to mitigate these overhead costs and integrate IRS technology with promising MIMO techniques is paramount for these devices to have a place within modern cell technologies. This thesis proposes an IRS element grouping scheme that greatly reduces the number of unique IRS phases that need to be calculated and sent to the IRS controller via the limited-rate feedback channel, and allows the ideal number of groups to be obtained at the BS before data transmission. Three methods are proposed to design the phase shifts and element partitioning within our scheme to improve the effective SNR in an IRS-aided system. In our simulations, the best performing method is one that dynamically partitions the IRS elements into non-uniform groups based on information gathered from the reflected channel and then optimizes its phase shifts. This method successfully handles the overhead trade-off problem and shows significant achievable rate improvement over previous works.
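The feedback-versus-gain trade-off behind element grouping can be seen in a small sketch (an illustrative single-user narrowband model with uniform groups, not the thesis's three proposed methods): per-element phases can align every cascaded channel coefficient, while grouped elements must share one phase, so grouping shrinks feedback by the group factor at some cost in gain.

```python
import numpy as np

rng = np.random.default_rng(2)
N, groups = 64, 8                       # IRS elements, number of equal groups

# Cascaded per-element channel: BS -> IRS element n -> user
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
c = h * g

# Per-element phases: each element cancels its own channel phase
gain_full = np.abs(np.sum(c * np.exp(-1j * np.angle(c))))     # = sum |c_n|

# Grouped phases: one shared phase per group of N/groups elements
c_grp = c.reshape(groups, -1).sum(axis=1)
gain_grouped = np.abs(np.sum(c_grp * np.exp(-1j * np.angle(c_grp))))

print(gain_full >= gain_grouped)  # True: grouping trades gain for 8x less feedback
```

By the triangle inequality the grouped gain can never exceed the per-element gain, which is why choosing the number (and shape) of groups is the central trade-off.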


Theresa Moore

Array Manifold Calibration for Multichannel SAR Sounders

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

James Stiles, Chair
Shannon Blunt
Carl Leuschen
John Paden
Leigh Stearns

Abstract

Multichannel synthetic aperture radar (SAR) ice sounders rely on parametric angle estimators in tomography to resolve elevation angle beyond the Rayleigh resolution limit of their cross-track arrays. The potential super-resolution capability of these techniques is predicated on perfect knowledge of the array’s response to directional sources, referred to as the array manifold. Array manifold calibration improves angle estimator performance by reducing the mismatch between the model of the array’s transfer function and truth; its study straddles the fields of both signal processing and antenna theory, yet the associated literature reveals dichotomous methodologies that perpetuate fragmented interpretations of the manifold calibration problem. This dissertation addresses calibration for SAR ice sounders that three-dimensionally image ice sheet and glacier beds with tomographic techniques. The approach is rooted in array signal processing first but seeks a more unifying perspective of the manifold calibration problem by leveraging commercial computational electromagnetics software to understand error mechanisms and algorithm performance with a deterministic model of an electromagnetic manifold. The research outlined here proposes the creation of large snapshot databases that aid in identifying calibration targets in SAR pixels with known arrival angles. The signal processing methodology taxonomizes manifold calibration into parametric and nonparametric forms and advances both in the context of SAR sounders. A parametric estimator of nonlinear manifold parameters that are common across disjoint sets is derived. The algorithm framework capitalizes on a snapshot database to aggregate many angularly diverse observations in estimating unknown model parameters. The technique, which handles multitarget calibration, is desirable in the SAR sounder problem but requires a parametric model of the angle-dependent manifold.
Nonparametric calibration techniques characterize the array response over the field of view but require many observations of single sources over dense calibration grids. A subspace clustering technique is proposed to identify snapshots with a single dominant source, thereby enabling a principal components-based characterization of the sounder manifold. The measured manifold leads to significant performance improvements over the traditional array response model in tomography. These results indicate that manifold calibration will reduce uncertainty in sounder-derived maps of the subsurface, leading to more accurate estimates of total fresh ice volume.
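The screening step for snapshots with a single dominant source can be sketched with a generic eigenvalue-based check (this is not the dissertation's specific subspace-clustering algorithm, and all array parameters below are invented for illustration): when one source dominates, the sample covariance is nearly rank one, so its largest eigenvalue carries almost all of the power.

```python
import numpy as np

def single_source_score(snapshots):
    """Dominant-eigenvalue fraction of the sample covariance:
    close to 1 when a single source dominates the snapshots."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eig = np.linalg.eigvalsh(R)          # eigenvalues in ascending order
    return eig[-1] / eig.sum()

rng = np.random.default_rng(1)
M, K = 8, 200                            # sensors, snapshots (illustrative)

def steering(u):
    # Half-wavelength uniform linear array response for direction sine u
    return np.exp(1j * np.pi * np.arange(M) * u)

def source():
    return rng.normal(size=K) + 1j * rng.normal(size=K)

noise = 0.05 * (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K)))
one_source = steering(0.3)[:, None] * source() + noise
two_source = (steering(0.3)[:, None] * source()
              + steering(-0.7)[:, None] * source()) + noise
```

Thresholding such a score lets a calibration pipeline keep only the single-source snapshots needed to characterize the array response over the field of view.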


Shravan Kaundinya

Investigative Development of a UWB Radar for UAS-Borne Applications

When & Where:


Nichols Hall, Room 317

Committee Members:

Carl Leuschen, Chair
Christopher Allen
Fernando Rodriguez-Morales
Emily Arnold

Abstract

Over the last few years, one of the primary focuses of engineering development has been system packaging and miniaturization. This is apparent in areas such as the rise of the Internet of Things (IoT), CubeSats, and Unmanned Aerial Systems (UAS). Simultaneous miniaturization across multiple industries has enabled advancements in remote sensing instrument development. Sensors such as radars, lidars, and cameras are flown on UAS to characterize various aspects of the Earth system, such as ice, soil, and vegetation, thereby improving our understanding of them. In this work, an ultra-wideband (UWB) radar system design for the Vapor 55 UAS rotorcraft is investigated. A compact, lightweight 2–18 GHz frequency-modulated continuous-wave (FMCW) radar with two channels each on transmit and receive is designed to characterize extended targets such as soil and snow. This thesis reports initial proof-of-concept field measurements performed with soil as the target to identify backscatter signatures indicative of moisture content. The thesis also describes the exploratory design, development, and laboratory test results of the miniaturized radar electronics and compact antenna front-end.
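For context on the FMCW architecture, target range maps to a beat frequency via f_b = 2RB/(cT), and the theoretical range resolution is c/2B. A minimal sketch using the 2–18 GHz sweep the abstract cites (the sweep time and target range below are assumed for illustration, not taken from the thesis):

```python
# FMCW relations: beat frequency f_b = 2 * R * B / (c * T),
# range resolution dR = c / (2 * B).
C = 299_792_458.0    # speed of light, m/s
B = 16e9             # chirp bandwidth, Hz (18 GHz - 2 GHz sweep)
T = 1e-3             # chirp duration, s (assumed)

def beat_frequency(target_range_m):
    """Beat frequency produced by a point target at the given range."""
    return 2.0 * target_range_m * B / (C * T)

def range_resolution():
    """Theoretical FMCW range resolution for bandwidth B."""
    return C / (2.0 * B)
```

With 16 GHz of bandwidth the resolution works out to roughly 9 mm, which is the kind of fine vertical sampling needed to resolve layering in extended targets such as soil and snow.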