Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, run a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
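As a rough sketch of the kind of measurement such a tool performs, the example below sums the proportional set size (Pss) entries from a process's /proc/<pid>/smaps file on Linux. It only illustrates the underlying data source; the helper name and command-line usage are assumptions, and the actual smaps-profiler tool is more comprehensive.

```python
# Minimal sketch: sum the proportional set size (Pss) of a Linux process
# from /proc/<pid>/smaps. This illustrates the general idea only; the
# smaps-profiler tool described in the thesis is more comprehensive.
import sys

def pss_kib(pid: int) -> int:
    """Return the total Pss of a process in KiB by parsing /proc/<pid>/smaps."""
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):
                # Lines look like "Pss:                 123 kB"
                total += int(line.split()[1])
    return total

if __name__ == "__main__":
    pid = int(sys.argv[1])
    print(f"PID {pid}: {pss_kib(pid)} KiB proportional set size")
```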


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
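As a small illustration of the install-time attack surface mentioned above, npm packages can declare preinstall/install/postinstall lifecycle scripts in package.json that run arbitrary commands at installation. The sketch below flags packages in a node_modules directory that declare such scripts; it is a simplified, assumed example of the kind of check supply-chain tooling might perform, not the tooling presented in this research.

```python
# Minimal sketch: flag npm packages whose package.json declares install-time
# lifecycle scripts (preinstall/install/postinstall), which run arbitrary code
# at installation. Illustrative only; not the dissertation's tooling.
import json
from pathlib import Path

INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def install_scripts(package_dir: Path) -> dict:
    """Return the install-time scripts declared by the package, if any."""
    manifest = json.loads((Path(package_dir) / "package.json").read_text())
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

if __name__ == "__main__":
    for pkg in Path("node_modules").iterdir():
        if pkg.is_dir() and (pkg / "package.json").exists():
            hooks = install_scripts(pkg)
            if hooks:
                print(f"{pkg.name}: {hooks}")
```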


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase-attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, mitigating the degradation of PARC radar performance caused by spectral growth from the communications signal.

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
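To make the phase-attachment idea concrete, the toy sketch below forms a constant-envelope pulse whose phase is the sum of a radar phase function (a generic LFM chirp, assumed purely for illustration) and a held communications phase sequence. It captures only the PARC signal structure, not the spectral-template optimization or trade-space analysis of this work; all parameter values are placeholders.

```python
# Toy illustration of phase-attached radar communications (PARC):
# a constant-envelope pulse whose phase is the sum of a radar phase
# (an LFM chirp here, chosen only for illustration) and a communications
# phase sequence. The thesis optimizes the radar component; this does not.
import numpy as np

fs = 10e6            # sample rate (Hz), assumed
T = 100e-6           # pulse width (s), assumed
B = 1e6              # radar sweep bandwidth (Hz), assumed
t = np.arange(0, T, 1 / fs)

phi_radar = np.pi * (B / T) * t**2                                    # LFM phase
symbols = np.random.choice([0, np.pi/2, np.pi, 3*np.pi/2], size=50)   # QPSK-like phases
phi_comm = np.repeat(symbols, len(t) // len(symbols))[: len(t)]       # hold each symbol

s = np.exp(1j * (phi_radar + phi_comm))                               # constant-envelope PARC pulse
print("peak-to-average power ratio:", np.max(np.abs(s)**2) / np.mean(np.abs(s)**2))
```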


Qua Nguyen

Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications

When & Where:


Zoom defense; please email jgrisafe@ku.edu for the link.

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong

Abstract

This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.

In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while keeping the PS values fixed and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time-delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
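For intuition on why TTD elements counter beam squint, a standard uniform-linear-array relation (not taken from this dissertation) contrasts the frequency-independent phase of a phase shifter with the linearly frequency-dependent phase of a true-time delay:

```latex
% Ideal per-element phase for steering a uniform linear array (element spacing d,
% steering angle \theta, propagation speed c) at frequency f:
\phi_n^{\text{ideal}}(f) = 2\pi f \,\frac{n d \sin\theta}{c}.
% A phase shifter applies a frequency-independent phase \phi_n^{\text{PS}} = \theta_n,
% so a beam set at the carrier frequency squints as f varies across a wide band.
% A true-time delay \tau_n applies \phi_n^{\text{TTD}}(f) = 2\pi f \tau_n, which matches
% the required linear frequency dependence when \tau_n = n d \sin\theta / c.
```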

The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges, because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We propose an approach that improves the learning utility of privacy-preserving FL while allowing clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
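As a generic, minimal illustration of the privacy mechanism family involved (not the algorithms proposed here), the sketch below clips a client's model update and adds Gaussian noise whose standard deviation can vary by round; the clipping norm and noise scale are placeholder assumptions.

```python
# Generic sketch of a differentially private client update in federated learning:
# clip the update to bound its sensitivity, then add Gaussian noise. The
# time-varying variance schedule and wireless co-design in the dissertation
# are not reproduced here; sigma_t and clip_norm are placeholder assumptions.
import numpy as np

def dp_client_update(update: np.ndarray, clip_norm: float, sigma_t: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip the update to L2 norm clip_norm and add N(0, (sigma_t*clip_norm)^2) noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, sigma_t * clip_norm, size=update.shape)
    return clipped + noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.normal(size=1000)                 # a client's raw model update
    private = dp_client_update(raw, clip_norm=1.0, sigma_t=0.8, rng=rng)
    print("norm before/after:", np.linalg.norm(raw), np.linalg.norm(private))
```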

This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.


Past Defense Notices


SRAVYA ATHINARAPU

Model Order Estimation and Array Calibration for Synthetic Aperture Radar Tomography

When & Where:


317 Nichols Hall

Committee Members:

Jim Stiles, Chair
John Paden, Co-Chair
Shannon Blunt


Abstract

The performance of several methods for estimating the number of source signals impinging on a sensor array is compared using a traditional simulator, and their performance for synthetic aperture radar tomography is discussed, as this problem is relevant to radar and remote sensing whenever multichannel arrays are employed. All methods use the sum of the likelihood function and a penalty term. We consider two signal models for model selection and refer to these as suboptimal and optimal. The suboptimal model uses a simplified signal model, and model selection and direction-of-arrival estimation are done in separate steps. The optimal model uses the actual signal model, and model selection and direction-of-arrival estimation are done in the same step. In the literature, suboptimal model selection is used for computational efficiency, but our radar post-processing is less time constrained, so we implement the optimal model for the estimation and compare the performance results. Interestingly, we find that several methods discussed in the literature do not work with optimal model selection, but can work if the optimal model selection criterion is normalized. We also formulate a new penalty term, numerically tuned so that it gives optimal performance over a particular set of operating conditions, and compare this method as well. The primary contribution of this work is the development of an optimizer that finds a numerically tuned penalty term that outperforms current methods, and a discussion of the normalization techniques applied to optimal model selection. Simulation results show that the numerically tuned model selection criterion is optimal and that the typical methods do not perform well with few snapshots, which are common in radar and remote sensing applications. We apply the algorithms to data collected by the CReSIS radar depth sounder and discuss the results.
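For readers unfamiliar with this family of estimators, a standard eigenvalue-based information-criterion method in the Wax-Kailath style is sketched below. It illustrates the "log-likelihood plus penalty" structure of the suboptimal approach discussed above; it is not the optimal model selection, the normalization, or the numerically tuned penalty developed in this work.

```python
# Sketch of eigenvalue-based model order estimation (Wax & Kailath style):
# for each candidate order k, a log-likelihood term built from the smallest
# sample-covariance eigenvalues is combined with a penalty term, and the k
# minimizing the criterion is selected. This is the "suboptimal" family
# referenced above, not the optimal or numerically tuned method of this work.
import numpy as np

def estimate_order_mdl(snapshots: np.ndarray) -> int:
    """snapshots: p x N array of sensor-array snapshots (p sensors, N snapshots)."""
    p, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N            # sample covariance
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]        # descending eigenvalues
    costs = []
    for k in range(p):
        tail = eig[k:]
        geo = np.exp(np.mean(np.log(tail)))           # geometric mean
        arith = np.mean(tail)                         # arithmetic mean
        loglik = N * (p - k) * np.log(geo / arith)    # log-likelihood term (<= 0)
        penalty = 0.5 * k * (2 * p - k) * np.log(N)   # MDL penalty term
        costs.append(-loglik + penalty)
    return int(np.argmin(costs))
```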

In addition to model order estimation, array model errors should be estimated to improve direction-of-arrival estimation. The implementation of a parametric model for array calibration that estimates the first- and second-order array model errors is discussed. Simulation results for gain, phase, and location errors are presented.


PRANJALI PARE

Development of a PCB with Amplifier and Discriminator for the Timing Detector in CMS-PPS

When & Where:


2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Christophe Royon, Co-Chair
Ron Hui
Carl Leuschen

Abstract

The Compact Muon Solenoid - Precision Proton Spectrometer (CMS-PPS) detector at the Large Hadron Collider (LHC) operates at high luminosity and is designed to measure forward scattered protons resulting from proton-proton interactions involving photon and Pomeron exchange processes. The PPS uses tracking and timing detectors for these measurements. The timing detectors measure the arrival time of the protons on each side of the interaction point, and their difference is used to reconstruct the vertex of the interaction. A time precision of ~10 ps on the arrival time is desired to achieve a precision of ~2 mm on the vertex position. The time precision is approximately equal to the ratio of the Root Mean Square (RMS) noise to the slew rate of the signal obtained from the detector.
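Stated as a formula (a standard relation consistent with the description above, not a result of this thesis), the timing jitter contributed by amplitude noise is approximately

```latex
\sigma_t \;\approx\; \frac{\sigma_{\text{noise}}}{\left.\dfrac{dV}{dt}\right|_{\text{threshold}}}
```

where \sigma_{\text{noise}} is the RMS voltage noise and dV/dt is the slew rate of the signal at the discriminator threshold.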

Components of the timing detector include Ultra-Fast Silicon Detector (UFSD) sensors that generate a current pulse, a transimpedance amplifier with shaping, and a discriminator. This thesis discusses the circuit schematic and simulations of an amplifier designed to meet the required time precision, and the choice and simulation of discriminators with Low Voltage Differential Signal (LVDS) outputs. Additionally, details of the Printed Circuit Board (PCB) design, including the arrangement of components, traces, and stackup, are discussed for a 6-layer PCB that houses these three components. The PCB has been manufactured, and tests were performed to assess its functionality.



AMIR MODARRESI

Network Resilience Architecture and Analysis for Smart Cities

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li
Bo Luo
Cetinkaya Egemen

Abstract

The Internet of Things (IoT) is evolving rapidly into every aspect of human life, including healthcare, homes, cities, and driverless vehicles, making humans more dependent on the Internet and related infrastructure. While many researchers have studied the resilience of the Internet as a whole, new studies are required to investigate the resilience of the edge networks in which people and “things” connect to the Internet. Since the range of service requirements varies at the edge of the network, a wide variety of protocols is needed. In this research proposal, we survey standard protocols and IoT models. Next, we propose an abstract model for smart homes and cities to illustrate the heterogeneity and complexity of the network structure. Our initial results show that the heterogeneity of the protocols has a direct effect on IoT and smart city resilience. As the next step, we derive a graph model from the proposed model and perform graph-theoretic analysis to characterize the fundamental behavior of the network and improve its robustness, modifying the topology and adding extra nodes and links when necessary. Finally, we will conduct various simulation studies on the model to validate its resilience.
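As a small illustration of the kind of graph-theoretic analysis described here, the sketch below builds a toy smart-home topology with the networkx library and reports a couple of standard robustness indicators before and after adding a redundant link. The topology and the chosen metrics are assumptions made for illustration, not the proposal's abstract model.

```python
# Toy illustration of graph-theoretic resilience analysis for an IoT topology.
# The graph and chosen metrics are placeholders; the proposal's abstract
# smart-home/smart-city model is more detailed.
import networkx as nx

# A hub-and-spoke home network: a gateway connecting two device clusters.
G = nx.Graph()
G.add_edges_from([
    ("gateway", "hub_livingroom"), ("gateway", "hub_kitchen"),
    ("hub_livingroom", "tv"), ("hub_livingroom", "thermostat"),
    ("hub_kitchen", "fridge"), ("hub_kitchen", "oven"),
])

print("node connectivity:", nx.node_connectivity(G))            # min nodes whose removal disconnects G
print("average shortest path:", nx.average_shortest_path_length(G))

# Adding a redundant link between hubs improves robustness to gateway failure.
G.add_edge("hub_livingroom", "hub_kitchen")
print("node connectivity after adding link:", nx.node_connectivity(G))
```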


VENKAT VADDULA

Content Analysis in Microblogging Communities

When & Where:


2001B Eaton Hall

Committee Members:

Nicole Beckage, Chair
Jerzy Grzymala-Busse
Bo Luo


Abstract

People use online social networks like Twitter to communicate and discuss a variety of topics. This makes these social platforms an important source of information. In the case of Twitter, making sense of this information requires understanding the content of tweets: what is being discussed on these platforms and how the ideas and opinions of a group coalesce around certain themes. Although there are many algorithms to classify (identify) topics, the restricted length of tweets and the use of jargon, abbreviations, and URLs make this hard to do without immense expertise in natural language processing. To address the need for content analysis on Twitter that is easily implementable, we introduce two measures based on term frequency to identify topics in the Twitter microblogging environment. We apply these measures to tweets with hashtags related to the Pulse night club shooting in Orlando that happened on June 12, 2016. This event was branded as both a terrorist attack and a hate crime, and different people on Twitter tweeted about it differently, forming social network communities, which makes this a fitting domain to explore our algorithms' ability to detect the topics of community discussions on Twitter. Using community detection algorithms, we discover communities on Twitter. We then use frequency statistics and Monte Carlo simulation to determine the significance of certain hashtags. We show that this approach is capable of uncovering differences in community discussions and propose this method as a means of community-based content detection.
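A simplified version of the frequency-plus-simulation idea is sketched below: count a hashtag's occurrences within one community and compare that count with counts obtained from randomly drawn pseudo-communities of the same size (a Monte Carlo null model). The data structures and the tiny example data are placeholders; the measures used in this work are more involved.

```python
# Simplified sketch of hashtag-frequency significance testing: compare a
# hashtag's count inside one community to its counts under randomly drawn
# pseudo-communities of the same size (a Monte Carlo null model).
import random

def hashtag_count(tweets, tag):
    return sum(tag in t["hashtags"] for t in tweets)

def significance(tweets, labels, community, tag, trials=1000, rng=random.Random(0)):
    """Return the observed count and the fraction of random draws that match or exceed it."""
    observed = hashtag_count([t for t, c in zip(tweets, labels) if c == community], tag)
    size = sum(1 for c in labels if c == community)
    exceed = 0
    for _ in range(trials):
        sample = rng.sample(tweets, size)          # random pseudo-community
        if hashtag_count(sample, tag) >= observed:
            exceed += 1
    return observed, exceed / trials               # empirical p-value

if __name__ == "__main__":
    tweets = [{"hashtags": {"#orlando"}}, {"hashtags": {"#prayfororlando"}},
              {"hashtags": {"#orlando", "#lgbt"}}, {"hashtags": set()}]
    labels = ["A", "A", "B", "B"]
    print(significance(tweets, labels, "A", "#orlando", trials=200))
```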


TEJASWINI JAGARLAMUDI

Community-based Content Analysis of the Pulse Night Club Shooting

When & Where:


2001B Eaton Hall

Committee Members:

Nicole Beckage, Chair
Prasad Kulkarni
Fengjun Li


Abstract

On June 12, 2016, 49 people were killed and another 58 wounded in an attack at Pulse Nightclub in Orlando, Florida. This incident was regarded both as a hate crime against LGBT people and as a terrorist attack. This project focuses on analyzing tweets from the week after the attack, specifically looking at how different communities within Twitter discussed this event. To understand how Twitter users in different communities discussed the event, a set of hashtag frequency-based evaluation measures and simulations are proposed. The simulations are used to assess the specific hashtag content of a community. Using community detection algorithms and text analysis tools, significant topics that specific communities are discussing, as well as topics that those communities are avoiding, are discovered.


NIHARIKA GANDHARI

A Comparative Study on Strategies of Rule Induction for Incomplete Data

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Perry Alexander
Bo Luo


Abstract

Rule induction is one of the major applications of rough set theory. However, traditional rough set models cannot deal with incomplete data sets. Missing values can be handled by data pre-processing or by extensions of the rough set model. Two data pre-processing methods and one extension of the rough set model are considered in this project: filling in missing values with the most common value, ignoring objects with missing values by deleting records, and using an extended discernibility matrix. The objective is to compare these methods in terms of stability and effectiveness. All three methods use the same rule induction method and are analyzed based on test accuracy and the percentage of missing attribute values. To better understand the properties of these approaches, eight real-life data sets with varying levels of missing attribute values are used for testing. Based on the results, we discuss the relative merits of the three approaches in an attempt to decide upon an optimal approach. The final conclusion is that the best method is the pre-processing method that fills in missing values with the most common value.
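The first pre-processing strategy, filling in missing values with the most common value of each attribute, is simple enough to sketch directly. The pandas-based example below is illustrative only and is not the project's implementation; the column names are invented.

```python
# Illustrative sketch of the "most common value" pre-processing strategy:
# each missing attribute value is replaced by the mode of that attribute.
# (The project couples pre-processing with a single rule induction method
# applied uniformly to all three strategies.)
import pandas as pd

def fill_most_common(df: pd.DataFrame) -> pd.DataFrame:
    filled = df.copy()
    for col in filled.columns:
        mode = filled[col].mode(dropna=True)
        if not mode.empty:
            filled[col] = filled[col].fillna(mode.iloc[0])
    return filled

if __name__ == "__main__":
    data = pd.DataFrame({"temperature": ["high", None, "high", "normal"],
                         "headache": ["yes", "yes", None, "no"]})
    print(fill_most_common(data))
```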


MADHU CHEGONDI

A Comparison of Leaving-one-out and Bootstrap

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Richard Wang


Abstract

Recently, machine learning has driven significant advances in many areas such as health, finance, education, and sports, which has encouraged the development of many predictive models. In machine learning, we extract hidden, previously unknown, and potentially useful high-level information from low-level data. Cross-validation is a typical strategy for estimating performance: it simulates the process of fitting the model to different datasets and measures how different the resulting predictions can be. In this project, we review accuracy estimation methods and compare two resampling methods, leave-one-out and bootstrap. We compared these validation methods using the LEM1 rule induction algorithm. Our results indicate that for real-world datasets similar to ours, bootstrap may be optimistic.
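For concreteness, the two resampling schemes being compared can be sketched with scikit-learn, using a decision tree as a stand-in for LEM1 (a rule induction algorithm that scikit-learn does not provide); the dataset and the number of bootstrap resamples are assumptions.

```python
# Sketch of the two accuracy-estimation schemes compared in this project:
# leave-one-out cross-validation and bootstrap resampling with out-of-bag
# scoring. A decision tree stands in for the LEM1 rule induction algorithm.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# Leave-one-out: train on n-1 examples, test on the held-out one, n times.
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# Bootstrap: train on a resample drawn with replacement, test on the
# out-of-bag examples, repeated B times.
rng = np.random.default_rng(0)
boot_scores = []
for _ in range(100):
    idx = rng.integers(0, len(X), len(X))
    oob = np.setdiff1d(np.arange(len(X)), idx)
    boot_scores.append(clf.fit(X[idx], y[idx]).score(X[oob], y[oob]))

print(f"leave-one-out accuracy: {loo_acc:.3f}")
print(f"bootstrap (out-of-bag) accuracy: {np.mean(boot_scores):.3f}")
```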


PATRICK McCORMICK

Design and Optimization of Physical Waveform-Diverse and Spatially-Diverse Emissions

When & Where:


129 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Chris Allen
Alessandro Salandrino
Jim Stiles
Emily Arnold*

Abstract

With the advancement of arbitrary waveform generation techniques, new radar transmission modes can be designed via precise control of the waveform's time-domain signal structure. The finer degree of emission control for a waveform (or multiple waveforms via a digital array) presents an opportunity to reduce ambiguities in the estimation of parameters within the radar backscatter. While this freedom opens the door to new emission capabilities, one must still consider the practical attributes for radar waveform design. Constraints such as constant amplitude (to maintain sufficient power efficiency) and continuous phase (for spectral containment) are still considered prerequisites for high-powered radar waveforms. These criteria are also applicable to the design of multiple waveforms emitted from an antenna array in a multiple-input multiple-output (MIMO) mode.

In this work, two spatially-diverse radar emission design methods are introduced that provide constant amplitude, spectrally-contained waveforms. The first design method, denoted as spatial modulation, designs the radar waveforms via a polyphase-coded frequency-modulated (PCFM) framework to steer the coherent mainbeam of the emission within a pulse. The second design method is an iterative scheme to generate waveforms that achieve a desired wideband and/or widebeam radar emission. However, a wideband and widebeam emission can place a portion of the emitted energy into what is known as the "invisible" space of the array, which is related to the storage of reactive power that can damage a radar transmitter. The proposed design method purposefully avoids this space, and a quantity denoted as the Fractional Reactive Power (FRP) is defined to assess the quality of the result.

The design of FM waveforms via traditional gradient-based optimization methods is also considered. A waveform model is proposed that is a generalization of the PCFM implementation, denoted as coded-FM (CFM), which defines the phase of the waveform via a summation of weighted, predefined basis functions. Therefore, gradient-based methods can be used to minimize a given cost function with respect to a finite set of optimizable parameters. A generalized integrated sidelobe metric is used as the optimization cost function to minimize the correlation range sidelobes of the radar waveform.
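The coded-FM idea, a pulse phase formed as a weighted sum of predefined basis functions, together with an integrated-sidelobe cost that a gradient-based optimizer would minimize, is illustrated in the sketch below. The basis choice and coefficient values are arbitrary assumptions and no optimization step is shown; this is not the dissertation's design procedure.

```python
# Illustration of the coded-FM (CFM) waveform model: the pulse phase is a
# weighted sum of predefined basis functions, and an integrated sidelobe
# level (ISL) cost is evaluated from the autocorrelation. The basis choice
# here is arbitrary and no gradient step is performed.
import numpy as np

N = 512                                          # samples per pulse (assumed)
t = np.linspace(0.0, 1.0, N, endpoint=False)     # normalized time

# Predefined basis functions (low-order polynomials plus a sinusoid, assumed).
basis = np.stack([t, t**2, t**3, np.sin(2 * np.pi * t)])
alpha = np.array([20.0, -35.0, 15.0, 2.0])       # optimizable coefficients

phase = alpha @ basis                            # phase as a weighted basis sum
s = np.exp(1j * 2 * np.pi * phase)               # constant-amplitude FM waveform

# Integrated sidelobe level of the aperiodic autocorrelation.
r = np.correlate(s, s, mode="full")
peak = np.abs(r[N - 1]) ** 2
isl = (np.sum(np.abs(r) ** 2) - peak) / peak
print(f"ISL: {10 * np.log10(isl):.2f} dB")
```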


MATT KITCHEN

Blood Phantom Concentration Measurement Using An I-Q Receiver Design

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Jim Stiles


Abstract

Near-infrared spectroscopy has been used as a non-invasive method of determining the concentration of chemicals within the tissues of living organisms.  This method employs LEDs of specific frequencies to measure the concentration of blood constituents according to the Beer-Lambert Law.  One group of instruments (frequency-domain instruments) is based on amplitude modulation of the laser diode or LED light intensity, the measurement of light absorption, and the measurement of the modulation phase shift to determine the light path length for use in the Beer-Lambert Law. This paper describes the design and demonstration of a frequency-domain instrument for measuring the concentration of oxygenated and de-oxygenated hemoglobin using incoherent optics and an in-phase quadrature (I-Q) receiver design.  The design has been shown to be capable of resolving variations in the concentration of test samples and is a viable prototype for future, more precise tools.
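For reference, the standard relations underlying this measurement (not specific to this design) are the Beer-Lambert law and the frequency-domain estimate of path length from the modulation phase shift:

```latex
% Beer-Lambert law: attenuation at wavelength \lambda from chromophores i with
% extinction coefficients \varepsilon_i, concentrations c_i, and path length L:
A(\lambda) = \log_{10}\!\frac{I_0(\lambda)}{I(\lambda)} = \sum_i \varepsilon_i(\lambda)\, c_i\, L.
% In a frequency-domain instrument, the mean optical path length is estimated from
% the phase shift \phi of the intensity modulation at frequency f, with v the speed
% of light in the medium:
L \approx \frac{\phi\, v}{2\pi f}.
```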



LIANYU LI

Wireless Power Transfer

When & Where:


250 Nichols Hall

Committee Members:

Alessandro Salandrino, Chair
Reza Ahmadi
Ron Hui


Abstract

Wireless power transfer is commonly understood as the transfer of electrical energy from a source to a load across some distance without any connecting wires. It has been almost two hundred years since the phenomenon of electromagnetic induction was first observed. After that, Nikola Tesla tried to use this concept to build the first wireless power transfer device. Today, the most common technique used to transfer power wirelessly is known as inductive coupling. It has revolutionized the transmission of power in various applications.  Wireless power transfer is one of the simplest and least expensive ways to transfer energy, and it will change how people use their devices.

With the development of science and technology. A new method of wireless power transfer through the coupled magnetic resonances could be the next technology that bring the future nearer. It significantly increases the transmission distance and efficiency. This project shows that this is very simple way to charge the low power devices wirelessly by using coupled magnetic resonances. It also presents how easy to set up the system compare to the conventional copper cables and current carrying wire.