Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, are running a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
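For readers unfamiliar with Linux smaps, the sketch below illustrates the kind of per-process measurement that smaps-based profiling builds on. It is only a minimal sketch, not the smaps-profiler tool itself; it assumes a Linux system and sums standard /proc/<pid>/smaps fields such as Rss and Pss.

# Minimal sketch (not smaps-profiler): sum memory fields from /proc/<pid>/smaps.
# Assumes Linux and permission to read the target process's smaps file.
import sys
from collections import defaultdict

def smaps_totals(pid):
    totals = defaultdict(int)          # field name -> total kB across all mappings
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            parts = line.split()
            # Memory fields look like "Pss:        1234 kB"
            if len(parts) == 3 and parts[2] == "kB":
                totals[parts[0].rstrip(":")] += int(parts[1])
    return totals

if __name__ == "__main__":
    t = smaps_totals(sys.argv[1])
    for field in ("Rss", "Pss", "Private_Dirty", "Shared_Clean"):
        print(f"{field}: {t[field]} kB")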


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
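As an illustration of the kinds of checks such tooling can perform, the sketch below flags two of the phenomena mentioned above in a locally unpacked package: install-time lifecycle scripts declared in package.json and hard-coded URLs in package code. It is a simplified, assumed example, not the research tooling itself.

# Simplified illustration (not the dissertation tooling): flag install-time
# scripts and hard-coded URLs in an unpacked npm package directory.
import json, os, re, sys

URL_RE = re.compile(r"https?://[^\s'\"]+")
LIFECYCLE = ("preinstall", "install", "postinstall")  # scripts run at install time

def scan_package(pkg_dir):
    findings = []
    with open(os.path.join(pkg_dir, "package.json")) as f:
        manifest = json.load(f)
    for name in LIFECYCLE:
        if name in manifest.get("scripts", {}):
            findings.append(("install-time script", name, manifest["scripts"][name]))
    for root, _, files in os.walk(pkg_dir):
        for fn in files:
            if fn.endswith(".js"):
                path = os.path.join(root, fn)
                with open(path, errors="ignore") as src:
                    for url in URL_RE.findall(src.read()):
                        findings.append(("hard-coded URL", path, url))
    return findings

if __name__ == "__main__":
    for kind, where, detail in scan_package(sys.argv[1]):
        print(f"{kind}: {where}: {detail}")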


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform simultaneous radar and communications service. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements between the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase-attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that designs the radar component of a PARC signal so that the expected power spectral density (PSD) of the resulting DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.
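As a brief sketch of the phase-attached structure described above (with notation assumed here rather than taken from the thesis), a constant-envelope PARC-style waveform can be written as the radar phase with the communications phase attached:

$$ s(t) = \exp\!\big\{ j\,[\,\phi_{\mathrm{radar}}(t) + \phi_{\mathrm{comm}}(t)\,] \big\}, \qquad 0 \le t \le T_p, $$

so the envelope stays constant while the communications information rides on the pulse phase, at the cost of spectral growth driven by the communications component.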

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Qua Nguyen

Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for link.

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong

Abstract

This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.

In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while fixing the PS values and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under a realistic time-delay constraint. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous-domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
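As a brief illustration of why TTDs matter for beam squint (notation assumed here, not taken from the dissertation): a phase shifter applies a frequency-flat phase, while a TTD applies a phase that grows linearly with frequency, so the combined per-antenna weight

$$ w_n(f) = \exp\!\big\{ -j\,(2\pi f \tau_n + \theta_n) \big\} $$

can track the frequency-dependent array steering phase across a wide band, whereas phase shifters alone match it only at the center frequency.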

The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoffs between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We propose an approach that improves the learning utility of privacy-preserving FL while allowing clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
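For context, the sketch below shows a standard Gaussian-mechanism style perturbation of clipped client updates with a round-dependent noise scale. It is only an illustrative sketch of the general idea of time-varying DP noise in FL, not the algorithms proposed in this dissertation, and all names and parameter values are assumptions.

# Illustrative sketch only: clip client updates and add Gaussian noise whose
# scale varies with the training round. Not the dissertation's algorithm.
import numpy as np

def privatize_update(update, clip_norm=1.0, sigma=0.8):
    """Clip a client's model update and add Gaussian noise scaled by sigma."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = np.random.normal(0.0, sigma * clip_norm, size=update.shape)
    return clipped + noise

def aggregate(updates, round_idx, sigma_schedule):
    """Average privatized updates, using a time-varying noise scale per round."""
    sigma = sigma_schedule(round_idx)
    return np.mean([privatize_update(u, sigma=sigma) for u in updates], axis=0)

# Example: a noise scale that decays over training rounds (assumed schedule).
sigma_schedule = lambda t: 1.0 / np.sqrt(1 + t)
clients = [np.random.randn(10) for _ in range(5)]
print(aggregate(clients, round_idx=3, sigma_schedule=sigma_schedule))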

This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path. It offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement: the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effect on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.

The major concern with forward distributed Raman amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Another concern of FW DRA is the rise in signal optical power near the start of the fiber span, which increases the nonlinear phase shift of the signal. These factors, including RIN-transfer-induced noise and nonlinear noise, degrade the performance of FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research is focused on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Past Defense Notices


ASHWINI BALACHANDRA

Implementation of Truncated Lévy Walk Mobility Model in ns-3

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li


Abstract

Mobility models generate the mobility patterns of the nodes in a given system and help us analyze and study the characteristics of new and existing systems. However, many of the mobility models implemented in network simulation tools like ns-3 do not model the patterns of human mobility. The main idea of this project is to implement the truncated Lévy walk mobility model in ns-3. The model has two variations: in the first, the flight length and pause time of the nodes are drawn from a truncated Pareto distribution; in the second, the flight length and pause time follow Lévy distributions, with values obtained from a Lévy α-stable random number generator. The mobility patterns of the nodes are generated and analyzed for the model by changing various model attributes. Further studies can be done to understand the behavior of these models for different ad hoc networking protocols.
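As a small illustration of the first variation (sampling flight lengths and pause times from a truncated Pareto distribution by inverse-CDF), the sketch below is a standalone example with assumed parameter values; it is not the ns-3 implementation, which is written in C++.

# Illustrative inverse-CDF sampling from a truncated Pareto distribution,
# as used for flight lengths and pause times in a truncated Levy walk model.
# Parameter values are assumptions for demonstration only.
import random

def truncated_pareto(alpha, lo, hi):
    """Draw one sample from a Pareto(alpha) distribution truncated to [lo, hi]."""
    u = random.random()
    # CDF on [lo, hi]: F(x) = (1 - (lo/x)**alpha) / (1 - (lo/hi)**alpha)
    return lo / (1.0 - u * (1.0 - (lo / hi) ** alpha)) ** (1.0 / alpha)

# Generate a short sequence of (flight_length, pause_time) steps.
steps = [(truncated_pareto(1.5, 1.0, 1000.0),   # flight length (assumed units: m)
          truncated_pareto(1.4, 1.0, 100.0))    # pause time (assumed units: s)
         for _ in range(5)]
print(steps)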


PAVAN KUMAR MOTURU

Image Processing Techniques in Matlab GUI

When & Where:


246 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Chris Allen
Fernando Rodriguez-Morales


Abstract

Identifying missing bed echoes in radar data is important for studying sea level change. Rising sea level is a problem of global importance because of its impact on infrastructure. Ice sheets in Greenland and Antarctica are melting and have increased their contribution to sea level change over the last decade, and measuring ice sheet thickness is required to estimate sea level rise. Extracting the weak bed echoes requires several algorithms and pre-defined functions, but Matlab lacks a single tool that gathers these important algorithms the way ImageJ does. At the same time, not all of the data can be processed in ImageJ: Matlab produces better results, and some functions, such as windowing and symmetric selection around the center in the FFT domain, are not implemented in ImageJ.
In this project, we investigate the application of several image processing techniques using a GUI developed for analyzing ice-sounding radargrams. One key advantage of the tool is that the image processing techniques are applied within a single GUI instead of separately. We apply these techniques to data that has already undergone extensive signal processing, and we then compare the processed data with the original data.
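As an illustration of the FFT-domain windowing step mentioned above (shown here in Python with an assumed Hann window and synthetic data, not the project's Matlab GUI), one such operation can look like:

# Illustrative FFT-domain windowing of a 2D radargram (not the Matlab GUI tool).
import numpy as np

def fft_window_filter(image):
    """Apply a centered 2D Hann window in the FFT domain and transform back."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    window = np.outer(np.hanning(rows), np.hanning(cols))  # symmetric about center
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * window)))

# Synthetic example standing in for a real radargram.
radargram = np.random.randn(256, 512)
smoothed = fft_window_filter(radargram)
print(smoothed.shape)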


MOHSEN ALEENEJAD

New Modulation Methods and Control Strategies for Power Converters

When & Where:


1 Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
Jim Stiles
Huazhen Fang

Abstract

DC-to-AC power converters (inverters) are widely used in industrial applications. Multilevel inverters are becoming increasingly popular in industrial apparatus aimed at medium- to high-power conversion applications. In comparison to conventional inverters, they feature superior characteristics such as lower total harmonic distortion (THD), higher efficiency, and lower switching voltage stress. Nevertheless, these superior characteristics come at the price of a more complex topology with an increased number of power electronic switches. As a general rule, as the number of power electronic switches in an inverter topology increases, the chance of a fault occurring on one of the switches increases, and thus the inverter's reliability decreases. Due to the extreme monetary ramifications of interruptions of operation in commercial and industrial applications, high reliability for power inverters utilized in these sectors is critical. As a result, developing fault-tolerant operation schemes for multilevel inverters has long been an active topic for researchers in related areas. The purpose of this proposal is to develop new control and fault-tolerant strategies for multilevel power inverters. In the event of a fault, the line voltages of the faulty inverter are unbalanced and cannot be applied to three-phase loads. The proposed fault-tolerant strategy generates balanced line voltages without bypassing any healthy and operative inverter element, makes better use of the inverter capacity, and generates higher output voltage. The strategy exploits the advantages of the Selective Harmonic Elimination (SHE) method in conjunction with a slightly modified Fundamental Phase Shift Compensation technique to generate balanced voltages and manipulate voltage harmonics at the same time. However, due to the strategy's distinctive requirement to manipulate both the amplitude and the angle of the harmonics, the conventional SHE technique is not a suitable basis for the proposed strategy. Therefore, in this project a modified Unbalanced SHE technique is developed that can serve as the basis for the fault-tolerant strategy. The proposed strategy is applicable to several classes of multilevel inverters with three or more voltage levels.
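For context, conventional SHE is usually formulated from the Fourier series of a quarter-wave-symmetric stepped multilevel waveform with equal DC source voltages (notation assumed here; the modified Unbalanced SHE technique in this work necessarily departs from this form because it must also control the harmonic angles):

$$ b_n \;=\; \frac{4 V_{dc}}{n\pi} \sum_{i=1}^{k} \cos(n\theta_i), \qquad n = 1, 3, 5, \dots $$

where the k switching angles θ_1 < θ_2 < ... < θ_k < π/2 are chosen so that b_1 meets the desired fundamental amplitude while selected low-order harmonics are driven to zero.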


SIVA RAM DATTA BOBBA

Rule Induction For Numerical Data using PRISM

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
James Miller


Abstract

Rule induction is one of the basic and important techniques of data mining. Inducing a rule set for symbolic data is simple and straightforward, but it becomes complex when the attributes are numerical. There are several algorithms available that perform rule induction for symbolic data. One such algorithm is PRISM, which uses conditional probability for attribute-value selection to induce a rule.
In real-world scenarios, data may comprise either symbolic or numerical attributes, and it is difficult to induce a discriminant ruleset on data with numerical attributes. This project provides an implementation of PRISM that handles numerical data. First, it takes as input a dataset with numerical attributes and converts them into discrete values using the multiple scanning approach, which identifies cut-points for intervals using minimum conditional entropy. Once discretization is complete, PRISM uses these discrete values to induce a ruleset for each decision. Thus, this project helps induce modular rulesets over a numerical dataset.
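As an illustration of the entropy-based cut-point selection underlying the discretization step (a simplified single-cut example with assumed toy data, not the project's full multiple-scanning implementation):

# Simplified illustration: choose the cut-point of a numerical attribute that
# minimizes conditional entropy of the decision. Toy data; not the project code.
from math import log2
from collections import Counter

def entropy(decisions):
    n = len(decisions)
    return -sum((c / n) * log2(c / n) for c in Counter(decisions).values())

def best_cutpoint(values, decisions):
    pairs = sorted(zip(values, decisions))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # only cut between distinct attribute values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [d for v, d in pairs if v < cut]
        right = [d for v, d in pairs if v >= cut]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        best = min(best, (h, cut))
    return best  # (conditional entropy, cut-point)

print(best_cutpoint([1.2, 1.5, 2.0, 2.4, 3.1], ["yes", "yes", "no", "no", "no"]))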

 

 


NILISHA MANE

Tools to Explore Run-time Program Properties

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Gary Minden


Abstract

Advancements in embedded technology have resulted in its extensive use in almost all modern electronic devices. Hence, there is a crucial need to develop system security tools for these devices. So far, most research has concentrated either on security for general computer systems or on static analysis of embedded systems. In this project, we develop tools that explore and monitor the run-time properties of programs and applications as well as their inter-process communication. We also present a case study in which these tools are used on a Gumstix embedded system running Poky Linux to monitor a particular program and to produce a graph of all inter-process communication on the system.
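As a small illustration of the kind of run-time information such tools can collect on a Linux-based system (this is not the project's tooling, only a sketch using the standard /proc interface):

# Sketch: read a few run-time properties of a process from /proc on Linux.
# Not the project's monitoring tools; the fields read are standard /proc entries.
import os, sys

def runtime_snapshot(pid):
    info = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in ("Name", "State", "VmRSS", "Threads"):
                info[key] = value.strip()
    # Open file descriptors hint at the files, pipes, and sockets used for IPC.
    fd_dir = f"/proc/{pid}/fd"
    info["open_fds"] = [os.readlink(os.path.join(fd_dir, fd))
                        for fd in os.listdir(fd_dir)]
    return info

if __name__ == "__main__":
    print(runtime_snapshot(sys.argv[1]))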


BRIAN MACHARIA

UWB Microwave Filters on Multilayer LCP Substrates: A Feasibility Study

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Fernando Rodriguez-Morales
Chris Allen


Abstract

Having stable dielectric properties extending to frequencies over 110 GHz, Liquid Crystal Polymer (LCP) materials are a new and promising substrate alternative for low-cost production of planar microwave circuits. This project focused on the design of several microwave filter structures using multiple layers for operation in the 2-18 GHz and 10-14 GHz bands. Circuits were simulated and optimized using EDA tools, obtaining good results over the bands of interest. The results show that it is feasible to fabricate these structures on dielectric substrates compatible with off-site manufacturing facilities. It is likewise shown that LCP technology can yield a 3-5x area reduction as compared to cavity-type filters, making such filters much easier to integrate into a planar circuit.


Md. MOSHFEQUR RAHMAN

OpenFlow based Multipath Communication for Resilience

When & Where:


246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li


Abstract

A cross-layer framework in the Software Defined Networking domain is proposed to study resilience in OpenFlow-based multipath communication. A testbed has been built using Brocade OpenFlow switches and Dell PowerEdge servers. The framework is evaluated against regional challenges. Various topologies are built using different adjacency matrices. The behavior of OpenFlow multipath-based communication is studied in the case of a single path failure, with splitting of traffic, and with multipath TCP (MPTCP) enabled traffic. The behavior of different coupled congestion control algorithms for MPTCP is also studied. A web framework is presented to demonstrate the OpenFlow experiments by importing the network topologies and then executing and analyzing user-defined regional attacks.
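As a small illustration of the topology-from-adjacency-matrix step described above (a standalone sketch with a toy matrix, not the testbed's web framework or OpenFlow controller code):

# Sketch: build a topology from an adjacency matrix and list candidate paths
# between two nodes, as a multipath setup might. Toy data; not the testbed code.
def links_from_matrix(adj):
    """Return the undirected links implied by an adjacency matrix."""
    return [(i, j) for i in range(len(adj))
            for j in range(i + 1, len(adj)) if adj[i][j]]

def simple_paths(adj, src, dst, path=None):
    """Enumerate all loop-free paths from src to dst (candidate multipaths)."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt, connected in enumerate(adj[src]):
        if connected and nxt not in path:
            paths.extend(simple_paths(adj, nxt, dst, path))
    return paths

adj = [[0, 1, 1, 0],
       [1, 0, 1, 1],
       [1, 1, 0, 1],
       [0, 1, 1, 0]]
print("links:", links_from_matrix(adj))
print("paths 0->3:", simple_paths(adj, 0, 3))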


RAGAPRABHA CHINNASWAMY

A Comparison of Maximal Consistent Blocks and Characteristics Sets for Incomplete Data Sets

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo


Abstract

One of the main applications of rough set theory is rule induction. If the input data set contains inconsistencies, using rough set theory leads to inducing certain and possible rule sets. 
In this project, the concept of a maximal consistent block is applied to formulate a new approximation to a concept in the incomplete data set with a higher level of accuracy. This method does not require a change in the size of the original incomplete data set. Two interpretations of missing attribute values are discussed: lost values and "do not care" conditions. The main objective is to compare maximal consistent blocks and characteristic sets in terms of the cardinality of lower and upper approximations. Four incomplete data sets with varying levels of missing information are used for experiments. The next objective is to compare the decision rules induced and the cases covered by both techniques. The experiments show that both techniques provide the same lower approximations for all the data sets with "do not care" conditions. The best results for upper approximations are achieved by maximal consistent blocks on three data sets, and there is a tie for the other data set.
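For readers unfamiliar with characteristic sets, the sketch below computes them for a small incomplete data table, with "?" denoting a lost value and "*" a "do not care" condition. It is a toy illustration with assumed data, not the project's experimental code.

# Sketch: characteristic sets K_A(x) for incomplete data, where "?" is a lost
# value and "*" is a "do not care" condition. Toy table; not the project code.
def characteristic_sets(table):
    universe = set(range(len(table)))
    n_attrs = len(table[0])
    sets = []
    for x, row in enumerate(table):
        k = set(universe)
        for a in range(n_attrs):
            v = row[a]
            if v in ("?", "*"):       # missing value: this attribute gives no restriction
                continue
            block = {y for y, r in enumerate(table)
                     if r[a] == v or r[a] == "*"}   # "do not care" joins every block
            k &= block
        sets.append(k)
    return sets

# Rows are cases, columns are attribute values (decision column omitted).
table = [["high", "?",    "yes"],
         ["high", "low",  "*"],
         ["*",    "low",  "no"],
         ["low",  "high", "no"]]
for i, k in enumerate(characteristic_sets(table)):
    print(f"K(case {i}) = {sorted(k)}")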