Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” sidelobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements.

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.
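
As a rough numerical illustration of Doppler tolerance (a hedged sketch, not the dissertation's proposed measure), the snippet below compares the matched-filter peak of an LFM pulse and of a random-phase waveform after a fast-time Doppler shift. The LFM peak largely survives, merely migrating in delay (the familiar range-Doppler coupling), while the random-phase peak collapses. The parameters n, bt, and fd are arbitrary illustrative choices.

```python
import cmath, math, random

def lfm_pulse(n, bt):
    # Discretized linear FM chirp with time-bandwidth product bt
    return [cmath.exp(1j * math.pi * bt * (k / n) ** 2) for k in range(n)]

def random_phase_pulse(n, rng):
    # Constant-modulus waveform with independent random phases: a crude
    # stand-in for a waveform with no built-in Doppler tolerance
    return [cmath.exp(1j * rng.uniform(0.0, 2.0 * math.pi)) for _ in range(n)]

def doppler_peak(s, fd):
    """Peak matched-filter magnitude of s after a Doppler shift of fd cycles/pulse."""
    n = len(s)
    y = [s[k] * cmath.exp(2j * math.pi * fd * k / n) for k in range(n)]
    best = 0.0
    for lag in range(-n + 1, n):
        acc = sum(y[k] * s[k - lag].conjugate()
                  for k in range(max(0, lag), min(n, n + lag)))
        best = max(best, abs(acc))
    return best

n, fd = 128, 4.0
rng = random.Random(0)
lfm_ratio = doppler_peak(lfm_pulse(n, 64), fd) / n            # stays near 1: tolerant
rfm_ratio = doppler_peak(random_phase_pulse(n, rng), fd) / n  # small: intolerant
```

For the LFM case the peak reappears at a lag proportional to the Doppler shift, which is exactly the delay/Doppler coupling that makes LFM "Doppler tolerant" in the binary sense discussed above.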


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of the future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on the current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of the differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
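
The online list scheduling framework described above can be sketched as follows. This is an illustrative simulation only (not the thesis implementation): it assumes a linear-speedup model, and the `alloc` hook is a hypothetical stand-in for a processor allocation strategy.

```python
import heapq

def list_schedule(work, preds, P, alloc):
    """Online greedy list scheduling of moldable tasks.
    work[t]: sequential work of task t (linear speedup assumed: time = work/p);
    preds[t]: predecessors of t (t is only discovered once they all finish);
    P: total processors; alloc: strategy mapping (work, free) -> processor request."""
    succs = {t: set() for t in work}
    for t in work:
        for p in preds[t]:
            succs[p].add(t)
    remaining = {t: len(preds[t]) for t in work}
    ready = sorted(t for t in work if remaining[t] == 0)
    free, running, now = P, [], 0.0   # running: (finish_time, task, procs) min-heap
    while ready or running:
        while ready and free > 0:               # greedily start ready tasks
            t = ready.pop(0)
            p = max(1, min(alloc(work[t], free), free))
            free -= p
            heapq.heappush(running, (now + work[t] / p, t, p))
        now, t, p = heapq.heappop(running)      # advance to the next completion
        free += p
        for s in succs[t]:                      # successors may now be discovered
            remaining[s] -= 1
            if remaining[s] == 0:
                ready.append(s)
    return now

# Diamond DAG on 4 processors with an "allocate all free processors" strategy
work = {"a": 4.0, "b": 4.0, "c": 4.0, "d": 4.0}
preds = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
makespan = list_schedule(work, preds, 4, lambda w, free: free)
```

Swapping in different `alloc` functions (and tuning their parameters) is precisely the kind of comparison the thesis carries out empirically.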


Past Defense Notices

Sairath Bhattacharjya

A Novel Zero-Trust Framework to Secure IoT Communications

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Hossein Saiedian, Chair
Alex Bardas
Fengjun Li


Abstract

The phenomenal growth of the Internet of Things (IoT) has highlighted the security and privacy concerns associated with these devices. The research literature on IoT security architectures makes it evident that we need to define and formalize a framework to secure the communications among these devices. To do so, it is important to focus on a zero-trust framework that operates on the principle of "trust no one, verify everyone" for every request and response.

In this thesis, we emphasize the immediate need for such a framework and propose a zero-trust communication model for IoT that addresses security and privacy concerns. We employ existing cryptographic techniques to implement the framework so that it can be easily integrated into current network infrastructures. The framework provides end-to-end security that lets users and devices communicate with each other privately. It is often stated that it is difficult to implement a high-end encryption algorithm within the limited resources of an IoT device. For our work, we built a temperature and humidity sensor using a NodeMCU V3 and were able to implement the framework and successfully evaluate and document its efficient operation. We defined four areas for evaluation and validation: security of communications, memory utilization of the device, response time of operations, and implementation cost. For each area we defined a threshold against which to evaluate and validate our findings; the results are satisfactory and documented. Our framework provides an easy-to-use solution in which the security infrastructure acts as a backbone for every communication to and from the IoT devices.
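
The "verify everyone, on every request" idea can be illustrated with standard primitives. This sketch is not the thesis's protocol; it simply shows per-request authentication with an HMAC plus a replay-rejecting nonce, using only existing cryptographic building blocks as the abstract advocates.

```python
import hmac, hashlib, secrets

def sign_request(key, device_id, payload):
    """Attach a fresh nonce and a MAC over (device, nonce, payload) to every request."""
    nonce = secrets.token_hex(8)
    msg = f"{device_id}|{nonce}|{payload}".encode()
    return {"device": device_id, "nonce": nonce, "payload": payload,
            "mac": hmac.new(key, msg, hashlib.sha256).hexdigest()}

def verify_request(key, req, seen_nonces):
    """Zero-trust check: authenticate the MAC and reject replayed nonces,
    on every single request (no session-level trust is carried over)."""
    if req["nonce"] in seen_nonces:
        return False
    msg = f"{req['device']}|{req['nonce']}|{req['payload']}".encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, req["mac"]):
        return False
    seen_nonces.add(req["nonce"])
    return True

key, seen = b"shared-device-key", set()
req = sign_request(key, "sensor-42", "temp=21.5")
ok = verify_request(key, req, seen)        # accepted once
replayed = verify_request(key, req, seen)  # same nonce again: rejected
```

HMAC-SHA256 is cheap enough for constrained devices, which is consistent with the abstract's observation that the framework runs on a NodeMCU-class sensor.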


Royce Bohnert

Experiments with mmWave Radar

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Christopher Allen, Chair
Erik Perrins
James Stiles


Abstract

The IWR6843 mmWave radar device from Texas Instruments (TI) is a complete FMCW radar system-on-chip operating in the 60 to 64 GHz frequency range. The IWR6843ISK is an evaluation platform which includes the IWR6843 connected to patch antennas on a PCB. In this project, the viability of using the IWR6843 sensor for short-range detection of small, high-velocity targets is investigated. Some of the limitations of the device are explored and a specific radar configuration is proposed. To confirm the applicability of the proposed configuration, a similar configuration is used with the IWR6843ISK-ODS evaluation platform to observe the launch of a foil-wrapped dart. The evaluation platform is used to collect raw data, which is then post-processed in a Python program to generate a range-Doppler heatmap visualization of the data.
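
The post-processing step, turning raw FMCW data into a range-Doppler map, can be sketched with a naive 2D DFT (the actual program presumably uses FFTs; this is a minimal illustration on a synthetic single-target beat signal, with made-up bin numbers).

```python
import cmath, math

def range_doppler_map(x):
    """Naive 2D DFT of raw FMCW data: rows are chirps (slow time), columns are
    fast-time samples. The column transform resolves range; the row transform
    resolves Doppler."""
    m, n = len(x), len(x[0])
    return [[abs(sum(x[a][b] * cmath.exp(-2j * math.pi * (p * a / m + q * b / n))
                     for a in range(m) for b in range(n)))
             for q in range(n)] for p in range(m)]

# Synthetic beat signal for a single point target at range bin 3, Doppler bin 2
M, N = 8, 16
beat = [[cmath.exp(2j * math.pi * (3 * b / N + 2 * a / M)) for b in range(N)]
        for a in range(M)]
heat = range_doppler_map(beat)                      # the "heatmap" values
peak = max((heat[p][q], p, q) for p in range(M) for q in range(N))
```

The magnitude peak lands at the (Doppler bin, range bin) of the target, which is what the heatmap visualization displays.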


Matthew Taylor

Defending Against Typosquatting Attacks In Programming Language-Based Package Repositories

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Drew Davidson, Chair
Alex Bardas
Bo Luo


Abstract

Program size and complexity have dramatically increased over time. To reduce their workload, developers began to utilize package managers. These package managers allow third-party functionality, contained in units called packages, to be quickly imported into a project. Due to their utility, packages have become remarkably popular. The largest package repository, npm, has more than 1.2 million publicly available packages and serves more than 80 billion package downloads per month. In recent years, this popularity has attracted the attention of malicious users. Attackers have the ability to upload packages which contain malware. To increase the number of victims, attackers regularly leverage a tactic called typosquatting, which involves giving the malicious package a name that is very similar to the name of a popular package. Users who make a typo when trying to install the popular package fall victim to the attack and are instead served the malicious payload. The consequences of typosquatting attacks can be catastrophic. Historical typosquatting attacks have exported passwords, stolen cryptocurrency, and opened reverse shells. This thesis focuses on typosquatting attacks in package repositories. It explores the extent to which typosquatting exists in npm and PyPI (the de facto standard package repositories for Node.js and Python, respectively), proposes a practical defense against typosquatting attacks, and quantifies the efficacy of the proposed defense. The presented solution incurs an acceptable temporal overhead of 2.5% on the standard package installation process and is expected to affect approximately 0.5% of all weekly package downloads. Furthermore, it has been used to discover a particularly high-profile typosquatting perpetrator, which was then reported and has since been deprecated by npm. Typosquatting is an important yet preventable problem. This thesis recommends that package creators protect their own packages with a technique called defensive typosquatting, and that repository maintainers protect all users through augmentations to their package managers or automated monitoring of the package namespace.
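
A minimal name-similarity check of the kind such a defense might build on can be sketched as follows. The thesis's actual criteria are not reproduced here; the `POPULAR` list and the edit-distance threshold are illustrative assumptions.

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

POPULAR = {"react", "lodash", "express", "requests"}  # illustrative popular names

def typosquat_candidates(name, popular=POPULAR, max_dist=1):
    """Names within max_dist edits of a popular package (but not the popular
    package itself) are flagged as potential typosquatting targets."""
    return sorted(p for p in popular
                  if p != name and edit_distance(name, p) <= max_dist)
```

A package manager could run such a check at install time and warn the user before serving a near-miss name, which is one way the reported 2.5% installation overhead could arise.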


Jacob Fustos

Attacks and Defenses against Speculative Execution Based Side Channels

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Heechul Yun, Chair
Alex Bardas
Drew Davidson


Abstract

Modern high-performance processors utilize techniques such as speculation and out-of-order execution to improve performance. Unfortunately, the recent Spectre and Meltdown exploits take advantage of these techniques to circumvent the security of the system. Because speculation and out-of-order execution are complex features meant to enhance performance, full mitigation of these exploits often incurs high overhead, and partial defenses need careful consideration to ensure that no attack surface is left vulnerable. In this work, we explore these attacks more deeply: both how they are executed and how to defend against them.

 

We first propose a novel micro-architectural extension, SpectreGuard, that takes a data-centric approach to the problem. SpectreGuard attempts to reduce the performance penalty that is common with Spectre defenses by allowing software and hardware to work together. This collaborative approach allows software to tag secrets at the page granularity, then the underlying hardware can optimize secret data for security, while optimizing all other data for performance. Our research shows that such a combined approach allows for the creation of processors that can both achieve a high level of security while maintaining high performance.

 

We then propose SpectreRewind, a novel strategy for executing speculative execution attacks. SpectreRewind reverses the flow of traditional speculative execution attacks, creating new covert channels that transmit secret data to instructions that appear to execute logically before the attack even takes place. We find this attack vector can bypass some state-of-the-art proposed hardware defenses, as well as increase attack surface for certain Meltdown-type attacks on existing machines. Our research into this area helps towards completing the understanding of speculative execution attacks so that defenses can be designed with the knowledge of all attack vectors.


Venkata Siva Pavan Kumar Nelakurthi


When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Guanghui Wang


Abstract

In data mining, rule induction is a process of extracting formal rules from decision tables, where the latter are tabulated observations that typically consist of a few attributes (independent variables) and a decision (a dependent variable). Each tuple in the table is considered a case, and a table may contain any number of cases, one per observation. The efficiency of rule induction depends on how many cases are successfully characterized by the generated set of rules, i.e., the ruleset. There are different rule induction algorithms, such as LEM1, LEM2, and MLEM2. In the real world, datasets are imperfect, inconsistent, and incomplete. MLEM2 is an efficient algorithm for such data, but the quality of rule induction largely depends on the chosen classification strategy. We compare 16 classification strategies for rule induction using MLEM2 on incomplete data. For this, we implemented MLEM2 to induce rulesets based on the selected type of approximation (singleton, subset, or concept) and the value of alpha used for calculating probabilistic approximations. A program called a rule checker is used to calculate the error rate for the specified classification strategy. To reduce anomalies, we used ten-fold cross-validation to measure the error rate for each classification. Error rates for the above strategies are calculated for different datasets, compared, and presented.
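
The ten-fold cross-validation procedure used to measure error rates can be sketched generically. Here `induce` is a hypothetical stand-in for the MLEM2 induction plus the rule checker; any function mapping training cases to a classifier will do.

```python
import random

def kfold_error(cases, induce, k=10, seed=0):
    """k-fold cross-validated error rate: induce a classifier on k-1 folds,
    count misclassifications on the held-out fold, and average over all folds."""
    cases = cases[:]
    random.Random(seed).shuffle(cases)
    folds = [cases[i::k] for i in range(k)]
    errors = total = 0
    for i in range(k):
        train = [c for j, f in enumerate(folds) if j != i for c in f]
        rules = induce(train)          # stand-in for MLEM2 ruleset induction
        errors += sum(rules(x) != y for x, y in folds[i])
        total += len(folds[i])
    return errors / total

cases = [(x, x % 2) for x in range(40)]                     # toy decision table
perfect = kfold_error(cases, lambda train: (lambda x: x % 2))  # ideal classifier
```

Averaging over the ten held-out folds is what "reduces the anomalies" relative to a single train/test split.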


Charles Mohr

Design and Evaluation of Stochastic Processes as Physical Radar Waveforms

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Recent advances in waveform generation and in computational power have enabled the design and implementation of new complex radar waveforms. Still, even with these advances, in a pulse-agile mode, where the radar transmits a unique waveform on every pulse, the requirement to design physically robust waveforms that achieve good autocorrelation sidelobes, are spectrally contained, and have a constant amplitude envelope for high-power operation can require expensive computing equipment and can impede real-time operation. This work addresses this concern in the context of FM noise waveforms, which have been demonstrated in recent years, in both simulation and experiment, to achieve low autocorrelation sidelobes through the high dimensionality of coherent integration when operating in a pulse-agile mode. However, while effective, these design approaches require optimizing each individual waveform, making them subject to the concern above.

This dissertation takes a different approach. Since these FM noise waveforms are meant to be noise-like in the first place, the waveforms here are instantiated as the sample functions of a stochastic process which has been specially designed to produce spectrally contained, constant-amplitude waveforms with noise-like cancellation of sidelobes. This makes the waveform creation process little more computationally expensive than pulling numbers from a random number generator (RNG), since the optimization designs a waveform generating function (WGF) itself rather than each individual waveform. This goal is achieved by leveraging gradient descent optimization methods to reduce the expected frequency template error (EFTE) cost function for both the pulsed stochastic waveform generation (StoWGe) waveform model and a new CW version of StoWGe denoted CW-StoWGe. The effectiveness of these approaches and their ability to generate useful radar waveforms are analyzed using several stochastic waveform generation metrics developed here. The EFTE optimization is shown through simulation to produce WGFs that generate FM noise waveforms in both pulsed and CW modes which achieve good spectral containment and autocorrelation sidelobes. The resulting waveforms are demonstrated, in both loopback and open-air experiments, to be robust to physical implementation.
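
The "sample function of a stochastic process" idea can be illustrated crudely: draw a white instantaneous-frequency sequence, smooth it (a toy stand-in for a designed WGF; the actual EFTE-optimized generator is not reproduced here), and integrate to phase. Each pulse is constant-modulus by construction and costs only RNG draws, with no per-pulse optimization.

```python
import cmath, math, random

def fm_noise_pulse(n, scale, rng, span=3):
    """One sample function of an assumed FM-noise process: white Gaussian
    instantaneous frequency, short moving-average smoothing (toy WGF),
    then phase integration to a constant-modulus waveform."""
    freq = [rng.gauss(0.0, scale) for _ in range(n)]
    out, phase = [], 0.0
    for k in range(n):
        lo, hi = max(0, k - span), min(n, k + span + 1)
        phase += 2.0 * math.pi * (sum(freq[lo:hi]) / (hi - lo)) / n
        out.append(cmath.exp(1j * phase))     # unit amplitude: HPA-friendly
    return out

pulse = fm_noise_pulse(256, 20.0, random.Random(1))
```

Every call with a fresh RNG state yields a distinct pulse, which is the property that makes pulse-agile operation cheap once the generating function itself has been optimized offline.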


Michael Stees

Optimization-based Methods in High-Order Mesh Generation and Untangling

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Suzanne Shontz, Chair
Perry Alexander
Prasad Kulkarni
Jim Miller
Weizhang Huang

Abstract

High-order numerical methods for solving PDEs have the potential to deliver higher solution accuracy at a lower cost than their low-order counterparts.  To fully leverage these high-order computational methods, they must be paired with a discretization of the domain that accurately captures key geometric features.  In the presence of curved boundaries, this requires a high-order curvilinear mesh.  Consequently, there is a lot of interest in high-order mesh generation methods.  The majority of such methods warp a high-order straight-sided mesh through the following three step process.  First, they add additional nodes to a low-order mesh to create a high-order straight-sided mesh.  Second, they move the newly added boundary nodes onto the curved domain (i.e., apply a boundary deformation).  Finally, they compute the new locations of the interior nodes based on the boundary deformation.  We have developed a mesh warping framework based on optimal weighted combinations of nodal positions.  Within our framework, we develop methods for optimal affine and convex combinations of nodal positions, respectively.  We demonstrate the effectiveness of the methods within our framework on a variety of high-order mesh generation examples in two and three dimensions.  As with many other methods in this area, the methods within our framework do not guarantee the generation of a valid mesh.  To address this issue, we have also developed two high-order mesh untangling methods.  These optimization-based untangling methods formulate unconstrained optimization problems for which the objective functions are based on the unsigned and signed angles of the curvilinear elements.  We demonstrate the results of our untangling methods on a variety of two-dimensional triangular meshes.
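
The core of the warping framework, interior nodes expressed as weighted combinations of nodal positions, can be sketched in miniature. The weights below are a trivial centroid choice for illustration; computing optimal affine or convex weights is the subject of the dissertation and is not reproduced here.

```python
def warp_interior(weights, boundary):
    """Recompute interior node positions as stored convex combinations of the
    (possibly deformed) boundary nodes: x_i = sum_j w_ij * b_j, with each row
    of weights nonnegative and summing to one."""
    return [tuple(sum(w * b[d] for w, b in zip(row, boundary)) for d in range(2))
            for row in weights]

# One interior node held at the centroid of three boundary nodes
weights = [[1 / 3, 1 / 3, 1 / 3]]
curved = [(0.0, 0.0), (1.0, 0.2), (0.0, 1.0)]   # boundary after deformation
interior = warp_interior(weights, curved)
```

Because the weights are fixed before the boundary deformation is applied, step three of the warping process reduces to this single matrix-vector-style evaluation.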


Farzad Farshchi

Deterministic Memory Systems for Real-time Multicore Processors

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Heechul Yun, Chair
Esam Eldin Mohamed Aly
Prasad Kulkarni
Rodolfo Pellizzoni
Shawn Keshmiri

Abstract

With the emergence of autonomous systems such as self-driving cars and drones, the need for high-performance real-time embedded systems is increasing. On the other hand, the physics of autonomous systems constrains the size, weight, and power consumption (known as SWaP constraints) of the embedded systems. A solution that satisfies the need for high performance while meeting the SWaP constraints is to incorporate multicore processors in real-time embedded systems. However, unlike in unicore processors, in multicore processors the memory system is shared between the cores. As a result, the memory system performance varies widely due to inter-core memory interference. This can lead to over-estimating the worst-case execution time (WCET) of the real-time tasks running on these processors and, therefore, under-utilizing the computation resources. In fact, recent studies have shown that real-time tasks can be slowed down by more than 300 times due to inter-core memory interference.

In this work, we propose novel software and hardware extensions to multicore processors to bound the inter-core memory interference in order to reduce the pessimism of WCET and to improve time predictability. We introduce a novel memory abstraction, which we call Deterministic Memory, that cuts across various layers of the system: the application, OS, and hardware. The key characteristic of Deterministic Memory is that the platform (the OS and hardware) guarantees small and tightly bounded worst-case memory access timing. Additionally, we propose a drop-in hardware IP that enables bounding the memory interference by per-core regulation of the memory access bandwidth at fine-grained time intervals. This new IP, which we call the Bandwidth Regulation Unit (BRU), does not require significant changes to the processor microarchitecture and can be seamlessly integrated with existing microprocessors. Moreover, BRU has the ability to regulate the memory access bandwidth of multiple cores collectively to improve bandwidth utilization. As for future work, we plan to further improve bandwidth utilization by extending BRU to recognize memory requests accessing different levels of the memory hierarchy (e.g., LLC and DRAM). We propose to fully evaluate these extensions on open-source software and hardware and measure their effectiveness with realistic case studies.
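
Per-core bandwidth regulation at fine-grained intervals can be modeled abstractly as a budget-and-defer loop. This toy simulation is not the BRU's hardware logic; it only illustrates the throttling behavior such a regulator enforces.

```python
def regulate_core(demand, budget):
    """Toy model of per-core bandwidth regulation: at most `budget` memory
    accesses are served per regulation interval; excess demand is deferred
    (the core is throttled) and carried into later intervals."""
    backlog, served = 0, []
    for d in demand:              # d = accesses the core wants this interval
        want = backlog + d
        s = min(want, budget)     # cap enforced by the regulator
        served.append(s)
        backlog = want - s        # throttled accesses wait for the next interval
    return served, backlog

served, leftover = regulate_core([6, 1, 0, 2], budget=4)
```

Capping each core's served accesses per interval is what bounds the interference any one core can impose on the others, at the cost of occasionally idling bandwidth, which motivates the collective-regulation mode mentioned above.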


Waqar Ali

Deterministic Scheduling of Real-Time Tasks on Heterogeneous Multicore Platforms

When & Where:


https://zoom.us/j/484640842?pwd=TDAyekxtRDVaTHF0K1NlbU5wNFVtUT09 - The password for the meeting is 005158.

Committee Members:

Heechul Yun, Chair
Esam Eldin Mohamed Aly
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri

Abstract

Scheduling of real-time tasks involves analytically determining whether each task in a group of periodic tasks can finish before its deadline. This problem is well understood for unicore platforms, and there are exact schedulability tests which can be used for this purpose. However, in multicore platforms, sharing of hardware resources between simultaneously executing real-time tasks creates non-deterministic coupling between them based on their requirements for the shared hardware resource(s), which significantly complicates the schedulability analysis. The standard practice is to over-estimate the worst-case execution time (WCET) of the real-time tasks by a constant factor (e.g., 2x) when determining schedulability on these platforms. Although widely used, this practice has two serious flaws. Firstly, it can make the schedulability analysis overly pessimistic because all tasks do not interfere with each other equally. Secondly, recent findings have shown that tasks affected by shared resource interference can experience extreme (e.g., >300x) WCET increases on commercial-off-the-shelf (COTS) multicore platforms, in which case a schedulability analysis incorporating a blanket interference factor of 2x for every task cannot give accurate results. Apart from the problem of WCET estimation, the established schedulability analyses for multicore platforms are inherently pessimistic due to the effect of carry-in jobs from high-priority tasks. Finally, the increasing integration of hardware accelerators (e.g., GPU) on SoCs complicates the problem further because of the nuances of scheduling on these devices, which differs from traditional CPU scheduling.

 

We propose a novel approach to scheduling real-time tasks on heterogeneous multicore platforms, with the aim of increased determinism and utilization in the online execution of real-time tasks and decreased pessimism in the offline schedulability analysis. Under this framework, we propose to statically group different real-time tasks into a single scheduling entity called a virtual-gang. Once formed, these virtual-gangs are executed one at a time, with strict regulation of interference from other sources using state-of-the-art techniques for performance isolation in multicore platforms. Using this idea, we can achieve three goals. Firstly, we can limit the effect of shared resource interference, which can exist only between tasks that are part of the same virtual-gang. Secondly, due to the one-gang-at-a-time policy, we can transform the complex problem of scheduling real-time tasks on multicore platforms into the simple and well-understood problem of scheduling these tasks on unicore platforms. Thirdly, we can demonstrate that it is easy to incorporate scheduling on integrated GPUs into our framework while preserving the determinism of the overall system. We show that the virtual-gang formation problem can be modeled as an optimization problem and present algorithms for solving it with different trade-offs. We propose to fully implement this framework in the open-source Linux kernel and evaluate it both analytically using generated tasksets and empirically with realistic case-studies.


Amir Modarresi

Network Resilience Architecture and Analysis for Smart Homes

When & Where:


https://kansas.zoom.us/j/228154773

Committee Members:

Victor Frost, Chair
Morteza Hashemi
Fengjun Li
Bo Luo
John Symons

Abstract

The Internet of Things (IoT) is evolving rapidly into every aspect of human life, including healthcare, homes, cities, and driverless vehicles, making humans more dependent on the Internet and related infrastructure. While many researchers have studied the structure of the Internet, which is resilient as a whole, new studies are required to investigate the resilience of the edge networks in which people and "things" connect to the Internet. Since the range of service requirements varies at the edge of the network, a wide variety of technologies with different topologies are involved. Though the heterogeneity of the technologies at the edge can improve robustness through the diversity of mechanisms, other issues, such as connectivity among the utilized technologies and cascades of failures, would not have the same effect as in a simple network. Therefore, regardless of the size of networks at the edge, the structure of these networks is complicated and requires appropriate study.

In this dissertation, we propose an abstract model for smart homes, as part of one of the fast-growing networks at the edge, to illustrate the heterogeneity and complexity of the network structure. As the next step, we construct two instances of the abstract smart home model and perform a graph-theoretic analysis to recognize the fundamental behavior of the network and improve its robustness. During the process, we introduce a formal multilayer graph model to highlight the structures, topologies, and connectivity of various technologies at the edge networks and their connections to the Internet core. Furthermore, we propose another graph model, the technology interdependence graph, to represent the connectivity of technologies. This representation shows the degree of connectivity among technologies and illustrates which technologies are more vulnerable to link and node failures.

Moreover, the dominant topologies at the edge change the node and link vulnerability, which can be exploited in worst-case attack scenarios. Restructuring the network by adding new links associated with various protocols to maximize the robustness of a given network can have distinct outcomes for different robustness metrics. However, typical centrality metrics usually fail to identify important nodes in multi-technology networks such as smart homes. We propose four new centrality metrics to improve the process of identifying important nodes in multi-technology networks and to recognize vulnerable nodes. Finally, we study over 1000 different smart home topologies to examine the resilience of the networks with both typical and the proposed centrality metrics.
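
For reference, the "typical" centrality metrics against which the proposed metrics are compared can be computed directly on a graph. The sketch below shows standard degree and closeness centrality (not the dissertation's four new metrics) on a small hypothetical smart-home topology.

```python
from collections import deque

def degree_centrality(adj):
    # Fraction of the other nodes each node connects to directly
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    # Inverse of the average BFS shortest-path distance to reachable nodes
    out = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())
        out[src] = (len(dist) - 1) / total if total else 0.0
    return out

# Hypothetical smart-home topology: a hub bridging devices and the Internet
home = {"hub": {"light", "lock", "router"},
        "light": {"hub"}, "lock": {"hub"},
        "router": {"hub", "internet"}, "internet": {"router"}}
deg = degree_centrality(home)
clo = closeness_centrality(home)
```

Both metrics rank the hub highest here, yet in a real multi-technology home a high-degree hub on one protocol may matter less than a low-degree bridge between technologies, which is the failure mode the proposed metrics target.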