Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which must run on resource-constrained devices, run a web browser under the hood. It is therefore important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
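
To make the measurement concrete, the following is a minimal sketch of the kind of accounting such a tool performs: it parses /proc/<pid>/smaps and sums the proportional set size (Pss) of every mapping. The thesis tool reports additional categories; this sketch only illustrates the underlying Linux interface.

```python
# Minimal sketch: sum the proportional set size (Pss) across all mappings of a process
# by parsing /proc/<pid>/smaps, the kernel interface referenced in the thesis title.
# Pss is a standard smaps field reported in kB.

import sys

def pss_total_kb(pid: int) -> int:
    """Return the total Pss of a process in kB by summing every mapping's Pss line."""
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])  # value is reported in kB
    return total

if __name__ == "__main__":
    pid = int(sys.argv[1]) if len(sys.argv) > 1 else 1
    print(f"PID {pid}: Pss = {pss_total_kb(pid)} kB")
```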


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
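
As one small illustration of the install-time attack surface mentioned above (item ii), the sketch below scans a package.json manifest for lifecycle scripts that run automatically at install time. It is not the tooling developed in this work; the hook names are the standard npm lifecycle hooks, and the manifest path is hypothetical.

```python
# Illustrative sketch (not the authors' tooling): flag packages whose package.json
# declares install-time lifecycle scripts, the hook abused by install-time attacks.

import json
from pathlib import Path

INSTALL_HOOKS = {"preinstall", "install", "postinstall"}  # standard npm lifecycle hooks

def risky_install_scripts(package_json_path: str) -> dict:
    """Return any scripts in the manifest that execute automatically at install time."""
    manifest = json.loads(Path(package_json_path).read_text())
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

if __name__ == "__main__":
    flagged = risky_install_scripts("package.json")  # hypothetical local manifest
    for hook, command in flagged.items():
        print(f"install-time script '{hook}': {command}")
```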


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar and communications functions simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements of the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.
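
For readers unfamiliar with the phase-attached construction, here is a toy sketch (with assumed, illustrative parameters) of embedding a communications phase onto an LFM radar chirp and computing the resulting power spectral density; it does not reproduce the spectral-template optimization developed in this work.

```python
# Toy sketch of the phase-attached idea: a communications phase sequence is attached to
# an LFM radar chirp and the PSD of the combined waveform is examined. All parameter
# values below are illustrative assumptions.

import numpy as np

fs = 100e6          # sample rate (Hz), assumed
T = 10e-6           # pulse width (s), assumed
B = 20e6            # LFM swept bandwidth (Hz), assumed
t = np.arange(0, T, 1 / fs)

radar_phase = np.pi * (B / T) * t**2                       # LFM chirp phase
bits = np.random.randint(0, 2, size=50)                    # random message bits
comm_phase = np.repeat(np.pi * bits, len(t) // len(bits))  # crude BPSK-like phase
comm_phase = np.pad(comm_phase, (0, len(t) - len(comm_phase)))

parc = np.exp(1j * (radar_phase + comm_phase))             # phase-attached waveform

psd = np.abs(np.fft.fftshift(np.fft.fft(parc)))**2
psd /= psd.max()
print("peak-normalized PSD computed over", len(psd), "frequency bins")
```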

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Qua Nguyen

Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for link.

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong

Abstract

This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.

In the first part, we propose a novel hybrid precoding framework that integrates true-time-delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that design only the TTD values while keeping the PS values fixed and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time-delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. We then extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem that aims to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous-domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
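
The beam squint effect itself can be illustrated with a short calculation. The sketch below (assumed carrier, bandwidth, array size, and steering angle; not the dissertation's design) compares the wideband array gain of frequency-flat phase shifters with true-time-delay steering for a uniform linear array.

```python
# Illustrative beam squint calculation: phase shifters apply a frequency-flat phase set
# at the carrier, so the beam drifts off target at the band edges; true time delays keep
# the array aligned across the whole band. Parameters are assumptions for illustration.

import numpy as np

c = 3e8
fc = 140e9                      # sub-THz carrier (Hz), assumed
bw = 10e9                       # signal bandwidth (Hz), assumed
N = 64                          # number of antennas, assumed
d = c / (2 * fc)                # half-wavelength spacing at fc
theta = np.deg2rad(30)          # steering angle, assumed
n = np.arange(N)

def gain(f, weights):
    steering = np.exp(-1j * 2 * np.pi * f * n * d * np.sin(theta) / c)
    return np.abs(weights.conj() @ steering) / N

ps_weights = np.exp(-1j * 2 * np.pi * fc * n * d * np.sin(theta) / c)  # phase at fc only
ttd_delays = n * d * np.sin(theta) / c                                  # true time delays

f_edge = fc - bw / 2
print(f"PS-only gain at band edge: {gain(f_edge, ps_weights):.3f}")
print(f"TTD gain at band edge:     "
      f"{gain(f_edge, np.exp(-1j * 2 * np.pi * f_edge * ttd_delays)):.3f}")  # stays ~1.0
```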

The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially private FL algorithm that applies time-varying noise-variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit power to resolve the tradeoff between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We improve the learning utility of privacy-preserving FL by allowing clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
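
As background for the privacy mechanism, the following is a minimal clip-then-perturb sketch of making client updates differentially private before server-side aggregation; the time-varying noise schedule, stochastic quantizer, and transmit-power design proposed in this work are not reproduced here, and all values are illustrative.

```python
# Minimal sketch of the general clip-then-perturb mechanism behind DP federated learning:
# each client bounds the norm of its update (sensitivity) and adds Gaussian noise before
# the server averages the updates. Clip norm and noise level are illustrative assumptions.

import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip an update to bound its sensitivity, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def aggregate(updates):
    """Server-side averaging; the server only ever sees privatized updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=10) for _ in range(8)]   # toy local model updates
noisy = [privatize_update(u, noise_std=0.2, rng=rng) for u in client_updates]
print("aggregated update:", aggregate(noisy).round(3))
```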

This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient for OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps reduce the nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Another concern with FW DRA is the rise in signal optical power near the start of the fiber span, which increases the nonlinear phase shift of the signal. These factors, including RIN transfer-induced noise and nonlinear noise, contribute to the degradation of system performance at the receiver in FW DRA systems.

As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research is focused on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase in linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication. The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.
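
For intuition, the toy sketch below numerically integrates the standard co-propagating Raman power-coupling equations for a single pump and signal over one span. The gain efficiency, losses, and launch powers are assumed round numbers, not the experimental values, and the pump-depletion frequency ratio is omitted.

```python
# Toy sketch of forward-pumped (co-propagating) Raman amplification using the standard
# coupled power equations, integrated with a simple Euler step. All values are assumed.

import numpy as np

g_R = 0.4e-3             # Raman gain efficiency (1/(W*m)), assumed
alpha_s = 0.2 / 4.343e3  # signal loss ~0.2 dB/km converted to 1/m
alpha_p = 0.25 / 4.343e3 # pump loss ~0.25 dB/km converted to 1/m
L = 80e3                 # span length (m), assumed
dz = 10.0                # integration step (m)

Ps, Pp = 1e-3, 0.5       # launch powers: 0 dBm signal, 500 mW pump (assumed)
for _ in range(int(L / dz)):
    dPs = (-alpha_s * Ps + g_R * Pp * Ps) * dz
    dPp = (-alpha_p * Pp - g_R * Pp * Ps) * dz   # pump depletion (frequency ratio omitted)
    Ps, Pp = Ps + dPs, Pp + dPp

print(f"signal power at span end: {10 * np.log10(Ps / 1e-3):.2f} dBm")
```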


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

As machine learning (ML), artificial intelligence (AI), and deep learning continue to advance, their applications become more diverse; one such application is synthetic aperture radar (SAR) automatic target recognition (ATR). These SAR ATR networks use different forms of deep learning, such as convolutional neural networks (CNNs), to classify targets in SAR imagery. An emerging research area of SAR is dual function radar communication (DFRC), which performs both radar and communications functions using a single co-designed modulation. The utilization of DFRC emissions for SAR imaging impacts image quality, thereby influencing SAR ATR network training. Here, using the Civilian Vehicle Data Dome dataset from the AFRL, SAR ATR networks are trained and evaluated with simulated data generated using Gaussian Minimum Shift Keying (GMSK) and Linear Frequency Modulation (LFM) waveforms. The networks are used to compare how the target classification accuracy of the ATR network differs between DFRC (i.e., GMSK) and baseline (i.e., LFM) emissions. Furthermore, as is common in pulse-agile transmission structures, an effect known as 'range sidelobe modulation' is examined, along with its impact on SAR ATR. Finally, it is shown that a SAR ATR network can be trained for GMSK emissions using existing LFM datasets via two types of data augmentation.
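
For readers unfamiliar with the classification side, the sketch below shows a generic small CNN of the kind used for SAR ATR experiments; the architecture, chip size, and class count are placeholders, not the networks trained in this work.

```python
# Generic sketch of a small CNN classifier for SAR image chips. The layer sizes, input
# resolution (64x64), and number of classes are assumptions for illustration only.

import torch
import torch.nn as nn

class TinySarAtrNet(nn.Module):
    def __init__(self, num_classes: int = 10):        # class count is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input chips

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinySarAtrNet()
chips = torch.randn(4, 1, 64, 64)   # stand-in for a batch of SAR image chips
print(model(chips).shape)           # torch.Size([4, 10])
```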


Past Defense Notices


Kamala Gajurel

A Fine-Grained Visual Attention Approach for Fingerspelling Recognition in the Wild

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Cuncong Zhong, Chair
Guanghui Wang
Taejoon Kim
Fengjun Li

Abstract

Fingerspelling in sign language has been the means of communicating technical terms and proper nouns when they do not have dedicated sign language gestures. The automatic recognition of fingerspelling can help resolve communication barriers when interacting with deaf people. The main challenges prevalent in automatic recognition tasks are the ambiguity in the gestures and strong articulation of the hands. The automatic recognition model should address high inter-class visual similarity and high intra-class variation in the gestures. Most of the existing research in fingerspelling recognition has focused on datasets collected in a controlled environment. The recent collection of a large-scale annotated fingerspelling dataset in the wild, from social media and online platforms, captures the challenges of a real-world scenario. This study focuses on implementing a fine-grained visual attention approach using Transformer models to address the challenges in two fingerspelling recognition tasks: multiclass classification of static gestures and sequence-to-sequence prediction of continuous gestures. For the dataset with a single gesture in a controlled environment (multiclass classification), the Transformer decoder employs the textual description of gestures along with image features to achieve fine-grained attention. For the sequence-to-sequence prediction task on the in-the-wild dataset, fine-grained attention is attained by utilizing the change in motion of the video frames (optical flow) in sequential context-based attention along with a Transformer encoder model. The unsegmented continuous video dataset is jointly trained by balancing the Connectionist Temporal Classification (CTC) loss and a maximum-entropy loss. The proposed methodologies outperform the state of the art on both datasets. In comparison to previous work on static gestures in fingerspelling recognition, the proposed approach employs multimodal fine-grained visual categorization. The state-of-the-art model for sequence-to-sequence prediction employs an iterative zooming mechanism for fine-grained attention, whereas the proposed method captures better fine-grained attention in a single iteration.
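
The joint training objective mentioned above can be sketched as follows, assuming one common formulation that balances a CTC loss against a maximum-entropy regularizer; the weighting and the exact form used in this study may differ.

```python
# Hedged sketch of balancing a CTC loss with an entropy regularizer for unsegmented
# sequence prediction. The model is omitted; random log-probabilities stand in for the
# encoder output, and the balancing weight is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

T, N, C, S = 50, 2, 28, 8                 # frames, batch, classes (incl. blank), target len
log_probs = F.log_softmax(torch.randn(T, N, C), dim=-1)
targets = torch.randint(1, C, (N, S))     # label indices; 0 is reserved for the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)

probs = log_probs.exp()
entropy = -(probs * log_probs).sum(dim=-1).mean()   # mean per-frame prediction entropy

lam = 0.1                                           # balancing weight, assumed
loss = ctc - lam * entropy                          # encourage higher entropy (max-entropy)
print(float(loss))
```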


Chuan Sun

Reconfigurability in Wireless Networks: Applications of Machine Learning for User Localization and Intelligent Environment

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link.

Committee Members:

Morteza Hashemi, Chair
David Johnson
Taejoon Kim


Abstract

With the rapid development of machine learning (ML) and deep learning (DL) methodologies, DL methods can be leveraged for wireless network reconfigurability and channel modeling. While deep learning-based methods have been applied in a few wireless network use cases, there is still much to be explored. In this project, we focus on the application of deep learning methods to two scenarios. In the first scenario, a user transmitter moves randomly within a campus area and, at certain spots, sends wireless signals that are received by multiple antennas. We construct an active deep learning architecture to predict user locations from the received signals after dimensionality reduction, and analyze four traditional query strategies for active learning to improve the efficiency of utilizing labeled data. We propose a new location-based query strategy that considers both spatial density and model uncertainty when selecting samples to label, and show that it outperforms all the existing strategies. In the second scenario, a reconfigurable intelligent surface (RIS) containing 4096 tunable cells reflects signals from a transmitter to users in an office for better performance. We use the training data of one user's received signals under different RIS configurations to learn the impact of the RIS on the wireless channel. Based on the context and experience from the first scenario, we build a DL neural network that maps RIS configurations to received signal estimations. In the second phase, the loss function is customized toward our final evaluation formula to obtain the optimum configuration array for a user. We propose and build a customized DL pipeline that automatically learns the behavior of the RIS on received signals and generates the optimal RIS configuration array for each of the 50 test users.
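
A simple way to picture the proposed location-based query strategy is as a score that combines model uncertainty with a spatial-coverage term; the sketch below is illustrative only and does not reproduce the exact scoring rule or the dimensionality-reduction pipeline.

```python
# Illustrative active-learning query sketch: prefer samples that are both uncertain and
# located far from already-labeled positions. The scoring rule, radius, and batch size
# are assumptions, not the project's exact strategy.

import numpy as np

def query(candidates_xy, uncertainties, labeled_xy, k=5, radius=10.0):
    """Pick k samples that are both uncertain and far from already-labeled locations."""
    scores = []
    for xy, unc in zip(candidates_xy, uncertainties):
        if len(labeled_xy):
            d = np.linalg.norm(np.asarray(labeled_xy) - xy, axis=1).min()
        else:
            d = radius
        coverage_term = min(d / radius, 1.0)   # favor sparsely covered regions
        scores.append(unc * coverage_term)
    return np.argsort(scores)[-k:]

rng = np.random.default_rng(0)
cand = rng.uniform(0, 100, size=(200, 2))      # candidate transmitter locations
unc = rng.uniform(size=200)                    # e.g., predictive uncertainty per sample
print("indices to label next:", query(cand, unc, labeled_xy=cand[:3]))
```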


Kailani Jones

Deploying Android Security Updates: an Extensive Study Involving Manufacturers, Carriers, and End Users

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Alex Bardas, Chair
Fengjun Li
Bo Luo


Abstract

Android's fragmented ecosystem makes the delivery of security updates and OS upgrades cumbersome and complex. While Google has initiated various projects such as Android One, Project Treble, and Project Mainline to address this problem, and other involved entities (e.g., chipset vendors, manufacturers, carriers) continuously strive to improve their processes, it is still unclear how effective these efforts are in delivering updates to supported end-user devices. In this paper, we perform an extensive quantitative study (August 2015 to December 2019) to measure the Android security update and OS upgrade rollout process. Our study leverages multiple data sources: the Android Open Source Project (AOSP), device manufacturers, and the top four U.S. carriers (AT&T, Verizon, T-Mobile, and Sprint). Furthermore, we analyze an end-user dataset captured in 2019 (152M anonymized HTTP requests associated with 9.1M unique user identifiers) from a U.S.-based social network. Our findings include unique measurements that, due to the fragmented and inconsistent ecosystem, were previously challenging to perform. For example, manufacturers and carriers introduce a median latency of 24 days before rolling out security updates, with an additional median delay of 11 days before end devices update. We show that these values vary by carrier-manufacturer relationship, yet do not vary greatly with a model's age. Our results also delve into the effectiveness of current Android projects. For instance, security updates for Treble devices are available on average 7 days faster than for non-Treble devices. While this constitutes an improvement, the security update delay for Treble devices still averages 19 days.

 


Ali Alshawish

A New Fault-Tolerant Topology and Operation Scheme for the High Voltage Stage in a Three-Phase Solid-State Transformer

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Prasad Kulkarni, Chair
Morteza Hashemi
Taejoon Kim
Alessandro Salandrino
Elaina Sutley

Abstract

Solid-state transformers (SSTs) are composed of several cascaded power stages with different voltage levels. This leads to more challenges for operation and maintenance of SSTs, not only under critical conditions but also during normal operation. One of the most important reliability concerns for SSTs relates to high-voltage-side switch and grid faults. High voltage stress on the switches, together with the fact that most modern SST topologies incorporate a large number of power switches on the high voltage side, contributes to a higher probability of a switch fault occurring. The power electronic switches in the high voltage stage are under very high voltage stress, significantly higher than in other SST stages, so the probability of switch failures becomes more substantial in this stage. In this research, a new technique is proposed to improve the overall reliability of SSTs by enhancing the reliability of the high voltage stage.

 

The proposed method restores normal operation of the SST from the point of view of the load even when the input stage voltages are unbalanced due to switch faults. High voltage grid faults that result in unbalanced operating conditions in the SST can likewise lead to dire consequences for safety and reliability, and the proposed method can also restore operation to pre-fault conditions in the case of grid faults. The proposed method integrates the quasi-z-source inverter topology into the SST topology to rebalance the transformer voltages. This work therefore develops a new SST topology in conjunction with a fault-tolerant operation strategy that can fully restore operation of the proposed SST under the two fault scenarios. The proposed fault-tolerant operation strategy rebalances the line-to-line voltages after a fault occurs by modifying the phase angles between the phase voltages generated by the high voltage stage of the proposed SST. The boosting property of the quasi-z-source inverter circuitry is then used to increase the amplitude of the rebalanced line-to-line voltages to their pre-fault values. A modified modulation technique is proposed for adjusting the phase angles and controlling the shoot-through duty ratio of the quasi-z-source inverter.
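
The phase-angle lever at the heart of the rebalancing strategy can be illustrated with a toy phasor calculation: with one phase voltage sagged, adjusting the angles between phases equalizes the three line-to-line magnitudes, which the boost stage can then raise to their pre-fault values. The numbers below are assumed for illustration; the actual rebalancing law is derived in the thesis.

```python
# Toy phasor calculation: line-to-line magnitudes depend on both the phase-voltage
# magnitudes and the angles between phases, so angle adjustment can rebalance them.
# Magnitudes and angles below are assumed example values, not the thesis design.

import numpy as np

def line_to_line(mags, angles_deg):
    """Return |Vab|, |Vbc|, |Vca| for given phase-voltage magnitudes and angles."""
    v = np.asarray(mags) * np.exp(1j * np.deg2rad(angles_deg))
    return np.abs([v[0] - v[1], v[1] - v[2], v[2] - v[0]])

mags = [1.0, 1.0, 0.6]                                  # phase C sagged by a fault (assumed)
print("nominal 120° spacing:", line_to_line(mags, [0, -120, 120]).round(3))
# angles tweaked (for this example) so the three magnitudes come out nearly equal
print("adjusted angles     :", line_to_line(mags, [0, -95, 132.5]).round(3))
```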


Usman Sajid

Effective uni-modal to multi-modal crowd estimation

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Taejoon Kim, Chair
Bo Luo
Fengjun Li
Cuncong Zhong
Guanghui Wang

Abstract

Crowd estimation is an integral part of crowd analysis. It plays an important role in event management of huge gatherings such as Hajj, sporting and musical events, or political rallies. Automated crowd counting can lead to better and more effective management of such events and help prevent unwanted incidents. Crowd estimation is an active research problem due to challenges pertaining to large perspective variations, huge variance in scale and image resolution, severe occlusions, and dense crowd-like cluttered background regions. Current approaches cannot handle this huge crowd diversity well and thus perform poorly in cases ranging from extremely low to high crowd density, leading to crowd underestimation or overestimation. Manual crowd counting is also very slow and inaccurate due to the complex issues mentioned above. To address the major issues and challenges in the crowd counting domain, we separately investigate two different types of input data: uni-modal (image) and multi-modal (image and audio).

 

In the uni-modal setting, we propose and analyze four novel end-to-end crowd counting networks, ranging from multi-scale fusion-based models to uni-scale one-pass and two-pass multi-task models. The multi-scale networks also employ an attention mechanism to enhance model efficacy. The uni-scale models, on the other hand, are equipped with a novel and simple yet effective patch re-scaling module (PRM) that functions like the multi-scale approaches but is more lightweight. Experimental evaluation demonstrates that the proposed networks outperform the state-of-the-art methods in the majority of cases on four different benchmark datasets, with up to 12.6% improvement in terms of the RMSE evaluation metric. Better cross-dataset performance also validates the better generalization ability of our schemes. For multi-modal input, effective feature extraction (FE) and strong information fusion between the two modalities remain a big challenge. The aim in the multi-modal environment is therefore to investigate different fusion techniques with an improved FE mechanism for better crowd estimation. The multi-scale uni-modal attention networks also prove effective in other deep learning domains, as applied successfully to seven different scene-text recognition datasets with better performance.


Sana Awan

Privacy-preserving Federated Learning

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Mei Liu

Abstract

Machine learning (ML) is transforming a wide range of applications, promising to bring immense economic and social benefits. However, it also raises substantial security and privacy challenges.  In this dissertation we describe a framework for efficient, collaborative and secure ML training using a federation of client devices that jointly train a ML model using their private datasets in a process called Federated Learning (FL). First, we present the design of a blockchain-enabled Privacy-preserving Federated Transfer Learning (PPFTL) framework for resource-constrained IoT applications. PPFTL addresses the privacy challenges of FL and improves efficiency and effectiveness through model personalization. The framework overcomes the computational limitation of on-device training and the communication cost of transmitting high-dimensional data or feature vectors to a server for training. Instead, the resource-constrained devices jointly learn a global model by sharing their local model updates. To prevent information leakage about the privately-held data from the shared model parameters, the individual client updates are homomorphically encrypted and aggregated in a privacy-preserving manner so that the server only learns the aggregated update to refine the global model. The blockchain provides provenance of the model updates during the training process, makes contribution-based incentive mechanisms deployable, and supports traceability, accountability and verification of the transactions so that malformed or malicious updates can be identified and traced to the offending source. The framework implements model personalization approaches (e.g. fine-tuning) to adapt the global model more closely to the individual client's data distribution.

In the second part of the dissertation, we turn our attention to the limitations of existing FL algorithms in the presence of adversarial clients who may carry out poisoning attacks against the FL model. We propose a privacy-preserving defense, named CONTRA, to mitigate data poisoning attacks and provide a guaranteed level of accuracy under attack.  The defense strategy identifies malicious participants based on the cosine similarity of their encrypted gradient contributions and removes them from FL training. We report the effectiveness of the proposed scheme for IID and non-IID data distributions. To protect data privacy, the clients' updates are combined using secure multi-party computation (MPC)-based aggregation so that the server only learns the aggregated model update without violating the privacy of users' contributions.
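
The intuition behind the similarity check can be sketched in plaintext as follows; the actual CONTRA defense operates on encrypted contributions inside the MPC-based aggregation and uses a more elaborate reputation mechanism, so this is only an illustration of why near-duplicate poisoned updates stand out.

```python
# Plaintext sketch of the cosine-similarity intuition (not the CONTRA algorithm itself):
# colluding poisoners submit nearly identical updates, so their pairwise cosine
# similarity is far higher than between honest clients with independent data.

import numpy as np

def cosine_matrix(updates):
    U = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in updates])
    return U @ U.T

def suspected_colluders(updates, threshold=0.9):
    """Flag clients whose update is near-duplicated by another client's update."""
    sim = cosine_matrix(updates)
    np.fill_diagonal(sim, 0.0)
    return np.where(sim.max(axis=1) > threshold)[0]

rng = np.random.default_rng(1)
honest = [rng.normal(size=100) for _ in range(8)]
poison = rng.normal(size=100)
attackers = [poison + 0.01 * rng.normal(size=100) for _ in range(3)]  # near-identical updates
print("flagged clients:", suspected_colluders(honest + attackers))    # indices 8, 9, 10
```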


Dustin Hauptman

Communication Solutions for Scaling Number of Collaborative Agents in Swarm of Unmanned Aerial Systems Using Frequency Based Hierarchy

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Prasad Kulkarni, Chair
Shawn Keshmiri, (Co-Chair)
Alex Bardas
Morteza Hashemi

Abstract

The use of swarms of unmanned aerial systems (UASs) is becoming more prevalent. Many private companies and government agencies are actively developing analytical and technological solutions for multi-agent cooperative swarms of UASs. However, the majority of existing research focuses on developing guidance, navigation, and control (GNC) algorithms for swarms of UASs and proving the stability and robustness of those algorithms. In addition to the profound challenges in controlling a swarm of UASs, reliable and fast intercommunication between UASs is one of the vital conditions for the success of any swarm. Many modern UASs have high inertia and fly at high speeds, which means that if latency is too high or throughput too low in a swarm, there is a higher risk of catastrophic failure due to collisions within the swarm. This work presents solutions for scaling the number of collaborative agents in a swarm of UASs using a frequency-based hierarchy. It identifies the shortcomings of traditional swarm communication systems, which rely on a single frequency to handle the distribution of information to all or some parts of a swarm. These systems typically use an ad-hoc network to transfer data locally, on the single frequency, between agents without the need for existing communication infrastructure. While this gives agents the flexibility to move without concern for disconnecting from the network and requires managing only neighboring communications, it does not necessarily scale to larger swarms. In large swarms, for example, information from the outer agents must be routed to the inner agents. This causes inner agents, which are critical to the stability of the swarm, to spend more time routing information than transmitting their own state information, which can lead to instability because the inner agents' states are not known to the rest of the swarm. Even if an ad-hoc network is not used (e.g., an everyone-to-everyone network), the frequency itself has an upper limit on the amount of data it can send reliably before bandwidth constraints or general interference cause information to arrive too late or not at all.

We propose that by using two frequencies and creating a hierarchy where each layer is a separate frequency, large swarms can be grouped into manageable local swarms. Intra-swarm communication (inside a local swarm) is handled on one frequency while inter-swarm communication has its own. A normal mesh network was tested both in hardware-in-the-loop (HitL) scenarios and in a collision avoidance flight test scenario, and those results were compared against dual-frequency HitL simulations. The dual-frequency simulations showed overall improvement in latency and throughput compared with both the simulated and flight-tested mesh network.
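
Structurally, the hierarchy can be sketched as a simple assignment of agents to local swarms and channels, with one bridge agent per local swarm relaying onto the inter-swarm frequency; the sketch below models only this bookkeeping (no radio or GNC behavior), and the channel names and group sizes are placeholders.

```python
# Bookkeeping-only sketch of a two-frequency hierarchy: every agent talks on the
# intra-swarm frequency; one bridge agent per local swarm also uses the inter-swarm
# frequency. Channel labels and swarm sizes are illustrative assumptions.

from dataclasses import dataclass

INTER_SWARM_CHANNEL = "F1"   # shared by bridge agents across local swarms
INTRA_SWARM_CHANNEL = "F2"   # used inside each local swarm

@dataclass
class Agent:
    agent_id: int
    local_swarm: int
    is_bridge: bool = False  # bridges relay between the two frequency layers

def build_hierarchy(num_agents: int, swarm_size: int):
    return [Agent(i, i // swarm_size, is_bridge=(i % swarm_size == 0))
            for i in range(num_agents)]

for a in build_hierarchy(num_agents=9, swarm_size=3):
    channels = [INTRA_SWARM_CHANNEL] + ([INTER_SWARM_CHANNEL] if a.is_bridge else [])
    print(f"agent {a.agent_id} in local swarm {a.local_swarm} uses {channels}")
```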


Brian McClannahan

Classification of Noncoding RNA Families using Deep Convolutional Neural Network

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Bo Luo
Richard Wang

Abstract

In the last decade, the discovery of noncoding RNAs (ncRNAs) has exploded. Classifying these ncRNAs is critical to determining their functions. This thesis proposes a new method employing deep convolutional neural networks (CNNs) to classify ncRNA sequences. To this end, this thesis first proposes an efficient approach to convert the RNA sequences into images characterizing their base-pairing probability. As a result, classifying RNA sequences is converted to an image classification problem that can be efficiently solved by available CNN-based classification models. This thesis also considers the folding potential of the ncRNAs in addition to their primary sequence. Based on the proposed approach, a benchmark image classification dataset is generated from the RFAM database of ncRNA sequences. In addition, three classical CNN models and three Siamese network models have been implemented and compared to demonstrate the superior performance and efficiency of the proposed approach. Extensive experimental results show the great potential of using deep learning approaches for RNA classification.
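
As a toy illustration of the sequence-to-image step, the sketch below builds a simple complementarity indicator matrix from an RNA sequence; the actual approach uses base-pairing probabilities computed by a folding model, so this matrix is only a stand-in for that input representation.

```python
# Toy stand-in for the sequence-to-image conversion: entry (i, j) marks whether bases i
# and j could pair (Watson-Crick or wobble). A real pipeline would instead fill the
# matrix with base-pairing probabilities from a folding algorithm.

import numpy as np

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pairing_image(seq: str) -> np.ndarray:
    n = len(seq)
    img = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            if abs(i - j) > 3 and (seq[i], seq[j]) in PAIRS:  # skip sharp hairpins
                img[i, j] = 1.0
    return img

img = pairing_image("GGGAAAUCCC")
print(img.shape)   # (10, 10) single-channel "image" ready for a CNN classifier
```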


Waqar Ali

Deterministic Scheduling of Real-Time Tasks on Heterogeneous Multicore Platforms

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Heechul Yun, Chair
Esam Eldin Mohamed Aly
Drew Davidson
Prasad Kulkarni
Shawn Keshmiri

Abstract

In recent years, the problem of real-time scheduling has become increasingly important as well as more complicated. The former is due to the proliferation of safety-critical systems into our day-to-day life, and the latter is caused by the escalating demand for high performance, which is driving multicore architectures toward consolidating various kinds of heterogeneous computing resources into smaller and smaller SoCs. Motivated by these trends, this dissertation tackles the following fundamental question: how can we guarantee predictable real-time execution while preserving high utilization on heterogeneous multicore SoCs?

 

This dissertation presents new real-time scheduling techniques for predictable and efficient scheduling of mixed criticality workloads on heterogeneous SoCs. The contributions of this dissertation include the following: 1) a novel CPU-GPU scheduling framework, called BWLOCK++, that ensures predictable execution of critical GPU kernels on integrated CPU-GPU platforms; 2) a novel gang scheduling framework, called RT-Gang, that guarantees deterministic execution of parallel real-time tasks on the multicore CPU cluster of a heterogeneous SoC; 3) optimal and heuristic algorithms for gang formation that increase real-time schedulability under the RT-Gang framework, and their extension to incorporate scheduling on accelerators in a heterogeneous SoC; and 4) a case-study evaluation using an open-source autonomous driving application that demonstrates the analytical and practical benefits of the proposed scheduling techniques.


Josiah Gray

Implementing TPM Commands in the Copland Remote Attestation Language

When & Where:


Zoom Meeting, please contact jgrisafe@ku.edu for link

Committee Members:

Perry Alexander, Chair
Andy Gill
Bo Luo


Abstract

So much of what we do on a daily basis is dependent on computers: email, social media, online gaming, banking, online shopping, virtual conference calls, and general web browsing to name a few. Most of the devices we depend on for these services are computers or servers that we do not own, nor do we have direct physical access to. We trust the underlying network to provide access to these devices remotely. But how do we know which computers/servers are safe to access, or verify that they are who they claim to be? How do we know that a distant server has not been hacked and compromised in some way?

Remote attestation is a method for establishing trust between remote systems. An "appraiser" can request information from a "target" system. The target responds with "evidence" consisting of run-time measurements, configuration information, and/or cryptographic information (i.e. hashes, keys, nonces, or other shared secrets). The appraiser can then evaluate the returned evidence to confirm the identity of the remote target, as well as determine some information about the operational state of the target, to decide whether or not the target is trustworthy.

A tool that may prove useful in remote attestation is the TPM, or "Trusted Platform Module". The TPM is a dedicated microcontroller that comes built-in to nearly all PC and laptop systems produced today. The TPM is used as a root of trust for storage and reporting, primarily through integrated cryptographic keys. This root of trust can then be used to assure the integrity of stored data or the state of the system itself. In this thesis, I will explore the various functions of the TPM and how they may be utilized in the development of the remote attestation language, "Copland".
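
As a generic illustration of the appraiser/target exchange described above, the sketch below uses an ordinary HMAC in place of TPM-protected keys and quotes; it shows only the evidence flow (nonce for freshness, measurement, integrity check) and is not the Copland language or the TPM 2.0 command set.

```python
# Generic remote-attestation sketch: the appraiser sends a fresh nonce, the target returns
# a measurement bound to that nonce, and the appraiser checks integrity and expected state.
# An HMAC with a shared key stands in for the TPM-backed keys and quotes discussed above.

import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)        # stand-in for a key a TPM would protect

def target_respond(nonce: bytes, measured_state: bytes) -> dict:
    """Target: measure its state and return evidence bound to the appraiser's nonce."""
    measurement = hashlib.sha256(measured_state).digest()
    mac = hmac.new(SHARED_KEY, nonce + measurement, hashlib.sha256).digest()
    return {"nonce": nonce, "measurement": measurement, "mac": mac}

def appraise(evidence: dict, expected_measurement: bytes) -> bool:
    """Appraiser: check freshness (nonce), integrity (MAC), and the expected state."""
    mac = hmac.new(SHARED_KEY, evidence["nonce"] + evidence["measurement"],
                   hashlib.sha256).digest()
    return hmac.compare_digest(mac, evidence["mac"]) and \
        evidence["measurement"] == expected_measurement

nonce = os.urandom(16)
good_state = b"trusted boot configuration"
evidence = target_respond(nonce, good_state)
print("target trusted?", appraise(evidence, hashlib.sha256(good_state).digest()))
```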