Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Gordon Ariho

MULTIPASS SAR PROCESSING FOR ICE SHEET VERTICAL VELOCITY AND TOMOGRAPHY MEASUREMENTS

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

James Stiles, Chair
John Paden (Co-Chair)
Christopher Allen
Shannon Blunt
Emily Arnold

Abstract

Vertical velocity is the rate at which ice moves vertically within an ice sheet, usually measured in meters per year. This movement can occur due to various factors, including accumulation, ice deformation, basal sliding, and subglacial melting. The measurement of vertical velocities within the ice sheet can assist in determining the age of the ice and assessing the rheology of the ice, thereby mitigating uncertainties due to analytical approximations of ice flow models.

We apply differential interferometric synthetic aperture radar (DInSAR) techniques to data from the Multichannel Coherent Radar Depth Sounder (MCoRDS) to measure the vertical displacement of englacial layers within an ice sheet. DInSAR's accuracy in monitoring displacement along the radar line of sight (LOS) is usually a small fraction of the wavelength (millimeter to centimeter precision is typical). Ground-based Autonomous phase-sensitive Radio-Echo Sounder (ApRES) units have demonstrated the ability to precisely measure relative vertical velocity by taking multiple measurements from the same location on the ice. Airborne systems can make a similar measurement but can suffer from spatial baseline errors, since it is generally impossible to fly over the same stretch of ice on each pass with enough precision for the spatial baseline to be ignored. In this work, we compensate for spatial baseline errors using precise trajectory information and estimates of the cross-track layer slope obtained through direction of arrival (DOA) estimation. The current DInSAR algorithm is applied to airborne radar depth sounder data to produce results for flights near Summit camp and the EGIG (Expéditions Glaciologiques Internationales au Groenland) line in Greenland using the CReSIS toolbox. The current approach estimates the baseline error in multiple steps, each of which depends on all the values to be estimated. To overcome this drawback, we have implemented a maximum likelihood estimator that jointly estimates the vertical velocity, the cross-track internal layer slope, and the unknown baseline error due to GPS and INS (Inertial Navigation System) errors. We incorporate the Lliboutry parametric model for vertical velocity into the maximum likelihood estimator framework.
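
As a rough illustration of the displacement sensitivity behind these measurements, the standard DInSAR relation between interferometric phase and LOS displacement can be sketched in a few lines of Python; the center frequency and phase value below are illustrative assumptions, not parameters from this work.

import numpy as np

c = 3e8                       # speed of light (m/s)
fc = 195e6                    # assumed VHF center frequency (Hz)
wavelength = c / fc

delta_phi = np.deg2rad(25.0)  # interferometric phase between passes (rad)

# Two-way propagation: a LOS displacement d changes the round-trip
# path by 2d, so delta_phi = 4*pi*d / wavelength.
d_los = wavelength * delta_phi / (4 * np.pi)
print(f"LOS displacement: {d_los * 100:.2f} cm")  # ~5.34 cm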

To improve the direction of arrival estimation, we evaluate focusing matrices against other wideband DOA methods, such as wideband MLE, wideband MUSIC, and wideband MVDR, by comparing the mean squared error of the resulting DOA estimates.



Dalton Brucker-Hahn

Mishaps in Microservices: Improving Microservice Architecture Security Through Novel Service Mesh Capabilities

When & Where:


Nichols Hall, Room 129, Ron Evans Apollo Auditorium

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Huazhen Fang

Abstract

Shifting trends in modern software engineering and cloud computing have pushed system designers to leverage containerization and decompose their systems into microservice architectures. While microservice architectures emphasize scalability and ease of development, the issue of microservice explosion has emerged, stressing hosting environments and generating new challenges within this domain. Service meshes, the latest in a series of developments, are being adopted to meet these needs. Service meshes provide separation of concerns between microservice development and the operational concerns of microservice deployments, such as service discovery and networking. However, despite the benefits service meshes provide, the security demands of this domain are unmet by the current state-of-the-art offerings.


Through a series of experimental trials in a service mesh testbed, we demonstrate a need for improved security mechanisms in the state-of-the-art offerings of service meshes. After deriving a series of domain-conscious recommendations to improve the longevity and flexibility of service meshes, we design and implement our proof-of-concept service mesh system ServiceWatch. By leveraging a novel verification-in-the-loop scheme, we enable service meshes to holistically monitor and manage the microservice deployments they host. Further, through frequent, automated rotations of security artifacts (keys, certificates, and tokens), we allow the service mesh to automatically isolate and remove microservices that violate its defined network policies, with no system administrator intervention required. Extending this proof-of-concept environment, we design and implement a prototype workflow called CloudCover. CloudCover incorporates our verification-in-the-loop scheme and leverages existing tools, allowing easy adoption of these novel security mechanisms into modern systems. Under a realistic and relevant threat model, we show how our design choices and improvements are both necessary and beneficial to real-world deployments. By examining network packet captures, we provide a theoretical analysis of the scalability of these solutions in real-world networks. We further extend these trials experimentally, using an independently managed and operated cloud environment to demonstrate the practical scalability of our proposed designs to large-scale software systems. Our results indicate that the overhead introduced by ServiceWatch and CloudCover is acceptable for real-world deployments, and that the security capabilities provided effectively mitigate threats present within these environments.
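
A minimal sketch of the verification-in-the-loop rotation idea, as we read it from this abstract, is shown below in Python; the policy model, names, and rotation rule are hypothetical illustrations, not the actual ServiceWatch implementation.

from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    allowed_peers: set
    observed_peers: set = field(default_factory=set)
    credentialed: bool = True

def violates_policy(svc: Service) -> bool:
    # A service violates policy if it contacted any peer outside
    # its declared allow-list.
    return bool(svc.observed_peers - svc.allowed_peers)

def rotation_round(services):
    # One rotation: issue fresh short-lived artifacts (keys, certs,
    # tokens) only to services that pass verification. Violators are
    # simply not re-credentialed, which isolates them without any
    # administrator intervention.
    for svc in services:
        svc.credentialed = not violates_policy(svc)

services = [
    Service("orders", allowed_peers={"payments"}, observed_peers={"payments"}),
    Service("metrics", allowed_peers={"orders"}, observed_peers={"orders", "attacker"}),
]
rotation_round(services)
for svc in services:
    print(svc.name, "rotated" if svc.credentialed else "isolated")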


Justinas Lialys

Parametrically Resonant Surface Plasmon Polaritons

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Alessandro Salandrino, Chair
Kenneth Demarest
Shima Fardad
Rongqing Hui
Xinmai Yang

Abstract

The surface electromagnetic waves that propagate along a metal-dielectric or metal-air interface are called surface plasmon polaritons (SPPs). However, their tangential wavevector component is larger than that of a homogeneous plane wave in the dielectric medium, which poses a phase-matching issue: the available spatial vector in the dielectric at a given frequency is smaller than what is required to excite an SPP. The best-known techniques for bypassing this problem are the Otto and Kretschmann configurations, in which a glass prism is used to increase the available spatial vector in the dielectric or air. Other methods include evanescent-field directional coupling and optical gratings. Even with all these methods, it is still challenging to couple to SPPs having a large propagation constant.
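
The phase-matching problem can be stated compactly using the textbook SPP dispersion relation (a standard result, not specific to this thesis). For a metal of permittivity \varepsilon_m and a dielectric of permittivity \varepsilon_d, the SPP propagation constant along the interface is

k_{\mathrm{SPP}} = k_0 \sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}}, \qquad k_0 = \frac{\omega}{c},

and because \operatorname{Re}(\varepsilon_m) < -\varepsilon_d at SPP frequencies, k_{\mathrm{SPP}} > k_0 \sqrt{\varepsilon_d}, which is the largest tangential wavevector a homogeneous plane wave in the dielectric can supply. Prism coupling works by raising the available tangential component to k_x = k_0 n_{\mathrm{prism}} \sin\theta.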

A novel way to efficiently inject power into SPPs is via temporal modulation of the dielectric adjacent to the metal: the dielectric constant is modulated in time by an incident pump field. As a result of the induced changes in the dielectric constant, the spatial vector shortage is eliminated; in other words, there is enough spatial vector in the dielectric to excite SPPs. Because SPPs are widely studied for numerous applications, this method offers a new way of exciting them and thus opens new possibilities in the study of surface plasmon polaritons. One application that we discuss in detail is optical limiting.


Thomas Kramer

Time-Frequency Analysis of Waveform Diverse Designs

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Shannon Blunt, Chair
Victor Frost
James Stiles


Abstract

Waveform diversity seeks to optimize the radar waveform given the constraints and objectives of a particular task or scenario. Recent advances in electronics have significantly expanded the waveform design space. The waveforms produced by various waveform-diverse approaches possess complex structures with temporal, spectral, and spatial extents. Because many of these approaches rely on optimization, the resulting signal structures are not imagined a priori but are instead the product of algorithms. Traditional waveform analysis using the frequency spectrum, autocorrelation, and beampatterns provides the majority of the metrics of interest. But as these waveforms' structures grow more complex and the constraints on their use tighten, further aspects of each waveform's structure must be considered, especially its true occupancy of the transmission hyperspace. Time-frequency analysis can be applied to these waveforms to better understand their behavior and to inform future design. These tools are especially useful for spectrally shaped random FM waveforms as well as spatially shaped beams. Both linear and quadratic transforms are used to study the emissions in the time, frequency, and space dimensions. Insight into waveform generation is gained and future design opportunities are identified.
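
As a small illustration of the quadratic transforms mentioned above, the spectrogram of a linear FM chirp can be computed with standard tools; the waveform parameters below are illustrative, not drawn from this work.

import numpy as np
from scipy import signal

fs = 200e6                               # sample rate (Hz)
T, B = 50e-6, 50e6                       # pulse length (s), swept bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
x = np.exp(1j * np.pi * (B / T) * t**2)  # complex LFM chirp

f, tau, Sxx = signal.spectrogram(x, fs=fs, nperseg=256,
                                 noverlap=192, return_onesided=False)
# Sxx holds power versus (frequency, time); for an LFM waveform the
# energy ridge traces the linear sweep across the pulse, and random FM
# waveforms show their instantaneous-frequency wander the same way.
print(Sxx.shape)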


Past Defense Notices


Vincent Occhiogrosso

Development of Low-Cost Microwave and RF Modules for Compact, Fine-Resolution FMCW Radars

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Christopher Allen, Chair
Fernando Rodriguez-Morales, (Co-Chair)
Carl Leuschen


Abstract

The Center for Remote Sensing and Integrated Systems (CReSIS) has enabled the development of several radars for measuring ice and snow depth. One of these systems is the Ultra-Wideband (UWB) Snow Radar, which operates in the microwave range and can provide measurements with cm-scale vertical resolution. To date, renditions of this system have demanded medium to high size, weight, and power (SWaP) characteristics. To facilitate a more flexible and mobile measurement setup with these systems, it became necessary to reduce the SWaP of the radar electronics. This thesis focuses on the design of several compact RF and microwave modules enabling integration of a full UWB radar system weighing < 5 lbs and consuming < 30 W of DC power. This system is suitable for operation over either 12-18 GHz or 2-8 GHz in platforms with low SWaP requirements, such as unmanned aerial systems (UAS). The modules developed as a part of this work include a VCO-based chirp generation module, downconverter modules, and a set of modules for a receiver front end, each implemented on a low-cost laminate substrate. The chirp generator uses a phase-locked loop (PLL) based on an architecture previously developed at CReSIS and offers a small form factor with a frequency non-linearity of 0.0013% across the operating bandwidth (12-18 GHz) at sub-millisecond pulse durations. The down-conversion modules were created to allow for system operation in the S/C frequency band (2-8 GHz) as well as the default Ku band (12-18 GHz). Additionally, an RF receiver front end was designed, which includes a microwave receiver module for de-chirping and an IF module for signal conditioning before digitization. The compactness of the receiver modules enabled the demonstration of multi-channel data acquisition without multiplexing from two different aircraft. A radar test-bed largely based on this compact system was demonstrated in the laboratory and used as part of a dual-frequency instrument for a surface-based experiment in Antarctica. The laboratory performance of the miniaturized radar is comparable to the legacy 2-8 GHz snow radar and 12-18 GHz Ku-band radar systems. The 2-8 GHz system is currently being integrated into a class-I UAS.
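
For context, the cm-scale vertical resolution quoted above follows directly from the swept bandwidth through the standard FMCW range-resolution relation; the snow permittivity below is an illustrative assumption:

\Delta R = \frac{c}{2 B \sqrt{\varepsilon_r}}

With B = 6 GHz (either the 2-8 GHz or the 12-18 GHz sweep), this gives \Delta R = 2.5 cm in free space (\varepsilon_r = 1), or roughly 1.9 cm in dry snow if \varepsilon_r \approx 1.7 is assumed.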


Tianxiao Zhang

Efficient and Effective Convolutional Neural Networks for Object Detection and Recognition

When & Where:


Nichols Hall, Room 246

Committee Members:

Bo Luo, Chair
Prasad Kulkarni
Fengjun Li
Cuncong Zhong
Guanghui Wang

Abstract

With the development of Convolutional Neural Networks (CNNs), computer vision has entered a new era, and the performance of image classification, object detection, segmentation, and recognition has improved significantly. Object detection, as one of the fundamental problems in computer vision, is a necessary component of many computer vision tasks, such as image and video understanding, object tracking, instance segmentation, etc. In object detection, we must not only recognize all defined objects in images or videos but also localize them, which makes perfect performance difficult to achieve in real-world scenarios.

In this work, we aim to improve the performance of object detection and localization by adopting more efficient and effective CNN models. (1) We propose an effective and efficient approach for real-time detection and tracking of small golf balls based on object detection and the Kalman filter. For this purpose, we have collected and labeled thousands of golf ball images to train the learning model. We also implemented several classical object detection models and compared their performance in terms of detection precision and speed. (2) To address the domain shift problem in object detection, we propose to employ generative adversarial networks (GANs) to generate new images in different domains and then concatenate the original RGB images and their corresponding GAN-generated fake images to form a 6-channel representation of the image content. (3) We propose a strategy to improve label assignment in modern object detection models. The IoU (Intersection over Union) thresholds between the pre-defined anchors and the ground-truth bounding boxes are central to defining positive and negative samples. Instead of using fixed thresholds or adaptive thresholds based on statistics, we introduce the predictions into the label assignment paradigm to dynamically define positive and negative samples, so that more high-quality samples can be selected as positives. The strategy reduces the discrepancy between the classification scores and the IoU scores and yields more accurate bounding boxes.
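
For reference, the fixed-threshold baseline that strategy (3) argues against reduces to a simple IoU test per anchor; the boxes and the 0.5 threshold below are illustrative.

def iou(a, b):
    # Intersection over Union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

anchor, gt = [10, 10, 50, 50], [20, 15, 60, 55]
# Fixed-threshold label assignment: positive iff IoU >= 0.5. The
# proposed strategy instead folds the model's own predictions into
# this decision so that more high-quality samples become positives.
print("positive" if iou(anchor, gt) >= 0.5 else "negative")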


Xiangyu Chen

Toward Data Efficient Learning in Computer Vision

When & Where:


Nichols Hall, Room 246

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Fengjun Li
Bo Luo
Guanghui Wang

Abstract

Deep learning leads the performance in many areas of computer vision. Deep neural networks usually require a large amount of data to train a good model as the number of parameters grows. However, collecting and labeling a large dataset is not always realistic, e.g., for recognizing rare diseases in the medical field. In addition, both collecting and labeling data are labor-intensive and time-consuming. In contrast, studies show that humans can recognize new categories from even a single example, which is plainly the opposite of how current machine learning algorithms behave. Thus, data-efficient learning, where the labeled data scale is relatively small, has attracted increased attention recently. According to the key components of machine learning algorithms, data-efficient learning algorithms can be divided into three categories: data-based, model-based, and optimization-based. In this study, we investigate two data-based models and one model-based approach.

First, we collect more data to increase data quantity; the most direct route to data-efficient learning is to generate more data to mimic data-rich scenarios. To achieve this, we propose to integrate both spatial and Discrete Cosine Transformation (DCT) based frequency representations to fine-tune the classifier. In addition to quantity, another property of data is its quality to the model, which differs from its quality to human eyes. Since language carries denser information than natural images, we propose, to mimic language, to explicitly increase the input information density in the frequency domain. The goal of model-based methods in data-efficient learning is mainly to make models converge faster. After carefully examining the self-attention modules in Vision Transformers, we discover that trivial attention covers useful non-trivial attention because of its sheer amount. To solve this issue, we propose to divide attention weights into trivial and non-trivial ones by thresholds and to suppress the accumulated trivial attention weights. Extensive experiments have been performed to demonstrate the effectiveness of the proposed models.
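
A minimal sketch of the trivial-attention suppression idea follows, under assumed threshold and damping values (the actual rule in this work may differ):

import numpy as np

def suppress_trivial(attn, thresh=0.05, damp=0.1):
    # attn: row-stochastic (tokens x tokens) attention weights.
    # Weights below the threshold are treated as trivial attention
    # and damped so their accumulated mass cannot drown out the
    # useful non-trivial weights; rows are then renormalized.
    out = np.where(attn < thresh, attn * damp, attn)
    return out / out.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
a = rng.random((4, 4))
a /= a.sum(axis=-1, keepdims=True)
print(suppress_trivial(a).sum(axis=-1))  # rows still sum to 1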


Yousif Dafalla

Web-Armour: Mitigating Reconnaissance and Vulnerability Scanning with Injecting Scan-Impeding Delays in Web Deployments

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
ZJ Wang

Abstract

Scanning hosts on the internet for vulnerable devices and services is a key step in numerous cyberattacks. Previous work has shown that scanning is a widespread phenomenon on the internet and commonly targets web application/server deployments. Given that automated scanning is a crucial step in many cyberattacks, it would be beneficial to make it more difficult for adversaries to perform such activity.

In this work, we propose Web-Armour, a mitigation approach against adversarial reconnaissance and vulnerability scanning of web deployments. The proposed approach relies on injecting scan-impeding delays into infrequently or rarely used portions of a web deployment. Web-Armour has two goals: first, increase the cost for attackers to perform automated reconnaissance and vulnerability scanning; second, introduce minimal to negligible performance overhead for benign users of the deployment. We evaluate Web-Armour on live environments operated by real users, and on different controlled (offline) scenarios. We show that Web-Armour can effectively thwart reconnaissance and internet-wide scanning.
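
The core mechanism can be sketched as a request handler that penalizes rarely visited paths; the hit counts, threshold, and delay below are illustrative assumptions, not Web-Armour's actual schedule.

import time

path_hits = {"/": 10_000, "/login": 2_500}  # observed benign traffic

def handle_request(path):
    hits = path_hits.get(path, 0)
    if hits < 100:         # infrequently used portion of the deployment
        time.sleep(2.0)    # scan-impeding delay
    path_hits[path] = hits + 1
    return f"200 OK {path}"

print(handle_request("/"))            # popular path: no added delay
print(handle_request("/old/backup"))  # rarely used path: delayed

An exhaustive scanner that enumerates thousands of such paths accumulates the delay on nearly every probe, while benign users, whose requests concentrate on the popular paths, see little overhead.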


Sandhya Kandaswamy

An Empirical Evaluation of Multi-Resource Scheduling for Moldable Workflows

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Suzanne Shontz
Heechul Yun


Abstract

Resource scheduling plays a vital role in High-Performance Computing (HPC) systems. However, most scheduling research in HPC has focused on only a single type of resource (e.g., computing cores or I/O resources). With the advancement in hardware architectures and the increase in data-intensive HPC applications, there is a need to simultaneously embrace a diverse set of resources (e.g., computing cores, cache, memory, I/O, and network resources) in the design of runtime schedulers for improving the overall application performance. This thesis performs an empirical evaluation of a recently proposed multi-resource scheduling algorithm for minimizing the overall completion time (or makespan) of computational workflows composed of moldable parallel jobs. Moldable parallel jobs allow the scheduler to select the resource allocations at launch time and thus can adapt to the available system resources (as compared to rigid jobs) while staying easy to design and implement (as compared to malleable jobs). The algorithm has been proven to have a worst-case approximation ratio that grows linearly with the number of resource types for moldable workflows. In this thesis, a comprehensive set of simulations is conducted to empirically evaluate the performance of the algorithm using synthetic workflows generated by DAGGEN and moldable jobs that exhibit different speedup profiles. The results show that the algorithm fares better than the theoretical bound predicts, and it consistently outperforms two baseline heuristics under a variety of parameter settings, illustrating its robust practical performance.
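
A toy version of the moldable-job model helps make the trade-off concrete: under an Amdahl-type speedup profile, the scheduler's launch-time choice of allocation trades one job's speedup against the resources left for the rest of the workflow. The serial fraction below is an illustrative assumption, not a profile from the thesis.

def exec_time(t1, serial_frac, p):
    # Amdahl-type moldable job: time on p cores given serial time t1.
    return t1 * (serial_frac + (1 - serial_frac) / p)

for p in (1, 4, 16, 64):
    print(p, round(exec_time(100.0, 0.1, p), 1))  # diminishing returns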


Bernaldo Luc

FPGA Implementation of an FFT-Based Carrier Frequency Estimation Algorithm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Rongqing Hui


Abstract

Carrier synchronization is an essential part of digital communication systems. In essence, it is the process of estimating and correcting any carrier phase and frequency differences between the transmitted and received signals. Typically, carrier synchronization is achieved using a phase-locked loop (PLL); however, this method is unreliable for frequency offsets larger than 30 kHz. This thesis evaluates the FPGA implementation of a combined FFT- and PLL-based carrier synchronization system. The algorithm includes a non-data-aided, FFT-based frequency estimator used to initialize a data-aided, PLL-based phase estimator. The frequency estimator employs a resource-efficient strategy of averaging several small FFTs instead of using one large FFT, which yields a coarse estimate of the frequency offset. Because it is initialized with this coarse estimate, the hybrid design allows the PLL to start in a state close to frequency lock and focus mainly on phase synchronization. The results show that the algorithm delivers performance comparable, in metrics such as bit-error rate (BER) and estimator error variance, to alternative frequency estimation strategies and simulation models. Moreover, the FFT-initialized PLL approach improves the frequency acquisition range of the PLL while achieving similar BER performance to the PLL-only system.
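
The averaging strategy is straightforward to prototype; the sketch below (with an assumed sample rate, offset, and FFT size) averages the magnitude spectra of several small FFTs and takes the peak bin as the coarse offset used to initialize the PLL.

import numpy as np

fs, f_off = 1e6, 75e3   # sample rate and true offset (Hz); offset > 30 kHz
n = np.arange(8 * 256)
rng = np.random.default_rng(1)
x = (np.exp(2j * np.pi * f_off * n / fs)
     + 0.3 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)))

segs = x.reshape(8, 256)                              # eight small FFTs...
spec = np.abs(np.fft.fft(segs, axis=1)).mean(axis=0)  # ...averaged
k = int(np.argmax(spec))
f_hat = (k if k < 128 else k - 256) * fs / 256        # unwrap negative bins
print(f"coarse estimate: {f_hat / 1e3:.1f} kHz")      # ~74.2 kHz (bin-limited)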


Rakshitha Vidhyashankar

An empirical study of temporal knowledge graph and link prediction using longitudinal editorial data

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Natural Language Processing (NLP) is an application of Machine Learning (ML) that focuses on deriving useful, underlying facts from the semantics of articles to automatically extract insights about how information can be pictured, presented, and interpreted. Knowledge graphs, as a promising medium for carrying structured linguistic information, are a desirable target for learning and visualization through artificial neural networks, in order to identify absent information and understand the hidden transitive relationships within it. In this study, we aim to construct temporal knowledge graphs of semantic information to facilitate better visualization of editorial data. Further, a neural network-based approach for link prediction is carried out on the constructed knowledge graphs. This study uses English-language news articles from The New York Times (NYT), collected over a period of time, for its experiments. The sentences in these articles can be decomposed using Part-Of-Speech (POS) tags to give a triple t = {sub, pred, obj}. A directed graph G(V, E) is constructed from the POS tags, such that the set of vertices is the grammatical constructs that appear in the sentence and the set of edges is the directed relations between the constructs. The main challenge that arises with knowledge graphs is the storage constraint involved in storing the graph information; the study proposes ways in which this can be handled. Once these graphs are constructed, a neural architecture is trained to learn graph embeddings, which can be utilized to predict potentially missing links that are transitive in nature. The results are evaluated using learning-to-rank metrics such as Mean Reciprocal Rank (MRR).
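
For clarity, the MRR metric mentioned above averages the reciprocal rank of the true object across test queries; the ranks below are illustrative.

def mean_reciprocal_rank(ranks):
    # ranks: position of the correct entity in each query's ranked list
    return sum(1.0 / r for r in ranks) / len(ranks)

print(mean_reciprocal_rank([1, 3, 2, 10]))  # 0.483...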


Jace Kline

A Framework for Assessing Decompiler Inference Accuracy of Source-Level Program Constructs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Bo Luo


Abstract

Decompilation is the process of reverse engineering a binary program into an equivalent source code representation, with the objective of recovering high-level program constructs such as functions, variables, data types, and control flow mechanisms. Decompilation is applicable in many contexts, particularly for security analysts attempting to decipher the construction and behavior of malware samples. However, due to the loss of information during compilation, this process is naturally speculative and thus prone to inaccuracy. This inherent speculation motivates the idea of an evaluation framework for decompilers.

In this work, we present a novel framework to quantitatively evaluate the inference accuracy of decompilers, regarding functions, variables, and data types. Within our framework, we develop a domain-specific language (DSL) for representing such program information from any "ground truth" or decompiler source. Using our DSL, we implement a strategy for comparing ground truth and decompiler representations of the same program. Subsequently, we extract and present insightful metrics illustrating the accuracy of decompiler inference regarding functions, variables, and data types, over a given set of benchmark programs. We leverage our framework to assess the correctness of the Ghidra decompiler when compared to ground truth information scraped from DWARF debugging information. We perform this assessment over a subset of the GNU Core Utilities (Coreutils) programs and discuss our findings.
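
A drastically simplified version of the comparison step might align functions by start address and score recall; the records below are invented for illustration and are not from the Coreutils evaluation.

# Ground truth scraped from DWARF vs. functions inferred by a decompiler,
# keyed by start address (all values hypothetical).
ground_truth = {0x1000: "main", 0x1200: "parse_args", 0x1400: "usage"}
decompiled   = {0x1000: "main", 0x1200: "FUN_00001200"}

matched = ground_truth.keys() & decompiled.keys()
recall = len(matched) / len(ground_truth)
print(f"functions recovered: {len(matched)}/{len(ground_truth)} "
      f"(recall {recall:.2f})")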


Jaypal Singh

EvalIt: Skill Evaluation Using Blockchain

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
David Johnson
Hongyang Sun


Abstract

Skills validation is a key issue when hiring workers. Companies and universities often face difficulties in determining an applicant's skills because certification of the skills claimed by an applicant is usually not readily verifiable, and verification is costly. Also, from the applicant's perspective, evaluation by an industry expert is more valuable than a generalized course with certification; most certification programs are easy and have proven not very fruitful for learning the required work skills. Blockchain has been proposed in the literature for functional verification and tamper-proof information storage in a decentralized way. "EvalIt" is a blockchain-based Dapp that addresses the above issues and guarantees some desirable properties. The Dapp rewards skill-evaluation efforts with tokens that it collects from payments made by users of the platform.


Soma Pal

Properties of Profile-guided Compiler Optimization with GCC and LLVM

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Mohammad Alian
Tamzidul Hoque


Abstract

Profile-guided optimizations (PGO) are a class of sophisticated compiler transformations that employ information regarding the profile or execution time behavior of a program to improve program performance, typically speed. PGOs for popular language platforms, like C, C++, and Java, are generally regarded as a mature and mainstream technology and are supported by most standard compilers. Consequently, properties and characteristics of PGOs are assumed to be established and known but have rarely been systematically studied with multiple mainstream compilers.

The goal of this work is to explore and report some important properties of PGOs in mainstream compilers, specifically GCC and LLVM. We study the performance delivered by PGOs at the program and function levels, the impact of different execution profiles on PGO performance, and the relative PGO benefit delivered by different mainstream compilers. We also built an experimental framework to conduct this research. We expect that our work will help focus future research and assist in building frameworks to field PGOs in actual systems.
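
For reference, both compilers follow the same instrument-train-reoptimize pattern; typical command sequences look like the following (the program name and training input are placeholders):

# GCC
gcc -O2 -fprofile-generate -o app app.c
./app < training.input              # writes .gcda profile data on exit
gcc -O2 -fprofile-use -o app app.c

# LLVM/Clang
clang -O2 -fprofile-instr-generate -o app app.c
LLVM_PROFILE_FILE=app.profraw ./app < training.input
llvm-profdata merge -output=app.profdata app.profraw
clang -O2 -fprofile-instr-use=app.profdata -o app app.c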