Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

No upcoming defense notices for now!

Past Defense Notices


Tianxiao Zhang

Efficient and Effective Convolutional Neural Networks for Object Detection and Recognition

When & Where:


Nichols Hall, Room 246

Committee Members:

Bo Luo, Chair
Prasad Kulkarni
Fengjun Li
Cuncong Zhong
Guanghui Wang

Abstract

With the development of Convolutional Neural Networks (CNNs), computer vision has entered a new era, and the performance of image classification, object detection, segmentation, and recognition has improved significantly. Object detection, one of the fundamental problems in computer vision, is a necessary component of many computer vision tasks, such as image and video understanding, object tracking, and instance segmentation. In object detection, we must not only recognize all defined objects in images or videos but also localize them, which makes the task difficult to solve perfectly in real-world scenarios.

In this work, we aim to improve the performance of object detection and localization by adopting more efficient and effective CNN models. (1) We propose an effective and efficient approach for real-time detection and tracking of small golf balls based on object detection and the Kalman filter. For this purpose, we collected and labeled thousands of golf ball images to train the learning model. We also implemented several classical object detection models and compared their performance in terms of detection precision and speed. (2) To address the domain shift problem in object detection, we propose to employ generative adversarial networks (GANs) to generate new images in different domains and then concatenate the original RGB images with their corresponding GAN-generated fake images to form a 6-channel representation of the image content. (3) We propose a strategy to improve label assignment in modern object detection models. The IoU (Intersection over Union) thresholds between the pre-defined anchors and the ground-truth bounding boxes are critical to defining positive and negative samples. Instead of using fixed thresholds or adaptive thresholds based on statistics, we introduce the model's predictions into the label assignment paradigm to dynamically define positive and negative samples, so that more high-quality samples can be selected as positives. The strategy reduces the discrepancy between the classification scores and the IoU scores and yields more accurate bounding boxes.
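The prediction-aware assignment idea can be pictured with a short sketch. The quality score, the weighting exponent alpha, and the top-k selection below are illustrative assumptions for exposition, not the exact formulation defended in the thesis: candidate anchors are ranked by combining the predicted classification score with the IoU between the predicted box and the ground truth, and the best-ranked candidates become positive samples.

import numpy as np

def iou(boxes, gt):
    # IoU between an (N, 4) array of boxes and one ground-truth box (x1, y1, x2, y2).
    x1 = np.maximum(boxes[:, 0], gt[0]); y1 = np.maximum(boxes[:, 1], gt[1])
    x2 = np.minimum(boxes[:, 2], gt[2]); y2 = np.minimum(boxes[:, 3], gt[3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (area_b + area_g - inter + 1e-9)

def dynamic_assignment(pred_boxes, pred_scores, gt_box, top_k=9, alpha=0.5):
    # Rank candidates by a combination of predicted score and predicted-box IoU,
    # then mark the top_k candidates as positive samples (1) and the rest negative (0).
    quality = (pred_scores ** alpha) * (iou(pred_boxes, gt_box) ** (1 - alpha))
    positives = np.argsort(-quality)[:top_k]
    labels = np.zeros(len(pred_boxes), dtype=int)
    labels[positives] = 1
    return labels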


Xiangyu Chen

Toward Data Efficient Learning in Computer Vision

When & Where:


Nichols Hall, Room 246

Committee Members:

Cuncong Zhong, Chair
Prasad Kulkarni
Fengjun Li
Bo Luo
Guanghui Wang

Abstract

Deep learning delivers leading performance in many areas of computer vision. As the number of parameters grows, deep neural networks usually require a large amount of data to train a good model. However, collecting and labeling a large dataset is not always realistic, e.g., when recognizing rare diseases in the medical field. In addition, both collecting and labeling data are labor-intensive and time-consuming. In contrast, studies show that humans can recognize new categories from even a single example, which is the opposite of how current machine learning algorithms operate. Thus, data-efficient learning, where the scale of labeled data is relatively small, has attracted increased attention recently. According to the key components of machine learning algorithms, data-efficient learning algorithms can be divided into three categories: data-based, model-based, and optimization-based. In this study, we investigate two data-based models and one model-based approach.

The first, data-based direction is to increase data quantity: the most direct way toward data-efficient learning is to generate more data to mimic data-rich scenarios. To achieve this, we propose to integrate both spatial and Discrete Cosine Transformation (DCT) based frequency representations to fine-tune the classifier. In addition to quantity, another property of data is its quality to the model, which differs from its quality to human eyes. Since language carries denser information than natural images, we propose, by analogy to language, to explicitly increase the input information density in the frequency domain. The goal of model-based methods in data-efficient learning is mainly to make models converge faster. After carefully examining the self-attention modules in Vision Transformers, we discover that trivial attention, by its sheer volume, overwhelms useful non-trivial attention. To solve this issue, we propose to divide attention weights into trivial and non-trivial ones by a threshold and to suppress the accumulated trivial attention weights. Extensive experiments have been performed to demonstrate the effectiveness of the proposed models.
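As a rough illustration of the attention-suppression idea, the sketch below zeroes out attention weights that fall under a threshold and renormalizes the remainder; the threshold value and the renormalization step are assumptions made for this example, not the exact scheme in the thesis.

import torch

def suppress_trivial_attention(attn, tau=0.01):
    # attn: (batch, heads, tokens, tokens) softmax attention weights.
    trivial = attn < tau                                   # weights deemed trivial
    kept = attn.masked_fill(trivial, 0.0)                  # suppress trivial weights
    # Renormalize each row so the remaining non-trivial weights sum to 1.
    kept = kept / kept.sum(dim=-1, keepdim=True).clamp_min(1e-9)
    return kept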


Yousif Dafalla

Web-Armour: Mitigating Reconnaissance and Vulnerability Scanning with Injecting Scan-Impeding Delays in Web Deployments

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
ZJ Wang

Abstract

Scanning hosts on the internet for vulnerable devices and services is a key step in numerous cyberattacks. Previous work has shown that scanning is a widespread phenomenon on the internet and commonly targets web application/server deployments. Given that automated scanning is a crucial step in many cyberattacks, it would be beneficial to make it more difficult for adversaries to perform such activity.

In this work, we propose Web-Armour, a mitigation approach against adversarial reconnaissance and vulnerability scanning of web deployments. The proposed approach relies on injecting scan-impeding delays into infrequently or rarely used portions of a web deployment. Web-Armour has two goals: first, to increase the cost for attackers to perform automated reconnaissance and vulnerability scanning; second, to introduce minimal to negligible performance overhead for benign users of the deployment. We evaluate Web-Armour on live environments operated by real users and on different controlled (offline) scenarios. We show that Web-Armour can effectively thwart reconnaissance and internet-wide scanning.
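The core mechanism can be pictured with a small, hypothetical middleware sketch: requests to paths that have rarely been visited so far receive a randomized delay, which slows automated crawlers while leaving popular, benign paths untouched. The rarity threshold, delay bounds, and Flask setup are illustrative assumptions, not Web-Armour's actual implementation.

import random, time
from collections import Counter
from flask import Flask, request

app = Flask(__name__)
hits = Counter()          # per-path request counts observed so far
RARE_THRESHOLD = 5        # paths seen fewer times than this are treated as rarely used
MAX_DELAY_S = 2.0

@app.before_request
def impede_scanners():
    hits[request.path] += 1
    if hits[request.path] <= RARE_THRESHOLD:
        time.sleep(random.uniform(0.5, MAX_DELAY_S))   # scan-impeding delay

@app.route("/")
def index():
    return "ok"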


Sandhya Kandaswamy

An Empirical Evaluation of Multi-Resource Scheduling for Moldable Workflows

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Suzanne Shontz
Heechul Yun


Abstract

Resource scheduling plays a vital role in High-Performance Computing (HPC) systems. However, most scheduling research in HPC has focused on only a single type of resource (e.g., computing cores or I/O resources). With the advancement in hardware architectures and the increase in data-intensive HPC applications, there is a need to simultaneously embrace a diverse set of resources (e.g., computing cores, cache, memory, I/O, and network resources) in the design of runtime schedulers for improving the overall application performance. This thesis performs an empirical evaluation of a recently proposed multi-resource scheduling algorithm for minimizing the overall completion time (or makespan) of computational workflows comprised of moldable parallel jobs. Moldable parallel jobs allow the scheduler to select the resource allocations at launch time and thus can adapt to the available system resources (as compared to rigid jobs) while staying easy to design and implement (as compared to malleable jobs). The algorithm was proven to have a worst-case approximation ratio that grows linearly with the number of resource types for moldable workflows. In this thesis, a comprehensive set of simulations is conducted to empirically evaluate the performance of the algorithm using synthetic workflows generated by DAGGEN and moldable jobs that exhibit different speedup profiles. The results show that the algorithm fares better than the theoretical bound predicts, and it consistently outperforms two baseline heuristics under a variety of parameter settings, illustrating its robust practical performance.
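To make the notion of a moldable job concrete, the sketch below models a job whose execution time depends on how many units of each resource type it receives at launch time; the Amdahl-style speedup model and the weights are illustrative assumptions and do not reproduce the evaluated scheduling algorithm.

def exec_time(work, alloc, weights):
    # Amdahl-style model: each resource type parallelizes its own fraction of the work.
    serial = 1.0 - sum(weights.values())
    parallel = sum(w / max(alloc[r], 1) for r, w in weights.items())
    return work * (serial + parallel)

# A hypothetical job that is 60% core-bound, 30% I/O-bound, and 10% serial.
weights = {"cores": 0.6, "io": 0.3}
print(exec_time(100.0, {"cores": 8, "io": 2}, weights))   # one launch-time allocation
print(exec_time(100.0, {"cores": 2, "io": 8}, weights))   # a different trade-off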


Bernaldo Luc

FPGA Implementation of an FFT-Based Carrier Frequency Estimation Algorithm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Rongqing Hui


Abstract

Carrier synchronization is an essential part of digital communication systems. In essence, carrier synchronization is the process of estimating and correcting any carrier phase and frequency differences between the transmitted and received signals. Typically, carrier synchronization is achieved using a phase-locked loop (PLL) system; however, this method is unreliable when experiencing frequency offsets larger than 30 kHz. This thesis evaluates the FPGA implementation of a combined FFT- and PLL-based carrier synchronization system. The algorithm includes a non-data-aided, FFT-based frequency estimator used to initialize a data-aided, PLL-based phase estimator. The frequency estimator employs a resource-efficient strategy of averaging several small FFTs instead of using one large FFT, which yields a rough estimate of the frequency offset. Because it is initialized with this rough frequency estimate, the hybrid design allows the PLL to start in a state close to frequency lock and focus mainly on phase synchronization. The results show that the algorithm achieves performance comparable, in terms of metrics such as bit-error rate (BER) and estimator error variance, to alternative frequency estimation strategies and simulation models. Moreover, the FFT-initialized PLL approach improves the frequency acquisition range of the PLL while achieving BER performance similar to the PLL-only system.
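The averaging strategy can be sketched in a few lines of numpy: the magnitude spectra of several small FFTs are accumulated and the peak bin gives a coarse frequency-offset estimate that could initialize the PLL. The block length, sampling rate, and unmodulated test tone below are illustrative assumptions rather than the thesis's exact parameters (the actual estimator is non-data-aided and operates on modulated signals).

import numpy as np

def coarse_cfo_estimate(samples, fs, fft_len=256):
    # Accumulate the magnitude spectra of several small FFTs instead of one large FFT.
    n_blocks = len(samples) // fft_len
    spectrum = np.zeros(fft_len)
    for k in range(n_blocks):
        block = samples[k * fft_len:(k + 1) * fft_len]
        spectrum += np.abs(np.fft.fft(block))
    peak_bin = np.argmax(spectrum)
    freqs = np.fft.fftfreq(fft_len, d=1.0 / fs)
    return freqs[peak_bin]          # rough offset estimate, in Hz

# Example: a 50 kHz offset tone sampled at 1 MHz yields an estimate near 50 kHz.
fs, f_off = 1e6, 50e3
t = np.arange(4096) / fs
print(coarse_cfo_estimate(np.exp(2j * np.pi * f_off * t), fs))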


Rakshitha Vidhyashankar

An Empirical Study of Temporal Knowledge Graphs and Link Prediction Using Longitudinal Editorial Data

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Natural Language Processing (NLP) is an application of Machine Learning (ML) that focuses on deriving useful, underlying facts from the semantics of articles in order to automatically extract insights about how information can be pictured, presented, and interpreted. Knowledge graphs, as a promising medium for carrying structured linguistic information, are a desirable target for learning and visualization through artificial neural networks, in order to identify missing information and understand the hidden transitive relationships among entities. In this study, we aim to construct temporal knowledge graphs of semantic information to facilitate better visualization of editorial data. Further, a neural network-based approach for link prediction is carried out on the constructed knowledge graphs. This study uses English-language news articles from the New York Times (NYT), collected over a period of time, for the experiments. The sentences in these articles can be decomposed using Part-Of-Speech (POS) tags to give a triple t = {sub, pred, obj}. A directed graph G(V, E) is constructed from these triples, such that the set of vertices consists of the grammatical constructs that appear in the sentences and the set of edges consists of the directed relations between the constructs. The main challenge that arises with knowledge graphs is the storage constraint involved in storing the graph information; the study proposes ways in which this can be handled. Once the graphs are constructed, a neural architecture is trained to learn graph embeddings, which can be utilized to predict potentially missing links that are transitive in nature. The results are evaluated using learning-to-rank metrics such as Mean Reciprocal Rank (MRR).
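As a small illustration of the last two stages of such a pipeline, the sketch below builds a directed graph from made-up (subject, predicate, object) triples using networkx and computes Mean Reciprocal Rank over example ranks; the triples and ranks are placeholders, not data from the NYT corpus.

import networkx as nx

# Hypothetical triples of the form t = (sub, pred, obj).
triples = [("NYT", "publishes", "article"), ("article", "mentions", "election")]
G = nx.DiGraph()
for sub, pred, obj in triples:
    G.add_edge(sub, obj, label=pred)     # predicate stored as an edge attribute

def mean_reciprocal_rank(ranks):
    # ranks[i] is the 1-based position of the true entity in the i-th ranked prediction list.
    return sum(1.0 / r for r in ranks) / len(ranks)

print(mean_reciprocal_rank([1, 3, 2]))   # example: (1 + 1/3 + 1/2) / 3 = 0.611...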


Jace Kline

A Framework for Assessing Decompiler Inference Accuracy of Source-Level Program Constructs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Bo Luo


Abstract

Decompilation is the process of reverse engineering a binary program into an equivalent source code representation with the objective of recovering high-level program constructs such as functions, variables, data types, and control flow mechanisms. Decompilation is applicable in many contexts, particularly for security analysts attempting to decipher the construction and behavior of malware samples. However, due to the loss of information during compilation, this process is inherently speculative and thus prone to inaccuracy. This inherent speculation motivates the idea of an evaluation framework for decompilers.

In this work, we present a novel framework to quantitatively evaluate the inference accuracy of decompilers, regarding functions, variables, and data types. Within our framework, we develop a domain-specific language (DSL) for representing such program information from any "ground truth" or decompiler source. Using our DSL, we implement a strategy for comparing ground truth and decompiler representations of the same program. Subsequently, we extract and present insightful metrics illustrating the accuracy of decompiler inference regarding functions, variables, and data types, over a given set of benchmark programs. We leverage our framework to assess the correctness of the Ghidra decompiler when compared to ground truth information scraped from DWARF debugging information. We perform this assessment over a subset of the GNU Core Utilities (Coreutils) programs and discuss our findings.
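A hypothetical illustration of the comparison step is sketched below: variables recovered by a decompiler are matched to ground-truth variables by storage location, and their inferred types are checked. The record fields, example values, and matching rule are simplifications for exposition and are not the framework's DSL.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    location: str   # e.g. a stack offset or register name
    name: str
    ctype: str

# Made-up example records: DWARF-derived ground truth vs. decompiler output.
ground_truth = {Var("rbp-0x8", "count", "int"), Var("rbp-0x10", "buf", "char*")}
decompiled   = {Var("rbp-0x8", "local_8", "int"), Var("rbp-0x10", "local_10", "long")}

gt_by_loc = {v.location: v for v in ground_truth}
matched = [(gt_by_loc[v.location], v) for v in decompiled if v.location in gt_by_loc]
type_correct = sum(1 for g, d in matched if g.ctype == d.ctype)
print(f"recovered {len(matched)}/{len(ground_truth)} variables, "
      f"{type_correct} with the exact type")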


Jaypal Singh

EvalIt: Skill Evaluation Using Blockchain

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
David Johnson
Hongyang Sun


Abstract

Skills validation is a key issue when hiring workers. Companies and universities often face difficulties in determining an applicant's skills because certification of the skills claimed by an applicant is usually not readily verifiable, and verification is costly. Also, from an applicant's perspective, an evaluation of skills by an industry expert is more valuable than taking a generalized course with certification; most certification programs are easy and have proved not very fruitful for learning the required work skills. Blockchain has been proposed in the literature for functional verification and tamper-proof information storage in a decentralized way. "EvalIt" is a blockchain-based Dapp that addresses the above issues and guarantees some desirable properties. The Dapp facilitates skill-evaluation efforts by rewarding them with tokens that it collects from payments made by users of the platform.


Soma Pal

Properties of Profile-guided Compiler Optimization with GCC and LLVM

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Mohammad Alian
Tamzidul Hoque


Abstract

Profile-guided optimizations (PGO) are a class of sophisticated compiler transformations that employ information regarding the profile or execution time behavior of a program to improve program performance, typically speed. PGOs for popular language platforms, like C, C++, and Java, are generally regarded as a mature and mainstream technology and are supported by most standard compilers. Consequently, properties and characteristics of PGOs are assumed to be established and known but have rarely been systematically studied with multiple mainstream compilers.

The goal of this work is to explore and report some important properties of PGOs in mainstream compilers, specifically GCC and LLVM. We study the performance delivered by PGOs at the program and function levels, the impact of different execution profiles on PGO performance, and the relative PGO benefit delivered by different mainstream compilers. We also built an experimental framework to conduct this research. We expect that our work will help focus future research and assist in building frameworks to field PGOs in actual systems.


Samyak Jain

Monkeypox Detection Using Computer Vision

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
David Johnson (Co-Chair)
Hongyang Sun


Abstract

As the world recovers from the damage caused by the spread of COVID-19, the monkeypox virus poses a new threat of becoming a global pandemic. The monkeypox virus itself is not as deadly or contagious as COVID-19, but many countries report new patient cases every day, so it would not be surprising if the world faced another pandemic due to a lack of proper precautions. Recently, deep learning has shown great potential in image-based diagnostics, such as cancer detection, tumor cell identification, and COVID-19 patient detection. Since monkeypox infection manifests on human skin, images of affected skin can be captured and a similar approach can be employed for disease diagnosis. This project presents a deep learning approach for detecting monkeypox disease from skin lesion images. Several pre-trained deep learning models, such as ResNet50 and MobileNet, are deployed on the dataset to classify monkeypox and other diseases.
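A minimal transfer-learning sketch of this kind of approach is shown below, assuming a Keras/TensorFlow setup; the dataset path, image size, and two-class output are placeholders rather than the project's actual configuration.

import tensorflow as tf

# Pre-trained ImageNet backbone, used as a frozen feature extractor.
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g. monkeypox vs. other diseases
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical training call on a directory of labeled skin-lesion images:
# train_ds = tf.keras.utils.image_dataset_from_directory("data/train",
#                                                        image_size=(224, 224))
# model.fit(train_ds, epochs=5)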