Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 
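
For intuition, the following minimal NumPy sketch (all parameters are illustrative, not drawn from the dissertation) shows the classic Doppler-tolerant behavior of an LFM pulse: under a fast-time Doppler shift, the matched-filter peak is largely preserved but displaced in delay (range-Doppler coupling).

import numpy as np

# Illustrative LFM parameters (not taken from the dissertation)
B, T, fs = 1e6, 100e-6, 10e6   # sweep bandwidth (Hz), pulse width (s), sample rate (Hz)
t = np.arange(0, T, 1 / fs)

lfm = np.exp(1j * np.pi * (B / T) * t**2)   # baseband LFM pulse
fd = 20e3                                   # fast-time Doppler shift (Hz)
echo = lfm * np.exp(2j * np.pi * fd * t)    # Doppler-shifted echo

# Matched-filter response: Doppler tolerance shows up as a largely
# preserved peak that is displaced in delay (range-Doppler coupling).
mf = np.correlate(echo, lfm, mode='full')
ref = np.abs(np.correlate(lfm, lfm, mode='full')).max()
k = np.argmax(np.abs(mf))
print("delay displacement (us):", (k - (len(t) - 1)) / fs * 1e6)   # ~ fd*T/B = 2 us
print("peak loss (dB):", 20 * np.log10(np.abs(mf[k]) / ref))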

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” side lobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements. 

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of the future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on the current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of the differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
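
For context, the sketch below shows the greedy online list-scheduling loop described above; the allocation rule, speedup model, and toy DAG are hypothetical stand-ins, not the strategies evaluated in the thesis.

import heapq

def online_list_schedule(preds, exec_time, alloc, P):
    """Greedy online list scheduling of moldable tasks.

    preds:     dict mapping each task to the set of its predecessors
    exec_time: exec_time(task, p) -> runtime of task on p processors
    alloc:     alloc(task, free) -> processors requested (the strategy under study)
    P:         total number of processors
    """
    indeg = {t: len(ps) for t, ps in preds.items()}
    ready = [t for t, d in indeg.items() if d == 0]
    running, free, clock, makespan = [], P, 0.0, 0.0
    while ready or running:
        # Greedily start ready tasks while processors are available.
        while ready and free > 0:
            t = ready.pop()
            p = max(1, min(alloc(t, free), free))
            free -= p
            heapq.heappush(running, (clock + exec_time(t, p), t, p))
        # Advance time to the next completion; freed successors become ready.
        clock, t, p = heapq.heappop(running)
        free += p
        makespan = max(makespan, clock)
        for s, ps in preds.items():
            if t in ps:
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    return makespan

# Toy diamond DAG, linear speedup model, "take half of what is free" rule.
preds = {'a': set(), 'b': {'a'}, 'c': {'a'}, 'd': {'b', 'c'}}
print(online_list_schedule(preds, lambda t, p: 8.0 / p, lambda t, free: free // 2, P=4))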


Past Defense Notices

Lokesh Kaki

An Automatic Image Stitching Software with Customizable Parameters and a Graphical User Interface

When & Where:


2001 B Eaton Hall

Committee Members:

Richard Wang, Chair
Esam El-Araby
Jerzy Grzymala-Busse


Abstract

Image stitching is one of the most widely used computer vision algorithms, with a broad range of applications such as image stabilization, high-resolution photomosaics, object insertion, 3D image reconstruction, and satellite imaging. Extracting features from each input image, determining the image matches, and then estimating the homography for each matched pair is the core procedure in most feature-based image stitching techniques. Several state-of-the-art techniques, such as the scale-invariant feature transform (SIFT), random sample consensus (RANSAC), and the direct linear transformation (DLT), have been proposed for feature detection, extraction, matching, and homography estimation. However, using these algorithms with fixed parameters does not usually work well for creating seamless, natural-looking panoramas: the parameter values that work best for one set of images may not work equally well for images taken by a different camera or under different conditions. Hence, parameter tuning is as important as choosing the right set of algorithms for the effective performance of any image stitching pipeline.

In this project, a graphical user interface is designed and programmed to tune a total of 32 parameters, including basic ones such as straightening, cropping, setting the maximum output image size, and setting the focal length. It also contains several advanced parameters, such as the number of RANSAC iterations, the RANSAC inlier threshold, the extrema threshold, and the Gaussian window size. The image stitching algorithm used in this project comprises SIFT, DLT, RANSAC, warping, straightening, bundle adjustment, and blending techniques. Once the given images are stitched together, the output image can be further analyzed inside the user interface by clicking on any particular point; the interface returns the corresponding input image that contributed to the selected point, along with its GPS coordinates, altitude, and camera focal length from its metadata. The developed software has been successfully tested on various diverse datasets, and the customized parameters with corresponding results, as well as timer logs, are tabulated in this report. The software is built for both Windows and Linux operating systems as part of this project.
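
This is not the author's GUI tool, but a minimal OpenCV sketch of the SIFT + RANSAC + homography pipeline the report describes, for two images with illustrative filenames; the ransac_thresh and ratio arguments hint at why tuning matters, since both directly affect which matches survive and hence the estimated homography.

import cv2
import numpy as np

def stitch_pair(img1, img2, ransac_thresh=4.0, ratio=0.75):
    """Estimate a homography from img2 to img1 and warp both onto one canvas."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # Lowe's ratio test on k-nearest-neighbour descriptor matches
    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(d2, d1, k=2)
               if m.distance < ratio * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC-based homography estimation (DLT on the inlier set)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[0:h, 0:w] = img1
    return canvas

# 'left.jpg' / 'right.jpg' are hypothetical input filenames
pano = stitch_pair(cv2.imread('left.jpg'), cv2.imread('right.jpg'))
cv2.imwrite('pano.jpg', pano)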

 


Mohammad Isyroqi Fathan

Comparative Study on Polyp Localization and Classification on Colonoscopy Video

When & Where:


250 Nichols Hall

Committee Members:

Guanghui Wang, Chair
Bo Luo
James Miller


Abstract

Colorectal cancer is one of the most common types of cancer, with a high mortality rate. It typically develops from small clumps of benign cells called polyps. An adenomatous polyp has a higher chance of developing into cancer than a hyperplastic polyp. Colonoscopy is the preferred procedure for colorectal cancer screening and for minimizing risk by performing a biopsy on found polyps. Thus, a good polyp detection model can assist physicians and increase the effectiveness of colonoscopy. Several models using handcrafted features and deep learning approaches have been proposed for the polyp detection task.

In this study, we compare the performance of previous state-of-the-art general object detection models for polyp detection and classification (into adenomatous and hyperplastic classes). Specifically, we compare FasterRCNN, SSD, YOLOv3, RefineDet, RetinaNet, and FasterRCNN with a DetNet backbone. This comparative study serves as an initial analysis of the effectiveness of these models and informs the choice of a base model that we will improve further for polyp detection.


Lei Wang

I Know What You Type on Your Phone: Keystroke Inference on Android Device Using Deep Learning

When & Where:


246 Nichols Hall

Committee Members:

Bo Luo, Chair
Fengjun Li
Guanghui Wang


Abstract

Given a list of smartphone sensor readings, such as the accelerometer, gyroscope, and light sensor, is there enough information present to predict a user’s input without access to either the raw text or a keyboard log? The increasing use of smartphones as personal devices to access sensitive information on the go has put user privacy at risk. As technology advances rapidly, smartphones are now equipped with multiple sensors that measure user motion, temperature, and brightness to provide constant feedback to applications, e.g., for accurate and current weather forecasts, GPS information, and so on. In the Android ecosystem, sensor readings can be accessed without user permissions, and this makes Android devices vulnerable to various side-channel attacks.

In this thesis, we first create a native Android app to collect approximately 20,700 keypresses from 30 volunteers. The text used for the data collection is carefully selected based on a bigram analysis we run on over 1.3 million tweets. We then present two approaches (single keypress and bigram) for feature extraction; these features are constructed from accelerometer, gyroscope, and light sensor readings. A deep neural network with four hidden layers, trained with categorical cross-entropy loss, is proposed as the baseline for this work and achieves an accuracy of 47%. A multi-view model is then proposed, in which multiple views are extracted and the performance of each combination of views is compared.
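
A minimal Keras sketch of a four-hidden-layer classifier of the kind described; the layer widths, the 26-class output, and the 30-dimensional feature vectors are illustrative assumptions, and random dummy data stands in for the collected keypress features.

import numpy as np
from tensorflow import keras

num_features, num_classes = 30, 26   # assumed sensor-feature size and key classes

model = keras.Sequential([
    keras.layers.Input(shape=(num_features,)),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',   # the loss; accuracy is the reported metric
              metrics=['accuracy'])

# Dummy data in place of the real accelerometer/gyroscope/light features
X = np.random.randn(1000, num_features).astype('float32')
y = keras.utils.to_categorical(np.random.randint(num_classes, size=1000), num_classes)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)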


Wenchi Ma

Deep Neural Network based Object Detection and Regularization in Deep Learning

When & Where:


246 Nichols Hall

Committee Members:

Richard Wang, Chair
Arvin Agah
Bo Luo
Heechul Yun
Haiyang Chao

Abstract

Feature learning, scene understanding, and task generalization are persistent goals in deep learning-based computer vision. A number of object detectors with various network structures and algorithms have been proposed to learn more effective features, to extract more contextual and semantic information, and to achieve more robust and accurate performance on different datasets. Nevertheless, the problem is still not well addressed in practical applications. One major issue lies in inefficient feature learning and propagation in challenging situations such as small objects, occlusion, and poor illumination. Another big issue is poor generalization ability on datasets with different feature distributions.

The study aims to explore different learning frameworks and strategies to solve the above issues. (1) We propose a new model that makes full use of features ranging from fine details to semantic ones for better detection of small and occluded objects; the proposed model emphasizes the effectiveness of semantic and contextual information from features produced in high-level layers. (2) To achieve more efficient learning, we propose near-orthogonality regularization, which takes neuron redundancy into consideration, to generate better deep learning models. (3) We are currently working on tightening object localization by integrating the localization score into non-maximum suppression (NMS) to achieve more accurate detection results, and on domain-adaptive learning that encourages learning models to acquire higher generalization ability for domain transfer.
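
One common way to implement an orthogonality penalty of this flavor is the soft regularizer sketched below in PyTorch; the dissertation's near-orthogonality formulation differs in detail, so treat this as the baseline idea only.

import torch

def orthogonality_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality regularizer ||W W^T - I||_F^2 on a layer's weight.

    Rows of W are treated as filters; pushing their Gram matrix toward the
    identity discourages redundant neurons. Added to the task loss with a
    small coefficient.
    """
    W = weight.flatten(1)                      # (out_features, -1)
    gram = W @ W.t()
    eye = torch.eye(W.size(0), device=W.device)
    return ((gram - eye) ** 2).sum()

layer = torch.nn.Linear(128, 64)
loss = 1e-4 * orthogonality_penalty(layer.weight)   # hypothetical coefficient
loss.backward()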

 


Mahdi Jafarishiadeh

New Topology and Improved Control of Modular Multilevel Based Converters

When & Where:


2001 B Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
James Stiles
Xiaoli (Laura) Li

Abstract

Trends toward large-scale integration and the high-power application of green energy resources necessitate efficient power converter topologies such as multilevel converters. Multilevel inverters are effective solutions for high-power and medium-voltage DC-to-AC conversion due to their higher efficiency, provision of system redundancy, and generation of a near-sinusoidal output voltage waveform. Recently, the modular multilevel converter (MMC) has become increasingly attractive. Improving the harmonic profile of the output voltage requires increasing the number of output voltage levels. However, this would require increasing the number of submodules (SMs) and power semiconductor devices, along with their associated gate driver and protection circuitry, making the overall multilevel converter complex and expensive. In particular, the large number of bulky capacitors required in the SMs of a conventional MMC is seen as a major obstacle. This work proposes an MMC-based multilevel converter that provides the same output voltage as the conventional MMC but with a reduced number of bulky capacitors, achieved by introducing an extra middle arm to the conventional MMC. Because the dynamic equations of the proposed converter are similar to those of the conventional MMC, several voltage-balancing control methods previously developed for conventional MMCs are applicable to the proposed MMC with minimal effort. A comparative loss analysis of the conventional MMC and the proposed multilevel converter under different power factors and modulation indices illustrates the lower switching loss of the proposed MMC. In addition, a new voltage balancing technique based on carrier-disposition pulse width modulation for the modular multilevel converter is proposed.

The second part of this work focuses on improved control of MMC-based high-power DC/DC converters. Medium-voltage DC (MVDC) and high-voltage DC (HVDC) grids have been the focus of numerous research studies in recent years due to their increasing applications in rapidly growing grid-connected renewable energy systems, such as wind and solar farms. MMC-based DC/DC converters are employed for collecting power from renewable energy sources. Among the various DC/DC converter topologies, the MMC-based DC/DC converter with a medium-frequency (MF) transformer is valuable due to its numerous advantages; in particular, operating the ac-link transformer at medium frequency significantly reduces the size of the MMC arm capacitors, the ac-link transformer, and the arm inductors. As such, this work focuses on improving the control of isolated MMC-based DC/DC (IMMDC) converters. Single phase shift (SPS) control is a popular method for controlling power transfer in IMMDC converters. This work proposes conjoined phase shift-amplitude ratio index (PSAR) control, which takes the amplitude ratio indices of the MMC legs on the MF transformer’s secondary side as additional control variables. Compared with SPS control, PSAR control not only provides a wider transmission power range and enhances the converter’s operational flexibility, but also reduces the current stress of the medium-frequency transformer and the power switches of the MMCs. An algorithm is developed for simple implementation of PSAR control at the least-current-stress operating point. Hardware-in-the-loop results confirm the theoretical outcomes of the proposed control method.
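
For reference, the classical single-phase-shift power-transfer relation for a two-port phase-shifted DC/DC stage (the dual-active-bridge form that SPS control builds on) is sketched below; this is background, not the dissertation's IMMDC or PSAR analysis, and all numbers are illustrative.

import numpy as np

def sps_power(V1, V2, phi, f, L, n=1.0):
    """Classical SPS power transfer of a phase-shifted two-port DC/DC stage:
        P = n*V1*V2*phi*(pi - |phi|) / (2*pi^2 * f * L)
    phi: phase shift in radians; f: switching frequency;
    L: total ac-link (leakage) inductance referred to one side; n: turns ratio.
    """
    return n * V1 * V2 * phi * (np.pi - abs(phi)) / (2 * np.pi**2 * f * L)

# Power is maximized at phi = +/- pi/2 and is monotone below that,
# which is what makes the single phase shift a convenient control variable.
print(sps_power(V1=5e3, V2=5e3, phi=np.pi / 6, f=5e3, L=1e-3))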


Luyao Shang

Memory Based Luby Transform Codes for Delay Sensitive Communication Systems

When & Where:


246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Taejoon Kim
David Petr
Tyrone Duncan

Abstract

As upcoming fifth-generation (5G) and future wireless networks are envisioned for areas such as augmented and virtual reality, industrial control, automated driving or flying, and robotics, the requirement of supporting ultra-reliable low-latency communications (URLLC) is more urgent than ever. From the channel coding perspective, URLLC requires codewords to be transmitted in finite block lengths. In this regard, we propose novel encoding algorithms for finite-length Luby transform (LT) codes and analyze their performance behavior.

Luby transform (LT) codes, the first practical realization and the fundamental core of fountain codes, play a key role in the fountain code family. Recently, researchers have shown that the performance of LT codes at finite block lengths can be improved by adding memory to the encoder. However, that work utilizes only one memory element, leaving whether and how to exploit more memory an open problem. To explore this unknown, the proposed research aims to 1) propose an encoding algorithm that utilizes one additional memory element and compare its performance with the existing work; 2) generalize the memory-based encoding method to arbitrary memory orders and mathematically analyze its performance; 3) find the optimal memory order in terms of bit error rate (BER), frame error rate (FER), and decoding convergence speed; and 4) apply the memory-based encoding algorithm to additive white Gaussian noise (AWGN) channels and analyze its performance.
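
For background, a memoryless LT encoding step looks like the sketch below; the degree distribution is an illustrative stand-in rather than an optimized robust soliton, and the thesis's memory-based encoders bias these random choices using previously selected symbols.

import random

def lt_encode_symbol(source, degree_dist, rng=random):
    """One memoryless LT encoding step: draw a degree d from the degree
    distribution, pick d distinct source symbols uniformly, and XOR them."""
    d = rng.choices(range(1, len(degree_dist) + 1), weights=degree_dist)[0]
    idx = rng.sample(range(len(source)), d)
    sym = 0
    for i in idx:
        sym ^= source[i]
    return idx, sym

# k = 8 source bytes and an illustrative (unoptimized) degree distribution
source = [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0]
dist = [0.10, 0.50, 0.20, 0.10, 0.05, 0.03, 0.01, 0.01]
for _ in range(3):
    print(lt_encode_symbol(source, dist))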


Saleh Mohamed Eshtaiwi

A New Three Phase Photovoltaic Energy Harvesting System for Generation of Balanced Voltages in Presence of Partial Shading, Module Mismatch, and Unequal Maximum Power Points

When & Where:


2001 B Eaton Hall

Committee Members:

Reza Ahmadi , Chair
Christopher Allen
Jerzy Grzymala-Busse
Rongqing Hui
Elaina Sutley

Abstract

The worldwide energy demand is growing quickly, with an anticipated rate of growth of 48% from 2012 to 2040. Consequently, investments in all forms of renewable energy generation systems have been growing rapidly. Increased use of clean renewable energy resources such as hydropower, wind, solar, geothermal, and biomass is expected to noticeably alleviate many present environmental concerns associated with fossil fuel-based energy generation. In recent years, wind and solar energy have gained the most attention among renewable resources. As a result, both have become the target of extensive research and development for dynamic performance optimization, cost reduction, and power reliability assurance.

The performance of photovoltaic (PV) systems is highly affected by environmental and ambient conditions such as irradiance fluctuations and temperature swings. Furthermore, the initial capital cost for establishing the PV infrastructure is very high. Therefore, it is essential that PV systems always harvest the maximum energy possible by operating at the most efficient operating point, i.e., the maximum power point (MPP), to increase conversion efficiency and thus achieve the lowest cost of captured energy.

This dissertation is an effort to develop a new PV conversion system for large-scale grid-connected PV systems that improves on conventional systems by balancing voltage mismatches between the PV modules; to that end, it analyzes the theoretical models of three selected DC/DC converters. The work first introduces a new adaptive maximum PV energy extraction technique for grid-tied PV systems. It then supplements the proposed technique with a global search approach to distinguish the absolute maximum power peak among multiple peaks under partially shaded PV module conditions. Next, it proposes an adaptive MPP tracking (MPPT) strategy based on the concept of model predictive control (MPC), in conjunction with a new current-sensor-less approach that reduces the number of sensors required in the system. Finally, this work proposes a power balancing technique for injecting balanced three-phase power into the grid using a cascaded H-bridge (CHB) converter topology, which brings together the entire system and results in the final proposed PV power system. The resulting PV system offers enhanced reliability by guaranteeing effective operation under unbalanced phase voltages caused by severe partial shading.
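
For orientation, the hill-climbing idea that MPPT methods build on can be sketched with the classic perturb-and-observe loop below; this is a baseline illustration only, not the proposed MPC-based, current-sensor-less tracker, and the PV curve and step size are toy assumptions.

def perturb_and_observe(read_pv_power, set_voltage, v0, step=0.5, iters=100):
    """Classic P&O MPPT: perturb the operating voltage and keep moving in
    whichever direction increases extracted power."""
    v, direction = v0, 1.0
    p_prev = read_pv_power(v)
    for _ in range(iters):
        v += direction * step
        set_voltage(v)                 # command the converter operating point
        p = read_pv_power(v)
        if p < p_prev:                 # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy single-peak PV power curve with its MPP at 30 V
curve = lambda v: max(0.0, 100 - (v - 30) ** 2)
print(perturb_and_observe(curve, lambda v: None, v0=20.0))   # settles near 30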

The developed grid-connected PV solar system is evaluated using simulations under realistic dynamic ambient conditions, including partial and full shading, and the obtained results confirm its effectiveness and merits compared to conventional systems.


Shruti Goel

DDoS Intrusion Detection using Machine Learning Techniques

When & Where:


250 Nichols Hall

Committee Members:

Alex Bardas, Chair
Fengjun Li
Bo Luo


Abstract

Organizations are becoming more exposed to security threats due to the shift toward cloud infrastructure and IoT devices. One growing category of cyber threats is Distributed Denial of Service (DDoS) attacks, which are hard to detect due to evolving attack patterns and increasing data volumes; manually creating filter rules to distinguish legitimate from malicious traffic is therefore a complex task. This work explores a supervised machine learning approach for DDoS detection. The proposed model uses a step-forward feature selection method to extract the 15 best network features and a random forest classifier to detect DDoS traffic. This solution can be used as an automatic detection algorithm in the DDoS mitigation pipelines implemented in up-to-date DDoS security solutions.
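
A minimal scikit-learn sketch of the described pipeline; the synthetic data and hyperparameters are stand-ins for the thesis's network-flow dataset and tuned settings.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled network-flow features (benign vs. DDoS)
X, y = make_classification(n_samples=2000, n_features=40, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
# Step-forward selection of the 15 best features, as described above
sfs = SequentialFeatureSelector(rf, n_features_to_select=15, direction='forward')
sfs.fit(X_tr, y_tr)

rf.fit(sfs.transform(X_tr), y_tr)
print(classification_report(y_te, rf.predict(sfs.transform(X_te))))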


Hayder Almosa

Downlink Achievable Rate Analysis for FDD Massive MIMO Systems

When & Where:


129 Nichols Hall

Committee Members:

Erik Perrins , Chair
Lingjia Liu
Shannon Blunt
Rongqing Hui
Hongyi Cai

Abstract

Multiple-input multiple-output (MIMO) systems with large-scale transmit antenna arrays, often called massive MIMO, are a very promising direction for 5G due to their ability to increase capacity and enhance both spectrum and energy efficiency. To realize the benefits of massive MIMO systems, accurate downlink channel state information at the transmitter (CSIT) is essential for downlink beamforming and resource allocation. Conventional approaches to obtaining CSIT for FDD massive MIMO systems require downlink training and CSI feedback. However, such training incurs a large overhead for massive MIMO systems because of the large dimensionality of the channel matrix. In this dissertation, we improve the performance of FDD massive MIMO networks in terms of downlink training overhead reduction by designing an efficient downlink beamforming method and developing a new algorithm to estimate the channel state information based on compressive sensing techniques. First, we design an efficient downlink beamforming method based on partial CSI: by exploiting the relationship between uplink directions of arrival (DoAs) and downlink directions of departure (DoDs), we derive an expression for the estimated downlink DoDs, which is then used for downlink beamforming. Second, by exploiting the sparsity structure of the downlink channel matrix, we develop an algorithm that selects the best features from the measurement matrix to obtain efficient CSIT acquisition, reducing the downlink training overhead compared with conventional LS/MMSE estimators. In both cases, we compare the performance of our proposed beamforming method with traditional methods in terms of downlink achievable rate, and simulation results show that our proposed method outperforms the traditional beamforming methods.
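
As a generic illustration of the compressive-sensing ingredient, the sketch below recovers a synthetic sparse channel from few pilot measurements using orthogonal matching pursuit; this is a textbook baseline, not the dissertation's feature-selection algorithm, and all dimensions are illustrative.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

N, M, s = 128, 40, 5          # antennas, pilot measurements, channel sparsity
rng = np.random.default_rng(0)

# Sparse channel: only s significant coefficients in the sparsifying basis
h = np.zeros(N)
h[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # training/measurement matrix
y = Phi @ h + 0.01 * rng.standard_normal(M)      # noisy compressed pilots

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=s)
omp.fit(Phi, y)
h_hat = omp.coef_
print("relative error:", np.linalg.norm(h - h_hat) / np.linalg.norm(h))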


Naresh Kumar Sampath Kumar

Complexity of Rules Sets in Mining Incomplete Data Using Characteristic Sets and Generalized Maximal Consistent Blocks

When & Where:


2001 B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Richard Wang


Abstract

The process of going through data to discover hidden connections and predict future trends has a long history. In this data-driven world, data mining is an important process for extracting knowledge or insights from data in various forms. It uncovers credible unknown patterns that are significant in solving many problems. Data mining includes several techniques, such as classification, clustering, and prediction. We discuss classification using a technique called rule induction, with four different approaches.

We compare the complexity of rule sets induced using characteristic sets and generalized maximal consistent blocks. The complexity of a rule set is determined by the total number of rules induced for a given data set and the total number of conditions present in each rule. We used incomplete data sets, i.e., data sets with missing attribute values, to induce rules. Both methods were implemented and analyzed to check how each influences complexity. Preliminary results suggest that the choice between characteristic sets and generalized maximal consistent blocks is inconsequential, but the cardinality of the rule sets is always smaller for incomplete data sets with “do not care” conditions. Thus, the choice between interpretations of the missing attribute value is more important than the choice between characteristic sets and generalized maximal consistent blocks.
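
The complexity measure itself is straightforward to compute; a minimal sketch, assuming rules are represented as (conditions, decision) pairs with conditions given as (attribute, value) tests:

def rule_set_complexity(rules):
    """Complexity of a rule set as used above: the number of rules and the
    total number of conditions across all rules."""
    n_rules = len(rules)
    n_conditions = sum(len(conds) for conds, _ in rules)
    return n_rules, n_conditions

# Toy rule set: two rules with 2 and 1 conditions respectively
rules = [([('temperature', 'high'), ('headache', 'yes')], 'flu'),
         ([('headache', 'no')], 'healthy')]
print(rule_set_complexity(rules))   # -> (2, 3)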