Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and have cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, run a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments which suggest that most of the extra memory usage, compared to native applications, could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
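For readers unfamiliar with smaps, the minimal sketch below shows the kind of measurement it enables: summing the proportional set size (Pss) across every mapping of a process by parsing /proc/<pid>/smaps. Pss charges shared pages fractionally, which ordinary RSS-based profilers do not. This is an illustration only, not the author's smaps-profiler tool.

```python
# Sum Pss across all mappings of a process via /proc/<pid>/smaps (Linux only).
import sys

def total_pss_kb(pid: int) -> int:
    total = 0
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            if line.startswith("Pss:"):
                total += int(line.split()[1])  # value is reported in kB
    return total

if __name__ == "__main__":
    pid = int(sys.argv[1])
    print(f"PID {pid}: {total_pss_kb(pid)} kB PSS")
```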


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
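To make item (ii) concrete: npm automatically runs lifecycle scripts named "preinstall", "install", and "postinstall" at install time, so a quick audit of an installed dependency tree can flag packages that declare them. The sketch below is a hedged illustration of such an audit; the path and heuristic are assumptions, not the dissertation's tooling.

```python
# Flag packages in node_modules that declare install-time lifecycle scripts.
import json, pathlib

LIFECYCLE = {"preinstall", "install", "postinstall"}

def flag_install_scripts(node_modules: str):
    for manifest in pathlib.Path(node_modules).glob("**/package.json"):
        try:
            scripts = json.loads(manifest.read_text()).get("scripts", {})
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue
        hits = LIFECYCLE & scripts.keys()
        if hits:
            print(f"{manifest.parent.name}: declares {sorted(hits)}")

flag_install_scripts("node_modules")
```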


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar and communications services simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements of the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the composite DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth from the communications signal.

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.
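For intuition about the phase-attached construction, the hedged numpy sketch below builds constant-envelope pulses whose phase is the sum of an assumed radar phase (an LFM chirp, chosen only for illustration) and random communications phase symbols, then averages |FFT|^2 across pulses to estimate the expected PSD that the described optimization would shape toward a template.

```python
import numpy as np

N, pulses, sym_len = 1024, 200, 16
t = np.arange(N) / N
phi_radar = np.pi * 0.25 * N * t**2              # LFM phase (assumed radar component)
psd = np.zeros(N)
rng = np.random.default_rng(0)
for _ in range(pulses):
    symbols = rng.choice([0.0, np.pi], size=N // sym_len)  # random comm phase symbols
    chips = np.repeat(symbols, sym_len)          # hold each symbol for sym_len samples
    s = np.exp(1j * (phi_radar + chips))         # constant-envelope PARC-style pulse
    psd += np.abs(np.fft.fft(s))**2
psd /= pulses                                    # estimate of the expected PSD
```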


Qua Nguyen

Hybrid Array and Privacy-Preserving Signaling Optimization for NextG Wireless Communications

When & Where:


Zoom defense; please email jgrisafe@ku.edu for the link.

Committee Members:

Erik Perrins, Chair
Morteza Hashemi
Zijun Yao
Taejoon Kim
KC Kong

Abstract

This PhD research tackles two critical challenges in NextG wireless networks: hybrid precoder design for wideband sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and privacy-preserving federated learning (FL) over wireless networks.

In the first part, we propose a novel hybrid precoding framework that integrates true-time delay (TTD) devices and phase shifters (PS) to counteract the beam squint effect, a significant challenge in wideband sub-THz massive MIMO systems that leads to considerable loss in array gain. Unlike previous methods that designed only the TTD values while fixing the PS values and assuming unbounded time delays, our approach jointly optimizes the TTD and PS values under realistic time-delay constraints. We determine the minimum number of TTD devices required to achieve a target array gain using our proposed approach. Then, we extend the framework to multi-user wideband systems and formulate a hybrid array optimization problem aiming to maximize the minimum data rate across users. This problem is decomposed into two sub-problems: fair subarray allocation, solved via continuous-domain relaxation, and subarray gain maximization, addressed via a phase-domain transformation.
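To see why beam squint arises, note that a phase shifter applies a frequency-flat phase matched to the center frequency fc, while a true-time delay applies a phase proportional to frequency. The sketch below compares array gain toward the steered angle under both approaches; the carrier, array size, and bandwidth are assumptions for illustration, not the dissertation's settings.

```python
import numpy as np

c, fc, N, theta0 = 3e8, 150e9, 64, np.deg2rad(30)   # sub-THz carrier, 64 antennas
d = c / (2 * fc)                                     # half-wavelength spacing at fc
n = np.arange(N)
tau = n * d * np.sin(theta0) / c                     # per-antenna propagation delay

for f in np.linspace(135e9, 165e9, 7):               # 30 GHz of bandwidth
    a = np.exp(1j * 2 * np.pi * f * tau)             # steering vector at frequency f
    g_ps  = np.abs(np.exp(-1j * 2 * np.pi * fc * tau) @ a) / N  # phase shifters only
    g_ttd = np.abs(np.exp(-1j * 2 * np.pi * f  * tau) @ a) / N  # true-time delays
    print(f"f = {f/1e9:6.1f} GHz: PS gain {g_ps:.3f}, TTD gain {g_ttd:.3f}")
```

TTD gain stays at 1 across the band, while the PS gain decays as the frequency moves away from fc, which is the array-gain loss the proposed TTD+PS framework counteracts.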

The second part focuses on preserving privacy in FL over wireless networks. First, we design a differentially-private FL algorithm that applies time-varying noise variance perturbation. Taking advantage of existing wireless channel noise, we jointly design the differential privacy (DP) noise variances and the users' transmit powers to resolve the tradeoff between privacy and learning utility. Next, we tackle two critical challenges within FL networks: (i) privacy risks arising from model updates and (ii) reduced learning utility due to quantization heterogeneity. Prior work typically addresses only one of these challenges, because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. We improve the learning utility of privacy-preserving FL by allowing clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that ensures a DP guarantee and minimal quantization distortion. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Lastly, inspired by the information-theoretic rate-distortion framework, a privacy-distortion tradeoff problem is formulated to minimize privacy loss under a given maximum allowable quantization distortion. The optimal solution to this problem is identified, revealing that the privacy loss decreases as the maximum allowable quantization distortion increases, and vice versa.
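For background, a hedged sketch of the standard Gaussian mechanism that underlies differentially-private FL updates: clip each client's update to bound its sensitivity, then add Gaussian noise calibrated to (epsilon, delta). The time-varying variance schedule proposed in the dissertation is not reproduced here; sigma below is the classic fixed calibration.

```python
import numpy as np

def dp_sanitize(update, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    update = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound L2 sensitivity
    # Classic Gaussian-mechanism calibration: sigma >= sqrt(2 ln(1.25/delta)) * C / eps
    sigma = clip_norm * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return update + rng.normal(0.0, sigma, size=update.shape)
```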

This research advances hybrid array optimization for wideband sub-THz massive MIMO and introduces novel algorithms for privacy-preserving quantized FL with diverse precision. These contributions enable high-throughput wideband MIMO communication systems and privacy-preserving AI-native designs, aligning with the performance and privacy protection demands of NextG networks.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path. It offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement: the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effects on the optical signal and improves OSNR by distributing the Raman gain more uniformly along the transmission span.
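To illustrate why forward pumping raises the signal power near the start of the span, the sketch below integrates the first-order signal evolution under an undepleted co-propagating pump, which is strongest at z = 0 and so delivers most of its Raman gain early. All coefficients are illustrative assumptions, not the experiment's values.

```python
import numpy as np

alpha_s, alpha_p = 0.046, 0.058      # fiber loss, 1/km (~0.2 and 0.25 dB/km)
g_R, Pp0, Ps0 = 0.4, 0.5, 1e-3       # Raman gain eff. 1/(W km), pump W, signal W
L, dz = 80.0, 0.1                    # 80 km span, 0.1 km step

z = np.arange(0, L, dz)
Pp = Pp0 * np.exp(-alpha_p * z)      # undepleted forward pump decays along z
Ps = np.empty_like(z); Ps[0] = Ps0
for i in range(1, len(z)):           # Euler step of dPs/dz = (g_R*Pp - alpha_s)*Ps
    Ps[i] = Ps[i-1] + dz * (g_R * Pp[i-1] - alpha_s) * Ps[i-1]
print(f"peak signal power {Ps.max()*1e3:.2f} mW at z = {z[Ps.argmax()]:.1f} km")
```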

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with a relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand the trade-offs among system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication. The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

As machine learning (ML), artificial intelligence (AI), and deep learning continue to advance, their applications become more diverse; one such application is synthetic aperture radar (SAR) automatic target recognition (ATR). These SAR ATR networks use forms of deep learning such as convolutional neural networks (CNN) to classify targets in SAR imagery. An emerging research area of SAR is dual function radar communication (DFRC), which performs both radar and communications functions using a single co-designed modulation. The utilization of DFRC emissions for SAR imaging impacts image quality, thereby influencing SAR ATR network training. Here, using the Civilian Vehicle Data Dome dataset from the AFRL, SAR ATR networks are trained and evaluated with simulated data generated using Gaussian Minimum Shift Keying (GMSK) and Linear Frequency Modulation (LFM) waveforms. The networks are used to compare how the target classification accuracy of the ATR network differs between DFRC (i.e., GMSK) and baseline (i.e., LFM) emissions. Furthermore, as is common in pulse-agile transmission structures, an effect known as 'range sidelobe modulation' is examined, along with its impact on SAR ATR. Finally, it is shown that a SAR ATR network can be trained for GMSK emissions using existing LFM datasets via two types of data augmentation.
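For readers new to SAR ATR, a minimal PyTorch sketch of the kind of CNN classifier such networks use appears below. The architecture, the 10 target classes, and the 64x64 single-channel image chips are assumptions for illustration; the thesis uses the Civilian Vehicle Data Dome dataset and its own network configurations.

```python
import torch
import torch.nn as nn

class SarAtrCnn(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                      # x: (batch, 1, 64, 64) SAR chips
        return self.classifier(self.features(x).flatten(1))

model = SarAtrCnn()
logits = model(torch.randn(4, 1, 64, 64))      # 4 dummy image chips
print(logits.shape)                            # torch.Size([4, 10])
```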


Past Defense Notices


Charles Mohr

Multi-Objective Optimization of FM Noise Waveforms via Generalized Frequency Template Error Metrics

When & Where:


129 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Christopher Allen
James Stiles


Abstract

FM noise waveforms have been experimentally demonstrated to achieve high time-bandwidth products and low autocorrelation sidelobes while achieving acceptable spectral containment in physical implementation. Still, it may be necessary to further reduce sidelobe levels for detection or to improve spectral containment in the face of growing spectral use. The Frequency Template Error (FTE) and Logarithmic Frequency Template Error (Log-FTE) metrics were conceived as means to achieve FM noise waveforms with good spectral containment and good autocorrelation sidelobes. In practice, FTE-based waveform optimizations have been found to produce better autocorrelation responses at the expense of spectral containment, while Log-FTE optimizations achieve excellent spectral containment and interference rejection at the expense of autocorrelation sidelobe levels. In this work, the FTE and Log-FTE metrics are considered as subsets of a broader class of frequency-domain metrics collectively termed the Generalized Frequency Template Error (GFTE). In doing so, many different P-norm-based variations of the FTE and Log-FTE cost functions are extensively examined and applied via gradient descent methods to optimize polyphase-coded FM (PCFM) waveforms. The performance of the different P-norm variations of the FTE and Log-FTE cost functions is compared against each other and relative to a previous FM noise waveform design approach called Pseudo-Random Optimized FM (PRO-FM). They are evaluated in terms of their autocorrelation sidelobes, spectral containment, and ability to realize spectral notches within the 3 dB bandwidth for the purpose of interference rejection. These comparisons are performed both in simulation and experimentally in loopback, where it was found that a P-norm value of 2 tends to provide the best optimization performance for both the FTE and Log-FTE optimizations, except in the case of Log-FTE optimization of a notched spectral template, where a P-norm value of 3 provides the best results. In general, the FTE and Log-FTE cost functions, as subsets of the GFTE, provide diverse means to optimize physically robust FM noise waveforms while emphasizing different performance criteria in terms of autocorrelation sidelobes, spectral containment, and interference rejection.
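A hedged numpy sketch of the template-error idea follows: score a waveform by the P-norm of the deviation between its normalized power spectrum and a desired template, either on a linear scale (FTE-like) or a logarithmic one (Log-FTE-like). The exact normalizations and weightings in the dissertation may differ.

```python
import numpy as np

def template_error(s, template, p=2, log_scale=False):
    """P-norm deviation of the waveform's power spectrum from a template."""
    psd = np.abs(np.fft.fft(s))**2
    psd /= psd.max()                             # normalize to unit peak
    if log_scale:                                # Log-FTE-style comparison in dB
        err = 10*np.log10(psd + 1e-12) - 10*np.log10(template + 1e-12)
    else:                                        # FTE-style linear comparison
        err = psd - template
    return np.linalg.norm(err, ord=p)
```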


Rui Cao

How Good Are Probabilistic Approximations for Rule Induction from Data with Missing Attribute Values

When & Where:


246 Nichols Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Guanghui Wang
Cuncong Zhong


Abstract

In data mining, decision rules induced from known examples are used to classify unseen cases. There are various rule induction algorithms, such as LEM1 (Learning from Examples Module version 1), LEM2 (Learning from Examples Module version 2), and MLEM2 (Modified Learning from Examples Module version 2). In the real world, many data sets are imperfect and may be incomplete. The idea of the probabilistic approximation has been used for many years in variable precision rough set models and similar approaches to uncertainty. The objective of this project is to test whether proper probabilistic approximations are better than concept lower and upper approximations. In this project, experiments were conducted on six incomplete data sets with lost values. We implemented the local probabilistic version of the MLEM2 algorithm to induce certain and possible rules from incomplete data sets. A program called Rule Checker was also developed to classify unseen cases with the induced rules and measure the classification error rate. Hold-out validation was carried out, and the error rate was used as the criterion for comparison.
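For readers unfamiliar with probabilistic approximations, the minimal sketch below shows the core idea over equivalence blocks: a block B joins the approximation of a concept X when P(X | B) >= alpha. Setting alpha = 1 recovers the lower approximation, and any alpha just above 0 recovers the upper approximation. (The LEM2/MLEM2 rule induction itself is not shown.)

```python
def probabilistic_approximation(blocks, concept, alpha):
    """Union of blocks B with |B & X| / |B| >= alpha."""
    concept = set(concept)
    approx = set()
    for block in blocks:
        block = set(block)
        if len(block & concept) / len(block) >= alpha:
            approx |= block
    return approx

blocks = [{1, 2}, {3, 4, 5}, {6}]
concept = {1, 2, 3}
print(probabilistic_approximation(blocks, concept, 1.0))    # lower: {1, 2}
print(probabilistic_approximation(blocks, concept, 0.001))  # upper: {1, 2, 3, 4, 5}
```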


Lokesh Kaki

An Automatic Image Stitching Software with Customizable Parameters and a Graphical User Interface

When & Where:


2001 B Eaton Hall

Committee Members:

Richard Wang, Chair
Esam El-Araby
Jerzy Grzymala-Busse


Abstract

Image stitching is one of the most widely used computer vision algorithms, with a broad range of applications such as image stabilization, high-resolution photomosaics, object insertion, 3D image reconstruction, and satellite imaging. Extracting image features from each input image, determining the image matches, and then estimating the homography for each matched image is the core procedure in most feature-based image stitching techniques. In recent years, several state-of-the-art techniques like scale-invariant feature transform (SIFT), random sample consensus (RANSAC), and direct linear transformation (DLT) have been proposed for feature detection, extraction, matching, and homography estimation. However, using these algorithms with fixed parameters does not usually work well for creating seamless, natural-looking panoramas. The set of parameter values that works best for specific images may not work equally well for another set of images taken by a different camera or under varied conditions. Hence, parameter tuning is as important as choosing the right set of algorithms for the efficient performance of any image stitching pipeline.

In this project, a graphical user interface is designed and programmed to tune a total of 32 parameters, including basic ones such as straightening, cropping, setting the maximum output image size, and setting the focal length. It also contains several advanced parameters like the number of RANSAC iterations, the RANSAC inlier threshold, the extrema threshold, the Gaussian window size, etc. The image stitching algorithm used in this project comprises SIFT, DLT, RANSAC, warping, straightening, bundle adjustment, and blending techniques. Once the given images are stitched together, the output image can be further analyzed inside the user interface by clicking on any particular point. The interface then returns the corresponding input image that contributed to the selected point, along with the GPS coordinates, altitude, and camera focal length given by its metadata. The developed software has been successfully tested on various diverse datasets, and the customized parameters with corresponding results, as well as timer logs, are tabulated in this report. The software is built for both Windows and Linux operating systems as part of this project.
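A condensed sketch of the detect-match-estimate core described above, using OpenCV's SIFT, brute-force matching with Lowe's ratio test, and RANSAC homography estimation, is shown below. The input filenames are hypothetical, and the full project adds warping, straightening, bundle adjustment, blending, and the 32 tunable parameters.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input pair
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
        if m.distance < 0.75 * n.distance]            # Lowe's ratio test

src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC inlier threshold
```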



Mohammad Isyroqi Fathan

Comparative Study on Polyp Localization and Classification on Colonoscopy Video

When & Where:


250 Nichols Hall

Committee Members:

Guanghui Wang, Chair
Bo Luo
James Miller


Abstract

Colorectal cancer is one of the most common types of cancer, with a high mortality rate. It typically develops from small clumps of benign cells called polyps. An adenomatous polyp has a higher chance of developing into cancer than a hyperplastic polyp. Colonoscopy is the preferred procedure for colorectal cancer screening and for minimizing risk by performing a biopsy on found polyps. Thus, a good polyp detection model can assist physicians and increase the effectiveness of colonoscopy. Several models using handcrafted features and deep learning approaches have been proposed for the polyp detection task.

In this study, we compare the performance of previous state-of-the-art general object detection models for polyp detection and classification (into adenomatous and hyperplastic classes). Specifically, we compare FasterRCNN, SSD, YOLOv3, RefineDet, RetinaNet, and FasterRCNN with a DetNet backbone. This comparative study serves as an initial analysis of the effectiveness of these models and helps us choose a base model to improve further for polyp detection.


Lei Wang

I Know What You Type on Your Phone: Keystroke Inference on Android Device Using Deep Learning

When & Where:


246 Nichols Hall

Committee Members:

Bo Luo, Chair
Fengjun Li
Guanghui Wang


Abstract

Given a list of smartphone sensor readings, such as from the accelerometer, gyroscope, and light sensor, is there enough information present to predict a user's input without access to either the raw text or a keyboard log? The increasing usage of smartphones as personal devices to access sensitive information on the go has put user privacy at risk. As technology advances rapidly, smartphones are now equipped with multiple sensors that measure user motion, temperature, and brightness to provide constant feedback to applications, for example to deliver accurate and current weather forecasts, GPS information, and so on. In the Android ecosystem, sensor readings can be accessed without user permissions, and this makes Android devices vulnerable to various side-channel attacks.

In this thesis, we first create a native Android app to collect approximately 20,700 keypresses from 30 volunteers. The text used for the data collection is carefully selected based on the bigram analysis we run on over 1.3 million tweets. We then present two approaches (single key press and bigram) for feature extraction; these features are constructed from accelerometer, gyroscope, and light sensor readings. A deep neural network with four hidden layers, trained with categorical cross-entropy, is proposed as the baseline for this work and achieves an accuracy of 47%. A multi-view model is then proposed; multiple views are extracted, and the performance of each combination of views is compared for analysis.
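A sketch of a four-hidden-layer dense baseline of the kind described, trained with categorical cross-entropy, is shown below in Keras. The input width (sensor-feature vector) and layer sizes are assumptions; the thesis's feature construction is richer.

```python
import tensorflow as tf

num_features, num_keys = 120, 26                 # assumed feature and key counts
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_features,)),
    tf.keras.layers.Dense(256, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(256, activation="relu"),   # hidden layer 2
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 3
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 4
    tf.keras.layers.Dense(num_keys, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```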


Wenchi Ma

Deep Neural Network based Object Detection and Regularization in Deep Learning

When & Where:


246 Nichols Hall

Committee Members:

Richard Wang, Chair
Arvin Agah
Bo Luo
Heechul Yun
Haiyang Chao

Abstract

Feature learning, scene understanding, and task generalization are consistent pursuits in deep learning-based computer vision. A number of object detectors with various network structures and algorithms have been proposed to learn more effective features, to extract more contextual and semantic information, and to achieve more robust and accurate performance on different datasets. Nevertheless, the problem is still not well addressed in practical applications. One major issue lies in inefficient feature learning and propagation in challenging situations such as small objects, occlusion, and illumination changes. Another big issue is poor generalization ability across datasets with different feature distributions.

The study aims to explore different learning frameworks and strategies to solve the above issues. (1) We propose a new model to make full use of different features, from fine details to semantic ones, for better detection of small and occluded objects. The proposed model places more emphasis on the effectiveness of semantic and contextual information from features produced in high-level layers. (2) To achieve more efficient learning, we propose near-orthogonality regularization, which takes neuron redundancy into consideration, to generate better deep learning models. (3) We are currently working on tightening object localization by integrating the localization score into non-maximum suppression (NMS) to achieve more accurate detection results, and on domain-adaptive learning that encourages learning models to generalize across domains.
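For context on item (2), a hedged PyTorch sketch of an orthogonality-style penalty follows: drive the Gram matrix of the row-normalized weights toward identity so neurons stay decorrelated. The dissertation's near-orthogonality regularizer may differ in its exact form; this is the common soft-orthogonality baseline.

```python
import torch

def soft_orthogonality_penalty(W: torch.Tensor) -> torch.Tensor:
    """||W_n W_n^T - I||_F^2 over unit-norm rows (one row per neuron)."""
    Wn = torch.nn.functional.normalize(W, dim=1)
    gram = Wn @ Wn.T
    eye = torch.eye(gram.shape[0], device=W.device)
    return ((gram - eye) ** 2).sum()

# Usage: total_loss = task_loss + lam * soft_orthogonality_penalty(layer.weight)
```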



Mahdi Jafarishiadeh

New Topology and Improved Control of Modular Multilevel Based Converters

When & Where:


2001 B Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
James Stiles
Xiaoli (Laura) Li

Abstract

Trends toward large-scale integration and high-power application of green energy resources necessitate efficient power converter topologies: multilevel converters. Multilevel inverters are effective solutions for high-power, medium-voltage DC-to-AC conversion due to their higher efficiency, provision of system redundancy, and generation of near-sinusoidal output voltage waveforms. Recently, the modular multilevel converter (MMC) has become increasingly attractive. To improve the harmonic profile of the output voltage, the number of output voltage levels must increase. However, this would require increasing the number of submodules (SMs) and power semiconductor devices with their associated gate driver and protection circuitry, making the overall multilevel converter complex and expensive. In particular, the need for a large number of bulky capacitors in the SMs of a conventional MMC is seen as a major obstacle. This work proposes an MMC-based multilevel converter that provides the same output voltage as a conventional MMC but with a reduced number of bulky capacitors, achieved by introducing an extra middle arm into the conventional MMC. Because the dynamic equations of the proposed converter are similar to those of the conventional MMC, several previously developed voltage-balancing control methods for conventional MMCs are applicable to the proposed MMC with minimal effort. Comparative loss analysis of the conventional MMC and the proposed multilevel converter under different power factors and modulation indexes illustrates the lower switching loss of the proposed MMC. In addition, a new voltage balancing technique based on carrier-disposition pulse width modulation for the modular multilevel converter is proposed.

The second part of this work focuses on improved control of MMC-based high-power DC/DC converters. Medium-voltage DC (MVDC) and high-voltage DC (HVDC) grids have been the focus of numerous research studies in recent years due to their increasing applications in rapidly growing grid-connected renewable energy systems, such as wind and solar farms. MMC-based DC/DC converters are employed for collecting power from renewable energy sources. Among the various DC/DC converter topologies, the MMC-based DC/DC converter with a medium-frequency (MF) transformer is valuable due to its numerous advantages. Specifically, it offers a significant reduction in the size of the MMC arm capacitors, the ac-link transformer, and the arm inductors because the ac-link transformer operates at medium frequencies. As such, this work focuses on improving the control of isolated MMC-based DC/DC (IMMDC) converters. Single phase shift (SPS) control is a popular method for controlling power transfer in the IMMDC converter. This work proposes conjoined phase shift-amplitude ratio index (PSAR) control, which considers the amplitude ratio indexes of the MMC legs on the MF transformer's secondary side as additional control variables. Compared with SPS control, PSAR control not only provides a wider transmission power range and enhances the operational flexibility of the converter, but also reduces the current stress of the medium-frequency transformer and the power switches of the MMCs. An algorithm is developed for simple implementation of PSAR control at the least-current-stress operating point. Hardware-in-the-loop results confirm the theoretical outcomes of the proposed control method.


Luyao Shang

Memory Based Luby Transform Codes for Delay Sensitive Communication Systems

When & Where:


246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Taejoon Kim
David Petr
Tyrone Duncan

Abstract

As upcoming fifth-generation (5G) and future wireless networks are envisioned for areas such as augmented and virtual reality, industrial control, automated driving or flying, and robotics, the requirement of supporting ultra-reliable low-latency communications (URLLC) is more urgent than ever. From the channel coding perspective, URLLC requires codewords to be transmitted in finite block-lengths. In this regard, we propose novel encoding algorithms and analyze their performance behaviors for finite-length Luby transform (LT) codes.

Luby transform (LT) codes, the first practical realization and the fundamental core of fountain codes, play a key role in the fountain code family. Recently, researchers have shown that the performance of LT codes for finite block-lengths can be improved by adding memory into the encoder. However, that work utilizes only one memory, leaving whether and how to exploit more memory an open problem. To explore this unknown, this proposed research aims to 1) propose an encoding algorithm that utilizes one more memory and compare its performance with the existing work; 2) generalize the memory-based encoding method to arbitrary memory orders and mathematically analyze its performance; 3) find the optimal memory order in terms of bit error rate (BER), frame error rate (FER), and decoding convergence speed; and 4) apply the memory-based encoding algorithm to additive white Gaussian noise (AWGN) channels and analyze its performance.
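For readers unfamiliar with LT codes, a minimal sketch of plain (memoryless) LT encoding follows: sample a degree from a degree distribution, choose that many source symbols uniformly at random, and XOR them into one output symbol. The memory-based variants proposed here bias these choices using previously selected symbols; that mechanism is not reproduced below, and the degree distribution shown is a toy one.

```python
import random

def lt_encode_symbol(source, degree_dist, rng=random):
    """Generate one LT output symbol: XOR of d uniformly chosen source symbols."""
    d = rng.choices(range(1, len(degree_dist) + 1), weights=degree_dist)[0]
    neighbors = rng.sample(range(len(source)), d)   # distinct source indices
    out = 0
    for i in neighbors:
        out ^= source[i]
    return out, neighbors

source = [random.randrange(256) for _ in range(16)]  # 16 byte-sized source symbols
dist = [0.1, 0.5, 0.2, 0.1, 0.1]                     # toy degree distribution
symbol, nbrs = lt_encode_symbol(source, dist)
```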


Saleh Mohamed Eshtaiwi

A New Three Phase Photovoltaic Energy Harvesting System for Generation of Balanced Voltages in Presence of Partial Shading, Module Mismatch, and Unequal Maximum Power Points

When & Where:


2001 B Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Christopher Allen
Jerzy Grzymala-Busse
Rongqing Hui
Elaina Sutley

Abstract

The worldwide energy demand is growing quickly, with an anticipated rate of growth of 48% from 2012 to 2040. Consequently, investments in all forms of renewable energy generation systems have been growing rapidly. Increased use of clean renewable energy resources such as hydropower, wind, solar, geothermal, and biomass is expected to noticeably alleviate many present environmental concerns associated with fossil fuel-based energy generation. In recent years, wind and solar energies have gained the most attention among renewable resources. As a result, both have become the target of extensive research and development for dynamic performance optimization, cost reduction, and power reliability assurance.

The performance of photovoltaic (PV) systems is highly affected by environmental and ambient conditions such as irradiance fluctuations and temperature swings. Furthermore, the initial capital cost for establishing the PV infrastructure is very high. Therefore, it is essential that PV systems always harvest the maximum energy possible by operating at the most efficient operating point, i.e., the Maximum Power Point (MPP), to increase conversion efficiency and thus achieve the lowest cost of captured energy.

This dissertation is an effort to develop a new PV conversion system for large-scale grid-connected PV systems that enhances efficacy compared to conventional systems by balancing voltage mismatches between the PV modules. To this end, it analyzes the theoretical models of three selected DC/DC converters. This work first introduces a new adaptive maximum PV energy extraction technique for grid-tied PV systems. It then supplements the proposed technique with a global search approach to distinguish the absolute maximum power peak among multiple peaks under partially shaded PV module conditions. Next, it proposes an adaptive MPP tracking (MPPT) strategy based on the concept of model predictive control (MPC), in conjunction with a new current-sensor-less approach to reduce the number of required sensors in the system. Finally, this work proposes a power balancing technique for injecting balanced three-phase power into the grid using a Cascaded H-Bridge (CHB) converter topology, which brings together the entire system and results in the final proposed PV power system. The resulting PV system offers enhanced reliability by guaranteeing effective operation under unbalanced phase voltages caused by severe partial shading.
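For context, a sketch of the classic perturb-and-observe loop that adaptive MPPT schemes such as the proposed MPC-based tracker improve upon: perturb the operating voltage, observe the power change, and keep moving in whichever direction increases power. measure_pv() is a hypothetical sensor interface, not part of the dissertation.

```python
def perturb_and_observe(measure_pv, v_ref=30.0, step=0.5, iters=200):
    """Classic P&O MPPT baseline; measure_pv(v_ref) returns (voltage, power)."""
    v_prev, p_prev = measure_pv(v_ref)
    for _ in range(iters):
        v_ref += step
        v, p = measure_pv(v_ref)
        if p < p_prev:          # power dropped: reverse the perturbation
            step = -step
        v_prev, p_prev = v, p
    return v_ref                # settles near the maximum power point
```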

The developed grid-connected PV solar system is evaluated in simulation under realistic dynamic ambient conditions, including partial and full shading, and the obtained results confirm its effectiveness and merits compared to conventional systems.


Shruti Goel

DDoS Intrusion Detection using Machine Learning Techniques

When & Where:


250 Nichols Hall

Committee Members:

Alex Bardas, Chair
Fengjun Li
Bo Luo


Abstract

Organizations are becoming more exposed to security threats due to the shift toward cloud infrastructure and IoT devices. One growing category of cyber threats is Distributed Denial of Service (DDoS) attacks. DDoS attacks are hard to detect due to evolving attack patterns and increasing data volumes, so manually creating filter rules to distinguish between legitimate and malicious traffic is a complex task. This work explores a supervised machine learning approach for DDoS detection. The proposed model uses a step forward feature selection method to extract the 15 best network features and a random forest classifier to detect DDoS traffic. This solution can be used as an automatic detection algorithm in DDoS mitigation pipelines implemented in the most up-to-date DDoS security solutions.
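A hedged scikit-learn sketch of the described pipeline follows: sequential ("step forward") selection of 15 features feeding a random forest. The feature matrix X and labels y (benign vs. DDoS flows) are assumed to be preloaded, and the hyperparameters are illustrative.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline

selector_estimator = RandomForestClassifier(n_estimators=100, random_state=0)
pipeline = make_pipeline(
    # Greedy forward selection of the 15 most useful network features
    SequentialFeatureSelector(selector_estimator,
                              n_features_to_select=15, direction="forward"),
    # Final classifier trained on the selected features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
# pipeline.fit(X_train, y_train); pipeline.score(X_test, y_test)
```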