
All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check, and post the presentation announcement online.

UPCOMING DEFENSE NOTICES

Chinmay Ratnaparkhi - A comparison of data mining based on a single local probabilistic approximation and the MLEM2 algorithm
MS Project Defense (CS)

When & Where:

September 4, 2019 - 10:00 AM
2001 B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Fengjun Li
Bo Luo

Abstract

Observational data produced in scientific experimentation and in day-to-day life is a valuable source of information for research. It can be challenging to extract meaningful inferences from large amounts of data. Data mining offers many algorithms to draw useful inferences from large pools of information based on observable patterns.


In this project, I implemented a data mining algorithm that determines a single local probabilistic approximation and computes the corresponding ruleset, and compared it with two versions of the MLEM2 algorithm, which induce a certain rule set and a possible rule set, respectively. For experimentation, eight data sets with 35% missing values were used to induce the corresponding rulesets and classify unseen cases. Two different interpretations of missing values were used: lost values and "do not care" conditions. The k-fold cross-validation technique was employed with k = 10 to identify error rates in classification.

The goal of this project was to compare how accurately unseen cases are classified by the rulesets induced by each of the aforementioned algorithms. The error rate calculated from k-fold cross-validation was also used to observe how each interpretation of missing values affects the ruleset.
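The evaluation loop described above can be sketched as a generic k-fold harness (this is an illustrative sketch, not the project's implementation; the majority-class "model" below merely stands in for an MLEM2-induced ruleset):

```python
import random

def k_fold_error_rate(cases, labels, train_fn, classify_fn, k=10, seed=0):
    """Estimate the classification error rate with k-fold cross-validation."""
    idx = list(range(len(cases)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k disjoint test folds
    errors = 0
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        model = train_fn([cases[i] for i in train], [labels[i] for i in train])
        for i in fold:                              # classify unseen cases
            if classify_fn(model, cases[i]) != labels[i]:
                errors += 1
    return errors / len(cases)

# Toy stand-in for rule induction: always predict the majority class.
def train_majority(xs, ys):
    return max(set(ys), key=ys.count)

cases = list(range(20))
labels = [0] * 15 + [1] * 5
err = k_fold_error_rate(cases, labels, train_majority,
                        lambda model, x: model, k=10)
# err is the fraction of misclassified cases over all 10 folds
```

Swapping `train_fn`/`classify_fn` for a real rule inducer and rule-based classifier yields the error rates used for comparison.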


PAST DEFENSE NOTICES


Govind Vedala - Digital Compensation of Transmission Impairments in Multi-Subcarrier Fiber Optic Transmission Systems

When & Where:

August 22, 2019 - 2:00 PM
246 Nichols Hall

Committee Members:

Ron Hui, Chair
Christopher Allen
Erik Perrins
Alessandro Salandrino
Carey Johnson

Abstract

Time and again, the fiber optic medium has proved to be the best means of transporting global data traffic, which follows an exponential growth trajectory. The rapid development of high-bandwidth applications over the past decade, based on virtual reality, 5G, and big data, to name a few, has resulted in a sudden surge of research activity across the globe to maximize effective utilization of the available fiber bandwidth, which until then had supported low-speed services like voice and low-bandwidth data traffic. To this end, higher-order modulation formats together with multi-subcarrier, superchannel-based fiber optic transmission systems have proved to enhance spectral efficiency and achieve multi-terabit-per-second data rates. However, spectrally efficient systems are extremely sensitive to transmission impairments stemming from both the optical devices and the fiber itself. Therefore, such systems mandate robust digital signal processing (DSP) to compensate and/or mitigate the undesired artifacts, thereby extending the transmission reach. The central theme of this dissertation is to propose and validate a few efficient DSP techniques that compensate specific impairments, as delineated in the next three paragraphs.
For short-reach applications, we experimentally demonstrate a digital compensation technique that undoes semiconductor optical amplifier (SOA) and photodiode nonlinearity effects by digitally backpropagating the received signal through a virtual SOA with inverse gain characteristics, followed by an iterative algorithm to cancel the signal-signal beat interference arising from the photodiode. We characterize the phase dynamics of comb lines from a quantum-dot passively mode-locked laser using a novel multi-heterodyne coherent detection technique. In the context of a multi-subcarrier, Nyquist pulse shaped, superchannel transmission system with coherent detection, we demonstrate through measurements and numerical simulations an efficient phase noise compensation technique called "Digital Mixing" that operates using a shared pilot tone, exploiting the mutual phase coherence among the comb lines.
Finally, we propose and experimentally validate a practical pilot aided relative phase noise compensation technique for forward pumped distributed Raman amplified, digital subcarrier multiplexed coherent transmission systems.


Tong Xu - Real-time DSP-enabled digital subcarrier cross-connect (DSXC) for optical communication systems and networks

When & Where:

August 20, 2019 - 10:00 AM
246 Nichols Hall

Committee Members:

Ron Hui, Chair
Christopher Allen
Esam Eldin Aly
Erik Perrins
Jie Han

Abstract

Elastic optical networking (EON) is intended to offer flexible channel wavelength granularity to meet the requirement of high spectral efficiency (SE) in today's optical networks. However, optical cross-connects (OXCs) and switches based on optical wavelength division multiplexing (WDM) are not flexible enough, due to the coarse bandwidth granularity imposed by optical filtering. Thus, an OXC may not meet the requirements of the many applications that need finer bandwidth granularities than that of an entire wavelength channel.

In order to achieve highly flexible and sufficiently fine bandwidth granularities, an electrical digital subcarrier cross-connect (DSXC) can be utilized in EON. As presented in this thesis, my research focuses on the investigation and implementation of a real-time digital signal processing (DSP) enabled DSXC that can dynamically assign both bandwidth and power to each individual sub-wavelength channel, known as a subcarrier. This DSXC is based on digital subcarrier multiplexing (DSCM), a frequency division multiplexing (FDM) technique that multiplexes a large number of digitally created subcarriers on each optical wavelength. Compared with an OXC based on optical WDM, a DSXC based on DSCM offers much finer bandwidth granularity and greater flexibility for dynamic bandwidth allocation.

Based on a field-programmable gate array (FPGA) hardware platform, we have designed and implemented a real-time DSP enabled DSXC that uses Nyquist FDM as the multiplexing scheme. For the first time, we demonstrated resampling filters for channel selection and frequency translation, which enabled real-time DSXC. This circuit-based DSXC supports flexible and fine data-rate subcarrier channel granularities, offering a low-latency data plane, transparency to modulation formats, and the capability of compensating transmission impairments in the digital domain. The experimentally demonstrated 8×8 DSXC makes use of a Virtex-7 FPGA platform, which supports any-to-any switching of eight subcarrier channels with mixed modulation formats and data rates. Digital resampling filters, which enable frequency selection and translation of multiple subcarrier channels, have much lower DSP complexity and reduced FPGA resource requirements (DSP slices used in the FPGA) in comparison to the traditional technique based on I/Q mixing and filtering.

We have also investigated the feasibility of using the distributed arithmetic (DA) architecture for real-time DSXC to completely eliminate the need for DSP slices in the FPGA implementation. For the first time, we experimentally demonstrated real-time frequency translation and channel selection based on the DA architecture on the same FPGA platform. Compared with resampling filters that rely on multipliers, the DA-based approach eliminates the need for DSP slices in the FPGA implementation and significantly reduces the hardware cost. In addition, by requiring only a few clock cycles, a DA-based resampling filter is significantly faster than a conventional FIR filter, whose overall latency is proportional to the filter order. The DA-based DSXC is therefore able to achieve not only improved spectral efficiency, programmability of multiple orthogonal subcarrier channels, and low hardware resource requirements, but also much reduced cross-connection latency when implemented on a real-time DSP hardware platform. This reduced cross-connect switching latency can be critically important for time-sensitive applications such as 5G mobile fronthaul, cloud radio access network (C-RAN), cloud-based robot control, tele-surgery, and network gaming.
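As a rough illustration of the channel selection and frequency translation performed in such a cross-connect, the conventional mix-filter-decimate approach (the I/Q-mixing baseline that the resampling filters improve upon) can be sketched in floating point. All frequencies and filter parameters below are assumed for the demo, not taken from the thesis:

```python
import numpy as np

def extract_subcarrier(signal, fs, f_center, decim, num_taps=101):
    """Select one subcarrier: frequency-shift it to baseband,
    low-pass filter, then decimate to the subcarrier rate."""
    n = np.arange(len(signal))
    # mix down: translate the subcarrier at f_center to DC
    shifted = signal * np.exp(-2j * np.pi * f_center * n / fs)
    # windowed-sinc low-pass with cutoff at the post-decimation Nyquist rate
    fc = 0.5 / decim
    t = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * fc * t) * np.hamming(num_taps)
    h /= h.sum()                                  # unity DC gain
    filtered = np.convolve(shifted, h, mode="same")
    return filtered[::decim]                      # decimate

# demo: two subcarriers at 10 kHz and 25 kHz on a 100 kHz sample rate
fs = 100e3
n = np.arange(4000)
x = np.cos(2 * np.pi * 10e3 * n / fs) + np.cos(2 * np.pi * 25e3 * n / fs)
sub = extract_subcarrier(x, fs, f_center=10e3, decim=4)
# sub is now (approximately) the 10 kHz subcarrier at baseband,
# with a complex DC value near 0.5 and the 25 kHz neighbor rejected
```

A DA-based or polyphase resampling filter folds the mixing and filtering into one structure; the signal-flow result is the same.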


Levi Goodman - Dual Mode W-Band Radar for Range Finding, Static Clutter Suppression & Moving Target Detection

When & Where:

August 19, 2019 - 10:00 AM
250 Nichols Hall

Committee Members:

Christopher Allen, Chair
Shannon Blunt
James Stiles

Abstract

Many radar applications today require accurate, real-time, unambiguous measurement of target range and radial velocity.  Obstacles that frequently prevent target detection are the presence of noise and the overwhelming backscatter from other objects, referred to as clutter.

In this thesis, a method of static clutter suppression is proposed to increase detectability of moving targets in high clutter environments.  An experimental dual-purpose, single-mode, monostatic FMCW radar, operating at 108 GHz, is used to map the range of stationary targets and determine range and velocity of moving targets.  By transmitting a triangular waveform, which consists of alternating upchirps and downchirps, the received echo signals can be separated into two complementary data sets, an upchirp data set and a downchirp data set.  In one data set, the return signals from moving targets are spectrally isolated (separated in frequency) from static clutter return signals.  The static clutter signals in that first data set are then used to suppress the static clutter in the second data set, greatly improving detectability of moving targets.  Once the moving target signals are recovered from each data set, they are then used to solve for target range and velocity simultaneously.

The moving target of interest for tests performed was a reusable paintball (reball).  Reball range and velocity were accurately measured at distances up to 5 meters and at speeds greater than 90 m/s (200 mph) with a deceleration of approximately 0.155 m/s/ms (meters per second per millisecond).  Static clutter suppression of up to 25 dB was achieved, while moving target signals only suffered a loss of about 3 dB.
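The simultaneous range-velocity solution from the triangular waveform can be illustrated with the standard FMCW beat-frequency relations: for a closing target, the upchirp beat is the range beat minus the Doppler shift and the downchirp beat is their sum. The sweep bandwidth and chirp duration below are placeholder values, since the notice does not give the radar's actual sweep parameters:

```python
# Joint range/velocity recovery from triangular-FMCW beat frequencies.
c = 3e8       # speed of light, m/s
fc = 108e9    # carrier frequency, Hz (as in the experiment)
B = 1e9       # assumed sweep bandwidth, Hz (placeholder)
T = 1e-3      # assumed chirp duration, s (placeholder)
S = B / T     # chirp slope, Hz/s

def range_velocity(f_beat_up, f_beat_down):
    """Solve for range and radial velocity from the two beat frequencies."""
    f_r = (f_beat_up + f_beat_down) / 2   # range-induced beat frequency
    f_d = (f_beat_down - f_beat_up) / 2   # Doppler shift
    R = c * f_r / (2 * S)                 # range from round-trip delay
    v = c * f_d / (2 * fc)                # closing speed from Doppler
    return R, v

# check: a target at 5 m closing at 90 m/s (the reball scenario)
f_r_true = 2 * S * 5 / c
f_d_true = 2 * fc * 90 / c
R, v = range_velocity(f_r_true - f_d_true, f_r_true + f_d_true)
```

Averaging the two beats cancels the Doppler term to give range, and differencing cancels the range term to give velocity, which is exactly why the triangular (up/down) sweep resolves the range-Doppler ambiguity of a single sawtooth chirp.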



Ruoting Zheng - Algorithms for Computing Maximal Consistent Blocks

When & Where:

August 16, 2019 - 2:00 PM
2001 B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo

Abstract

Rough set theory is a tool for dealing with uncertain and incomplete data. It has been successfully used in classification, machine learning, and automated knowledge acquisition. A maximal consistent block, defined using rough set theory, is used for rule acquisition.

The maximal consistent block technique is applied to acquire knowledge from incomplete data sets by analyzing the structure of a similarity class.

The main objective of this project is to implement and compare algorithms for computing maximal consistent blocks. The brute force, recursive, and hierarchical methods were originally designed for data sets in which missing attribute values are interpreted only as "do not care" conditions. In this project, we extend these algorithms so they can be applied to arbitrary interpretations of missing attribute values, and we introduce an approach for computing maximal consistent blocks on data sets with lost values. In addition, we found that the brute force and recursive methods have problems with data sets for which characteristic sets are not transitive, so the limitations of the algorithms and a simplified recursive method are provided in the project as well.
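For intuition, the brute force method can be sketched as follows: treat a block as consistent when all of its cases are pairwise similar, and keep only the maximal such sets. The toy similarity relation below uses "do not care" values that match anything; the project's actual definitions over characteristic sets are more involved:

```python
from itertools import combinations

def maximal_consistent_blocks(cases, similar):
    """Brute force: enumerate subsets from largest to smallest and keep
    each pairwise-similar set not contained in a block already found."""
    blocks = []
    for r in range(len(cases), 0, -1):
        for subset in combinations(cases, r):
            if all(similar(a, b) for a, b in combinations(subset, 2)):
                s = set(subset)
                if not any(s <= b for b in blocks):   # keep only maximal sets
                    blocks.append(s)
    return blocks

# toy incomplete data set: None stands for a "do not care" attribute value
data = {1: (0, None), 2: (0, 1), 3: (0, 0), 4: (1, 0)}

def similar(a, b):
    """Two cases are similar when every known attribute value agrees."""
    return all(x is None or y is None or x == y
               for x, y in zip(data[a], data[b]))

blocks = maximal_consistent_blocks(sorted(data), similar)
# blocks: {1, 2}, {1, 3}, and {4}
```

The exponential enumeration is exactly why the recursive and hierarchical methods matter in practice.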


Hao Xue - Trust and Credibility in Online Social Networks

When & Where:

July 12, 2019 - 9:00 AM
246 Nichols Hall

Committee Members:

Fengjun Li, Chair
Prasad Kulkarni
Bo Luo
Cuncong Zhong
Mei Liu

Abstract

Increasing portions of people's social and communicative activities now take place in the digital world. The growth and popularity of online social networks (OSNs) have tremendously facilitated online interaction and information exchange. Not only do ordinary users benefit from OSNs, as more people now rely on online information for news, opinions, and social networking, but so do companies and business owners, who utilize OSNs as platforms for gathering feedback and for marketing activities. As OSNs enable people to communicate more effectively, a large volume of user-generated content (UGC) is produced daily. However, the freedom and ease of publishing information online have made these systems no longer sources of reliable information. Not only does biased and misleading information exist, but financial incentives also drive individual and professional spammers to insert deceptive content and promote harmful information, which jeopardizes the ecosystems of OSNs.
In this dissertation, we present our work on measuring the credibility of information and detecting content polluters in OSNs. First, we assume that review spammers spend less effort in maintaining social connections, and we propose to utilize social relationships and rating deviations to assist in computing the trustworthiness of users. Compared to numeric ratings, textual content contains richer information about the actual opinion of a user toward a target. Thus, we propose a content-based trust propagation framework that extracts the opinions expressed in review content. In addition, we discover that the network surrounding a user can also provide valuable information about the user himself. Lastly, we study the problem of detecting social bots by utilizing the characteristics of the surrounding neighborhood networks.


Casey Sader - Taming WOLF: Building a More Functional and User-Friendly Framework

When & Where:

June 12, 2019 - 10:00 AM
2001 B Eaton Hall

Committee Members:

Michael Branicky , Chair
Bo Luo
Suzanne Shontz

Abstract

Machine learning is all about automation. Many tools have been created to help data scientists automate repeated tasks and train models. These tools require varying levels of user experience to be used effectively. The "machine learning WOrk fLow management Framework" (WOLF) aims to automate the machine learning pipeline. One of its key uses is to discover which machine learning model and hyper-parameters are the best configuration for a dataset. In this project, features were explored that could make WOLF behave as a full pipeline, helpful for novice and experienced data scientists alike. One feature that makes WOLF more accessible is a website version that can be accessed from anywhere and makes using WOLF much more intuitive. To keep WOLF aligned with the most recent trends and models, the ability to train a neural network using the TensorFlow framework and the Keras library was added. This project also introduced the ability to pickle and save trained models. A fundamental side effect of saving the models is the option of using them within the WOLF framework to make predictions on another collection of data. Understanding how a model makes predictions is a beneficial component of machine learning. This project aids that understanding by calculating and reporting the relative importance of the dataset features for a given model. Incorporating all these additions makes WOLF a more functional and user-friendly framework for machine learning tasks.
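The save-and-reuse workflow that pickling enables can be illustrated with Python's standard `pickle` module. The `TrainedModel` class here is a stand-in invented for the sketch; in WOLF, the pickled object would be whichever fitted model the pipeline selected:

```python
import os
import pickle
import tempfile

# Stand-in for a fitted model (in WOLF: the chosen trained estimator).
class TrainedModel:
    def __init__(self, weights):
        self.weights = weights

    def predict(self, x):
        """A toy linear prediction from the stored weights."""
        return sum(w * v for w, v in zip(self.weights, x))

model = TrainedModel([0.5, -1.0, 2.0])

# save the trained model to disk ...
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# ... and later restore it to predict on a new collection of data
with open(path, "rb") as f:
    restored = pickle.load(f)
pred = restored.predict([2, 1, 1])
```

One caveat worth noting: unpickling executes arbitrary code, so saved models should only be loaded from trusted sources.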



Charles Mohr - Multi-Objective Optimization of FM Noise Waveforms via Generalized Frequency Template Error Metrics

When & Where:

June 6, 2019 - 10:00 AM
129 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Christopher Allen
James Stiles

Abstract

FM noise waveforms have been experimentally demonstrated to achieve high time-bandwidth products and low autocorrelation sidelobes while achieving acceptable spectral containment in physical implementation. Still, it may be necessary to further reduce sidelobe levels for detection, or to improve spectral containment in the face of growing spectral use. The Frequency Template Error (FTE) and the Logarithmic Frequency Template Error (Log-FTE) metrics were conceived as means to achieve FM noise waveforms with good spectral containment and good autocorrelation sidelobes. In practice, FTE-based waveform optimizations have been found to produce better autocorrelation responses at the expense of spectral containment, while Log-FTE optimizations achieve excellent spectral containment and interference rejection at the expense of autocorrelation sidelobe levels. In this work, the FTE and Log-FTE metrics are treated as subsets of a broader class of frequency-domain metrics collectively termed the Generalized Frequency Template Error (GFTE). In doing so, many different P-norm based variations of the FTE and Log-FTE cost functions are extensively examined and applied via gradient descent methods to optimize polyphase-coded FM (PCFM) waveforms. The performance of the different P-norm variations of the FTE and Log-FTE cost functions is compared amongst themselves, against each other, and relative to a previous FM noise waveform design approach called Pseudo-Random Optimized FM (PRO-FM). They are evaluated in terms of their autocorrelation sidelobes, spectral containment, and their ability to realize spectral notches within the 3 dB bandwidth for the purpose of interference rejection.
These comparisons are performed both in simulation and experimentally in loopback, where it was found that a P-norm value of 2 tends to provide the best optimization performance for both the FTE and Log-FTE optimizations, except in the case of Log-FTE optimization of a notched spectral template, where a P-norm value of 3 provides the best results. In general, the FTE and Log-FTE cost functions, as subsets of the GFTE, provide diverse means to optimize physically robust FM noise waveforms while emphasizing different performance criteria in terms of autocorrelation sidelobes, spectral containment, and interference rejection.
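In generic form, a P-norm template error of this family measures the deviation of a waveform's power spectrum from a spectral template, either directly (FTE-style) or in the log domain (Log-FTE-style). The sketch below shows that generic form only; the exact normalization and weighting used in the dissertation's cost functions may differ:

```python
import numpy as np

def gfte(psd, template, p=2, log_domain=False):
    """P-norm deviation between a power spectrum and a spectral template.
    log_domain=False gives an FTE-style error; log_domain=True gives a
    Log-FTE-style error, which weights the low spectral-skirt region
    (and any notches) much more heavily."""
    s = np.asarray(psd, dtype=float)
    t = np.asarray(template, dtype=float)
    if log_domain:
        s, t = 10 * np.log10(s), 10 * np.log10(t)
    return np.sum(np.abs(s - t) ** p) ** (1.0 / p)

# toy 4-bin spectra: a template and a measured spectrum deviating in 2 bins
template = np.array([1.0, 1.0, 0.5, 0.1])
measured = template + np.array([0.0, 3.0, 4.0, 0.0])
err2 = gfte(measured, template, p=2)   # Euclidean deviation: sqrt(9+16)
```

Sweeping `p` (and toggling `log_domain`) is the knob the GFTE study turns: larger `p` penalizes the worst-case spectral deviation more, while the log domain emphasizes containment in the skirts.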


Rui Cao - How Good Are Probabilistic Approximations for Rule Induction from Data with Missing Attribute Values

When & Where:

May 31, 2019 - 2:00 PM
246 Nichols Hall

Committee Members:

Jerzy Grzymala-Busse , Chair
Guanghui Wang
Cuncong Zhong

Abstract

In data mining, decision rules induced from known examples are used to classify unseen cases. There are various rule induction algorithms, such as LEM1 (Learning from Examples Module, version 1), LEM2 (Learning from Examples Module, version 2), and MLEM2 (Modified Learning from Examples Module, version 2). In the real world, many data sets are imperfect and may be incomplete. The idea of the probabilistic approximation has been used for many years in variable precision rough set models and similar approaches to uncertainty. The objective of this project is to test whether proper probabilistic approximations are better than concept lower and upper approximations. In this project, experiments were conducted on six incomplete data sets with lost values. We implemented the local probabilistic version of the MLEM2 algorithm to induce certain and possible rules from incomplete data sets. A program called Rule Checker was also developed to classify unseen cases with the induced rules and measure the classification error rate. Hold-out validation was carried out, with the error rate used as the criterion for comparison.


Lokesh Kaki - An Automatic Image Stitching Software with Customizable Parameters and a Graphical User Interface

When & Where:

May 29, 2019 - 11:00 AM
2001 B Eaton Hall

Committee Members:

Richard Wang, Chair
Esam El-Araby
Jerzy Grzymala-Busse

Abstract

Image stitching is one of the most widely used computer vision algorithms, with a broad range of applications such as image stabilization, high-resolution photomosaics, object insertion, 3D image reconstruction, and satellite imaging. Most feature-based image stitching techniques follow the same procedure: extract image features from each input image, determine the image matches, and then estimate the homography for each matched image. In recent years, several state-of-the-art techniques like the scale-invariant feature transform (SIFT), random sample consensus (RANSAC), and the direct linear transformation (DLT) have been proposed for feature detection, extraction, matching, and homography estimation. However, using these algorithms with fixed parameters does not usually work well for creating seamless, natural-looking panoramas. The parameter values that work best for specific images may not work equally well for another set of images taken by a different camera or under varied conditions. Hence, parameter tuning is as important as choosing the right set of algorithms for the efficient performance of any image stitching pipeline.

In this project, a graphical user interface is designed and programmed to tune a total of 32 parameters, including basic ones such as straightening, cropping, setting the maximum output image size, and setting the focal length. It also exposes several advanced parameters, such as the number of RANSAC iterations, the RANSAC inlier threshold, the extrema threshold, and the Gaussian window size. The image stitching algorithm used in this project comprises SIFT, DLT, RANSAC, warping, straightening, bundle adjustment, and blending techniques. Once the given images are stitched together, the output image can be further analyzed inside the user interface by clicking on any particular point. The interface then returns the corresponding input image that contributed to the selected point, along with its GPS coordinates, altitude, and camera focal length from its metadata. The developed software has been successfully tested on various diverse datasets, and the customized parameters with corresponding results, as well as timer logs, are tabulated in this report. The software is built for both Windows and Linux operating systems as part of this project.
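The DLT homography estimation step at the core of such a pipeline can be sketched in a few lines of numpy. This is a noiseless sketch: a production implementation would normalize the point coordinates first and wrap the solver in RANSAC to reject outlier matches:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point
    correspondences via the direct linear transformation (DLT):
    stack two linear constraints per correspondence and take the
    null vector of the stacked matrix from the SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)        # right singular vector of smallest sigma
    return H / H[2, 2]              # fix the scale ambiguity

# sanity check with a known transform: scale by 2, translate by (1, 3)
H_true = np.array([[2.0, 0.0, 1.0],
                   [0.0, 2.0, 3.0],
                   [0.0, 0.0, 1.0]])
src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
dst = []
for x, y in src:
    p = H_true @ np.array([x, y, 1.0])
    dst.append((p[0] / p[2], p[1] / p[2]))
H = dlt_homography(src, dst)        # recovers H_true
```

Parameters such as the RANSAC iteration count and inlier threshold mentioned above control how many times, and how strictly, a solver like this is re-run on random minimal subsets of the matches.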



Mohammad Isyroqi Fathan - Comparative Study on Polyp Localization and Classification on Colonoscopy Video

When & Where:

May 28, 2019 - 3:30 PM
250 Nichols Hall

Committee Members:

Guanghui Wang, Chair
Bo Luo
James Miller

Abstract

Colorectal cancer is one of the most common types of cancer, with a high mortality rate. It typically develops from small clumps of benign cells called polyps. An adenomatous polyp has a higher chance of developing into cancer than a hyperplastic polyp. Colonoscopy is the preferred procedure for colorectal cancer screening and for minimizing risk by performing a biopsy on detected polyps. Thus, a good polyp detection model can assist physicians and increase the effectiveness of colonoscopy. Several models using handcrafted features and deep learning approaches have been proposed for the polyp detection task.

In this study, we compare the performance of previous state-of-the-art general object detection models for polyp detection and classification (into adenomatous and hyperplastic classes). Specifically, we compare Faster R-CNN, SSD, YOLOv3, RefineDet, RetinaNet, and Faster R-CNN with a DetNet backbone. This comparative study serves as an initial analysis of the effectiveness of these models and helps us choose a base model to improve further for polyp detection.


Lei Wang - I Know What You Type on Your Phone: Keystroke Inference on Android Device Using Deep Learning

When & Where:

May 28, 2019 - 2:00 PM
246 Nichols Hall

Committee Members:

Bo Luo, Chair
Fengjun Li
Guanghui Wang

Abstract

Given a list of smartphone sensor readings, such as those from the accelerometer, gyroscope, and light sensor, is there enough information present to predict a user's input without access to either the raw text or a keyboard log? The increasing use of smartphones as personal devices for accessing sensitive information on the go has put user privacy at risk. As technology advances rapidly, smartphones are now equipped with multiple sensors that measure user motion, temperature, and brightness to provide constant feedback to applications, in order to deliver accurate and current weather forecasts, GPS information, and so on. In the Android ecosystem, sensor readings can be accessed without user permissions, which makes Android devices vulnerable to various side-channel attacks.

In this thesis, we first create a native Android app to collect approximately 20,700 keypresses from 30 volunteers. The text used for data collection is carefully selected based on a bigram analysis of over 1.3 million tweets. We then present two approaches (single keypress and bigram) for feature extraction; these features are constructed from accelerometer, gyroscope, and light sensor readings. A deep neural network with four hidden layers is proposed as the baseline for this work, achieving an accuracy of 47% when trained with categorical cross-entropy. A multi-view model is then proposed, with multiple views extracted and the performance of each combination of views compared for analysis.
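A single-keypress feature vector of the kind described can be illustrated with simple window statistics per sensor axis. This is an assumed, illustrative feature set (five summary statistics per motion axis plus the mean light level), not necessarily the thesis's exact construction:

```python
import numpy as np

def keypress_features(accel_xyz, gyro_xyz, light):
    """Build one feature vector for a keypress window:
    mean, std, min, max, and peak-to-peak range for each of the
    3 accelerometer + 3 gyroscope axes, plus the mean light level."""
    feats = []
    for axis in list(accel_xyz) + list(gyro_xyz):   # 6 motion axes total
        a = np.asarray(axis, dtype=float)
        feats += [a.mean(), a.std(), a.min(), a.max(), np.ptp(a)]
    feats.append(float(np.mean(light)))             # ambient light context
    return np.array(feats)

# synthetic 50-sample window around one tap (3 axes per motion sensor)
rng = np.random.default_rng(0)
accel = rng.normal(size=(3, 50))
gyro = rng.normal(size=(3, 50))
light = rng.uniform(100, 200, size=50)
vec = keypress_features(accel, gyro, light)   # 6 axes * 5 stats + 1 = 31 features
```

Vectors like this, one per detected tap, would then be the inputs to the baseline deep neural network.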


Wenchi Ma - Deep Neural Network based Object Detection and Regularization in Deep Learning

When & Where:

May 24, 2019 - 1:30 PM
246 Nichols Hall

Committee Members:

Richard Wang, Chair
Arvin Agah
Bo Luo
Heechul Yun
Haiyang Chao

Abstract

The abilities of feature learning, scene understanding, and task generalization are consistent pursuits in deep learning-based computer vision. A number of object detectors with various network structures and algorithms have been proposed to learn more effective features, extract more contextual and semantic information, and achieve more robust and accurate performance on different datasets. Nevertheless, the problem is still not well addressed in practical applications. One major issue lies in inefficient feature learning and propagation in challenging situations such as small objects, occlusion, and poor illumination. Another big issue is poor generalization to datasets with different feature distributions.

The study aims to explore different learning frameworks and strategies to solve the above issues. (1) We propose a new model to make full use of features, from fine details to semantic ones, for better detection of small and occluded objects. The proposed model emphasizes the effectiveness of semantic and contextual information from features produced in high-level layers. (2) To achieve more efficient learning, we propose near-orthogonality regularization, which takes neuron redundancy into consideration, to generate better deep learning models. (3) We are currently working on tightening object localization by integrating the localization score into non-maximum suppression (NMS) to achieve more accurate detection results, and on domain-adaptive learning that encourages models to acquire a higher generalization ability for domain transfer.



Mahdi Jafarishiadeh - New Topology and Improved Control of Modular Multilevel Based Converters

When & Where:

May 20, 2019 - 1:00 PM
2001 B Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
James Stiles
Xiaoli (Laura) Li

Abstract

Trends toward large-scale integration and the high-power application of green energy resources necessitate efficient power converter topologies such as multilevel converters. Multilevel inverters are effective solutions for high-power, medium-voltage DC-to-AC conversion due to their higher efficiency, provision of system redundancy, and generation of near-sinusoidal output voltage waveforms. Recently, the modular multilevel converter (MMC) has become increasingly attractive. To improve the harmonic profile of the output voltage, the number of output voltage levels must be increased. However, this would require increasing the number of submodules (SMs), power semiconductor devices, and their associated gate driver and protection circuitry, making the overall multilevel converter complex and expensive. Specifically, the need for a large number of bulky capacitors in the SMs of a conventional MMC is seen as a major obstacle. This work proposes an MMC-based multilevel converter that provides the same output voltage as the conventional MMC but with a reduced number of bulky capacitors, achieved by introducing an extra middle arm into the conventional MMC. Because the dynamic equations of the proposed converter are similar to those of the conventional MMC, several previously developed voltage-balancing control methods for conventional MMCs are applicable to the proposed MMC with minimal effort. A comparative loss analysis of the conventional MMC and the proposed multilevel converter under different power factors and modulation indexes illustrates the lower switching loss of the proposed MMC. In addition, a new voltage balancing technique based on carrier-disposition pulse width modulation for the modular multilevel converter is proposed.

The second part of this work focuses on improved control of MMC-based high-power DC/DC converters. Medium-voltage DC (MVDC) and high-voltage DC (HVDC) grids have been the focus of numerous research studies in recent years due to their increasing applications in rapidly growing grid-connected renewable energy systems, such as wind and solar farms. MMC-based DC/DC converters are employed to collect power from renewable energy sources. Among the various DC/DC converter topologies developed, the MMC-based DC/DC converter with a medium-frequency (MF) transformer is a valuable topology due to its numerous advantages. Specifically, it offers a significant reduction in the size of the MMC arm capacitors, the ac-link transformer, and the arm inductors, because the ac-link transformer operates at medium frequencies. As such, this work focuses on improving the control of isolated MMC-based DC/DC (IMMDC) converters. Single phase shift (SPS) control is a popular method for controlling power transfer in the IMMDC converter. This work proposes conjoined phase shift-amplitude ratio index (PSAR) control, which treats the amplitude ratio indexes of the MMC legs on the MF transformer's secondary side as additional control variables. Compared with SPS control, PSAR control not only provides a wider transmission power range and enhances the converter's operational flexibility, but also reduces the current stress on the medium-frequency transformer and the power switches of the MMCs. An algorithm is developed for simple implementation of PSAR control at the least-current-stress operating point. Hardware-in-the-loop results confirm the theoretical outcomes of the proposed control method.


Luyao Shang - Memory Based Luby Transform Codes for Delay Sensitive Communication Systems

When & Where:

May 15, 2019 - 2:00 PM
246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Taejoon Kim
David Petr
Tyrone Duncan

Abstract

As the upcoming fifth-generation (5G) and future wireless networks are envisioned for areas such as augmented and virtual reality, industrial control, automated driving or flying, and robotics, the requirement of supporting ultra-reliable low-latency communications (URLLC) is more urgent than ever. From the channel coding perspective, URLLC requires codewords to be transmitted in finite block-lengths. In this regard, we propose novel encoding algorithms for finite-length Luby transform (LT) codes and analyze their performance behavior.

Luby transform (LT) codes, the first practical realization and the fundamental core of fountain codes, play a key role in the fountain code family. Recently, researchers have shown that the performance of LT codes at finite block lengths can be improved by adding memory to the encoder. However, that work utilizes only one memory, leaving open whether and how more memory can be exploited. To explore this unknown, the proposed research aims to 1) propose an encoding algorithm that utilizes one additional memory and compare its performance with the existing work; 2) generalize the memory-based encoding method to arbitrary memory orders and mathematically analyze its performance; 3) determine the optimal memory order in terms of bit error rate (BER), frame error rate (FER), and decoding convergence speed; and 4) apply the memory-based encoding algorithm to additive white Gaussian noise (AWGN) channels and analyze its performance.
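For readers unfamiliar with the baseline being extended, memoryless LT encoding with the ideal soliton degree distribution can be sketched as follows; the memory-based variants studied here modify how the degree and neighbors are drawn:

```python
import random

def ideal_soliton(k):
    # Ideal soliton distribution: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(source, rng):
    # Draw a degree d, pick d distinct source symbols, XOR them together
    k = len(source)
    d = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    neighbors = rng.sample(range(k), d)
    value = 0
    for i in neighbors:
        value ^= source[i]
    return neighbors, value
```

Each call produces one rateless encoding symbol; the decoder recovers the source once enough symbols have arrived, which is exactly the regime where finite-block-length behavior matters.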


Saleh Mohamed Eshtaiwi - A New Three Phase Photovoltaic Energy Harvesting System for Generation of Balanced Voltages in Presence of Partial Shading, Module Mismatch, and Unequal Maximum Power Points

When & Where:

May 14, 2019 - 10:00 AM
2001 B Eaton Hall

Committee Members:

Reza Ahmadi , Chair
Christopher Allen
Jerzy Grzymala-Busse
Rongqing Hui
Elaina Sutley

Abstract

The worldwide energy demand is growing quickly, with an anticipated rate of growth of 48% from 2012 to 2040. Consequently, investments in all forms of renewable energy generation systems have been growing rapidly. Increased use of clean renewable energy resources such as hydropower, wind, solar, geothermal, and biomass is expected to noticeably alleviate many present environmental concerns associated with fossil fuel-based energy generation. In recent years, wind and solar energy have gained the most attention among all renewable resources. As a result, both have become the target of extensive research and development for dynamic performance optimization, cost reduction, and power reliability assurance.

The performance of photovoltaic (PV) systems is highly affected by environmental and ambient conditions such as irradiance fluctuations and temperature swings. Furthermore, the initial capital cost for establishing the PV infrastructure is very high. Therefore, it is essential that PV systems always harvest the maximum energy possible by operating at the most efficient operating point, i.e. the Maximum Power Point (MPP), to increase conversion efficiency and thus achieve the lowest cost of captured energy.

The dissertation is an effort to develop a new PV conversion system for large-scale grid-connected PV systems, which provides efficacy enhancements over conventional systems by balancing voltage mismatches between the PV modules. To this end, it analyzes the theoretical models of three selected DC/DC converters. This work first introduces a new adaptive maximum PV energy extraction technique for PV grid-tied systems. Then, it supplements the proposed technique with a global search approach to distinguish the absolute maximum power peak among multiple peaks under partially shaded PV module conditions. Next, it proposes an adaptive MPP tracking (MPPT) strategy based on the concept of model predictive control (MPC), in conjunction with a new current-sensor-less approach, to reduce the number of sensors required in the system. Finally, this work proposes a power balancing technique for injecting balanced three-phase power into the grid using a Cascaded H-Bridge (CHB) converter topology, which brings together the entire system and results in the final proposed PV power system. The resulting PV system offers enhanced reliability by guaranteeing effective system operation under unbalanced phase voltages caused by severe partial shading.

The developed grid-connected PV solar system is evaluated using simulations under realistic dynamic ambient conditions, including partial and full shading, and the obtained results confirm its effectiveness and merits compared to conventional systems.
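The adaptive MPPT developed here is based on model predictive control; as a point of reference, the conventional perturb-and-observe hill-climbing baseline can be sketched on a toy single-peak P-V curve. Both the curve and the constants below are hypothetical:

```python
def pv_power(v):
    # Toy single-peak P-V curve (hypothetical module, MPP near 29.7 V)
    i = max(0.0, 8.0 * (1.0 - (v / 40.0) ** 7))
    return v * i

def perturb_and_observe(v0, step=0.5, iters=200):
    # Hill climbing: keep stepping the operating voltage; reverse the
    # perturbation direction whenever the measured power drops
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction
        v, p = v_new, p_new
    return v, p
```

This baseline oscillates around a single peak and can lock onto a local maximum under partial shading, which is precisely the failure mode the global search approach in the dissertation addresses.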


Shruti Goel - DDoS Intrusion Detection using Machine Learning Techniques

When & Where:

May 13, 2019 - 3:00 PM
250 Nichols Hall

Committee Members:

Alex Bardas, Chair
Fengjun Li
Bo Luo

Abstract

Organizations are becoming more exposed to security threats due to the shift towards cloud infrastructure and IoT devices. One growing category of cyber threats is Distributed Denial of Service (DDoS) attacks. DDoS attacks are hard to detect due to evolving attack patterns and increasing data volume, so creating filter rules manually to distinguish between legitimate and malicious traffic is a complex task. The current work explores a supervised machine learning approach for DDoS detection. The proposed model uses a step-forward feature selection method to extract the 15 best network features and a random forest classifier to detect DDoS traffic. This solution can be used as an automatic detection algorithm in the DDoS mitigation pipelines implemented in the most up-to-date DDoS security solutions.
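The wrapper-style step-forward selection described above can be sketched as follows, with a nearest-centroid stand-in for the random forest so the example stays self-contained; the 15-feature setting and the network data of the report are not reproduced here:

```python
import statistics

def centroid_classify(X_tr, y_tr, X_te, feats):
    # Nearest-centroid classifier restricted to the feature subset `feats`
    # (a lightweight stand-in for the random forest in the report)
    cents = {}
    for label in set(y_tr):
        rows = [x for x, y in zip(X_tr, y_tr) if y == label]
        cents[label] = [statistics.mean(r[f] for r in rows) for f in feats]
    preds = []
    for x in X_te:
        best = min(cents, key=lambda c: sum((x[f] - m) ** 2
                                            for f, m in zip(feats, cents[c])))
        preds.append(best)
    return preds

def step_forward_select(X_tr, y_tr, X_va, y_va, n_feats):
    # Greedily add the feature that most improves validation accuracy
    selected, remaining = [], list(range(len(X_tr[0])))
    while remaining and len(selected) < n_feats:
        def score(f):
            preds = centroid_classify(X_tr, y_tr, X_va, selected + [f])
            return sum(p == y for p, y in zip(preds, y_va)) / len(y_va)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Each round re-evaluates every unused feature together with those already chosen, which is what distinguishes wrapper selection from ranking features independently.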


Hayder Almosa - Downlink Achievable Rate Analysis for FDD Massive MIMO Systems

When & Where:

May 13, 2019 - 1:00 PM
129 Nichols Hall

Committee Members:

Erik Perrins , Chair
Lingjia Liu
Shannon Blunt
Rongqing Hui
Hongyi Cai

Abstract

Multiple-Input Multiple-Output (MIMO) systems with large-scale transmit antenna arrays, often called massive MIMO, are a very promising direction for 5G due to their ability to increase capacity and enhance both spectrum and energy efficiency. To get the benefit of massive MIMO systems, accurate downlink channel state information at the transmitter (CSIT) is essential for downlink beamforming and resource allocation. Conventional approaches to obtain CSIT for FDD massive MIMO systems require downlink training and CSI feedback. However, such training will cause a large overhead for massive MIMO systems because of the large dimensionality of the channel matrix. In this dissertation, we improve the performance of FDD massive MIMO networks in terms of downlink training overhead reduction, by designing an efficient downlink beamforming method and developing a new algorithm to estimate the channel state information based on compressive sensing techniques. First, we design an efficient downlink beamforming method based on partial CSI. By exploiting the relationship between uplink directions of arrival (DoAs) and downlink directions of departure (DoDs), we derive an expression for the estimated downlink DoDs, which is then used for downlink beamforming. Second, by exploiting the sparsity structure of the downlink channel matrix, we develop an algorithm that selects the best features from the measurement matrix to obtain efficient CSIT acquisition, reducing the downlink training overhead compared with conventional LS/MMSE estimators. In both cases, we compare the performance of our proposed methods with traditional methods in terms of downlink achievable rate, and simulation results show that our proposed methods outperform the traditional beamforming methods.
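The greedy sparse-recovery idea behind compressive-sensing channel estimation can be sketched with (non-orthogonal) matching pursuit on a toy dictionary. This is a generic illustration of the principle, not the dissertation's algorithm:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(A, y, n_iter):
    # A: dictionary as a list of columns; y: measurement vector
    # Repeatedly pick the column most correlated with the residual and
    # peel off its contribution (simplified, non-orthogonal cousin of OMP)
    r = list(y)
    coeffs = {}
    for _ in range(n_iter):
        j = max(range(len(A)),
                key=lambda k: abs(dot(A[k], r)) / dot(A[k], A[k]) ** 0.5)
        c = dot(A[j], r) / dot(A[j], A[j])
        coeffs[j] = coeffs.get(j, 0.0) + c
        r = [ri - c * ai for ri, ai in zip(r, A[j])]
    return coeffs, r
```

Because the channel is sparse in the angular domain, only a few dictionary columns are needed, which is what lets training overhead shrink relative to dense LS/MMSE estimation.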


Naresh Kumar Sampath Kumar - Complexity of Rule Sets in Mining Incomplete Data Using Characteristic Sets and Generalized Maximal Consistent Blocks

When & Where:

May 13, 2019 - 10:00 AM
2001 B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Richard Wang

Abstract

The process of going through data to discover hidden connections and predict future trends has a long history. In this data-driven world, data mining is an important process for extracting knowledge or insights from data in various forms. It explores unknown, credible patterns that are significant in solving many problems. There are quite a few techniques in data mining, including classification, clustering, and prediction. We discuss classification using a technique called rule induction, with four different approaches.

We compare the complexity of rule sets induced using characteristic sets and generalized maximal consistent blocks. The complexity of a rule set is determined by the total number of rules induced for a given data set and the total number of conditions present in each rule. We used incomplete data sets, i.e., data sets with missing attribute values, to induce rules. Both methods were implemented and analyzed to check how they influence the complexity. Preliminary results suggest that the choice between characteristic sets and generalized maximal consistent blocks is inconsequential. However, the cardinality of the rule sets is always smaller for incomplete data sets with "do not care" conditions. Thus, the choice between interpretations of the missing attribute value is more important than the choice between characteristic sets and generalized maximal consistent blocks.
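The characteristic set of a case, the building block of the first approach, can be sketched directly from its standard definition, with '?' marking a lost value and '*' a "do not care" condition:

```python
def characteristic_set(table, x):
    # Characteristic set K(x) for row x of an incomplete decision table:
    # '?' marks a lost value, '*' a "do not care" condition
    universe = set(range(len(table)))
    K = set(universe)
    for a, v in enumerate(table[x]):
        if v in ('?', '*'):
            continue  # unknown values restrict nothing for this attribute
        K &= {y for y in universe if table[y][a] in (v, '*')}
    return K
```

K(x) collects the cases indistinguishable from x given the known attribute values; rule induction then approximates each concept by unions of these sets.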


Usman Sajid - ZiZoNet: A Zoom-In and Zoom-Out Mechanism for Crowd Counting in Static Images

When & Where:

May 13, 2019 - 1:30 AM
246 Nichols Hall

Committee Members:

Guanghui Wang, Chair
Bo Luo
Heechul Yun

Abstract

As people gather during different social, political, or musical events, automated crowd analysis can lead to effective and better management of such events, preventing unwanted incidents as well as avoiding political manipulation of crowd numbers. Crowd counting remains an integral part of crowd analysis and an active research area in the field of computer vision. Existing methods fail where crowd density is either too high or too low in an image, resulting in either overestimation or underestimation. These methods also confuse crowd-like cluttered background regions (e.g. tree leaves or small, continuous patterns) with actual crowd, resulting in further overestimation. In this work, we present ZiZoNet, a novel deep convolutional neural network (CNN) based framework for automated crowd counting in static images in very low to very high crowd density scenarios, to address the above issues. ZiZoNet consists of three modules: a Crowd Density Classifier (CDC), a Decision Module (DM), and a Count Regressor Module (CRM). The test image, divided into 224x224 patches, passes through the CDC, which assigns each patch a class label (no-crowd (NC), low-crowd (LC), medium-crowd (MC), or high-crowd (HC)). Based on the CDC information, and using either a heuristic Rule-set Engine (RSE) or a machine-learning-based Random Forest Decision Block (RFDB), the DM decides which mode (zoom-in, normal, or zoom-out) the image should use for crowd counting. The CRM then performs a patch-wise crowd estimate for the image as instructed by the DM. Extensive experiments on three diverse and challenging crowd counting benchmarks (UCF-QNRF, ShanghaiTech, AHU-Crowd) show that our method outperforms current state-of-the-art models under most of the evaluation criteria.
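The decision-module idea can be sketched as a rule over the histogram of per-patch CDC labels. The thresholds below are invented for illustration; the paper's actual RSE rules are not stated in this notice:

```python
def decide_mode(patch_labels):
    # Hypothetical rule-set engine: choose a counting mode from the
    # histogram of per-patch CDC labels (thresholds are illustrative)
    hc = patch_labels.count('HC') / len(patch_labels)
    nc = patch_labels.count('NC') / len(patch_labels)
    if hc > 0.5:
        return 'zoom-in'    # mostly dense patches: count at a finer scale
    if nc > 0.5:
        return 'zoom-out'   # mostly empty patches: widen the view
    return 'normal'
```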


Ernesto Alexander Ramos - Tunable Surface Plasmon Dynamics

When & Where:

May 10, 2019 - 3:00 PM
2001 B Eaton Hall

Committee Members:

Alessandro Salandrino, Chair
Christopher Allen
Rongqing Hui

Abstract

Due to their extreme spatial confinement, surface plasmon resonances show great potential in the design of future devices that would blur the boundaries between electronics and optics. Traditionally, plasmonic interactions are induced with geometries involving noble metals and dielectrics. However, accessing these plasmonic modes requires delicate selection of material parameters, with little margin for error, controllability, or room for signal bandwidth. To rectify this, two novel plasmonic mechanisms with a high degree of control are explored. For the near-infrared region, transparent conductive oxides (TCOs) exhibit tunability not only in "static" plasmon generation (through material doping) but could also allow modulation of a plasmon carrier through external-bias-induced switching. These effects rely on the electron accumulation layer that is created at the interface between an insulator and a doped oxide. Here a rigorous study of the electromagnetic characteristics of these electron accumulation layers is presented, and it is shown that these systems display unique properties as a consequence of their spatially graded permittivity profiles. The concept of Accumulation-layer Surface Plasmons (ASP) is introduced, and the conditions for the existence or suppression of surface-wave eigenmodes are analyzed. A second method could allow access to modes of arbitrarily high order. Sub-wavelength plasmonic nanoparticles can support an infinite discrete set of orthogonal localized surface plasmon modes; however, only the lowest-order resonances can be effectively excited by incident light alone. By allowing the background medium to vary in time, novel localized surface plasmon dynamics emerge. In particular, we show that these temporal permittivity variations lift the orthogonality of the localized surface plasmon modes and introduce coupling among different angular momentum states. Exploiting these dynamics, surface plasmon amplification of high-order resonances can be achieved under the action of a spatially uniform optical pump of appropriate frequency.


Nishil Parmar - A Comparison of Quality of Rules Induced using Single Local Probabilistic Approximations vs Concept Probabilistic Approximations

When & Where:

May 10, 2019 - 2:00 PM
1415A LEEP2

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo

Abstract

This project report presents results of experiments on rule induction from incomplete data using probabilistic approximations. Mining incomplete data using probabilistic approximations is a well-established technique. The main goal of this report is to present a comparison of two different approaches to mining incomplete data using probabilistic approximations: single local probabilistic approximations and concept probabilistic approximations. These approaches were implemented in the Python programming language, and experiments were carried out on incomplete data sets with two interpretations of missing attribute values: lost values and do not care conditions. Our main objective was to compare concept and single local approximations in terms of the error rate, computed using the double hold-out method for validation. For our experiments we used seven incomplete data sets with many missing attribute values. The best results were accomplished by concept probabilistic approximations for five data sets and by single local probabilistic approximations for the remaining two data sets.
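One plausible reading of the double hold-out validation used here: split the data into two halves, train on each half and test on the other, then pool the errors. A sketch under that assumption, with a trivial majority-class stand-in classifier:

```python
import random
from collections import Counter

def majority_fit(X, y):
    # Trivial stand-in classifier: always predict the majority training label
    m = Counter(y).most_common(1)[0][0]
    return lambda x: m

def double_hold_out_error(cases, labels, fit, seed=0):
    # Two complementary hold-outs: train on each half, test on the other,
    # and report the pooled error rate
    idx = list(range(len(cases)))
    random.Random(seed).shuffle(idx)
    half = len(idx) // 2
    a, b = idx[:half], idx[half:]
    errors = 0
    for train, test in ((a, b), (b, a)):
        model = fit([cases[i] for i in train], [labels[i] for i in train])
        errors += sum(model(cases[i]) != labels[i] for i in test)
    return errors / len(cases)
```

Every case is tested exactly once, so the pooled error rate is directly comparable across the rule induction approaches.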


Victor Berger da Silva - Probabilistic graphical techniques for automated ice-bottom tracking and comparison between state-of-the-art solutions

When & Where:

May 10, 2019 - 2:00 PM
317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
John Paden
Guanghui Wang

Abstract

Multichannel radar depth sounding systems are able to produce two-dimensional and three-dimensional imagery of the internal structure of polar ice sheets. One of the relevant features typically present in this imagery is the ice-bedrock interface, which is the boundary between the bottom of the ice-sheet and the bedrock underneath. Crucial information regarding the current state of the ice sheets, such as the thickness of the ice, can be derived if the location of the ice-bedrock interface is extracted from the imagery. Due to the large amount of data collected by the radar systems employed, we seek to automate the extraction of the ice-bedrock interface and allow for efficient manual corrections when errors occur in the automated method. We present improvements made to previously proposed solutions which pose feature extraction in polar radar imagery as an inference problem on a probabilistic graphical model. The improvements proposed here are in the form of novel image pre-processing steps and empirically-derived cost functions that allow for the integration of further domain-specific knowledge into the models employed. Along with an explanation of our modifications, we demonstrate the results obtained by our proposed models and algorithms, including significantly decreased mean error measurements such as a 47% reduction in average tracking error in the case of three-dimensional imagery. We also present the results obtained by several state-of-the-art ice-interface tracking solutions, and compare all automated results with manually-corrected ground-truth data. Furthermore, we perform a self-assessment of tracking results by analyzing the differences found between the automatically extracted ice-layers in cases where two separate radar measurements have been made at the same location.
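Posing layer extraction as inference on a chain-structured graphical model reduces, in its simplest form, to a Viterbi-style dynamic program: per-pixel costs plus a smoothness penalty between adjacent columns. A minimal sketch; the thesis's models and empirically-derived cost functions are far richer:

```python
def track_layer(cost, smooth=1.0):
    # cost[col][row]: per-pixel cost image (columns = along-track positions)
    # Returns the min-cost row per column, with a quadratic penalty on row
    # jumps between adjacent columns (Viterbi-style dynamic programming)
    n_rows = len(cost[0])
    prev = list(cost[0])
    back = []
    for col in cost[1:]:
        cur, bp = [], []
        for r in range(n_rows):
            q = min(range(n_rows), key=lambda q: prev[q] + smooth * (q - r) ** 2)
            cur.append(col[r] + prev[q] + smooth * (q - r) ** 2)
            bp.append(q)
        prev, back = cur, back + [bp]
    r = min(range(n_rows), key=lambda q: prev[q])
    path = [r]
    for bp in reversed(back):
        r = bp[r]
        path.append(r)
    path.reverse()
    return path
```

The smoothness weight encodes the domain knowledge that the ice-bedrock interface varies gradually; the pre-processing and cost-function improvements described above refine exactly these two ingredients.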


Dain Vermaak - Visualizing and Analyzing Student Progress on Learning Maps

When & Where:

May 10, 2019 - 11:00 AM
1 Eaton Hall, Dean's Conference Room

Committee Members:

James Miller, Chair
Man Kong
Suzanne Shontz
Guanghui Wang
Bruce Frey

Abstract

A learning map is an unweighted directed graph containing relationships between discrete skills and concepts, with edges defining the prerequisite hierarchy. Learning maps arose as a means of connecting student instruction directly to standards and curriculum, and they are designed to assist teachers in lesson planning and in evaluating student responses. As learning maps gain popularity, there is an increasing need for teachers to quickly evaluate which nodes have been mastered by their students. Psychometrics is a field focused on measuring student performance; it includes the development of processes used to link a student's responses to multiple-choice questions directly to their understanding of concepts. This dissertation focuses on developing modeling and visualization capabilities to enable efficient analysis of data pertaining to student understanding generated by psychometric techniques.

Such analysis naturally includes that done by classroom teachers. Visual solutions to this problem clearly indicate the current understanding of a student or classroom in such a way as to make suggestions that can guide future learning. In response to these requirements we present various experimental approaches which augment the original learning map design with targeted visual variables.

As well as looking forward, we also consider ways in which data visualization can be used to evaluate and improve existing teaching methods. We present several graphics based on modelling student progression as information flow. These methods rely on conservation of data to increase edge information, reducing the load carried by the nodes and encouraging path comparison.

In addition to visualization schemes and methods, we present contributions made to the field of Computer Science in the form of algorithms developed over the course of the research project in response to gaps in prior art. These include novel approaches to simulation of student response patterns, ranked layout of weighted directed graphs with variable edge widths, and enclosing certain groups of graph nodes in envelopes.

Finally, we present a design which combines the features of key experimental approaches into a single visualization tool capable of meeting both predictive and validation requirements, along with the methods used to measure the effectiveness and correctness of the final design.
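The prerequisite semantics of a learning map can be sketched as a small graph query: a skill is "ready to learn" when it is unmasked by mastery, i.e. unmastered itself but with every prerequisite mastered. The edge data below is hypothetical:

```python
from collections import defaultdict

def ready_nodes(edges, mastered):
    # edges: (prerequisite, skill) pairs of the learning map
    # A skill is 'ready to learn' when it is not yet mastered but every
    # one of its prerequisites is
    prereqs = defaultdict(set)
    nodes = set()
    for u, v in edges:
        prereqs[v].add(u)
        nodes.update((u, v))
    return {n for n in nodes
            if n not in mastered and prereqs[n] <= set(mastered)}
```

Queries like this one are what the targeted visual variables surface at a glance: which nodes a student or classroom could productively attempt next.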


Priyanka Saha - Complexity of Rule Sets Induced from Incomplete Data with Lost Values and Attribute-Concept Values

When & Where:

May 10, 2019 - 10:00 AM
2001 B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Taejoon Kim
Cuncong Zhong

Abstract

Data is a very rich source of knowledge and information. However, special techniques need to be applied in order to extract interesting facts and discover patterns in large data sets; this is achieved using data mining. Data mining is an interdisciplinary subfield of computer science and statistics whose overall goal is to extract information from a data set and transform it into a comprehensible structure for further use. Rule induction is a data mining technique in which formal rules are extracted from a set of observations. The rules induced may represent a full scientific model of the data or merely local patterns in the data.

Data sets, however, are not always complete and may contain missing values. Data mining also provides techniques to handle missing values in a data set. In this project, we implemented the lost value and attribute-concept value interpretations of incomplete data. Experiments were conducted on 176 data sets using three types of approximations of the concept (lower, middle, and upper), and the Modified Learning from Examples Module, version 2 (MLEM2) rule induction algorithm was used to induce rule sets.

The goal of the project was to show that the complexity of rule sets derived from data sets with missing attribute values is smaller for the attribute-concept value interpretation than for the lost value interpretation. The size of the rule set was always smaller for the attribute-concept value interpretation. As a secondary objective, we also explored which type of approximation provides the smallest rule sets.
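The three approximations used in the experiments are instances of the standard probabilistic approximation, parameterized by a threshold alpha: alpha = 1 gives the lower approximation, alpha = 0.5 the middle, and any small positive alpha the upper. A sketch over precomputed characteristic sets (the mapping from cases to characteristic sets is assumed given):

```python
def probabilistic_approximation(char_sets, concept, alpha):
    # Union of characteristic sets K(x), x in the concept, whose
    # conditional probability |K(x) & concept| / |K(x)| is at least alpha
    out = set()
    for x in concept:
        K = char_sets[x]
        if len(K & concept) / len(K) >= alpha:
            out |= K
    return out
```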


Mohanad Al-Ibadi - Array Processing Techniques for Estimating and Tracking of an Ice-Sheet Bottom

When & Where:

May 10, 2019 - 9:00 AM
317 Nichols Hall

Committee Members:

Shannon Blunt, Chair
John Paden
Christopher Allen
Erik Perrins
James Stiles

Abstract

Ice bottom topography layers are an important boundary condition required to model the flow dynamics of an ice sheet. In this work, using low-frequency multichannel radar data, we locate the ice bottom using two types of automatic trackers.

First, we use the multiple signal classification (MUSIC) beamformer to determine the pseudo-spectrum of the targets at each range bin. The result is passed into a sequential tree-reweighted message passing belief-propagation algorithm to track the bottom of the ice in the 3D image. This technique is successfully applied to process data collected over the Canadian Arctic Archipelago ice caps and to produce digital elevation models (DEMs) for 102 data frames. We perform crossover analysis to self-assess the generated DEMs, where flight paths cross each other and two measurements are made at the same location. Also, the tracked results are compared before and after manual corrections. We found a good match between the overlapping DEMs: the mean error of the crossover DEMs is 38±7 m, which is small relative to the average ice thickness, while the average absolute mean error of the automatically tracked ice bottom, relative to the manually corrected ice bottom, is 10 range bins.

Second, a direction of arrival (DOA)-based tracker is used to estimate the DOA of the backscatter signals sequentially from range bin to range bin using two methods: a sequential maximum a posteriori (S-MAP) estimator and an estimator based on the particle filter (PF). A dynamic flat-earth transition model is used to model the flow of information between range bins. A simulation study is performed to evaluate the performance of these two DOA trackers. The results show that the PF-based tracker can handle low-quality data better than S-MAP, but, unlike S-MAP, it saturates quickly with increasing numbers of snapshots. Also, S-MAP is successfully applied to track the ice bottom of several data frames collected over Russell Glacier, and the results are compared against those generated by the beamformer-based tracker. The results of the DOA-based techniques are the final tracked surfaces, so there is no need for an additional tracking stage as there is with the beamformer technique.


Jason Gevargizian - MSRR: Leveraging dynamic measurement for establishing trust in remote attestation

When & Where:

April 25, 2019 - 11:00 AM
246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Arvin Agah
Perry Alexander
Bo Luo
Kevin Leonard

Abstract

Measurers are critical to a remote attestation (RA) system for verifying the integrity of a remote untrusted host. Runtime measurers in a dynamic RA system sample the dynamic program state of the host to form evidence in order to establish trust with a remote appraisal system. However, existing runtime measurers are tightly integrated with specific software. Such measurers need to be written anew for each application, a manual process that is both challenging and tedious.

In this paper we present a novel approach to decouple application-specific measurement policies from the measurers tasked with performing the actual runtime measurement. We describe the MSRR (MeaSeReR) Measurement Suite, a system of tools designed with the primary goal of reducing the high degree of manual effort required to produce measurement solutions on a per-application basis.

The MSRR suite prototypes a novel general-purpose measurement system, the MSRR Measurement System, that is agnostic of the target application. Furthermore, we describe a robust high-level measurement policy language, MSRR-PL, that can be used to write per-application policies for the MSRR Measurer. Finally, we provide a tool to automatically generate MSRR-PL policies for target applications by leveraging state-of-the-art static analysis tools.

In this work, we show how the MSRR suite can be used to significantly reduce the time and effort spent designing measurers anew for each application. We describe MSRR's robust querying language, which allows the appraisal system to accurately specify what, when, and how to measure. We describe the capabilities and the limitations of our measurement policy generation tool. We evaluate MSRR's overhead and demonstrate its functionality through real-world case studies. We show that MSRR has an acceptable overhead on a host of applications with various measurement workloads.


Surya Nimmakayala - Heuristics to predict and eagerly translate code in DBTs

When & Where:

April 19, 2019 - 10:00 AM
2001 B Eaton Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

Dynamic Binary Translators (DBTs) have a variety of uses, such as instrumentation, profiling, security, and portability. In order for an application to run with these enhanced additional features (not originally part of its design), it must be run under the control of a Dynamic Binary Translator. The application can be thought of as the guest application, run within the controlled environment of the translator, which acts as the host application. That way, the intended application execution flow can be enforced by the translator, thereby inducing the desired behavior in the application on the host platform (the combination of operating system and hardware).

However, there is a run-time overhead in the translator when performing the additional tasks needed to run the guest application in a controlled fashion. This overhead has limited the large-scale use of DBTs where response times can be critical: there is often a trade-off between the benefits of using a DBT and the overall application response time. So, there is a need to explore ways to speed up application execution through DBTs (given their large code base).

With the evolution of multi-core and GPU hardware architectures, multiple concurrent threads can get more work done through parallelization. Properly designing parallel applications, or parallelizing parts of existing serial code, can lead to improved application run-times through hardware architecture support.

We explore the possibility of improving the performance of a DBT named DynamoRIO. The basic idea is to speed up guest code translation by having multiple threads translate multiple pieces of code concurrently. In the ideal case, all the code blocks required for application execution are readily available ahead of time, without any stalls. For efficient eager translation, there is also a need for heuristics to better predict the next code block to be executed, which could reduce unproductive code translations at run time. The goal is to obtain application speed-up through eager translation and block prediction heuristics, with execution time close to a native run.
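One simple block-prediction heuristic of the kind called for above is a last-successor table: predict that a block's next successor will repeat its previous one. A sketch, not necessarily the heuristic implemented in this work:

```python
class LastSuccessorPredictor:
    # Predict the next code block as the last successor observed for the
    # current block; also track how often that prediction was right
    def __init__(self):
        self.table = {}
        self.hits = 0
        self.total = 0

    def observe(self, block, next_block):
        if block in self.table:
            self.total += 1
            if self.table[block] == next_block:
                self.hits += 1
        self.table[block] = next_block

    def predict(self, block):
        return self.table.get(block)
```

On loop-dominated traces this predictor is nearly always right, which is exactly when eagerly translating the predicted block ahead of time pays off.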


FARHAD MAHMOOD - Modeling and Analysis of Energy Efficiency in Wireless Handset Transceiver Systems

When & Where:

April 16, 2019 - 3:00 PM
Apollo Room, Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Victor Frost
Lingjia Liu
Bozenna Pasik-Duncan

Abstract

As wireless communication devices take a significant part in our daily life, research steps toward making these devices even faster and smarter are accelerating rapidly. The main limiting factors are energy and power consumption. Many techniques are utilized to increase a battery's capacity (ampere-hours), but that comes at the cost of raising safety concerns. The other way to increase a battery's life is to decrease the energy consumption of the device. In this work, we analyze energy-efficient communications for wireless devices based on an advanced energy consumption model that takes into account a broad range of parameters. The developed model captures relationships between transmission power, transceiver distance, modulation order, channel fading, power amplifier (PA) effects, power control, and multiple antennas, as well as other circuit components in the radio frequency (RF) transceiver. Based on the developed model, we are able to identify the optimal modulation order in terms of energy efficiency under different situations (e.g., different transceiver distances, different PA classes and efficiencies, different pulse shapes, etc.). Furthermore, we capture the impact of the system level and the network level on the PA energy via the peak-to-average ratio (PAR) and power control. We are also able to identify the impact of multiple antennas at the handset on the energy consumption and the transmitted bit rate for few and many antennas (conventional multiple-input multiple-output (MIMO) and massive MIMO) at the base station. This work provides an important framework for analyzing energy-efficient communications for different wireless systems, ranging from cellular networks to the wireless internet of things.
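The trade-off behind an optimal modulation order can be sketched with a toy energy-per-bit model: fixed circuit power favors packing more bits per symbol, while transmit power grows steeply with constellation size and distance. All constants below are illustrative and are not taken from the developed model:

```python
def energy_per_bit(b, d, p_circuit=1e-3, k=1e-9, alpha=3.5):
    # Toy model: transmit power for a 2**b-point constellation over
    # distance d grows like (2**b - 1) * d**alpha; circuit power is fixed
    p_tx = k * (2 ** b - 1) * d ** alpha
    t_symbol = 1e-6  # fixed symbol duration
    return (p_circuit + p_tx) * t_symbol / b

def best_order(d, max_b=10):
    # Bits per symbol that minimizes energy per bit at distance d
    return min(range(1, max_b + 1), key=lambda b: energy_per_bit(b, d))
```

At short range the circuit term dominates, so high-order modulation wins; at long range the PA term dominates and low-order modulation becomes optimal, which is the qualitative behavior the full model quantifies.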


DANA HEMMINGSEN - Waveform Diverse Stretch Processing

When & Where:

March 27, 2019 - 3:00 PM
Apollo Room, Nichols Hall

Committee Members:

Shannon Blunt, Chair
Christopher Allen
James Stiles

Abstract

​Stretch processing with the use of a wideband LFM transmit waveform is a commonly used technique, and its popularity is in large part due to the large time-bandwidth product that provides fine range resolution capabilities for applications that require it. It allows pulse compression of echoes at a much lower sampling bandwidth without sacrificing any range resolution. Previously, this technique has been restrictive in terms of waveform diversity because the literature shows that the LFM is the only type of waveform that will result in a tone after stretch processing. However, there are also many examples in the literature that demonstrate an ability to compensate for distortions from an ideal LFM waveform structure caused by various hardware components in the transmitter and receiver. This idea of compensating for variations is borrowed here, and the use of nonlinear FM (NLFM) waveforms is proposed to facilitate more variety in wideband waveforms that are usable with stretch processing. A compensation transform that permits the use of these proposed NLFM waveforms replaces the final fast Fourier transform (FFT) stage of the stretch processing configuration, but the rest of the RF receive chain remains the same. This modification to the receive processing structure makes possible the use of waveform diversity for legacy radar systems that already employ stretch processing. Similarly, using the same concept of compensating for distortions to the LFM structure along with the notion that a Fourier transform is essentially the matched filter bank for an LFM waveform mixed with an LFM reference, a least-squares based mismatched filtering (MMF) scheme is proposed. This MMF could likewise be used to replace the final FFT stage, and can also facilitate the application of NLFM waveforms to legacy radar systems.     
The efficacy of these filtering approaches (the compensation transform and the least-squares based MMF) is demonstrated in simulation and experimentally using open-air measurements, and they are applied to different NLFM waveform scenarios to assess the results and provide a means of comparison between the two techniques.
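The reason an LFM echo collapses to a tone after stretch processing can be seen in a few lines of algebra: mixing a delayed copy of an LFM chirp with an undelayed reference of the same chirp rate k cancels the quadratic phase, leaving a complex tone at beat frequency k·τ. A minimal NumPy sketch of this dechirp step, with purely illustrative parameters (not those of the system described in the abstract):

```python
import numpy as np

# Illustrative parameters (hypothetical, not the radar's actual values)
fs = 200e6          # sampling rate after dechirp (Hz)
T = 100e-6          # pulse duration (s)
B = 500e6           # swept bandwidth (Hz)
k = B / T           # chirp rate (Hz/s)
tau = 2e-6          # round-trip delay of a point target (s)

t = np.arange(0, T, 1 / fs)
ref = np.exp(1j * np.pi * k * t**2)            # reference LFM chirp
echo = np.exp(1j * np.pi * k * (t - tau)**2)   # delayed LFM echo

# Mixing (dechirp): the quadratic phase cancels, leaving a tone
# at frequency -k*tau plus a constant phase term.
beat = echo * np.conj(ref)

spec = np.abs(np.fft.fft(beat))
f = np.fft.fftfreq(t.size, 1 / fs)
f_peak = f[np.argmax(spec)]   # |f_peak| ~ k * tau = 10 MHz here
```

The FFT in the final line is the stage that the abstract proposes to replace, either with a compensation transform or with a least-squares MMF, when the transmit waveform is NLFM rather than LFM.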


DANIEL GOMEZ GARCIA ALVESTEGUI - Scattering Analysis and Ultra-Wideband Radar for High-Throughput Phenotyping of Wheat Canopies

When & Where:

February 4, 2019 - 1:15 PM
317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Christopher Allen
Ron Hui
Fernando Rodriguez-Morales
David Braaten

Abstract

Raising the yield of wheat crops is essential to meet projected future consumption demands, and most yield increases are expected to come from improvements in biomass accumulation. Cultivars with canopy architectures that focus light interception where photosynthetic capacity is greater achieve larger biomass accumulation rates. Varieties with improved traits could be identified with modern breeding methods, such as genomic selection, which depend on genotype-phenotype mappings. A non-destructive sensor capable of efficiently phenotyping wheat-canopy architecture parameters, such as height and the vertical distribution of projected-leaf-area-density, would be useful for developing architecture-related genotype-phenotype maps of wheat cultivars. In this presentation, new scattering analysis tools and a new 2-18 GHz radar system are presented for efficiently phenotyping the architecture of wheat canopies.
The radar system presented was designed to measure the RCS profile of wheat canopies at close range. The frequency range (2-18 GHz), topology (frequency-modulated continuous-wave, FMCW), and other radar parameters were chosen to meet that goal. Phase noise of self-interference signals is the main source of coherent and incoherent noise in FMCW radars. A new comprehensive noise analysis is presented, which predicts the power-spectral-density of the noise at the output of FMCW radars, including components related to phase noise. The new 2-18 GHz chirp generator is based on a phase-locked loop designed with a large loop bandwidth to suppress the phase noise of the chirp. Additionally, the radar RF front-end was designed to achieve low levels of LO leakage and antenna feed-through, which are the main self-interference signals of FMCW radars.
In addition to the radar system, a new efficient radar simulator was developed to predict the RCS waveforms collected from wheat canopies over the 2-18 GHz frequency range. The coherent radar simulator is based on novel geometric and fully-polarimetric scattering models of wheat canopies. The scattering models of wheat canopies, leaves with arbitrary orientation and curvature, stems and heads were validated using a full-wave commercial simulator and measurements. The radar simulator was used to derive retrieval algorithms of canopy height and projected-leaf-area-density from RCS profiles, which were tested with field-collected measurements.
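The stated 2-18 GHz sweep implies the fine vertical resolution needed to resolve structure within a canopy. A back-of-the-envelope check using the standard FMCW/pulse-compression relation delta_R = c / (2B):

```python
# Range resolution implied by the abstract's 2-18 GHz sweep.
# delta_R = c / (2*B) is the standard free-space result; actual resolution
# inside a canopy also depends on windowing and the medium.
c = 299_792_458.0        # speed of light (m/s)
f_lo, f_hi = 2e9, 18e9   # sweep limits from the abstract (Hz)
B = f_hi - f_lo          # 16 GHz of bandwidth
delta_R = c / (2 * B)    # roughly 9.4 mm
print(f"Range resolution: {delta_R * 1e3:.1f} mm")
```

A sub-centimeter resolution cell is what makes retrieving a vertical profile of projected-leaf-area-density from RCS waveforms plausible in the first place.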


AISHWARYA BHATNAGAR - Autonomous Surface Detection and Tracking for FMCW Snow Radar Using Field Programmable Gate Arrays

When & Where:

January 30, 2019 - 2:00 PM
317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Christopher Allen
Fernando Rodriguez-Morales

Abstract

Sea ice in polar regions is typically covered with a layer of snow. The thermal insulation properties and high albedo of the snow cover insulate the sea ice beneath it, maintaining low temperatures and limiting ice melt, and thus affecting sea ice thickness and growth rates. Remote sensing of snow cover thickness plays a major role in understanding the mass balance of sea ice, inter-annual variability of snow depth, and other factors which directly impact climate change. The Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas has developed an ultra-wideband FMCW Snow Radar used to measure snow thickness and map internal layers of polar firn. The radar's deployment on high-endurance, fixed-wing aircraft makes it difficult to track the surface from these platforms, due to turbulence and a limited range window. In this thesis, an automated onboard real-time surface tracker for the snow radar is presented to detect the snow surface elevation from the aircraft and track changes in that elevation. For an FMCW radar to have a long-range (high-altitude) capability, the ability to delay the reference chirp is necessary to maintain a relatively constant beat frequency. Currently, the radar uses a filter bank to bandpass the received IF signal and store the spectral power in each band by utilizing different Nyquist zones. During airborne missions in polar regions, the operator has to manually switch the filter banks one by one as the aircraft's elevation above the surface increases. The work done in this thesis eliminates this manual switching operation by providing the radar with surface detection, chirp delay, and a constant-beat-frequency feedback loop, in order to enhance the radar's long-range capability and ensure autonomous operation.
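The role of the delayed reference chirp follows directly from the FMCW beat-frequency relation f_b = k·(τ − τ_ref), where k is the chirp rate and τ the round-trip delay to the surface. A minimal sketch with hypothetical parameters (not the Snow Radar's actual values):

```python
# Why delaying the reference chirp bounds the beat frequency in an FMCW
# radar. All parameters are illustrative, not the Snow Radar's.
c = 299_792_458.0    # speed of light (m/s)
B = 6e9              # swept bandwidth (Hz), hypothetical
T = 240e-6           # chirp duration (s), hypothetical
k = B / T            # chirp rate (Hz/s)

def beat_freq(altitude_m, ref_delay_s=0.0):
    """Beat frequency for a surface return at the given altitude."""
    tau = 2 * altitude_m / c        # round-trip delay to the surface
    return k * (tau - ref_delay_s)

# Without a delayed reference, beat frequency grows linearly with altitude:
f_low = beat_freq(500)     # low-altitude survey
f_high = beat_freq(6000)   # high-altitude fixed-wing platform

# Delaying the reference by the expected round-trip time re-centers the
# beat spectrum, keeping it inside a fixed IF band:
f_comp = beat_freq(6000, ref_delay_s=2 * 5500 / c)
```

Closing the loop on this delay automatically, driven by the detected surface, is what replaces the manual filter-bank switching described above.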


Xinyang Rui - Performance Analysis of Mobile ad hoc Network Routing Protocols Using ns-3 Simulations

When & Where:

January 3, 2019 - 8:30 AM
246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Bo Luo
Gary Minden

Abstract

Mobile ad hoc networks (MANETs) consist of mobile nodes that can communicate with each other through wireless links without the help of any infrastructure. The dynamic topology of MANETs poses a significant challenge for the design of routing protocols. Many routing protocols have been developed to discover routes in MANETs through different mechanisms such as source routing and link state routing. In this thesis, we present a comprehensive performance analysis of several prominent MANET routing protocols. The protocols studied are the Destination Sequenced Distance Vector protocol (DSDV), the Optimized Link State Routing protocol (OLSR), the Ad hoc On-demand Distance Vector protocol (AODV), and Dynamic Source Routing (DSR). We evaluate their performance on metrics such as packet delivery ratio, end-to-end delay, and routing overhead through simulations in different scenarios with ns-3. These scenarios involve different node densities, node velocities, and mobility models including Steady-State Random Waypoint, Gauss-Markov, and Lévy Walk. We believe this study will be helpful for understanding mobile routing dynamics, improving current MANET routing protocols, and developing new protocols.
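The first two metrics named above have simple definitions: packet delivery ratio is the fraction of sent packets that arrive, and end-to-end delay is the mean transit time of the packets that do. A minimal sketch computed from hypothetical (sent_time, recv_time-or-None) records rather than real ns-3 traces:

```python
# Hypothetical per-packet records: (sent_time, recv_time); recv_time is
# None for a dropped packet. Real studies would extract these from ns-3
# trace output instead.
def packet_delivery_ratio(records):
    """Fraction of sent packets that were delivered."""
    delivered = sum(1 for _, recv in records if recv is not None)
    return delivered / len(records)

def mean_end_to_end_delay(records):
    """Mean transit time over delivered packets only."""
    delays = [recv - sent for sent, recv in records if recv is not None]
    return sum(delays) / len(delays)

trace = [(0.00, 0.12), (0.10, 0.25), (0.20, None), (0.30, 0.41)]
pdr = packet_delivery_ratio(trace)    # 3 of 4 delivered -> 0.75
delay = mean_end_to_end_delay(trace)  # mean of 0.12, 0.15, 0.11 s
```

Averaging these per-run values across seeds and scenarios (node density, velocity, mobility model) yields the comparison curves such a study reports.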

