DEFENSE NOTICES

All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

UPCOMING DEFENSE NOTICES

SUMANT PATHAK - A Performance and Channel Spacing Analysis of LDPC Coded APSK
MS Thesis Defense (EE)

When & Where:

July 5, 2018 - 11:00 AM
246 Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Taejoon Kim

Abstract

Amplitude-Phase Shift Keying (APSK) is a linear modulation format suitable for use in aeronautical telemetry due to its low peak-to-average power ratio (PAPR). However, since the PAPR of APSK is not exactly unity (0 dB), in practice it must be used with power amplifiers operating with backoff. To compensate for the loss in power efficiency, this work considers the pairing of Low-Density Parity Check (LDPC) codes with APSK. We consider the combinations of 16- and 32-APSK with rate 1/2, 2/3, 3/4, and 4/5 AR4JA LDPC codes with optimal and sub-optimal reduced-complexity decoding algorithms. The loss in power efficiency due to sub-optimal decoding is characterized, and the overall performance is compared to SOQPSK-TG to approximate the backoff capacity of a coded-APSK system. Another advantage of APSK-based telemetry systems is their improved bandwidth efficiency. The second part of this work considers the adjacent channel spacing of a system with multiple configurations using coded-APSK and SOQPSK-TG. We consider different combinations of 16- and 32-APSK and SOQPSK-TG and find the minimum spacing between the respective waveforms that does not distort system performance.
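
For reference, the PAPR that drives the backoff requirement is the standard ratio (a general definition, not specific to this thesis):

    \mathrm{PAPR} = 10 \log_{10} \frac{\max_t |s(t)|^2}{\mathbb{E}\left[|s(t)|^2\right]} \ \text{dB}

where s(t) is the complex baseband waveform. A constant-envelope signal such as SOQPSK-TG attains 0 dB, while filtered APSK does not, so the amplifier must be backed off accordingly.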

 


 

DAVID MENAGER - A Cognitive Systems Approach to Explainable Autonomy
MS Thesis Defense (CS)

When & Where:

July 2, 2018 - 1:00 PM
2001B Eaton Hall

Committee Members:

Arvin Agah, Chair
Dongkyu Choi, co-chair
Michael Branicky
Andrew Williams

Abstract

Human-computer interaction (HCI) and artificial intelligence (AI) research have greatly progressed over the years. Work in HCI aims to create cyberphysical systems that facilitate good interactions with humans, while work in AI aims to understand the causes of intelligent behavior and reproduce them on a computer. To this point, HCI systems have typically avoided the AI problem, and AI researchers have typically focused on building systems that work alone or with other AI systems, but de-emphasize human collaboration. In this thesis, we examine the role of episodic memory in constructing intelligent agents that can collaborate with and learn from humans. We present our work showing that agents with episodic memory capabilities can expose their internal decision-making process to users, and that an agent can learn relational planning operators from episodic traces.

 


 

PAST DEFENSE NOTICES


KRISHNA TEJA KARIDI - Improvements to the CReSIS HF-VHF Sounder and UHF Accumulation Radar

When & Where:

June 11, 2018 - 10:00 AM
317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Fernando Rodriguez-Morales, Co-Chair
Chris Allen

Abstract

This thesis documents the improvements made to a UHF radar system for snow accumulation measurements and the implementation of an airborne HF radar system for ice sounding. The HF sounder radar was designed to operate at two discrete frequency bands centered at 14.1 MHz and 31.5 MHz with a peak power level of 1 kW, representing an order-of-magnitude increase over earlier implementations. A custom transmit/receive module was developed with a set of lumped-element impedance matching networks suitable for integration on a Twin Otter aircraft. The system was integrated and deployed to Greenland in 2016, showing improved detection capabilities for the ice/bottom interface in some areas of Jakobshavn Glacier and the potential for cross-track aperture formation to mitigate surface clutter. The performance of the UHF radar (also known as the CReSIS Accumulation radar) was significantly improved by transitioning from a single-channel realization with 5-10 W peak transmit power to a multi-channel system with 1.6 kW. This was accomplished by developing custom transmit/receive modules capable of handling 400 W peak per channel with fast switching, incorporating a high-speed waveform generator and data acquisition system, and upgrading the baluns which feed the antenna elements. We demonstrated dramatically improved observation capabilities over the course of two different field seasons in Greenland onboard the NASA P-3.

 

 


SRAVYA ATHINARAPU - Model Order Estimation and Array Calibration for Synthetic Aperture Radar Tomography

When & Where:

June 8, 2018 - 10:00 AM
317 Nichols Hall

Committee Members:

Jim Stiles, Chair
John Paden, Co-Chair
Shannon Blunt

Abstract

The performance of several methods for estimating the number of source signals impinging on a sensor array is compared using a traditional simulator, and their performance for synthetic aperture radar tomography is discussed, as this is useful in the fields of radar and remote sensing when multichannel arrays are employed. All methods use the sum of the likelihood function with a penalty term. We consider two signal models for model selection and refer to these as suboptimal and optimal. The suboptimal model uses a simplified signal model, and the model selection and direction-of-arrival estimation are done in separate steps. The optimal model uses the actual signal model, and the model selection and direction-of-arrival estimation are done in the same step. In the literature, suboptimal model selection is used because of computational efficiency, but in our radar post-processing we are less time-constrained, so we implement the optimal model for the estimation and compare the performance results. Interestingly, we find that several methods discussed in the literature do not work using optimal model selection, but can work if the optimal model selection is normalized. We also formulate a new penalty term, numerically tuned so that it gives optimal performance over a particular set of operating conditions, and compare this method as well. The primary contribution of this work is the development of an optimizer that finds a numerically tuned penalty term that outperforms current methods, and a discussion of the normalization techniques applied to optimal model selection. Simulation results show that the numerically tuned model selection criterion is optimal and that the typical methods do not do well for low numbers of snapshots, which are common in radar and remote sensing applications. We apply the algorithms to data collected by the CReSIS radar depth sounder and discuss the results.
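
All of the compared criteria share the generic penalized-likelihood form (a standard formulation; the specific penalties are those studied in the thesis):

    \hat{q} = \arg\min_{q} \left\{ -2 \ln L(\hat{\theta}_q) + P(q) \right\}

where L(\hat{\theta}_q) is the likelihood of the data under a model with q sources and P(q) is the penalty term, e.g. P(q) = 2q for AIC or P(q) = q \ln N for MDL/BIC with N snapshots (the penalty argument varies with the parameter count of the chosen signal model).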

In addition to model order estimation, array model errors should be estimated to improve direction of arrival estimation. The implementation of a parametric-model is discussed for array calibration that estimates the first and second order array model errors. Simulation results for the gain, phase and location errors are discussed.


PRANJALI PARE - Development of a PCB with Amplifier and Discriminator for the Timing Detector in CMS-PPS

When & Where:

May 31, 2018 - 10:00 AM
2001B Eaton Hall

Committee Members:

Chris Allen, Chair
Christophe Royon, Co-Chair
Ron Hui
Carl Leuschen

Abstract

The Compact Muon Solenoid - Precision Proton Spectrometer (CMS-PPS) detector at the Large Hadron Collider (LHC) operates at high luminosity and is designed to measure forward-scattered protons resulting from proton-proton interactions involving photon and Pomeron exchange processes. The PPS uses tracking and timing detectors for these measurements. The timing detectors measure the arrival time of the protons on each side of the interaction, and the difference of these times is used to reconstruct the vertex of the interaction. A good time precision (~10 ps) on the arrival time is desired to obtain a good precision (~2 mm) on the vertex position. The time precision is approximately equal to the ratio of the Root Mean Square (RMS) noise to the slew rate of the signal obtained from the detector.
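
Written explicitly, this is the standard timing-jitter approximation:

    \sigma_t \approx \frac{\sigma_n}{|dV/dt|}

where \sigma_n is the RMS voltage noise and dV/dt is the slew rate at the discriminator threshold; for example, 1 mV of RMS noise on an edge slewing at 100 mV/ns gives \sigma_t of about 10 ps.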

Components of the timing detector include Ultra-Fast Silicon Detector (UFSD) sensors that generate a current pulse, a transimpedance amplifier with shaping, and a discriminator. This thesis discusses the circuit schematic and simulations of an amplifier designed to meet this time precision, and the choice and simulation of discriminators with Low Voltage Differential Signaling (LVDS) outputs. Additionally, details of the Printed Circuit Board (PCB) design, including the arrangement of components, traces, and stackup, are discussed for a 6-layer PCB that houses these three components. The PCB has been manufactured, and tests were performed to assess its functionality.

 


AMIR MODARRESI - Network Resilience Architecture and Analysis for Smart Cities

When & Where:

May 25, 2018 - 9:00 AM
246 Nichols Hall

Committee Members:

James Sterbenz, Chair
Victor Frost
Fengjun Li
Bo Luo
Egemen Cetinkaya

Abstract

The Internet of Things (IoT) is evolving rapidly into every aspect of human life, including healthcare, homes, cities, and driverless vehicles, making humans more dependent on the Internet and related infrastructure. While many researchers have studied the structure of the Internet and its resilience as a whole, new studies are required to investigate the resilience of the edge networks in which people and “things” connect to the Internet. Since the range of service requirements varies at the edge of the network, a wide variety of protocols are needed. In this research proposal, we survey standard protocols and IoT models. Next, we propose an abstract model for smart homes and cities to illustrate the heterogeneity and complexity of network structure. Our initial results show that the heterogeneity of the protocols has a direct effect on IoT and smart city resilience. As the next step, we derive a graph model from the proposed model and perform graph-theoretic analysis to recognize the fundamental behavior of the network and improve its robustness. We perform the process of improvement through modifying the topology and adding extra nodes and links when necessary. Finally, we will conduct various simulation studies on the model to validate its resilience.
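
As a minimal illustration of this kind of graph-theoretic robustness analysis (a sketch on a hypothetical topology, not the author's actual model), algebraic connectivity can be compared before and after adding redundant links:

    import networkx as nx  # algebraic_connectivity also requires SciPy

    # Hypothetical smart-home topology: two gateways, each serving a few devices.
    G = nx.Graph()
    G.add_edges_from([("gw1", d) for d in ("cam1", "lock1", "therm1")])
    G.add_edges_from([("gw2", d) for d in ("cam2", "tv2")])
    G.add_edge("gw1", "gw2")  # a single link joins the two subnets

    print(nx.algebraic_connectivity(G))  # low value: fragile topology

    # Add a redundant inter-gateway path and re-check robustness.
    G.add_edge("cam1", "cam2")
    print(nx.algebraic_connectivity(G))  # higher value: more robust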


VENKAT VADDULA - Content Analysis in Microblogging Communities

When & Where:

May 18, 2018 - 12:00 PM
2001B Eaton Hall

Committee Members:

Nicole Beckage, Chair
Jerzy Grzymala-Busse
Bo Luo

Abstract

People use online social networks like Twitter to communicate and discuss a variety of topics. This makes these social platforms an important source of information. In the case of Twitter, making sense of this source of information requires understanding the content of tweets: what is being discussed on the platform and how the ideas and opinions of a group coalesce around certain themes. Although there are many algorithms to classify (identify) topics, the restricted length of tweets and the use of jargon, abbreviations, and URLs make this hard to do without immense expertise in natural language processing. To address the need for content analysis in Twitter that is easily implementable, we introduce two measures based on term frequency to identify the topics in the Twitter microblogging environment. We apply these measures to tweets with hashtags related to the Pulse night club shooting in Orlando that happened on June 12, 2016. This event was branded as both a terrorist attack and a hate crime, and different people on Twitter tweeted about it differently, forming social network communities, making this a fitting domain to explore our algorithms' ability to detect the topics of community discussions on Twitter. Using community detection algorithms, we discover communities in Twitter. We then use frequency statistics and Monte Carlo simulation to determine the significance of certain hashtags. We show that this approach is capable of uncovering differences in community discussions and propose this method as a means to do community-based content detection.
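
The Monte Carlo step can be sketched as a simple permutation-style test (assumed data layout, not the authors' exact code): draw random pseudo-communities of the same size from the whole corpus and ask how often a hashtag appears at least as often as observed.

    import random

    def hashtag_significance(community_tags, corpus_tags, tag, trials=10000):
        """Estimate a p-value for `tag` being over-represented in a community."""
        observed = community_tags.count(tag)
        n = len(community_tags)
        exceed = 0
        for _ in range(trials):
            sample = random.choices(corpus_tags, k=n)  # random pseudo-community
            if sample.count(tag) >= observed:
                exceed += 1
        return exceed / trials  # small p-value: tag is community-specific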


TEJASWINI JAGARLAMUDI - Community-based Content Analysis of the Pulse Night Club Shooting

When & Where:

May 18, 2018 - 9:30 AM
2001B Eaton Hall

Committee Members:

Nicole Beckage, Chair
Prasad Kulkarni
Fengjun Li

Abstract

On June 12, 2016, 49 people were killed and another 58 wounded in an attack at Pulse Nightclub in Orlando, Florida. This incident was regarded both as a hate crime against LGBT people and as a terrorist attack. This project focuses on analyzing tweets from the week after the attack, specifically looking at how different communities within Twitter were discussing this event. To understand how Twitter users in different communities discussed the event, a set of hashtag frequency-based evaluation measures and simulations is proposed. The simulations are used to assess the specific hashtag content of a community. Using community detection algorithms and text analysis tools, we discover the significant topics that specific communities are discussing and the topics that are being avoided by those communities.


NIHARIKA GANDHARI - A Comparative Study on Strategies of Rule Induction for Incomplete Data

When & Where:

May 14, 2018 - 11:30 AM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Perry Alexander
Bo Luo

Abstract

Rule induction is one of the major applications of rough set theory. However, traditional rough set models cannot deal with incomplete data sets. Missing values can be handled by data preprocessing or by extensions of rough set models. Two data preprocessing methods and one extension of the rough set model are considered in this project: filling in missing values with the most common data, ignoring objects by deleting records, and the extended discernibility matrix. The objective is to compare these methods in terms of stability and effectiveness. All three methods use the same rule induction method and are analyzed based on test accuracy and the percentage of missing attribute values. To better understand the properties of these approaches, eight real-life data sets with varying levels of missing attribute values are used for testing. Based on the results, we discuss the relative merits of the three approaches in an attempt to decide upon an optimal approach. The final conclusion is that the best method is the preprocessing method of filling in missing values with the most common data.


MADHU CHEGONDI - A Comparison of Leaving-one-out and Bootstrap

When & Where:

May 11, 2018 - 9:30 AM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Richard Wang

Abstract

Recently, machine learning has created significant advancements in many areas like health, finance, education, and sports, which has encouraged the development of many predictive models. In machine learning, we extract hidden, previously unknown, and potentially useful high-level information from low-level data. Cross-validation is a typical strategy for estimating performance: it simulates the process of fitting to different datasets and seeing how different the predictions can be. In this project, we review accuracy estimation methods and compare two resampling methods: leaving-one-out and bootstrap. We compared these validation methods using the LEM1 rule induction algorithm. Our results indicate that for real-world datasets similar to ours, bootstrap may be optimistic.
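
The two schemes differ only in how they resample the data; a minimal sketch (with a stand-in classifier, since the project itself uses LEM1) is:

    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.tree import DecisionTreeClassifier

    def loo_error(X, y):
        # Train on all rows but one; test on the held-out row; average.
        errs = []
        for train, test in LeaveOneOut().split(X):
            clf = DecisionTreeClassifier().fit(X[train], y[train])
            errs.append(clf.predict(X[test])[0] != y[test][0])
        return np.mean(errs)

    def bootstrap_error(X, y, trials=100, seed=0):
        # Train on n rows drawn with replacement; test on the out-of-bag rows.
        rng = np.random.RandomState(seed)
        n, errs = len(y), []
        for _ in range(trials):
            idx = rng.randint(0, n, n)
            oob = np.setdiff1d(np.arange(n), idx)
            if len(oob) == 0:
                continue
            clf = DecisionTreeClassifier().fit(X[idx], y[idx])
            errs.append(np.mean(clf.predict(X[oob]) != y[oob]))
        return np.mean(errs)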


PATRICK McCORMICK - Design and Optimization of Physical Waveform-Diverse and Spatially-Diverse Emissions

When & Where:

May 10, 2018 - 2:30 PM
129 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Chris Allen
Alessandro Salandrino
Jim Stiles
Emily Arnold*

Abstract

With the advancement of arbitrary waveform generation techniques, new radar transmission modes can be designed via precise control of the waveform's time-domain signal structure. The finer degree of emission control for a waveform (or multiple waveforms via a digital array) presents an opportunity to reduce ambiguities in the estimation of parameters within the radar backscatter. While this freedom opens the door to new emission capabilities, one must still consider the practical attributes for radar waveform design. Constraints such as constant amplitude (to maintain sufficient power efficiency) and continuous phase (for spectral containment) are still considered prerequisites for high-powered radar waveforms. These criteria are also applicable to the design of multiple waveforms emitted from an antenna array in a multiple-input multiple-output (MIMO) mode.

In this work, two spatially-diverse radar emission design methods are introduced that provide constant amplitude, spectrally-contained waveforms. The first design method, denoted as spatial modulation, designs the radar waveforms via a polyphase-coded frequency-modulated (PCFM) framework to steer the coherent mainbeam of the emission within a pulse. The second design method is an iterative scheme to generate waveforms that achieve a desired wideband and/or widebeam radar emission. However, a wideband and widebeam emission can place a portion of the emitted energy into what is known as the ‘invisible’ space of the array, which is related to the storage of reactive power that can damage a radar transmitter. The proposed design method purposefully avoids this space and a quantity denoted as the Fractional Reactive Power (FRP) is defined to assess the quality of the result.

The design of FM waveforms via traditional gradient-based optimization methods is also considered. A waveform model is proposed that is a generalization of the PCFM implementation, denoted as coded-FM (CFM), which defines the phase of the waveform via a summation of weighted, predefined basis functions. Therefore, gradient-based methods can be used to minimize a given cost function with respect to a finite set of optimizable parameters. A generalized integrated sidelobe metric is used as the optimization cost function to minimize the correlation range sidelobes of the radar waveform.
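
The CFM model described above can be written compactly (notation assumed here, following the abstract's own description):

    s(t) = \exp\{ j\,\phi(t) \}, \qquad \phi(t) = \sum_{n=1}^{N} \alpha_n\, b_n(t)

so the constant-amplitude constraint holds by construction, and the gradient of the sidelobe cost function is taken with respect to the finite coefficient set \{\alpha_n\}.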


MATT KITCHEN - Blood Phantom Concentration Measurement Using An I-Q Receiver Design

When & Where:

May 10, 2018 - 1:00 PM
250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Jim Stiles

Abstract

Near-infrared spectroscopy has been used as a non-invasive method of determining the concentration of chemicals within the living tissues of organisms. This method employs LEDs of specific frequencies to measure the concentration of blood constituents according to the Beer-Lambert Law. One group of instruments (frequency-domain instruments) is based on amplitude modulation of the laser diode or LED light intensity, the measurement of light absorption, and the measurement of modulation phase shift to determine the light path length for use in the Beer-Lambert Law. This paper describes the design and demonstration of a frequency-domain instrument for measuring the concentration of oxygenated and de-oxygenated hemoglobin using incoherent optics and an in-phase quadrature (I-Q) receiver design. The design has been shown to be capable of resolving variations in the concentration of test samples and is a viable prototype for future, more precise tools.
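
The underlying relation is the Beer-Lambert law in its standard form:

    A = \log_{10}\frac{I_0}{I} = \varepsilon\, c\, \ell

where A is the absorbance, \varepsilon the extinction coefficient, c the chromophore concentration, and \ell the optical path length; the frequency-domain approach recovers \ell from the modulation phase shift, leaving c as the unknown to be solved for.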

 


LIANYU LI - Wireless Power Transfer

When & Where:

May 10, 2018 - 11:00 AM
250 Nichols Hall

Committee Members:

Alessandro Salandrino, Chair
Reza Ahmadi
Ron Hui

Abstract

Wireless power transfer is commonly understood as the transfer of electrical energy from a source to a load across some distance without any connecting wires. It has been almost two hundred years since people first noticed the electromagnetic induction phenomenon. After that, Nikola Tesla tried to use this concept to build the first wireless power transfer device. Today, the most common technique used to transfer power wirelessly is known as inductive coupling. It has revolutionized the transmission of power in various applications. Wireless power transfer is one of the simplest and least expensive ways to transfer energy, and it will change how people use their devices.

With the development of science and technology, a new method of wireless power transfer through coupled magnetic resonances could be the next technology that brings the future nearer. It significantly increases the transmission distance and efficiency. This project shows that coupled magnetic resonances offer a very simple way to charge low-power devices wirelessly. It also presents how easy the system is to set up compared to conventional copper cables and current-carrying wires.
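
For context, both coils in a resonantly coupled link are tuned to a common resonant frequency (a textbook relation):

    f_0 = \frac{1}{2\pi\sqrt{LC}}

and transfer efficiency peaks when transmitter and receiver share f_0, which is what extends the usable range well beyond plain inductive coupling.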


TONG XU - Real-Time DSP Enabled Multi-Carrier Cross-Connect for Optical Systems

When & Where:

May 9, 2018 - 2:00 PM
246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Esam El-Araby
Erik Perrins
Hui Zhao*

Abstract

Owing to the ever-increasing data traffic in today’s optical networks, how to utilize the optical bandwidth more efficiently has become a critical issue. Optical wavelength division multiplexing (WDM) multiplexes multiple optical carrier signals into a single fiber by using different wavelengths of laser light. Optical cross-connects (OXCs) and switches based on optical WDM can greatly improve the performance of optical networks, resulting in reduced complexity, signal transparency, and significant electrical energy savings. However, OXC alone cannot fully exploit the available optical bandwidth due to its coarse bandwidth granularity imposed by optical filtering. Thus, OXC may not meet the requirements of applications in which a sub-band has a small bandwidth. In order to achieve smaller bandwidth granularities, an electrical digital cross-connect (DXC) can be added to the current optical network.

In this work, we proposed a scheme for a real-time digital signal processing (DSP) enabled multi-carrier cross-connect which can dynamically assign bandwidth and allocate power to each individual subcarrier channel. This cross-connect is based on digital subcarrier multiplexing (DSCM), which is a frequency division multiplexing (FDM) technique. Either Nyquist WDM (N-WDM) or orthogonal frequency division multiplexing (OFDM) can be used to implement real-time enabled DSCM. DSCM multiplexes digitally created subcarriers on each optical wavelength. Compared with optical WDM, DSCM has a smaller bandwidth granularity because it multiplexes subcarriers in the electrical domain. DSCM also provides more flexibility, since operations such as distortion compensation and signal regeneration can be conducted using DSP algorithms.

We built a real-time DSP platform based on a Virtex-7 FPGA, which allows the testing of real-time DSP algorithms for multi-carrier cross-connects in optical systems. We have implemented a real-time DSP enabled multi-carrier cross-connect based on up/down sampling and filtering. This technique saves DSP resources, since local oscillators (LOs) are not needed for spectral translation. We obtained preliminary results from theoretical analysis, simulation, and experiment, and the performance and resource cost of this cross-connect have been analyzed. This real-time DSP enabled cross-connect also has the potential to reduce cost in applications such as the mobile fronthaul in 5G next-generation wireless networks.

 


RAHUL KAKANI - Discretization Based on Entropy and Multiple Scanning

When & Where:

May 9, 2018 - 10:30 AM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Man Kong
Prasad Kulkarni

Abstract

An enormous amount of data is being generated due to advancements in technology. The basic question of discovering knowledge from the generated data is still pertinent. Data mining guides us in discovering patterns or rules. Rules are frequently identified by a technique known as rule induction, which is regarded as the benchmark technique in data mining and was primarily developed to handle symbolic data. Real-life data often consist of numerical attributes; hence, in order to fully utilize the power of rule induction, a preprocessing step known as discretization is involved, which converts numeric data into symbolic data.

We present two entropy-based discretization techniques, known as the dominant attribute approach and the multiple scanning approach, respectively. These approaches were implemented as two explicit algorithms in the C# programming language and applied to nine well-known numerical data sets. For every dataset, the multiple scanning experiment was repeated with incremental scans until the interval counts were stable. Preliminary results suggest that the multiple scanning approach performs better than the dominant attribute approach, producing comparatively simpler rules and a smaller error rate.
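
The core step of an entropy-based discretizer is choosing the cut-point that minimizes the weighted entropy of the two resulting blocks; a simplified sketch for one numerical attribute (not the project's C# implementation) is:

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def best_cutpoint(values, labels):
        """Return the cut-point minimizing weighted entropy of the two blocks."""
        pairs = sorted(zip(values, labels))
        best, best_h = None, float("inf")
        for i in range(1, len(pairs)):
            if pairs[i - 1][0] == pairs[i][0]:
                continue  # no boundary between equal values
            cut = (pairs[i - 1][0] + pairs[i][0]) / 2
            left = [l for v, l in pairs if v < cut]
            right = [l for v, l in pairs if v >= cut]
            h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
            if h < best_h:
                best, best_h = cut, h
        return best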

 


SHADI PIR HOSSEINLOO - Supervised Speech Separation Based on Deep Neural Network

When & Where:

May 7, 2018 - 1:15 PM
317 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Jonathan Brumberg, Co-Chair
Erik Perrins
Dave Petr
John Hansen

Abstract

In real-world environments, the speech signals received by our ears are usually a combination of different sounds that include not only the target speech, but also acoustic interference like music, background noise, and competing speakers. This interference has a negative effect on speech perception and degrades the performance of speech processing applications such as automatic speech recognition (ASR) and hearing aid devices. One way to solve this problem is to use source separation algorithms to separate the desired speech from the interfering sounds. Many source separation algorithms have been proposed to improve the performance of ASR systems and hearing aid devices, but it is still challenging for these systems to work efficiently in noisy and reverberant environments. On the other hand, humans have a remarkable ability to separate desired sounds and listen to a specific talker among noise and other talkers. Inspired by the capabilities of the human auditory system, a popular method known as auditory scene analysis (ASA) was proposed to separate different sources in a two-stage process of segmentation and grouping. The main goal of source separation in ASA is to estimate time-frequency masks that optimally match and separate noise signals from a mixture of speech and noise. Three major aims are proposed to improve upon source separation in noisy and reverberant acoustic signals. First, a simple and novel algorithm is proposed to increase the discriminability between two sound sources by magnifying the head-related transfer function of the interfering source. Experimental results show a significant increase in the quality of the recovered target speech. Second, a time-frequency masking-based source separation algorithm is proposed that can separate a male speaker from a female speaker in reverberant conditions by using the spatial cues of the sources. Furthermore, the proposed algorithm also has the ability to preserve the location of the sources after separation.

Finally, a supervised speech separation algorithm is proposed based on deep neural networks to estimate the time-frequency masks. Initial experiments show promising results for separating sources in noisy and reverberant conditions. Continued work is focused on identifying the best network training features and network structure that are robust to different types of noise, speakers, and reverberation. The main goal of the proposed algorithm is to increase the intelligibility and quality of the recovered speech from noisy environments, which has the potential to improve both speech processing applications and signal processing strategies for hearing aid technology.
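
A common training target for such mask estimation (one standard choice, not necessarily the exact target used in this work) is the ideal binary mask:

    \mathrm{IBM}(t,f) = \begin{cases} 1, & \mathrm{SNR}(t,f) > \theta \\ 0, & \text{otherwise} \end{cases}

which retains the time-frequency units dominated by the target speech and suppresses the rest; the network learns to predict such a mask from features of the noisy mixture.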


CHENG GAO - Mining Incomplete Numerical Data Sets

When & Where:

May 7, 2018 - 12:00 PM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
Richard Wang
Tyrone Duncan
Xuemin Tu*

Abstract

Incomplete and numerical data are common in many application domains. There have been many approaches to handling missing data in statistical analysis and data mining. To deal with numerical data, discretization is crucial for many machine learning algorithms. However, little work has been done on discretization for incomplete data.

This research mainly focuses on the question of whether conducting discretization as preprocessing gives better results than using a data mining method alone. Multiple Scanning is an entropy-based discretization method. Previous research has shown that it outperforms commonly used discretization methods: Equal Width and Equal Frequency discretization. In this work, Multiple Scanning is tested with C4.5 and MLEM2 on incomplete numerical data sets. Results show that for some data sets the setup utilizing Multiple Scanning as preprocessing performs better, while for the other data sets C4.5 or MLEM2 should be used by themselves. Our secondary objective is to test which of the three known interpretations of missing attribute values is better when using MLEM2. Results show that running MLEM2 on data sets with attribute-concept values performs worse than with the other two types of missing values. Last, we compared the error rate between C4.5 combined with Multiple Scanning (MS-C4.5) and MLEM2 combined with Multiple Scanning (MS-MLEM2) on data sets with different percentages of missing attribute values. Possible rules induced by MS-MLEM2 give a better result on data sets with "do-not-care" conditions. MS-C4.5 is preferred on data sets with lost values and attribute-concept values.

Our conclusion is that there are no universal optimal solutions for all data sets. Setup should be custom-made based on the data sets.

 


GOVIND VEDALA - Digital Compensation of Transmission Impairments in Multicarrier Fiber Optic Systems

When & Where:

May 7, 2018 - 10:00 AM
246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Erik Perrins
Alessandro Salandrino
Carey Johnson*

Abstract

Time and again, the fiber optic medium has proved to be the best means for transporting global data traffic, which is following an exponential growth trajectory. High-bandwidth applications based on cloud computing, virtual reality, and big data necessitate maximum effective utilization of the available fiber bandwidth. To this end, multicarrier superchannel transmission systems, aided by robust digital signal processing at both transmitter and receiver, have proved to enhance spectral efficiency and achieve multi-terabit-per-second data rates.

With respect to transmission sources, laser technology too has made significant strides, especially in the domain of multiwavelength sources such as quantum dot passive mode-locked laser (QD-PMLL) based optical frequency combs. In the present research work, we characterize the phase dynamics of comb lines from a QD-PMLL based on a novel multiheterodyne coherent detection technique. The inherently broad linewidth of the comb lines, on the order of tens of MHz, makes it difficult for conventional digital phase noise compensation algorithms to track the large phase noise, especially for low-baud-rate subcarriers using higher-cardinality modulation formats. In the context of a multi-subcarrier, Nyquist pulse shaped superchannel transmission system with coherent detection, we demonstrate through measurements an efficient phase noise compensation technique called “Digital Mixing,” which exploits the mutual phase coherence among the comb lines. For QPSK and 16-QAM modulation formats, digital mixing provided significant improvement in bit error rate (BER) performance. For short-reach data center and passive optical network based applications, which adopt direct detection, a single optical amplifier is generally used to meet the power budget requirements to achieve the desired BER. The Semiconductor Optical Amplifier (SOA), with its small form factor, is a low-cost power booster that can be designed to operate at any desired wavelength and, most importantly, can be integrated with the transmitter. However, saturated SOAs introduce nonlinear distortions on the amplified signal. Alongside the SOA, the photodiode also introduces nonlinear mixing in the form of Signal-Signal Beat Interference (SSBI). In this research, we study the impact of SOA nonlinearity on the effectiveness of SSBI compensation in a direct detection OFDM based transmission system. We experimentally demonstrate a digital compensation technique to undo the SOA nonlinearity effect by digitally back-propagating the received signal through a virtual SOA, thereby effectively eliminating the SSBI.


VENKAT ANIRUDH YERRAPRAGADA - Comparison of Minimum Cost Perfect Matching Algorithms in solving the Chinese Postman Problem

When & Where:

May 4, 2018 - 1:00 PM
2001B Eaton Hall

Committee Members:

Man Kong, Chair
Perry Alexander
Jerzy Grzymala-Busse

Abstract

The Chinese Postman Problem, also known as the Route Inspection Problem, is a famous arc routing problem in graph theory. In this problem, a postman has to deliver mail along a system of streets such that all the streets are visited at least once, and then return to his starting point. The problem is to find a path, called the optimal postman tour, such that the distance travelled by the postman following this path is the minimum distance needed to visit all the streets at least once. In graph theory, we represent the street system as a weighted graph whose edges represent the streets and whose vertices represent the street intersections. A graph can be directed, undirected, or mixed. Directed and undirected edges represent one-way and two-way streets, respectively. A mixed graph has both directed and undirected edges.

The Chinese Postman Problem can be divided into several subproblems, of which finding the minimum cost perfect matching is the critical part. For a directed graph, the minimum cost perfect matching of a bipartite graph has to be computed. For an undirected graph, the minimum cost perfect matching of a general graph has to be computed. There are different matching algorithms to compute the minimum cost perfect matching efficiently. In this project, I have studied and implemented four different matching algorithms used in computing an optimal postman tour: Edmonds' Blossom Algorithm and a Branch and Bound Algorithm for the undirected graph, and the Hungarian Algorithm and a Branch and Bound Algorithm for the directed graph. The objective of this project is to compare the performance of these matching algorithms on graphs of different sizes and densities.
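
For the undirected case, the matching subproblem can be illustrated with networkx (a sketch on a toy street network, not the project's own implementation): pair up the odd-degree intersections at minimum total shortest-path cost.

    import networkx as nx

    G = nx.Graph()  # toy street network; weights are street lengths
    G.add_weighted_edges_from([
        ("A", "B", 4), ("B", "C", 3), ("C", "D", 2), ("D", "A", 5), ("A", "C", 6),
    ])

    odd = [v for v, deg in G.degree() if deg % 2 == 1]

    # Complete graph on odd-degree vertices, weighted by shortest-path distance.
    K = nx.Graph()
    for i, u in enumerate(odd):
        for v in odd[i + 1:]:
            K.add_edge(u, v, weight=nx.shortest_path_length(G, u, v, weight="weight"))

    matching = nx.min_weight_matching(K)  # streets to traverse twice
    print(matching)  # e.g. {("A", "C")}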


SRI MOUNICA MOTIPALLI - Analysis of Privacy Protection Mechanisms in Social Networks using the Social Circle Model

When & Where:

May 2, 2018 - 11:00 AM
2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Perry Alexander
Jerzy Grzymala-Busse

Abstract

Many online social networks are increasingly being used as information sharing platforms. With a massive increase in the number of users participating in information sharing, an enormous amount of information becomes available on such sites. It is vital to preserve users’ privacy without preventing them from socialization. Unfortunately, many existing models have overlooked a very important fact: a user may want different information boundary preferences for different information. To address this shortcoming, in this paper I introduce a ‘social circle’ model, which follows the concepts of ‘private information boundaries’ and ‘restricted access and limited control’. While facilitating socialization, the social circle model also provides some privacy protection capabilities. I then utilize this model to analyze the most popular social networks (such as Facebook, Google+, VKontakte, Flickr, and Instagram) and demonstrate potential privacy vulnerabilities in some of these networking sites. Lastly, I discuss the implications of the analysis and possible future directions.


PEGAH NOKHIZ - Understanding User Behavior in Social Networks Using Quantified Moral Foundations

When & Where:

May 2, 2018 - 9:00 AM
246 Nichols Hall

Committee Members:

Fengjun Li, Chair
Bo Luo
Cuncong Zhong

Abstract

Moral inclinations expressed in user-generated content such as online reviews or tweets can provide useful insights for understanding users’ behavior and activities in social networks, for example, to predict users’ rating behavior, perform customer feedback mining, and study users’ tendency to spread abusive content on these social platforms. In this work, we want to answer two important research questions. First, can the moral attributes of social network data provide additional useful information about users’ behavior, and how can we utilize this information to enhance our understanding? To answer this question, we used the Moral Foundations Theory and Doc2Vec, a Natural Language Processing technique, to compute the quantified moral loadings of user-generated textual content in social networks. We used conditional relative frequency and the correlations between the moral foundations as two measures to study the moral breakdown of the social network data, utilizing a dataset of Yelp reviews and a dataset of tweets on abusive user-generated content. Our findings indicated that these moral features are tightly bound with users’ behavior in social networks. The second question we want to answer is whether we can use the quantified moral loadings as new boosting features to improve the differentiation, classification, and prediction of social network activities. To test our hypothesis, we adopted our new moral features in a multi-class classification approach to distinguish hateful and offensive tweets in a labeled dataset, and compared with a baseline approach that only uses conventional text mining features such as tf-idf features, Part of Speech (PoS) tags, etc. Our findings demonstrated that the moral features improved the performance of the baseline approach in terms of precision, recall, and F-measure.


MUSTAFA AL-QADI - Laser Phase Noise and Performance of High-Speed Optical Communication Systems

When & Where:

April 30, 2018 - 10:00 AM
2001B Eaton Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Victor Frost
Erik Perrins
Jie Han*

Abstract

The never-ending growth of data traffic resulting from the continuing emergence of high-data-rate-demanding applications sets huge capacity requirements on optical interconnects and transport networks. This requires optical communication schemes in these networks to make the best possible use of the available optical spectrum per optical channel to enable transmission of multiple tens of terabits per second per fiber core in high capacity transport networks. Therefore, advanced modulation formats are required to be used in conjunction with energy-efficient and robust transceiver schemes. An important challenge facing these goals is the stringent requirements on the characteristics of the optical components comprising these systems, especially the laser sources. Laser phase noise is one of the most important performance-limiting factors in systems with high spectral efficiency. In this research work, we study the effects of different laser phase noise characteristics on the performance of different optical communication schemes. A novel, simple, and accurate phase noise characterization technique is proposed. Experimental results show that the proposed technique is very accurate in estimating the performance of lasers in coherent systems employing digital phase recovery techniques. A novel multi-heterodyne scheme for characterizing the phase noise of laser frequency comb sources is also proposed and validated by experimental results. This proposed scheme is the first of its type capable of measuring the differential phase noise between multiple spectral lines instantaneously in a single measurement. Moreover, extended relations between system performance and detailed characteristics of laser phase noise are also analyzed and modeled. The results of this study show that the commonly used metric to estimate the performance of lasers with a specific phase recovery scheme, the linewidth-symbol-period product, is not necessarily accurate for all types of lasers, and a description of the FM-noise power spectral profile is required for accurate performance estimation. We also propose an energy- and cost-efficient transmission scheme suitable for metro and long-reach data-center-interconnect links based on direct detection of field-modulated optical signals with advanced modulation formats, allowing for higher spectral efficiency. The proposed system combines the Kramers-Kronig coherent receiver technique with the use of quantum-dot multi-mode laser sources to generate and transmit multi-channel optical signals using a single diode laser source. Experimental results of the proposed system show that high-order modulation formats can be employed, with high robustness against laser phase noise and frequency drifting.


MARK GREBE - Domain Specific Languages for Small Embedded Systems

When & Where:

April 27, 2018 - 10:30 AM
250 Nichols Hall

Committee Members:

Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Suzanne Shontz
Kyle Camarda

Abstract

Resource-limited embedded systems pose a great challenge to programming with functional languages. Although these embedded systems cannot be programmed directly with Haskell, I show that an embedded domain specific language can be used to program them, providing a user-friendly environment for both prototyping and full development. The Arduino line of microcontroller boards provides a versatile, low-cost and popular platform for the development of these resource-limited systems, and I use these boards as the platform for my DSL research.

First, I provide a shallowly embedded domain specific language, and a firmware interpreter, allowing the user to program the Arduino while tethered to a host computer.  Shallow EDSLs allow a programmer to program using many of the features of a host language and its syntax, but sacrifice performance.  Next, I add a deeply embedded version, allowing the interpreter to run standalone from the host computer, as well as allowing the code to be compiled to C and then machine code for efficient operation.   Deep EDSLs provide better performance and flexibility, through the ability to manipulate the abstract syntax tree of the DSL program, but sacrifice syntactical similarity to the host language.   Using Haskino, my EDSL designed for Arduino microcontrollers, and a compiler plugin for the Haskell GHC compiler, I show a method for combining the best aspects of shallow and deep EDSLs. The programmer is able to write in the shallow EDSL, and have it automatically transformed into the deep EDSL.  This allows the EDSL user to benefit from powerful aspects of the host language, Haskell, while meeting the demanding resource constraints of the small embedded processing environment.

 


ALI ABUSHAIBA - Extremum Seeking Maximum Power Point Tracking for Stand-Alone and Grid-Connected Photovoltaic Systems

When & Where:

April 26, 2018 - 11:00 AM
Room 1 Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Ken Demarest
Glenn Prescott
Alessandro Salandrino
Prajna Dhar*

Abstract

Energy harvesting from solar sources in an attempt to increase efficiency has sparked interest in many communities in developing more energy harvesting applications for renewable energy topics. Advanced technical methods are required to ensure that the maximum available power is harnessed from the photovoltaic (PV) system. This dissertation proposes a new discrete-in-time extremum-seeking (ES) based technique for tracking the maximum power point of a photovoltaic array. The proposed method is a true maximum power point tracker that can be implemented with reasonable processing effort on an inexpensive digital controller. The dissertation presents a stability analysis of the proposed method to guarantee the convergence of the algorithm.
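
A generic discrete-in-time extremum-seeking loop (a textbook form with demodulation filtering omitted, not necessarily the author's exact formulation) perturbs the operating point and correlates the perturbation with the measured power:

    u_k = \hat{u}_k + a \sin(\omega k), \qquad \hat{u}_{k+1} = \hat{u}_k + \gamma\, a \sin(\omega k)\, P(u_k)

where P(\cdot) is the measured PV power, a and \omega set the dither, and \gamma is the adaptation gain; the correlation term approximates the gradient dP/du and drives the operating point toward the maximum power point.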

Two types of PV systems were designed, and a comprehensive framework of control design was considered for a stand-alone and a three-phase grid-connected system.

Grid-tied systems commonly have a two-stage power electronics interface, necessitated by the inherent limitation of the DC-AC (inverter) power conversion stage. However, a one-stage converter topology, denoted as the quasi-Z-source inverter (q-ZSI), was selected to interface the PV panel, which overcomes the inverter limitations to harvest the maximum available power.

A powerful control scheme called Model Predictive Control with Finite Set (MPC-FS) was designed to control the grid-connected system. The predictive control was selected to achieve a robust controller with superior dynamic response, in conjunction with the extremum-seeking algorithm, to enhance the system behavior.

The proposed method exhibited better performance in comparison to conventional Maximum Power Point Tracking (MPPT) methods and requires less computational effort than complex mathematical methods.


JUSTIN DAWSON - The Remote Monad

When & Where:

April 23, 2018 - 11:00 AM
246 Nichols Hall

Committee Members:

Andy Gill, Chair
Perry Alexander
Prasad Kulkarni
Bo Luo
Kyle Camarda

Abstract

Remote Procedure Calls are an integral part of the Internet of Things and cloud computing. However, remote procedures, by their very nature, have an expensive overhead cost of a network round trip. There have been many optimizations to amortize the network overhead cost, including asynchronous remote calls and batching requests together.

In this dissertation, we present a principled way to batch procedure calls together, called the Remote Monad. The support for monadic structures in languages such as Haskell can be utilized to build a staging mechanism for chains of remote procedures. Our specific formulation of remote monads uses natural transformations to make modular and composable network stacks which can automatically bundle requests into packets by breaking up monadic actions into ideal packets. By observing the properties of these primitive operations, we can leverage a number of tactics to maximize the size of the packets.

We have created a framework which has been successfully used to implement the industry-standard JSON-RPC protocol, a graphical browser-based library, an efficient byte string implementation, a library to communicate with an Arduino board, and database queries, all of which have automatic bundling enabled. We demonstrate that the cost of implementing bundling for remote monads can be amortized almost for free, when given a user-supplied packet transportation mechanism.
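
The batching idea can be seen in miniature with JSON-RPC 2.0, which permits an array of requests in one message (a plain-Python sketch with a hypothetical endpoint, not the Haskell framework itself):

    import json
    import urllib.request

    def call_batch(url, calls):
        """calls: list of (method, params) tuples, sent in one round trip."""
        batch = [
            {"jsonrpc": "2.0", "method": m, "params": p, "id": i}
            for i, (m, p) in enumerate(calls)
        ]
        req = urllib.request.Request(
            url, json.dumps(batch).encode(), {"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # One network round trip instead of three:
    # call_batch("http://localhost:8080/rpc",
    #            [("add", [1, 2]), ("sub", [5, 3]), ("mul", [4, 4])])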


JOSEPH St AMAND - Learning to Measure: Distance Metric Learning with Structured Sparsity

When & Where:

April 13, 2018 - 12:15 PM
246 Nichols Hall

Committee Members:

Arvin Agah, Chair
Prasad Kulkarni
Jim Miller
Richard Wang
Bozenna Pasik-Duncan*

Abstract

Many important machine learning and data mining algorithms rely on a measure to provide a notion of distance or dissimilarity. Naive metrics such as the Euclidean distance are incapable of leveraging task-specific information, and consider all features as equal. A learned distance metric can become much more effective by honing in on structure specific to a task. Additionally, it is often extremely desirable for a metric to be sparse, as this vastly increases the ability to interpret the distance metric. In this dissertation, we explore several current problems in distance metric learning and put forth solutions which make use of structured sparsity.
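
The learned metrics in question are typically of the Mahalanobis form (standard in this literature):

    d_M(x, y) = \sqrt{(x - y)^\top M (x - y)}, \qquad M \succeq 0

where the positive semidefinite matrix M is learned from data; structured sparsity enters by adding a regularizer such as a group penalty \sum_g \|M_g\|_2 to the learning objective, zeroing out whole groups of entries (and hence whole features) at once.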

The first contribution of this dissertation begins with a classic approach in distance metric learning and addresses a scenario where distance metric learning is typically inapplicable, i.e., the case of learning on heterogeneous data in a high-dimensional input space. We construct a projection-free distance metric learning algorithm which utilizes structured sparse updates and successfully demonstrate its application to learn a metric with over a billion parameters.

The second contribution of this dissertation focuses on an intriguing regression-based approach to distance metric learning. Under this regression approach there are two sets of parameters to learn: those which parameterize the metric, and those defining the so-called “virtual points”. We begin with an exploration of the metric parameterization and develop a structured sparse approach to robustify the metric to noisy, corrupted, or irrelevant data. We then focus on the virtual points and develop a new method for learning the metric and constraints together in a simultaneous manner. It is demonstrated through empirical means that our approach results in a distance metric which is more effective than the current state-of-the-art.

Machine learning algorithms have recently become ingrained in an incredibly diverse array of technologies. The focus of this dissertation is to develop more effective techniques to learn a distance metric. We believe that this work has the potential for broad-reaching impacts, as learning a more effective metric typically results in more accurate metric-based machine learning algorithms.

 


SHIVA RAMA VELMA - An Implementation of the LEM2 Algorithm Handling Numerical Attributes

When & Where:

February 28, 2018 - 9:30 AM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Perry Alexander
Prasad Kulkarni

Abstract

Data mining is a computing process of finding meaningful patterns in large sets of data. These patterns are then analyzed and used to make predictions for the future. One form of data mining is to extract rules from data sets. There are various rule induction algorithms, such as LEM1 (Learning from Examples Module Version 1), LEM2 (Learning from Examples Module Version 2), and MLEM2 (Modified Learning from Examples Module Version 2). Most rule induction algorithms require input data with only discretized attributes. If the input data contain numerical attributes, we need to convert them into discrete values (intervals) before performing rule induction; this process is called discretization. In this project, we discuss an implementation of LEM2 which generates rules from data with numerical and symbolic attributes. The accuracy of the rules generated by LEM2 is measured by computing the error rate with a rule checker program, using ten-fold cross-validation and holdout methods.


SURYA NIMMAKAYALA - Heuristics to Predict and Eagerly Translate Code in DBTs

When & Where:

February 14, 2018 - 9:00 AM
250 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Fengjun Li
Bo Luo
Shawn Keshmiri*

Abstract

Dynamic Binary Translators (DBTs) have a variety of uses, like instrumentation, profiling, security, portability, etc. In order for the desired application to run with these enhanced additional features (not originally part of its design), it is run under the control of the Dynamic Binary Translator. The application can be thought of as the guest application, to be run within the controlled environment of the translator, which would be the host application. That way, the intended application execution flow can be enforced by the translator, thereby inducing the desired behavior in the application on the host platform (combination of Operating System and Hardware). Depending on the implementation of the translator (host application), the guest application can have code compiled either for the host platform or for a different platform. It is the responsibility of the translator to make the appropriate code/binary translation of the guest application code to be run on the host platform.

However, there is a run-time/execution-time overhead in the translator when performing the additional tasks to run the guest application in a controlled fashion. This run-time overhead has been limiting the usage of DBTs on a large scale, where response times can be critical. There is often a trade-off between the benefits of using a DBT and the overall application response time. So, there is a need to research and explore ways of achieving faster application execution through DBTs (given their large code base).

With the evolution of multi-core and GPU hardware architectures, parallelization of software can be employed through multiple threads, which can concurrently run parts of code and potentially do more work at the same time. The proper design of parallel applications, or parallelizing parts of existing code, can lead to faster application run times by taking advantage of the hardware architecture's support for parallel programs.

We explore the possibility of improving the performance of a DBT named DynamoRIO. The basic idea is to improve its performance by speeding up the process of guest code translation through multiple threads translating multiple pieces of code concurrently. In an ideal case, all the required code blocks for application execution would be available ahead of time (eager translation), without any wait/overhead at run-time, while still providing the enhanced features of the DBT. For efficient run-time eager translation there is also a need for heuristics to better predict the next likely code block to be executed. That could potentially bring down the number of less productive code translations at run-time. The goal is to get application speed-up through eager translation, coupled with block prediction heuristics, leading to an execution time close to that of a native run.


PATRICK McCORMICK - Design and Optimization of Physical Waveform-Diverse Emissions

When & Where:

January 29, 2018 - 12:30 PM
246 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Chris Allen
Alessandro Salandrino
Jim Stiles
Emily Arnold*

Abstract

With the advancement of arbitrary waveform generation techniques, new radar transmission modes can be designed via precise control of the waveform's time-domain signal structure. The finer degree of emission control for a waveform (or multiple waveforms via a digital array) presents an opportunity to reduce ambiguities in the estimation of parameters within the radar backscatter. While this freedom opens the door to new emission capabilities, one must still consider the practical attributes for radar waveform design. Constraints such as constant amplitude (to maintain sufficient power efficiency) and continuous phase (for spectral containment) are still considered prerequisites for high-powered radar waveforms. These criteria are also applicable to the design of multiple waveforms emitted from an antenna array in a multiple-input multiple-output (MIMO) mode.

In this work, two spatially-diverse radar emission design methods are introduced that provide constant amplitude, spectrally-contained waveforms. The first design method, denoted as spatial modulation, designs the radar waveforms via a polyphase-coded frequency-modulated (PCFM) framework to steer the coherent mainbeam of the emission within a pulse. The second design method is an iterative scheme to generate waveforms that achieve a desired wideband and/or widebeam radar emission. However, a wideband and widebeam emission can place a portion of the emitted energy into what is known as the ‘invisible’ space of the array, which is related to the storage of reactive power that can damage a radar transmitter. The proposed design method purposefully avoids this space and a quantity denoted as the Fractional Reactive Power (FRP) is defined to assess the quality of the result.

The design of FM waveforms via traditional gradient-based optimization methods is also considered. A waveform model is proposed that is a generalization of the PCFM implementation, denoted as coded-FM (CFM), which defines the phase of the waveform via a summation of weighted, predefined basis functions. Therefore, gradient-based methods can be used to minimize a given cost function with respect to a finite set of optimizable parameters. A generalized integrated sidelobe metric is used as the optimization cost function to minimize the correlation range sidelobes of the radar waveform.


RAKESH YELLA - A Comparison of Two Decision Tree Generating Algorithms CART and Modified ID3

When & Where:

January 29, 2018 - 10:30 AM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Man Kong
Prasad Kulkarni

Abstract

In data mining, a Decision Tree is a type of classification model which uses a tree-like data structure to organize the data to obtain meaningful information. We may use Decision Trees for important predictive analysis in data mining.

In this project, we compare two decision tree generating algorithms, CART and a modified ID3, on datasets with both discrete and continuous numerical values. Because the basic ID3 algorithm handles continuous numerical values poorly, this project implements a new approach: the modified ID3 algorithm discretizes continuous numerical values by creating cut-points. The decision trees generated by the modified algorithm contain fewer nodes and branches than those generated by basic ID3.
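
For readers unfamiliar with cut-points, the sketch below shows the standard construction: candidate cuts are midpoints between consecutive distinct sorted values of a numeric attribute, scored by information gain. It is a generic illustration; the modified ID3 evaluated in the project may differ in its details.

# Sketch of cut-point discretization for a numeric attribute, as used
# to adapt ID3-style learners to continuous values. Candidate cuts are
# midpoints between consecutive distinct sorted values; the best cut
# maximizes information gain.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_cut_point(values, labels):
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [y for _, y in pairs]
    base, best = entropy(ys), (None, 0.0)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                       # no cut between equal values
        cut = (xs[i] + xs[i - 1]) / 2.0
        left, right = ys[:i], ys[i:]
        gain = base - (len(left) * entropy(left)
                       + len(right) * entropy(right)) / len(ys)
        if gain > best[1]:
            best = (cut, gain)
    return best                            # (cut point, information gain)

print(best_cut_point([4.8, 5.1, 6.3, 7.0], ["a", "a", "b", "b"]))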

The results of the experiments indicate that there is no statistically significant difference between CART and modified ID3 in terms of accuracy on test data. On the other hand, the decision trees generated by CART are smaller than those generated by modified ID3.


SRUTHI POTLURI - A Web Application for Recommending Movies to Users

When & Where:

January 26, 2018 - 11:00 AM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Man Kong
Bo Luo

Abstract

Recommendation systems are becoming more and more important with the increasing popularity of e-commerce platforms. An ideal recommendation system recommends items the user will prefer. In this project, an item-item collaborative filtering algorithm is implemented as the premise: the system examines movies similar to those the user has already rated, calculates predicted ratings, and recommends the movies with the highest predictions. The primary goals of the proposed algorithm are to reflect the user's preferences and to include lesser-known items in the recommendations. The proposed system was evaluated using Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) on 1 million movie ratings involving 6,040 users and 3,900 movies. The implementation is a web application that simulates a real-time experience for the user.
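
A minimal sketch of the item-item collaborative filtering premise is given below: cosine similarity between item rating vectors, followed by a similarity-weighted average of the user's own ratings. The toy matrix and plain-cosine variant are assumptions for illustration; the project's exact formulation (e.g., adjusted cosine, neighborhood size) may differ.

# Item-item collaborative filtering sketch: similarity between item
# rating vectors, then a weighted average of the user's own ratings to
# predict an unseen item.
import numpy as np

R = np.array([[5, 3, 0, 1],            # rows: users, cols: items
              [4, 0, 0, 1],            # 0 = not yet rated
              [1, 1, 0, 5],
              [1, 0, 4, 4.0]])

def item_similarity(R):
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0             # avoid division by zero
    return (R.T @ R) / np.outer(norms, norms)

def predict(R, S, user, item):
    rated = np.nonzero(R[user])[0]      # items this user has rated
    w = S[item, rated]
    if np.abs(w).sum() == 0:
        return 0.0
    return float(w @ R[user, rated] / np.abs(w).sum())

S = item_similarity(R)
print(predict(R, S, user=1, item=2))    # predicted rating for item 2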


DEBABRATA MAJHI - IRIM: Interesting Rule Induction Module with Handling Missing Attribute Values

When & Where:

January 24, 2018 - 11:00 AM
2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo

Abstract

In the current era of big data, huge amounts of data can easily be collected, but unprocessed data is not useful on its own. It becomes useful only when we can find interesting patterns or hidden knowledge in it. Algorithms that find such patterns are known as rule induction algorithms. Rule induction is a special area of data mining and machine learning in which formal rules are extracted from a dataset. The extracted rules may represent general or local (isolated) patterns in the data.
In this report, we focus on IRIM (Interesting Rule Induction Module), which induces strong interesting rules that cover most of each concept. The rules induced by IRIM often provide interesting and surprising insights to experts in the domain area.
The IRIM algorithm was implemented using Python and the PySpark library. The algorithm was then extended to handle different types of missing attribute values, and its performance with and without this missing-data handling was analyzed. As an example, interesting rules induced from the IRIS dataset are shown.
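
IRIM's exact scoring is specific to the module, but the bookkeeping behind "strong rules that cover most of a concept" can be illustrated by computing a candidate rule's confidence and concept coverage, as in the hypothetical sketch below (not the IRIM implementation).

# Illustrative sketch (not the IRIM implementation): evaluate a
# candidate rule by its confidence and by how much of the concept
# (cases sharing a decision value) it covers. Missing attribute values
# simply fail to match a condition here; IRIM's handling is richer.
def evaluate_rule(cases, conditions, decision):
    """cases: list of dicts; conditions: dict of attribute -> value;
    decision: (attribute, value) pair the rule predicts."""
    matched = [c for c in cases
               if all(c.get(a) == v for a, v in conditions.items())]
    concept = [c for c in cases if c.get(decision[0]) == decision[1]]
    correct = [c for c in matched if c.get(decision[0]) == decision[1]]
    confidence = len(correct) / len(matched) if matched else 0.0
    coverage = len(correct) / len(concept) if concept else 0.0
    return confidence, coverage

cases = [{"color": "red", "size": "big", "class": "yes"},
         {"color": "red", "size": "small", "class": "yes"},
         {"color": "blue", "size": "big", "class": "no"}]
print(evaluate_rule(cases, {"color": "red"}, ("class", "yes")))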


SUSHIL BHARATI - Vision Based Adaptive Obstacle Detection, Robust Tracking and 3D Reconstruction for Autonomous Unmanned Aerial Vehicles

When & Where:

January 22, 2018 - 11:00 AM
246 Nichols Hall

Committee Members:

Richard Wang, Chair
Bo Luo
Suzanne Shontz

Abstract

Vision-based autonomous navigation of UAVs in real time is a very challenging problem that requires obstacle detection, tracking, and depth estimation. Although obstacle detection and tracking, along with 3D reconstruction, have been extensively studied in the computer vision field, they remain a major challenge for real applications such as UAV navigation. This thesis addresses these issues in terms of robustness and efficiency. First, a fast and robust vision-based obstacle detection and tracking approach is proposed by integrating a salient object detection strategy within a kernelized correlation filter (KCF) framework. To increase its performance, an adaptive obstacle detection technique is proposed to refine the location and boundary of the object when the confidence value of the tracker drops below a predefined threshold. In addition, a reliable post-processing technique is implemented for accurate obstacle localization. Second, we propose an efficient approach to detect the outliers present in noisy image pairs for robust fundamental matrix estimation, a fundamental step for depth estimation in obstacle avoidance. Given a noisy stereo image pair obtained from the mounted stereo cameras and initial point correspondences between them, we propose to utilize the reprojection residual error and the 3-sigma principle together with the robust-statistics-based Qn estimator (RES-Q) to efficiently detect the outliers and accurately estimate the fundamental matrix. The proposed approaches have been extensively evaluated through quantitative and qualitative experiments on a number of challenging datasets. The experiments demonstrate that the proposed detection and tracking technique significantly outperforms state-of-the-art methods in terms of tracking speed and accuracy, and that the proposed RES-Q algorithm is more robust than classical outlier detection algorithms under both symmetric and asymmetric random noise assumptions.
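
The general recipe behind RES-Q can be sketched as follows: compute a residual for each correspondence, estimate a robust scale with the Qn estimator, and flag points beyond three scale units. The sketch below uses the Sampson (first-order geometric) residual as a stand-in for the thesis's reprojection residual, so it is an illustrative approximation rather than the thesis algorithm.

# Residual + robust scale (Qn) + 3-sigma outlier flagging for epipolar
# geometry. Sampson residuals stand in for the reprojection residuals
# used in the thesis.
import numpy as np

def qn_scale(x):
    """Rousseeuw-Croux Qn: robust scale from pairwise differences
    (consistency constant 2.2219 for Gaussian data)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])[np.triu_indices(n, k=1)]
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * np.partition(diffs, k - 1)[k - 1]

def sampson_residuals(F, x1, x2):
    """First-order geometric residual of x2' F x1 = 0.
    x1, x2: 3 x N homogeneous point correspondences."""
    Fx1, Ftx2 = F @ x1, F.T @ x2
    num = np.sum(x2 * Fx1, axis=0)
    den = np.sqrt(Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2)
    return num / den

def flag_outliers(F, x1, x2):
    r = sampson_residuals(F, x1, x2)
    return np.abs(r - np.median(r)) > 3.0 * qn_scale(r)   # 3-sigma rule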


MOHSEN ALEENEJAD - New Modulation Methods and Control Strategies for Power Electronics Inverters

When & Where:

January 19, 2018 - 3:00 PM
1 Eaton Hall

Committee Members:

Reza Ahmadi, Chair
Glenn Prescott
Alessandro Salandrino
Jim Stiles
Huazhen Fang*

Abstract

DC-to-AC power converters (so-called inverters) are widely used in industrial applications. Multilevel inverters are becoming increasingly popular in industrial apparatus aimed at medium- to high-power conversion applications. In comparison to conventional inverters, they feature superior characteristics such as lower total harmonic distortion (THD), higher efficiency, and lower switching voltage stress. Nevertheless, these superior characteristics come at the price of a more complex topology with an increased number of power electronic switches, which in turn demands more complicated control strategies. Moreover, as the number of power electronic switches increases, the chance of a switch fault increases and the inverter's reliability decreases. Due to the extreme monetary ramifications of interrupted operation in commercial and industrial applications, high reliability for the power inverters used in these sectors is critical. As a result, developing simple control strategies for normal and fault-tolerant operation of multilevel inverters has long been an interesting topic for researchers in related areas. The purpose of this dissertation is to develop new control and fault-tolerant strategies for multilevel power inverters. For normal operation of the inverter, a new high-switching-frequency technique is developed. The proposed method extends the utilization of the DC-link voltage while minimizing the dv/dt of the switches. In the event of a fault, the line voltages of the faulty inverter are unbalanced and cannot be applied to three-phase loads. For the faulty condition of the inverter, three novel fault-tolerant techniques are developed. The proposed fault-tolerant strategies generate balanced line voltages without bypassing any healthy and operative inverter element, make better use of the inverter capacity, and generate a higher output voltage. These strategies exploit the advantages of the Selective Harmonic Elimination (SHE) and Space Vector Modulation (SVM) methods in conjunction with a slightly modified Fundamental Phase Shift Compensation (FPSC) technique to generate balanced voltages and manipulate voltage harmonics at the same time. The proposed strategies are applicable to several classes of multilevel inverters with three or more voltage levels.
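
For intuition about the SHE component, classical selective harmonic elimination computes switching angles by solving a small nonlinear system. The sketch below eliminates the 5th and 7th harmonics for a seven-level cascaded inverter, a textbook setup chosen for illustration; it is not the dissertation's modified FPSC scheme, and the modulation target m = 0.8 is an assumed value.

# Textbook SHE sketch: pick three switching angles so the fundamental
# hits a per-unit target while the 5th and 7th harmonics vanish.
import numpy as np
from scipy.optimize import fsolve

m = 0.8   # per-unit fundamental target (illustrative assumption)

def she_equations(theta):
    t1, t2, t3 = theta
    return [np.cos(t1) + np.cos(t2) + np.cos(t3) - 3 * m,   # fundamental
            np.cos(5*t1) + np.cos(5*t2) + np.cos(5*t3),     # kill 5th
            np.cos(7*t1) + np.cos(7*t2) + np.cos(7*t3)]     # kill 7th

angles = fsolve(she_equations, x0=[0.2, 0.6, 1.0])          # radians
print(np.degrees(angles))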


XIAOLI LI - Constructivism Learning

When & Where:

January 19, 2018 - 1:00 PM
246 Nichols Hall

Committee Members:

Luke Huan, Chair
Victor Frost
Bo Luo
Richard Wang
Alfred Ho*

Abstract

Aiming to achieve the learning capabilities possessed by intelligent beings, especially humans, researchers in the machine learning field have a long-standing tradition of borrowing ideas from human learning, such as reinforcement learning, active learning, and curriculum learning. Motivated by a philosophical theory called "constructivism", in this work we propose a new machine learning paradigm: constructivism learning. Constructivism theory has had a wide-ranging impact on various theories of how humans acquire knowledge. To adapt this human learning theory to the context of machine learning, we first studied how to improve learning performance by exploring inductive bias or prior knowledge from multiple learning tasks with multiple data sources, that is, multi-task multi-view learning, in both offline and lifelong settings. We then formalized a Bayesian nonparametric approach using sequential Dirichlet Process Mixture Models to support constructivism learning. To further exploit constructivism learning, we also developed a constructivism deep learning method utilizing Uniform Process Mixture Models.
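
The sequential flavor of a Dirichlet Process Mixture Model can be illustrated with its Chinese restaurant process prior, sketched below: each new observation joins an existing component with probability proportional to its size, or opens a new one with probability proportional to the concentration parameter alpha. The data likelihood that would reweight these probabilities in a full sequential DPMM is omitted for brevity; this is an illustrative sketch, not the dissertation's model.

# Chinese-restaurant-process sketch of the prior driving a sequential
# Dirichlet Process Mixture Model. The likelihood term is omitted.
import random

def crp_assign(cluster_sizes, alpha=1.0):
    n = sum(cluster_sizes)
    weights = cluster_sizes + [alpha]      # existing clusters + new one
    r = random.uniform(0, n + alpha)
    for k, w in enumerate(weights):
        r -= w
        if r <= 0:
            return k                       # k == len(cluster_sizes) -> new

sizes = []
for _ in range(20):
    k = crp_assign(sizes)
    if k == len(sizes):
        sizes.append(1)                    # open a new cluster
    else:
        sizes[k] += 1
print(sizes)                               # e.g., a few clusters of varying size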

