Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Andrew Riachi

An Investigation Into The Memory Consumption of Web Browsers and A Memory Profiling Tool Using Linux Smaps

When & Where:


Nichols Hall, Room 250 (Gemini Conference Room)

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Drew Davidson
Heechul Yun

Abstract

Web browsers are notorious for consuming large amounts of memory. Yet, they have become the dominant framework for writing GUIs because web languages are ergonomic for programmers and offer cross-platform reach. These benefits are so enticing that even a large portion of mobile apps, which have to run on resource-constrained devices, run a web browser under the hood. Therefore, it is important to keep the memory consumption of web browsers as low as practicable.

In this thesis, we investigate the memory consumption of web browsers, in particular compared to applications written in native GUI frameworks. We introduce smaps-profiler, a tool to profile the overall memory consumption of Linux applications that can report memory usage other profilers simply do not measure. Using this tool, we conduct experiments that suggest that most of the extra memory usage compared to native applications could be due to the size of the web browser program itself. We discuss our experiments and findings, and conclude that even more rigorous studies are needed to profile GUI applications.
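
As a rough illustration of the mechanism behind such profiling (a minimal sketch that reads /proc/<pid>/smaps, not the smaps-profiler tool itself), the script below sums the per-mapping fields the Linux kernel exposes; the Pss (proportional set size) field splits shared pages fairly among the processes that map them, which is the kind of usage simpler profilers often miss:

#!/usr/bin/env python3
"""Minimal sketch of smaps-based memory accounting (assumed mechanics;
this is not the thesis's smaps-profiler). Aggregates the kB-valued
fields across all mappings of one process."""
import sys
from collections import defaultdict

def smaps_totals(pid):
    """Sum each kB field over every mapping in /proc/<pid>/smaps."""
    totals = defaultdict(int)
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            parts = line.split()
            # Field lines look like: "Pss:                 123 kB"
            if len(parts) == 3 and parts[2] == "kB":
                totals[parts[0].rstrip(":")] += int(parts[1])
    return totals

if __name__ == "__main__":
    totals = smaps_totals(int(sys.argv[1]))
    for field in ("Rss", "Pss", "Private_Dirty", "Swap"):
        print(f"{field:>13}: {totals[field]} kB")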


Past Defense Notices


PATRICK McCORMICK

Design and Optimization of Physical Waveform-Diverse and Spatially-Diverse Emissions

When & Where:


129 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Chris Allen
Alessandro Salandrino
Jim Stiles
Emily Arnold*

Abstract

With the advancement of arbitrary waveform generation techniques, new radar transmission modes can be designed via precise control of the waveform's time-domain signal structure. The finer degree of emission control for a waveform (or multiple waveforms via a digital array) presents an opportunity to reduce ambiguities in the estimation of parameters within the radar backscatter. While this freedom opens the door to new emission capabilities, one must still consider the practical attributes for radar waveform design. Constraints such as constant amplitude (to maintain sufficient power efficiency) and continuous phase (for spectral containment) are still considered prerequisites for high-powered radar waveforms. These criteria are also applicable to the design of multiple waveforms emitted from an antenna array in a multiple-input multiple-output (MIMO) mode.

In this work, two spatially-diverse radar emission design methods are introduced that provide constant amplitude, spectrally-contained waveforms. The first design method, denoted as spatial modulation, designs the radar waveforms via a polyphase-coded frequency-modulated (PCFM) framework to steer the coherent mainbeam of the emission within a pulse. The second design method is an iterative scheme to generate waveforms that achieve a desired wideband and/or widebeam radar emission. However, a wideband and widebeam emission can place a portion of the emitted energy into what is known as the 'invisible' space of the array, which is related to the storage of reactive power that can damage a radar transmitter. The proposed design method purposefully avoids this space, and a quantity denoted as the Fractional Reactive Power (FRP) is defined to assess the quality of the result.

The design of FM waveforms via traditional gradient-based optimization methods is also considered. A waveform model is proposed that is a generalization of the PCFM implementation, denoted as coded-FM (CFM), which defines the phase of the waveform via a summation of weighted, predefined basis functions. Therefore, gradient-based methods can be used to minimize a given cost function with respect to a finite set of optimizable parameters. A generalized integrated sidelobe metric is used as the optimization cost function to minimize the correlation range sidelobes of the radar waveform.
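
The coded-FM structure described above can be stated compactly; as a sketch with assumed notation (not quoted from the dissertation):

% Sketch of the CFM phase model (illustrative notation):
\[
  s(t) = \exp\big(j\,\phi(t)\big), \qquad
  \phi(t) = \sum_{n=1}^{N} \alpha_n\, b_n(t),
\]
% where the b_n(t) are predefined basis functions, the weights alpha_n
% form the finite set of optimizable parameters, and the unit envelope
% |s(t)| = 1 preserves the constant-amplitude requirement.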


MATT KITCHEN

Blood Phantom Concentration Measurement Using An I-Q Receiver Design

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Jim Stiles


Abstract

Near-infrared spectroscopy has been used as a non-invasive method of determining the concentration of chemicals within the tissues of living organisms. This method employs LEDs of specific wavelengths to measure the concentration of blood constituents according to the Beer-Lambert Law. One group of instruments (frequency domain instruments) is based on amplitude modulation of the laser diode or LED light intensity, the measurement of light absorption, and the measurement of modulation phase shift to determine the light path length for use in the Beer-Lambert Law. This paper describes the design and demonstration of a frequency domain instrument for measuring the concentration of oxygenated and de-oxygenated hemoglobin using incoherent optics and an in-phase quadrature (I-Q) receiver design. The design has been shown to be capable of resolving variations in the concentration of test samples and is a viable prototype for future, more precise tools.
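
For reference, the standard relations behind such a frequency-domain measurement (textbook forms, assumed rather than quoted from the thesis):

% Beer-Lambert attenuation for a chromophore of concentration c,
% extinction coefficient epsilon, and optical path length L:
\[
  A = \log_{10}\frac{I_0}{I} = \varepsilon\, c\, L,
\]
% while the phase shift phi of the intensity modulation at angular
% frequency omega gives the path length from the propagation delay:
\[
  \phi = \omega\,\frac{L}{v} \;\Longrightarrow\; L = \frac{v\,\phi}{\omega},
\]
% with v the speed of light in the medium; the concentration c then
% follows from the measured A at each LED wavelength.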



LIANYU LI

Wireless Power Transfer

When & Where:


250 Nichols Hall

Committee Members:

Alessandro Salandrino, Chair
Reza Ahmadi
Ron Hui


Abstract

Wireless power transfer is commonly understood as the transfer of electrical energy from a source to a load over some distance without any connecting wires. It has been almost two hundred years since the electromagnetic induction phenomenon was first noticed, after which Nikola Tesla tried to use the concept to build the first wireless power transfer device. Today, the most common technique for transferring power wirelessly is inductive coupling, which has revolutionized the transmission of power in various applications. Wireless power transfer is one of the simplest and least expensive ways to transfer energy, and it will change how people use their devices.

A newer method of wireless power transfer, through coupled magnetic resonances, could be the technology that brings that future nearer: it significantly increases transmission distance and efficiency. This project shows a very simple way to charge low-power devices wirelessly using coupled magnetic resonances, and presents how easy the system is to set up compared to conventional copper cables and current-carrying wires.
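
For context on why resonance extends range, a standard textbook bound on the link efficiency of a resonant inductive link (assumed background, not taken from the project):

\[
  \eta_{\max} \;=\; \frac{k^{2} Q_{1} Q_{2}}
  {\bigl(1 + \sqrt{1 + k^{2} Q_{1} Q_{2}}\,\bigr)^{2}},
\]
% where k is the coil-to-coil coupling coefficient and Q_1, Q_2 are
% the quality factors of the two resonators: high-Q resonance keeps
% the product k^2 Q_1 Q_2 large even when k is small at a distance.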


TONG XU

Real-Time DSP Enabled Multi-Carrier Cross-Connect for Optical Systems

When & Where:


246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Esam El-Araby
Erik Perrins
Hui Zhao*

Abstract

Owing to the ever-increasing data traffic in today's optical networks, how to utilize the optical bandwidth more efficiently has become a critical issue. Optical wavelength division multiplexing (WDM) multiplexes multiple optical carrier signals into a single fiber by using different wavelengths of laser light. Optical cross-connects (OXCs) and switches based on optical WDM can greatly improve the performance of optical networks, resulting in reduced complexity, signal transparency, and significant electrical energy savings. However, OXC alone cannot fully exploit the available optical bandwidth due to the coarse bandwidth granularity imposed by optical filtering. Thus, OXC may not meet the requirements of applications in which a sub-band has a small bandwidth. In order to achieve smaller bandwidth granularities, an electrical digital cross-connect (DXC) could be added to the current optical network.

In this work, we propose a scheme for a real-time digital signal processing (DSP) enabled multi-carrier cross-connect that can dynamically assign bandwidth and allocate power to each individual subcarrier channel. This cross-connect is based on digital subcarrier multiplexing (DSCM), a frequency division multiplexing (FDM) technique. Either Nyquist WDM (N-WDM) or orthogonal frequency division multiplexing (OFDM) can be used to implement real-time DSCM. DSCM multiplexes digitally created subcarriers on each optical wavelength. Compared with optical WDM, DSCM has a smaller bandwidth granularity because it multiplexes subcarriers in the electrical domain. DSCM also provides more flexibility, since operations such as distortion compensation and signal regeneration can be carried out by DSP algorithms.

We built a real-time DSP platform based on a Virtex-7 FPGA, which allows real-time DSP algorithms for multi-carrier cross-connects to be tested in optical systems. We have implemented a real-time DSP enabled multi-carrier cross-connect based on up/down sampling and filtering. This technique saves DSP resources, since local oscillators (LOs) are not needed for spectral translation. We obtained preliminary results from theoretical analysis, simulation, and experiment, and analyzed the performance and resource cost of the cross-connect. This real-time DSP enabled cross-connect also has the potential to reduce cost in applications such as mobile fronthaul in 5G next-generation wireless networks.
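
As a rough sketch of spectral translation without a local oscillator, using only filtering and integer downsampling (the bandpass-sampling idea; every parameter below is illustrative and not taken from the FPGA design):

"""Bandpass-select one subcarrier, then let downsampling alias it
toward baseband -- no local oscillator (LO) multiplication needed.
Illustrative parameters only (not the dissertation's design)."""
import numpy as np
from scipy import signal

fs = 8_000.0      # aggregate sample rate (Hz)
f_sub = 2_500.0   # subcarrier center frequency
bw = 300.0        # subcarrier half-bandwidth
M = 4             # downsampling factor: output rate fs/M = 2 kHz

t = np.arange(4096) / fs
x = np.cos(2 * np.pi * f_sub * t)   # tone standing in for subcarrier data

# 1) Isolate the subcarrier band [2.2, 2.8] kHz with an FIR bandpass.
b = signal.firwin(129, [f_sub - bw, f_sub + bw], pass_zero=False, fs=fs)
x_bp = signal.lfilter(b, 1.0, x)

# 2) Downsample by M. The band sits in the third Nyquist zone of the
#    2 kHz output rate, so it aliases to 2500 - 2000 = 500 Hz: the
#    spectrum is translated without any LO.
y = x_bp[::M]

f, P = signal.periodogram(y, fs=fs / M)
print(f"output peak near {f[np.argmax(P)]:.0f} Hz (expected ~500 Hz)")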



RAHUL KAKANI

Discretization Based on Entropy and Multiple Scanning

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Man Kong
Prasad Kulkarni


Abstract

An enormous amount of data is being generated due to advances in technology, and the basic question of discovering knowledge from it remains pertinent. Data mining guides us in discovering patterns or rules. Rules are frequently identified by a technique known as rule induction, regarded as the benchmark technique in data mining and primarily developed to handle symbolic data. Real-life data often contain numerical attributes; hence, to fully utilize the power of rule induction, a preprocessing step known as discretization converts numeric data into symbolic data.

We present two entropy-based discretization techniques, known as the dominant attribute and multiple scanning approaches. These approaches were implemented as two explicit algorithms in the C# programming language and applied to nine well-known numerical data sets. For every data set, the multiple scanning experiment was repeated with incremental scans until the interval counts were stable. Preliminary results suggest that the multiple scanning approach performs better than the dominant attribute approach, producing comparatively lower error rates.
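
The entropy criterion underlying both approaches can be sketched compactly. The toy Python below (a simplified single-attribute version, not the thesis's C# implementation) picks the cut point that minimizes the weighted entropy of the induced split; multiple scanning repeats such scans over all attributes until the interval counts stabilize, as described above:

"""Toy sketch of entropy-based cut-point selection (simplified)."""
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_cutpoint(values, labels):
    """Scan midpoints between consecutive distinct values; return the
    (weighted entropy, cut point) pair with minimum entropy."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [y for _, y in pairs]
    best = (float("inf"), None)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        cut = (xs[i] + xs[i - 1]) / 2
        h = (i * entropy(ys[:i]) + (len(ys) - i) * entropy(ys[i:])) / len(ys)
        best = min(best, (h, cut))
    return best

# Example: the cut lands near 3.0, cleanly separating the two classes.
print(best_cutpoint([1.0, 1.2, 2.9, 3.1, 3.4, 5.0],
                    ["a", "a", "a", "b", "b", "b"]))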



SHADI PIR HOSSEINLOO

Supervised Speech Separation Based on Deep Neural Network

When & Where:


317 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Jonathan Brumberg, Co-Chair
Erik Perrins
Dave Petr
John Hansen

Abstract

In real-world environments, the speech signals received by our ears are usually a combination of different sounds that include not only the target speech, but also acoustic interference like music, background noise, and competing speakers. This interference has a negative effect on speech perception and degrades the performance of speech processing applications such as automatic speech recognition (ASR) and hearing aid devices. One way to solve this problem is to use source separation algorithms to separate the desired speech from the interfering sounds. Many source separation algorithms have been proposed to improve the performance of ASR systems and hearing aid devices, but it is still challenging for these systems to work efficiently in noisy and reverberant environments. Humans, on the other hand, have a remarkable ability to separate desired sounds and listen to a specific talker among noise and other talkers. Inspired by the capabilities of the human auditory system, a popular method known as auditory scene analysis (ASA) was proposed to separate different sources in a two-stage process of segmentation and grouping. The main goal of source separation in ASA is to estimate time-frequency masks that optimally match and separate noise signals from a mixture of speech and noise.

Three major aims are proposed to improve upon source separation in noisy and reverberant acoustic signals. First, a simple and novel algorithm is proposed to increase the discriminability between two sound sources by magnifying the head-related transfer function of the interfering source. Experimental results show a significant increase in the quality of the recovered target speech. Second, a time-frequency masking-based source separation algorithm is proposed that can separate a male speaker from a female speaker in reverberant conditions by using the spatial cues of the sources. Furthermore, the proposed algorithm is able to preserve the location of the sources after separation.

Finally, a supervised speech separation algorithm is proposed based on deep neural networks to estimate the time-frequency masks. Initial experiments show promising results for separating sources in noisy and reverberant conditions. Continued work is focused on identifying the training features and network structure that are most robust to different types of noise, speakers, and reverberation. The main goal of the proposed algorithm is to increase the intelligibility and quality of the speech recovered from noisy environments, which has the potential to improve both speech processing applications and signal processing strategies for hearing aid technology.
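
As a minimal sketch of the time-frequency masking step itself (an oracle ideal ratio mask computed from known sources on synthetic signals; the proposed work instead trains a deep network to estimate such masks from the noisy mixture):

"""Oracle ideal-ratio-mask (IRM) illustration of T-F masking."""
import numpy as np
from scipy.signal import stft, istft

fs = 16_000
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)        # stand-in for a speech signal
noise = 0.5 * rng.standard_normal(fs)   # stand-in for interference
mix = speech + noise

_, _, S = stft(speech, fs=fs, nperseg=512)
_, _, N = stft(noise, fs=fs, nperseg=512)
_, _, X = stft(mix, fs=fs, nperseg=512)

# IRM: fraction of energy in each time-frequency bin due to speech.
irm = np.abs(S)**2 / (np.abs(S)**2 + np.abs(N)**2 + 1e-12)

# Apply the mask to the mixture's STFT and resynthesize.
_, est = istft(irm * X, fs=fs, nperseg=512)
err = est[:len(speech)] - speech
print("output SNR:",
      round(10 * np.log10(speech.dot(speech) / err.dot(err)), 1), "dB")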


CHENG GAO

Mining Incomplete Numerical Data Sets

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
Richard Wang
Tyrone Duncan
Xuemin Tu*

Abstract

Incomplete and numerical data are common in many application domains. There have been many approaches to handling missing data in statistical analysis and data mining. To deal with numerical data, discretization is crucial for many machine learning algorithms. However, little work has been done on discretization of incomplete data.

This research mainly focuses on the question of whether conducting discretization as preprocessing gives better results than using a data mining method alone. Multiple Scanning is an entropy-based discretization method; previous research showed that it outperforms the commonly used Equal Width and Equal Frequency discretization methods. In this work, Multiple Scanning is tested with C4.5 and MLEM2 on incomplete numerical data sets. Results show that for some data sets the setup utilizing Multiple Scanning as preprocessing performs better, while for the other data sets C4.5 or MLEM2 should be used by themselves. Our secondary objective is to test which of the three known interpretations of missing attribute values is better when using MLEM2. Results show that running MLEM2 on data sets with attribute-concept values performs worse than the other two types of missing values. Last, we compared error rates between C4.5 combined with Multiple Scanning (MS-C4.5) and MLEM2 combined with Multiple Scanning (MS-MLEM2) on data sets with different percentages of missing attribute values. Possible rules induced by MS-MLEM2 give a better result on data sets with "do-not-care" conditions, while MS-C4.5 is preferred on data sets with lost values and attribute-concept values.
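
The three interpretations of missing attribute values can be illustrated with a toy attribute-value block computation (the symbols '?', '*', and '-' follow the usual MLEM2 convention; this simplified sketch is not the experimental code):

"""Toy sketch: how '?', '*', and '-' enter attribute-value blocks."""
cases = [
    {"temp": "high", "flu": "yes"},
    {"temp": "?",    "flu": "yes"},   # lost: excluded from all blocks
    {"temp": "*",    "flu": "no"},    # do not care: in every block
    {"temp": "-",    "flu": "yes"},   # attribute-concept: only values
                                      # seen in its concept ("high")
    {"temp": "low",  "flu": "no"},
]

def blocks(cases, attr, decision):
    """Attribute-value blocks under the three interpretations."""
    specified = {c[attr] for c in cases if c[attr] not in "?*-"}
    concept_vals = {}
    for c in cases:
        if c[attr] not in "?*-":
            concept_vals.setdefault(c[decision], set()).add(c[attr])
    out = {v: set() for v in specified}
    for i, c in enumerate(cases):
        x = c[attr]
        if x == "?":
            continue                   # lost: appears nowhere
        elif x == "*":
            for v in specified:        # do not care: everywhere
                out[v].add(i)
        elif x == "-":
            for v in concept_vals.get(c[decision], ()):
                out[v].add(i)          # only values of its own concept
        else:
            out[x].add(i)
    return out

print(blocks(cases, "temp", "flu"))
# {'high': {0, 2, 3}, 'low': {2, 4}}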

Our conclusion is that there is no universally optimal setup for all data sets; the setup should be tailored to each data set.



GOVIND VEDALA

Digital Compensation of Transmission Impairments in Multicarrier Fiber Optic Systems

When & Where:


246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Erik Perrins
Alessandro Salandrino
Carey Johnson*

Abstract

Time and again, the fiber optic medium has proved to be the best means for transporting global data traffic, which is following an exponential growth trajectory. High-bandwidth applications based on cloud computing, virtual reality, and big data necessitate maximally effective utilization of the available fiber bandwidth. To this end, multicarrier superchannel transmission systems, aided by robust digital signal processing at both transmitter and receiver, have proved to enhance spectral efficiency and achieve multi-terabit-per-second data rates.

With respect to transmission sources, laser technology too has made significant strides, especially in the domain of multiwavelength sources such as optical frequency combs based on the quantum dot passive mode-locked laser (QD-PMLL). In the present research work, we characterize the phase dynamics of comb lines from a QD-PMLL based on a novel multiheterodyne coherent detection technique. The inherently broad linewidth of the comb lines, on the order of tens of MHz, makes it difficult for conventional digital phase noise compensation algorithms to track the large phase noise, especially for low-baud-rate subcarriers using higher-cardinality modulation formats. In the context of a multi-subcarrier, Nyquist pulse shaped superchannel transmission system with coherent detection, we demonstrate through measurements an efficient phase noise compensation technique called "Digital Mixing" which exploits the mutual phase coherence among the comb lines. For QPSK and 16-QAM modulation formats, digital mixing provided significant improvement in bit error rate (BER) performance. For short-reach data center and passive optical network applications, which adopt direct detection, a single optical amplifier is generally used to meet the power budget requirements and achieve the desired BER. The semiconductor optical amplifier (SOA), with its small form factor, is a low-cost power booster that can be designed to operate at any desired wavelength and, most importantly, can be integrated with the transmitter. However, saturated SOAs introduce nonlinear distortions on the amplified signal. Alongside the SOA, the photodiode also introduces nonlinear mixing in the form of signal-signal beat interference (SSBI). In this research, we study the impact of SOA nonlinearity on the effectiveness of SSBI compensation in a direct-detection OFDM-based transmission system. We experimentally demonstrate a digital compensation technique that undoes the SOA nonlinearity by digitally back-propagating the received signal through a virtual SOA, thereby effectively eliminating the SSBI.
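
As a hedged sketch of the idea behind digital mixing (notation assumed, not quoted from the dissertation):

% Mutual coherence means every comb line k shares the large common
% phase noise of a reference line:
\[
  r_k(t) = a_k(t)\, e^{\,j\phi_k(t)}, \qquad
  \phi_k(t) \approx \phi_{\mathrm{ref}}(t) + \Delta\phi_k(t),
\]
% so mixing each subcarrier with the conjugate of the recovered
% reference phase cancels the common term, leaving only a small
% residual for conventional carrier-phase recovery:
\[
  r_k(t)\, e^{-j\hat{\phi}_{\mathrm{ref}}(t)}
  \;\approx\; a_k(t)\, e^{\,j\Delta\phi_k(t)}.
\]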


VENKAT ANIRUDH YERRAPRAGADA

Comparison of Minimum Cost Perfect Matching Algorithms in Solving the Chinese Postman Problem

When & Where:


2001B Eaton Hall

Committee Members:

Man Kong, Chair
Perry Alexander
Jerzy Grzymala-Busse


Abstract

The Chinese Postman Problem, also known as the Route Inspection Problem, is a famous arc routing problem in graph theory. In this problem, a postman has to deliver mail along a set of streets such that all the streets are visited at least once, and return to his starting point. The problem is to find a path, called the optimal postman tour, such that the distance travelled by following it is the minimum required to visit all the streets at least once. In graph theory, we represent the street system as a weighted graph whose edges represent the streets and whose vertices represent the street intersections. A graph can be directed, undirected, or mixed: directed and undirected edges represent one-way and two-way streets, respectively, and a mixed graph has both.

The Chinese Postman Problem can be divided into several sub-problems, of which finding a minimum cost perfect matching is the critical part. For a directed graph, the minimum cost perfect matching of a bipartite graph has to be computed; for an undirected graph, the minimum cost perfect matching of a general graph has to be computed. There are different matching algorithms that compute the minimum cost perfect matching efficiently. In this project, I have studied and implemented four different matching algorithms used in computing an optimal postman tour: the Hungarian Algorithm and a Branch and Bound Algorithm for the directed graph, and Edmonds' Blossom Algorithm and a Branch and Bound Algorithm for the undirected graph. The objective of this project is to compare the performance of these matching algorithms on graphs of different sizes and densities.
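
For the undirected case, the pipeline described above can be sketched with NetworkX, whose blossom-based matcher stands in here for the project's hand-written Blossom and Branch and Bound implementations (illustrative only):

"""Undirected Chinese Postman sketch: odd vertices, shortest-path
completion, minimum cost perfect matching, Euler tour."""
import itertools
import networkx as nx

def chinese_postman(G):
    """Optimal postman tour (cost, edge sequence) of a connected,
    weighted, undirected graph G."""
    odd = [v for v, d in G.degree() if d % 2 == 1]
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    # Complete graph on odd-degree vertices, weighted by shortest paths.
    K = nx.Graph()
    for u, v in itertools.combinations(odd, 2):
        K.add_edge(u, v, weight=dist[u][v])
    # Minimum cost perfect matching -- the critical sub-problem.
    M = nx.min_weight_matching(K, weight="weight")
    # Duplicate the matched shortest paths so every degree becomes even;
    # any Eulerian circuit of the multigraph is then an optimal tour.
    H = nx.MultiGraph(G)
    for u, v in M:
        path = nx.shortest_path(G, u, v, weight="weight")
        for a, b in zip(path, path[1:]):
            H.add_edge(a, b, weight=G[a][b]["weight"])
    return H.size(weight="weight"), list(nx.eulerian_circuit(H))

# Toy street grid with two odd-degree intersections (A and C).
G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("C", "D", 1),
                           ("D", "A", 1), ("A", "C", 2)])
cost, tour = chinese_postman(G)
print(cost, tour)   # cost 8: every street covered, A-C retraced once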


SRI MOUNICA MOTIPALLI

Analysis of Privacy Protection Mechanisms in Social Networks using the Social Circle Model

When & Where:


2001B Eaton Hall

Committee Members:

Bo Luo, Chair
Perry Alexander
Jerzy Grzymala-Busse


Abstract

Many online social networks are increasingly being used as information sharing platforms. With a massive increase in the number of users participating in information sharing, an enormous amount of information becomes available on such sites. It is vital to preserve users' privacy without preventing them from socializing. Unfortunately, many existing models overlook a very important fact: a user may want different information-boundary preferences for different information. To address this shortcoming, in this paper I introduce a 'social circle' model, which follows the concepts of 'private information boundaries' and 'restricted access and limited control'. While facilitating socialization, the social circle model also provides some privacy protection capabilities. I then utilize this model to analyze the most popular social networks (such as Facebook, Google+, VKontakte, Flickr, and Instagram) and demonstrate potential privacy vulnerabilities in some of these networking sites. Lastly, I discuss the implications of the analysis and possible future directions.