Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Jennifer Quirk

Aspects of Doppler-Tolerant Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Patrick McCormick
Charles Mohr
James Stiles
Zsolt Talata

Abstract

The Doppler tolerance of a waveform refers to its behavior when subjected to a fast-time Doppler shift imposed by scattering that involves nonnegligible radial velocity. While previous efforts have established decision-based criteria that lead to a binary judgment of Doppler tolerant or intolerant, it is also useful to establish a measure of the degree of Doppler tolerance. The purpose in doing so is to establish a consistent standard, thereby permitting assessment across different parameterizations, as well as introducing a Doppler “quasi-tolerant” trade-space that can ultimately inform automated/cognitive waveform design in increasingly complex and dynamic radio frequency (RF) environments. 
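
As background (notation chosen here for illustration rather than taken from the dissertation), the effect of a fast-time Doppler shift on pulse compression is commonly described through the narrowband ambiguity function of a waveform $s(t)$, in one common convention

$\chi(\tau,\nu) = \int s(t)\, s^{*}(t-\tau)\, e^{j 2\pi \nu t}\, dt .$

A waveform is informally called Doppler tolerant when, for the Doppler shifts $\nu$ of interest, $|\chi(\tau,\nu)|$ retains a strong (if slightly shifted and attenuated) mainlobe relative to the matched response $|\chi(\tau,0)|$; a graded measure of Doppler tolerance quantifies how quickly that response degrades as $\nu$ grows.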

Separately, the application of slow-time coding (STC) to the Doppler-tolerant linear FM (LFM) waveform has been examined for disambiguation of multiple range ambiguities. However, using STC with non-adaptive Doppler processing often results in high Doppler “cross-ambiguity” sidelobes that can hinder range disambiguation despite the degree of separability imparted by STC. To enhance this separability, a gradient-based optimization of STC sequences is developed, and a “multi-range” (MR) modification to the reiterative super-resolution (RISR) approach that accounts for the distinct range interval structures from STC is examined. The efficacy of these approaches is demonstrated using open-air measurements.

The proposed work to appear in the final dissertation focuses on the connection between Doppler tolerance and STC. The first proposal includes the development of a gradient-based optimization procedure to generate Doppler quasi-tolerant random FM (RFM) waveforms. Other proposals consider limitations of STC, particularly when processed with MR-RISR. The final proposal introduces an “intrapulse” modification of the STC/LFM structure to achieve enhanced suppression of range-folded scattering in certain delay/Doppler regions while retaining a degree of Doppler tolerance.


Mary Jeevana Pudota

Assessing Processor Allocation Strategies for Online List Scheduling of Moldable Task Graphs

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Scheduling a graph of moldable tasks, where each task can be executed by a varying number of processors with execution time depending on the processor allocation, represents a fundamental problem in high-performance computing (HPC). The online version of the scheduling problem introduces an additional constraint: each task is only discovered when all its predecessors have been completed. A key challenge for this online problem lies in making processor allocation decisions without complete knowledge of the future tasks or dependencies. This uncertainty can lead to inefficient resource utilization and increased overall completion time, or makespan. Recent studies have provided theoretical analysis (i.e., derived competitive ratios) for certain processor allocation algorithms. However, the algorithms’ practical performance remains under-explored, and their reliance on fixed parameter settings may not consistently yield optimal performance across varying workloads. In this thesis, we conduct a comprehensive evaluation of three processor allocation strategies by empirically assessing their performance under widely used speedup models and diverse graph structures. These algorithms are integrated into a list scheduling framework that greedily schedules ready tasks based on the current processor availability. We perform systematic tuning of the algorithms’ parameters and report the best observed makespan together with the corresponding parameter settings. Our findings highlight the critical role of parameter tuning in obtaining optimal makespan performance, regardless of the differences in allocation strategies. The insights gained in this study can guide the deployment of these algorithms in practical runtime systems.
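
To make the setting concrete, the sketch below (illustrative only; the speedup model, the allocation rule, and the data layout are assumptions, not the thesis implementation) shows a greedy list scheduler for moldable tasks: ready tasks are started whenever processors are free, allocations come from a pluggable strategy, and successors are discovered only when all their predecessors finish.

import heapq

def exec_time(p, w, alpha=0.9):
    # Illustrative Amdahl-style speedup model: time to run a task of total
    # work w on p processors, with parallel fraction alpha (an assumption).
    return w * ((1 - alpha) + alpha / p)

def list_schedule(tasks, succ, n_preds, P, alloc):
    # tasks: dict task -> work; succ: dict task -> list of successor tasks;
    # n_preds: dict task -> number of unfinished predecessors;
    # P: total processors; alloc(work, P) -> requested processors (>= 1).
    ready = [t for t, d in n_preds.items() if d == 0]   # tasks discovered so far
    running = []                                        # heap of (finish, task, procs)
    free, now, makespan = P, 0.0, 0.0
    while ready or running:
        # Greedily start every ready task that fits in the free processors.
        for t in list(ready):
            p = min(alloc(tasks[t], P), free)
            if p >= 1:
                ready.remove(t)
                free -= p
                heapq.heappush(running, (now + exec_time(p, tasks[t]), t, p))
        # Advance time to the next completion and release its successors.
        now, t, p = heapq.heappop(running)
        free += p
        makespan = max(makespan, now)
        for s in succ.get(t, []):
            n_preds[s] -= 1
            if n_preds[s] == 0:
                ready.append(s)        # the task is only now discovered
    return makespan

# Tiny fork-join example on 8 processors with a fixed allocation rule.
tasks = {"A": 4.0, "B": 6.0, "C": 6.0, "D": 4.0}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
n_preds = {"A": 0, "B": 1, "C": 1, "D": 2}
print(list_schedule(tasks, succ, n_preds, P=8, alloc=lambda w, P: max(1, P // 2)))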


Past Defense Notices


TEJASWINI JAGARLAMUDI

Community-based Content Analysis of the Pulse Night Club Shooting

When & Where:


2001B Eaton Hall

Committee Members:

Nicole Beckage, Chair
Prasad Kulkarni
Fengjun Li


Abstract

On June 12, 2016, 49 people were killed and another 58 wounded in an attack at Pulse Nightclub in Orlando, Florida. The incident was regarded both as a hate crime against LGBT people and as a terrorist attack. This project focuses on analyzing tweets from the week after the attack, specifically looking at how different communities within Twitter discussed the event. To understand how Twitter users in different communities discussed the event, a set of hashtag frequency-based evaluation measures and simulations is proposed. The simulations are used to assess the specific hashtag content of a community. Using community detection algorithms and text analysis tools, the significant topics that specific communities discussed, as well as the topics those communities avoided, are identified.
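
As an illustration of this kind of frequency-based measure (the data layout, the null model, and the function below are hypothetical, not the project's code), one can compare each community's hashtag counts against simulations in which community labels are shuffled across tweets:

import random
from collections import Counter, defaultdict

def overrepresented_hashtags(tweets, n_sims=500, seed=0):
    # tweets: list of (community_id, list_of_hashtags) pairs.
    rng = random.Random(seed)
    observed = defaultdict(Counter)
    for comm, tags in tweets:
        observed[comm].update(tags)

    labels = [comm for comm, _ in tweets]
    bags = [tags for _, tags in tweets]
    exceed = defaultdict(Counter)   # how often a shuffle matches/exceeds observed
    for _ in range(n_sims):
        rng.shuffle(labels)
        sim = defaultdict(Counter)
        for comm, tags in zip(labels, bags):
            sim[comm].update(tags)
        for comm, counts in observed.items():
            for tag, obs in counts.items():
                if sim[comm][tag] >= obs:
                    exceed[comm][tag] += 1

    # Empirical p-values: small values flag hashtags a community uses far more
    # than expected under random mixing; the reverse comparison can flag
    # hashtags a community uses less than expected.
    return {comm: {tag: exceed[comm][tag] / n_sims for tag in counts}
            for comm, counts in observed.items()}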


NIHARIKA GANDHARI

A Comparative Study on Strategies of Rule Induction for Incomplete Data

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Perry Alexander
Bo Luo


Abstract

Rule induction is one of the major applications of rough set theory. However, traditional rough set models cannot deal with incomplete data sets. Missing values can be handled either by data pre-processing or by extensions of the rough set model. Two data pre-processing methods and one extension of the rough set model are considered in this project: filling in missing values with the most common value, ignoring objects by deleting records, and the extended discernibility matrix. The objective is to compare these methods in terms of stability and effectiveness. All three methods use the same rule induction method and are analyzed based on test accuracy across different percentages of missing attribute values. To better understand the properties of these approaches, eight real-life data sets with varying levels of missing attribute values are used for testing. Based on the results, we discuss the relative merits of the three approaches in an attempt to decide on an optimal approach. The final conclusion is that the best method is the pre-processing approach that fills in missing values with the most common value.
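
A minimal sketch of the two pre-processing strategies (the "?" missing-value symbol and the row-list representation are assumptions for illustration):

from collections import Counter

MISSING = "?"   # symbol assumed to mark a missing attribute value

def fill_most_common(rows):
    # Replace each missing value with the most common value of its attribute.
    n_attrs = len(rows[0])
    modes = []
    for j in range(n_attrs):
        counts = Counter(r[j] for r in rows if r[j] != MISSING)
        modes.append(counts.most_common(1)[0][0])
    return [[modes[j] if r[j] == MISSING else r[j] for j in range(n_attrs)]
            for r in rows]

def drop_incomplete(rows):
    # Ignore objects: delete every record that contains a missing value.
    return [r for r in rows if MISSING not in r]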


MADHU CHEGONDI

A Comparison of Leaving-one-out and Bootstrap

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Richard Wang


Abstract

Recently, machine learning has driven significant advancements in many areas such as health, finance, education, and sports, which has encouraged the development of many predictive models. In machine learning, we extract hidden, previously unknown, and potentially useful high-level information from low-level data. Cross-validation is a typical strategy for estimating performance: it simulates the process of fitting the model to different data sets and measures how much the predictions vary. In this project, we review accuracy estimation methods and compare two resampling methods, leaving-one-out and bootstrap. We compared these validation methods using the LEM1 rule induction algorithm. Our results indicate that, for real-world data sets similar to ours, bootstrap may be optimistic.
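
The two resampling schemes can be sketched as follows (the project used the LEM1 rule induction algorithm; here a scikit-learn decision tree stands in as the classifier, and the out-of-bag testing variant of the bootstrap is one of several possible formulations):

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(X, y, make_clf):
    # Leave-one-out: train on n-1 samples, test on the single held-out sample.
    hits = 0.0
    for train, test in LeaveOneOut().split(X):
        clf = make_clf().fit(X[train], y[train])
        hits += clf.score(X[test], y[test])
    return hits / len(y)

def bootstrap_accuracy(X, y, make_clf, n_boot=200, seed=0):
    # Bootstrap: train on a resample drawn with replacement, test on the
    # out-of-bag samples, and average over many resamples.
    rng = np.random.default_rng(seed)
    n, accs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        if oob.size:
            clf = make_clf().fit(X[idx], y[idx])
            accs.append(clf.score(X[oob], y[oob]))
    return float(np.mean(accs))

# Example stand-in classifier (LEM1 is not publicly packaged).
make_clf = lambda: DecisionTreeClassifier(random_state=0)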


PATRICK McCORMICK

Design and Optimization of Physical Waveform-Diverse and Spatially-Diverse Emissions

When & Where:


129 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Chris Allen
Alessandro Salandrino
Jim Stiles
Emily Arnold*

Abstract

With the advancement of arbitrary waveform generation techniques, new radar transmission modes can be designed via precise control of the waveform's time-domain signal structure. The finer degree of emission control for a waveform (or multiple waveforms via a digital array) presents an opportunity to reduce ambiguities in the estimation of parameters within the radar backscatter. While this freedom opens the door to new emission capabilities, one must still consider the practical attributes for radar waveform design. Constraints such as constant amplitude (to maintain sufficient power efficiency) and continuous phase (for spectral containment) are still considered prerequisites for high-powered radar waveforms. These criteria are also applicable to the design of multiple waveforms emitted from an antenna array in a multiple-input multiple-output (MIMO) mode.

In this work, two spatially-diverse radar emission design methods are introduced that provide constant amplitude, spectrally-contained waveforms. The first design method, denoted as spatial modulation, designs the radar waveforms via a polyphase-coded frequency-modulated (PCFM) framework to steer the coherent mainbeam of the emission within a pulse. The second design method is an iterative scheme to generate waveforms that achieve a desired wideband and/or widebeam radar emission. However, a wideband and widebeam emission can place a portion of the emitted energy into what is known as the “invisible” space of the array, which is related to the storage of reactive power that can damage a radar transmitter. The proposed design method purposefully avoids this space and a quantity denoted as the Fractional Reactive Power (FRP) is defined to assess the quality of the result.

The design of FM waveforms via traditional gradient-based optimization methods is also considered. A waveform model is proposed that is a generalization of the PCFM implementation, denoted as coded-FM (CFM), which defines the phase of the waveform via a summation of weighted, predefined basis functions. Therefore, gradient-based methods can be used to minimize a given cost function with respect to a finite set of optimizable parameters. A generalized integrated sidelobe metric is used as the optimization cost function to minimize the correlation range sidelobes of the radar waveform.
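
As a point of reference (symbols chosen here for illustration rather than taken from the dissertation), a coded-FM waveform of this kind can be written as a constant-amplitude signal whose phase is a weighted sum of basis functions:

$s(t) = \exp\{\, j\,\phi(t) \,\}, \qquad \phi(t) = \sum_{k=1}^{K} \alpha_k\, b_k(t),$

where the $b_k(t)$ are the predefined basis functions and the weights $\alpha_k$ form the finite set of optimizable parameters, so the gradient of a cost function such as integrated sidelobe level can be taken directly with respect to the $\alpha_k$ while constant amplitude is preserved by construction.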


MATT KITCHEN

Blood Phantom Concentration Measurement Using An I-Q Receiver Design

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Jim Stiles


Abstract

Near-infrared spectroscopy has been used as a non-invasive method of determining the concentration of chemicals within the living tissues of organisms. This method employs LEDs of specific frequencies to measure the concentration of blood constituents according to the Beer-Lambert Law. One group of instruments (frequency-domain instruments) is based on amplitude modulation of the laser diode or LED light intensity, the measurement of light absorption, and the measurement of modulation phase shift to determine the light path length used in the Beer-Lambert Law. This paper describes the design and demonstration of a frequency-domain instrument for measuring the concentration of oxygenated and de-oxygenated hemoglobin using incoherent optics and an in-phase/quadrature (I-Q) receiver design. The design has been shown to be capable of resolving variations in the concentration of test samples and is a viable prototype for future, more precise tools.
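
For context (standard textbook relations, not results from the thesis), the Beer-Lambert Law ties absorbance to concentration and path length, and in frequency-domain instruments the mean optical path length is inferred from the measured phase shift of the intensity modulation:

$A = \varepsilon\, c\, \ell, \qquad \langle \ell \rangle \approx \frac{\phi\, v}{2\pi f_{\mathrm{mod}}},$

where $\varepsilon$ is the molar extinction coefficient, $c$ the chromophore concentration, $\phi$ the measured modulation phase shift, $f_{\mathrm{mod}}$ the modulation frequency, and $v$ the speed of light in the tissue.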

 


LIANYU LI

Wireless Power Transfer

When & Where:


250 Nichols Hall

Committee Members:

Alessandro Salandrino, Chair
Reza Ahmadi
Ron Hui


Abstract

Wireless power transfer commonly refers to the delivery of electrical energy from a source to a load over some distance without any connecting wires. It has been almost two hundred years since the phenomenon of electromagnetic induction was first observed, after which Nikola Tesla attempted to use the concept to build the first wireless power transfer device. Today, the most common technique used to transfer power wirelessly is known as inductive coupling, which has revolutionized the transmission of power in various applications. Wireless power transfer is one of the simplest and least expensive ways to transfer energy, and it will change how people use their devices.

With the continued development of science and technology, a new method of wireless power transfer through coupled magnetic resonances could be the next technology that brings the future nearer, since it significantly increases the transmission distance and efficiency. This project shows a very simple way to charge low-power devices wirelessly using coupled magnetic resonances. It also demonstrates how easy the system is to set up compared to conventional copper cables and current-carrying wires.
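
For reference (a commonly cited coupled-mode-theory result, not derived in this project), the achievable efficiency of a resonant link is often summarized by the figure of merit $U = k\sqrt{Q_1 Q_2}$, where $k$ is the coupling coefficient and $Q_1, Q_2$ are the coil quality factors:

$\eta_{\max} = \frac{U^{2}}{\left(1 + \sqrt{1 + U^{2}}\right)^{2}},$

which is why high-Q resonant coils can maintain useful efficiency even when the coupling $k$ is weak, extending the practical transfer distance beyond simple inductive coupling.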


TONG XU

Real-Time DSP Enabled Multi-Carrier Cross-Connect for Optical Systems

When & Where:


246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Esam El-Araby
Erik Perrins
Hui Zhao*

Abstract

Owing to the ever-increasing data traffic in today’s optical networks, how to utilize the optical bandwidth more efficiently has become a critical issue. Optical wavelength division multiplexing (WDM) multiplexes multiple optical carrier signals into a single fiber by using different wavelengths of laser light. Optical cross-connects (OXC) and switches based on optical WDM can greatly improve the performance of optical networks, resulting in reduced complexity, signal transparency, and significant electrical energy savings. However, an OXC alone cannot fully exploit the available optical bandwidth due to the coarse bandwidth granularity imposed by optical filtering. Thus, an OXC may not meet the requirements of applications in which a sub-band has a small bandwidth. In order to achieve smaller bandwidth granularities, an electrical digital cross-connect (DXC) can be added to the current optical network.

In this work, we propose a scheme for a real-time digital signal processing (DSP) enabled multi-carrier cross-connect which can dynamically assign bandwidth and allocate power to each individual subcarrier channel. This cross-connect is based on digital subcarrier multiplexing (DSCM), which is a frequency division multiplexing (FDM) technique. Either Nyquist WDM (N-WDM) or orthogonal frequency division multiplexing (OFDM) can be used to implement real-time DSCM. DSCM multiplexes digitally created subcarriers on each optical wavelength. Compared with optical WDM, DSCM has a smaller bandwidth granularity because it multiplexes subcarriers in the electrical domain. DSCM also provides more flexibility, since operations such as distortion compensation and signal regeneration can be conducted using DSP algorithms.

We built a real-time DSP platform based on a Virtex-7 FPGA, which allows the testing of real-time DSP algorithms for multi-carrier cross-connects in optical systems. We have implemented a real-time DSP enabled multi-carrier cross-connect based on up/down sampling and filtering. This technique saves DSP resources since local oscillators (LO) are not needed for spectral translation. We obtained preliminary results from theoretical analysis, simulation, and experiment, and the performance and resource cost of the cross-connect have been analyzed. This real-time DSP enabled cross-connect also has the potential to reduce cost in applications such as mobile fronthaul in 5G next-generation wireless networks.

 


RAHUL KAKANI

Discretization Based on Entropy and Multiple Scanning

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Man Kong
Prasad Kulkarni


Abstract

An enormous amount of data is being generated due to advancements in technology. The basic question of discovering knowledge from the generated data is still pertinent. Data mining guides us in discovering patterns or rules. Rules are frequently identified by a technique known as rule induction, which is regarded as the benchmark technique in data mining and was primarily developed to handle symbolic data. Real-life data often contain numerical attributes, so in order to fully utilize the power of rule induction, a preprocessing step known as discretization is required to convert numeric data into symbolic data.

We present two entropy-based discretization techniques, known as the dominant attribute approach and the multiple scanning approach, respectively. These approaches were implemented as two explicit algorithms in the C# programming language and applied to nine well-known numerical data sets. For every data set, the multiple scanning experiment was repeated with incremental scans until the interval counts were stable. Preliminary results suggest that the multiple scanning approach performs better than the dominant attribute approach, producing comparatively smaller error rates.
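
The core entropy computation behind both approaches can be sketched as follows (a single-attribute, single-cutpoint step for illustration; the thesis algorithms iterate this over attributes and scans, and were implemented in C#, so Python is used here only as a sketch):

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a list of class labels.
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_cutpoint(values, labels):
    # Choose the cutpoint of one numeric attribute that minimizes the
    # weighted entropy of the two resulting blocks.
    pairs = sorted(zip(values, labels))
    distinct = sorted(set(values))
    candidates = [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
    best_cut, best_h = None, float("inf")
    n = len(pairs)
    for cut in candidates:
        left = [lab for v, lab in pairs if v < cut]
        right = [lab for v, lab in pairs if v >= cut]
        h = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if h < best_h:
            best_cut, best_h = cut, h
    return best_cut, best_h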

 


SHADI PIR HOSSEINLOO

Supervised Speech Separation Based on Deep Neural Network

When & Where:


317 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Jonathan Brumberg, Co-Chair
Erik Perrins
Dave Petr
John Hansen

Abstract

In real-world environments, the speech signals received by our ears are usually a combination of different sounds that include not only the target speech, but also acoustic interference like music, background noise, and competing speakers. This interference has a negative effect on speech perception and degrades the performance of speech processing applications such as automatic speech recognition (ASR) and hearing aid devices. One way to solve this problem is to use source separation algorithms to separate the desired speech from the interfering sounds. Many source separation algorithms have been proposed to improve the performance of ASR systems and hearing aid devices, but it is still challenging for these systems to work efficiently in noisy and reverberant environments. On the other hand, humans have a remarkable ability to separate desired sounds and listen to a specific talker among noise and other talkers. Inspired by the capabilities of the human auditory system, a popular method known as auditory scene analysis (ASA) was proposed to separate different sources in a two-stage process of segmentation and grouping. The main goal of source separation in ASA is to estimate time-frequency masks that optimally match and separate noise signals from a mixture of speech and noise. Three major aims are proposed to improve upon source separation in noisy and reverberant acoustic signals. First, a simple and novel algorithm is proposed to increase the discriminability between two sound sources by magnifying the head-related transfer function of the interfering source. Experimental results show a significant increase in the quality of the recovered target speech. Second, a time-frequency masking-based source separation algorithm is proposed that can separate a male speaker from a female speaker in reverberant conditions by using the spatial cues of the sources. Furthermore, the proposed algorithm also has the ability to preserve the location of the sources after separation.

Finally, a supervised speech separation algorithm is proposed based on deep neural networks to estimate the time-frequency masks. Initial experiments show promising results for separating sources in noisy and reverberant conditions. Continued work is focused on identifying the best network training features and network structure that are robust to different types of noise, speakers, and reverberation. The main goal of the proposed algorithm is to increase the intelligibility and quality of the recovered speech from noisy environments, which has the potential to improve both speech processing applications and signal processing strategies for hearing aid technology.
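
The role of a time-frequency mask can be illustrated with the oracle (ideal ratio mask) case, in which the clean speech and noise are known; in the supervised setting described above, a deep neural network is instead trained to predict such a mask from features of the noisy mixture (the function and parameters below are illustrative, not the proposed system):

import numpy as np
from scipy.signal import stft, istft

def ideal_ratio_mask_separation(speech, noise, fs, nperseg=512):
    # Oracle illustration: compute an ideal ratio mask from the known speech
    # and noise spectrograms, apply it to the mixture, and resynthesize.
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    _, _, Y = stft(speech + noise, fs=fs, nperseg=nperseg)
    mask = np.abs(S) / (np.abs(S) + np.abs(N) + 1e-12)   # values in [0, 1]
    _, estimate = istft(mask * Y, fs=fs, nperseg=nperseg)
    return estimate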


CHENG GAO

Mining Incomplete Numerical Data Sets

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
Richard Wang
Tyrone Duncan
Xuemin Tu*

Abstract

Incomplete and numerical data are common in many application domains. There have been many approaches to handling missing data in statistical analysis and data mining. To deal with numerical data, discretization is crucial for many machine learning algorithms. However, little work has been done on discretization of incomplete data.

This research mainly focuses on the question of whether conducting discretization as preprocessing gives better results than using a data mining method alone. Multiple Scanning is an entropy-based discretization method. Previous research has shown that it outperforms the commonly used Equal Width and Equal Frequency discretization methods. In this work, Multiple Scanning is tested with C4.5 and MLEM2 on incomplete numerical data sets. Results show that for some data sets the setup utilizing Multiple Scanning as preprocessing performs better, while for the other data sets C4.5 or MLEM2 should be used by themselves. Our secondary objective is to test which of the three known interpretations of missing attribute values is better when using MLEM2. Results show that running MLEM2 on data sets with attribute-concept values performs worse than with the other two types of missing values. Last, we compared the error rate between C4.5 combined with Multiple Scanning (MS-C4.5) and MLEM2 combined with Multiple Scanning (MS-MLEM2) on data sets with different percentages of missing attribute values. Possible rules induced by MS-MLEM2 give a better result on data sets with "do-not-care" conditions, while MS-C4.5 is preferred on data sets with lost values and attribute-concept values.

Our conclusion is that there is no universally optimal solution for all data sets; the setup should be customized for each data set.