Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 129 (Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

The RF spectrum is a precious, finite resource with ever-increasing demand. Consequently, the mandate to be a "good spectral neighbor" is in direct conflict with the requirements for high-performance sensing, where correlation error is fundamentally limited. As such, matched-filter radar performance is often sidelobe-limited, with estimation error constrained by the time-bandwidth (TB) product of the collective emission. The methods developed here seek to bridge this gap between idealized radar performance and practical utility via waveform design.

Estimation error becomes more complex when employing pulse agility, where range-sidelobe modulation (RSM) spreads energy across Doppler, rendering traditional methods ineffective. To address this, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining subsets within a pulse-agile emission. In contrast to the majority of complementary signals, which have been explored via phase coding, these Comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM (though design distortion is never completely avoided). Although Comp-FM addressed practicality via hardware amenability, CSC was localized to zero Doppler. This work expands the Comp-FM notion to a Doppler-generalized (DG) framework, extending the cancellation condition to an arbitrary span. The same framework can likewise be employed to jointly optimize an entire coherent processing interval (CPI) to minimize RSM within the radar point-spread function (PSF), thereby generalizing the notion of complementarity and introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
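
To illustrate the complementarity principle the abstract builds on, the following minimal sketch uses a classic phase-coded Golay pair, the baseline the talk contrasts its FM subsets against; the pair and the NumPy usage are illustrative, not the talk's actual Comp-FM design:

import numpy as np

# Golay complementary pair (phase-coded illustration; the talk's Comp-FM
# subsets achieve the analogous cancellation with FM waveforms)
a = np.array([1, 1, 1, -1], dtype=float)
b = np.array([1, 1, -1, 1], dtype=float)

acf = lambda x: np.correlate(x, x, mode="full")
combined = acf(a) + acf(b)   # sidelobes cancel: 2N at zero lag, 0 elsewhere
print(combined)              # -> [0 0 0 8 0 0 0]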

Sensing with a single emitter is limited by self-inflicted error alone (e.g., clutter, sidelobes), while MIMO systems must additionally contend with cross-responses from emitters operating concurrently (e.g., simultaneously, spatially proximate, in a shared spectrum), further degrading radar sensitivity. Total correlation error is then dictated by the overlapping TB (i.e., how coincident the signals are) and the number of operating emitters, compounding estimation difficulty if left unaddressed. As such, the determination of "orthogonal waveforms" comprises a large portion of the MIMO literature, though it remains a phenomenological misnomer for pulsed emissions. Here, the notion of complementary-FM is applied to a multi-emitter context in which transmitter-amenable quasi-orthogonal subsets, occupying the same spectral band, are produced via a similar gradient-based approach. To further improve the practicality of these MIMO-Comp-FM waveform subsets, the same "DG" approach described above, addressing the otherwise-default Doppler-induced degradation of complementary signals, is applied. In doing so, Doppler-independent separability and complementarity greatly improve estimation sensitivity for multi-emitter systems.

This MIMO-Comp-FM framework is developed for standard matched filter processing. Coupling this framework with a "DG" form of the previously explored MIMO-MiCRFt is also investigated, illustrating the added benefit of pairing optimized subsets with similarly calibrated processing. 

Each of these methods is developed to address unique and increasingly complex sources of estimation error. All approaches are initially developed and evaluated via simulated analysis where ground-truth is known. Then, despite hardware-induced distortion being unavoidable, the MIMO-Comp-FM framework is confirmed via loopback measurements to preserve the majority of CSC that was observed in simulation. Finally, open-air demonstration of each approach validates practical utility on a radar system.


Hao Xuan

Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.

These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.

First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.
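
As a rough illustration of the seed-and-extend idea behind VAT's indexing (the asymmetric multi-view scheme itself is more involved), the sketch below builds a plain k-mer index and lengthens each seed by a longest-common-prefix extension, so the effective seed length adapts per hit without re-indexing; all names are hypothetical:

from collections import defaultdict

def build_kmer_index(ref, k):
    """Map every k-mer in `ref` to its start positions."""
    index = defaultdict(list)
    for i in range(len(ref) - k + 1):
        index[ref[i:i + k]].append(i)
    return index

def seed_with_lcp(query, ref, index, k):
    """Anchor k-mer seeds, then extend each by the longest common prefix
    so the effective seed length adapts without re-indexing."""
    hits = []
    for q in range(len(query) - k + 1):
        for r in index.get(query[q:q + k], []):
            ext = k  # extend past the k-mer while characters keep matching
            while (q + ext < len(query) and r + ext < len(ref)
                   and query[q + ext] == ref[r + ext]):
                ext += 1
            hits.append((q, r, ext))  # (query pos, ref pos, seed length)
    return hits

ref = "TTGATTACAGG"
print(seed_with_lcp("GATTACA", ref, build_kmer_index(ref, 4), 4))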

Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.


Pramil Paudel

Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao

Abstract

Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference. 
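
A minimal sketch of the lensless forward model the abstract assumes, in which the measurement is the scene convolved with the sensor's PSF; the PSF and scene here are synthetic placeholders, not a real imaging system:

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
scene = np.zeros((64, 64)); scene[20:30, 35:45] = 1.0  # toy scene
psf = rng.random((64, 64)); psf /= psf.sum()           # placeholder PSF

# lensless measurement: diffraction pattern = scene (*) PSF, plus noise
meas = fftconvolve(scene, psf, mode="same")
meas += 0.01 * rng.standard_normal(meas.shape)
# downstream models classify `meas` directly, without reconstructing `scene`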

We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks. 


Past Defense Notices


NIHARIKA GANDHARI

A Comparative Study on Strategies of Rule Induction for Incomplete Data

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Perry Alexander
Bo Luo


Abstract

Rule induction is one of the major applications of rough set theory. However, traditional rough set models cannot deal with incomplete data sets. Missing values can be handled by data pre-processing or by extensions of the rough set model. Two data pre-processing methods and one extension of the rough set model are considered in this project: filling in missing values with the most common value, ignoring objects by deleting records, and an extended discernibility matrix. The objective is to compare these methods in terms of stability and effectiveness. All three methods use the same rule induction method and are analyzed based on test accuracy and the percentage of missing attribute values. To better understand the properties of these approaches, eight real-life data sets with varying levels of missing attribute values are used for testing. Based on the results, we discuss the relative merits of the three approaches in an attempt to decide upon an optimal approach. The final conclusion is that the best method is the pre-processing method that fills in missing values with the most common value.
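
A minimal sketch of the winning strategy, most-common-value imputation, assuming a table of symbolic attributes with None marking missing entries (names and data are hypothetical):

from collections import Counter

def fill_most_common(rows):
    """Replace missing entries (None) with the most common value
    observed in the same column."""
    cols = list(zip(*rows))
    modes = [Counter(v for v in col if v is not None).most_common(1)[0][0]
             for col in cols]
    return [[m if v is None else v for v, m in zip(row, modes)]
            for row in rows]

data = [["red", "small", None],
        ["red", None, "yes"],
        [None, "large", "yes"]]
print(fill_most_common(data))  # missing cells become column modes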


MADHU CHEGONDI

A Comparison of Leaving-one-out and Bootstrap

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Richard Wang


Abstract

Recently, machine learning has created significant advancements in many areas like health, finance, education, and sports, which has encouraged the development of many predictive models. In machine learning, we extract hidden, previously unknown, and potentially useful high-level information from low-level data. Cross-validation is a typical strategy for estimating performance. It simulates the process of fitting to different data sets and seeing how different the predictions can be. In this project, we review accuracy estimation methods and compare two resampling methods: leaving-one-out and bootstrap. We compared these validation methods using the LEM1 rule induction algorithm. Our results indicate that for real-world data sets similar to ours, bootstrap may be optimistic.
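
A minimal sketch of the two estimators under comparison, using a stand-in 1-nearest-neighbor classifier in place of LEM1 (the classifier choice and all names are illustrative, not the project's setup):

import numpy as np

def nn_predict(X_train, y_train, x):
    """1-nearest-neighbor prediction (stand-in for LEM1)."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

def loo_error(X, y):
    """Leaving-one-out: train on n-1 samples, test on the held-out one."""
    errs = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        errs += nn_predict(X[mask], y[mask], X[i]) != y[i]
    return errs / len(X)

def bootstrap_error(X, y, B=100, seed=0):
    """Plain bootstrap: resample with replacement, test on out-of-bag rows."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(B):
        idx = rng.integers(0, len(X), len(X))
        oob = np.setdiff1d(np.arange(len(X)), idx)
        if len(oob) == 0:
            continue
        e = [nn_predict(X[idx], y[idx], X[i]) != y[i] for i in oob]
        rates.append(np.mean(e))
    return float(np.mean(rates))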


PATRICK McCORMICK

Design and Optimization of Physical Waveform-Diverse and Spatially-Diverse Emissions

When & Where:


129 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Chris Allen
Alessandro Salandrino
Jim Stiles
Emily Arnold*

Abstract

With the advancement of arbitrary waveform generation techniques, new radar transmission modes can be designed via precise control of the waveform's time-domain signal structure. The finer degree of emission control for a waveform (or multiple waveforms via a digital array) presents an opportunity to reduce ambiguities in the estimation of parameters within the radar backscatter. While this freedom opens the door to new emission capabilities, one must still consider the practical attributes for radar waveform design. Constraints such as constant amplitude (to maintain sufficient power efficiency) and continuous phase (for spectral containment) are still considered prerequisites for high-powered radar waveforms. These criteria are also applicable to the design of multiple waveforms emitted from an antenna array in a multiple-input multiple-output (MIMO) mode.

In this work, two spatially-diverse radar emission design methods are introduced that provide constant amplitude, spectrally-contained waveforms. The first design method, denoted as spatial modulation, designs the radar waveforms via a polyphase-coded frequency-modulated (PCFM) framework to steer the coherent mainbeam of the emission within a pulse. The second design method is an iterative scheme to generate waveforms that achieve a desired wideband and/or widebeam radar emission. However, a wideband and widebeam emission can place a portion of the emitted energy into what is known as the "invisible" space of the array, which is related to the storage of reactive power that can damage a radar transmitter. The proposed design method purposefully avoids this space, and a quantity denoted as the Fractional Reactive Power (FRP) is defined to assess the quality of the result.

The design of FM waveforms via traditional gradient-based optimization methods is also considered. A waveform model is proposed that is a generalization of the PCFM implementation, denoted as coded-FM (CFM), which defines the phase of the waveform via a summation of weighted, predefined basis functions. Therefore, gradient-based methods can be used to minimize a given cost function with respect to a finite set of optimizable parameters. A generalized integrated sidelobe metric is used as the optimization cost function to minimize the correlation range sidelobes of the radar waveform.
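
A minimal sketch of the CFM parameterization (the basis choice, sizes, and random weights below are hypothetical; the gradient descent over the weights against the integrated-sidelobe cost is the part the dissertation develops):

import numpy as np

N, K = 1024, 8                          # samples per pulse, number of bases
t = np.linspace(0.0, 1.0, N)

B = np.array([np.sin(np.pi * (k + 1) * t) for k in range(K)])  # toy basis set
alpha = np.random.default_rng(1).normal(size=K)                # optimizable weights

phase = alpha @ B                       # phase = weighted sum of basis functions
s = np.exp(1j * phase)                  # constant amplitude, continuous phase
assert np.allclose(np.abs(s), 1.0)

# integrated sidelobe level of the autocorrelation (a typical cost to minimize)
r = np.correlate(s, s, mode="full")
isl = np.sum(np.abs(r)**2) - np.abs(r[N - 1])**2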


MATT KITCHEN

Blood Phantom Concentration Measurement Using An I-Q Receiver Design

When & Where:


250 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Jim Stiles


Abstract

Near-infrared spectroscopy has been used as a non-invasive method of determining the concentration of chemicals within the tissues of living organisms. This method employs LEDs of specific wavelengths to measure the concentration of blood constituents according to the Beer-Lambert law. One group of instruments (frequency-domain instruments) is based on amplitude modulation of the laser diode or LED light intensity, the measurement of light absorption, and the measurement of modulation phase shift to determine the light path length for use in the Beer-Lambert law. This paper describes the design and demonstration of a frequency-domain instrument for measuring the concentration of oxygenated and de-oxygenated hemoglobin using incoherent optics and an in-phase/quadrature (I-Q) receiver design. The design has been shown to be capable of resolving variations in the concentration of test samples and is a viable prototype for future, more precise tools.
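
A minimal worked example of the Beer-Lambert inversion such an instrument relies on, solving for two chromophore concentrations from absorbance at two wavelengths; the extinction coefficients and readings below are placeholders, not calibrated values:

import numpy as np

# Beer-Lambert law: A(lambda) = path_len * sum_i eps_i(lambda) * c_i
eps = np.array([[0.9, 0.3],    # [eps_HbO2, eps_Hb] at wavelength 1 (placeholder)
                [0.4, 1.1]])   # [eps_HbO2, eps_Hb] at wavelength 2 (placeholder)
path_len = 5.0                 # path length inferred from modulation phase shift [cm]
A = np.array([0.42, 0.57])     # measured absorbances (placeholder readings)

c = np.linalg.solve(path_len * eps, A)  # -> [c_HbO2, c_Hb]
print(c)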

 


LIANYU LI

Wireless Power Transfer

When & Where:


250 Nichols Hall

Committee Members:

Alessandro Salandrino, Chair
Reza Ahmadi
Ron Hui


Abstract

Wireless power transfer is commonly understood as the transfer of electrical energy from a source to a load over some distance without any connecting wires. It has been almost two hundred years since the electromagnetic induction phenomenon was first noticed, after which Nikola Tesla tried to use the concept to build the first wireless power transfer device. Today, the most common technique used to transfer power wirelessly is known as inductive coupling, which has revolutionized the transmission of power in various applications. Wireless power transfer is one of the simplest and least expensive ways to transfer energy, and it will change how people use their devices.

With the continued development of science and technology, a new method of wireless power transfer through coupled magnetic resonances could be the next technology that brings the future nearer, as it significantly increases the transmission distance and efficiency. This project shows a very simple way to charge low-power devices wirelessly using coupled magnetic resonances. It also presents how easy the system is to set up compared to conventional copper cables and current-carrying wires.
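
A back-of-the-envelope sketch of a resonant inductive link, using the standard figure of merit U = k*sqrt(Q1*Q2) and its peak-efficiency expression; the component values are illustrative only and are not taken from this project:

import numpy as np

L1 = L2 = 24e-6   # coil inductances [H]      (illustrative values)
C1 = C2 = 100e-9  # tuning capacitances [F]
R1 = R2 = 0.5     # coil series resistances [ohm]
k = 0.05          # magnetic coupling coefficient

f0 = 1.0 / (2 * np.pi * np.sqrt(L1 * C1))      # shared resonant frequency
w0 = 2 * np.pi * f0
Q1, Q2 = w0 * L1 / R1, w0 * L2 / R2            # unloaded quality factors

U = k * np.sqrt(Q1 * Q2)                       # link figure of merit
eta_max = U**2 / (1 + np.sqrt(1 + U**2))**2    # peak link efficiency
print(f"f0 = {f0/1e3:.1f} kHz, U = {U:.2f}, eta_max = {eta_max:.1%}")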


TONG XU

Real-Time DSP Enabled Multi-Carrier Cross-Connect for Optical Systems

When & Where:


246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Esam El-Araby
Erik Perrins
Hui Zhao*

Abstract

Owing to the ever-increasing data traffic in today's optical networks, how to utilize the optical bandwidth more efficiently has become a critical issue. Optical wavelength division multiplexing (WDM) multiplexes multiple optical carrier signals into a single fiber by using different wavelengths of laser light. Optical cross-connects (OXC) and switches based on optical WDM can greatly improve the performance of optical networks, resulting in reduced complexity, signal transparency, and significant electrical energy savings. However, OXC alone cannot fully exploit the availability of optical bandwidth due to its coarse bandwidth granularity imposed by optical filtering. Thus, OXC may not meet the requirements of some applications when the sub-band has a small bandwidth. In order to achieve smaller bandwidth granularities, an electrical digital cross-connect (DXC) could be added to the current optical network.

In this work, we propose a scheme for a real-time digital signal processing (DSP) enabled multi-carrier cross-connect which can dynamically assign bandwidth and allocate power to each individual subcarrier channel. This cross-connect is based on digital sub-carrier multiplexing (DSCM), which is a frequency division multiplexing (FDM) technique. Either Nyquist WDM (N-WDM) or orthogonal frequency division multiplexing (OFDM) can be used to implement real-time DSCM. DSCM multiplexes digitally created subcarriers on each optical wavelength. Compared with optical WDM, DSCM has a smaller bandwidth granularity because it multiplexes sub-carriers in the electrical domain. DSCM also provides more flexibility since operations such as distortion compensation and signal regeneration can be conducted using DSP algorithms.

We built a real-time DSP platform based on a Virtex-7 FPGA, which allows the testing of real-time DSP algorithms for multi-carrier cross-connects in optical systems. We have implemented a real-time DSP enabled multi-carrier cross-connect based on up/down sampling and filtering. This technique saves DSP resources since local oscillators (LO) are not needed for spectral translation. We obtained preliminary results from theoretical analysis, simulation, and experiment, and the performance and resource cost of this cross-connect have been analyzed. This real-time DSP enabled cross-connect also has the potential to reduce cost in applications such as mobile fronthaul in 5G next-generation wireless networks.
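
A minimal sketch of LO-free spectral translation by filtering and decimation, the principle behind the up/down-sampling approach; the rates, band edges, and toy signal are illustrative, not the implemented design:

import numpy as np
from scipy.signal import firwin, lfilter

fs = 40e6                         # input sample rate (illustrative)
t = np.arange(4096) / fs

# toy multicarrier input: subcarriers at 5, 10, and 15 MHz
x = sum(np.cos(2 * np.pi * f * t) for f in (5e6, 10e6, 15e6))

# band-select the 10 MHz subcarrier (placed at fs/4 on purpose)
bpf = firwin(201, [9e6, 11e6], pass_zero=False, fs=fs)
xb = lfilter(bpf, 1.0, x)

# decimating by 4 makes the new sample rate 10 MHz, so the band centered
# at 10 MHz aliases to baseband -- spectral translation with no LO mixer
y = xb[::4]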

 


RAHUL KAKANI

Discretization Based on Entropy and Multiple Scanning

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Man Kong
Prasad Kulkarni


Abstract

Enormous amounts of data are being generated due to advancements in technology. The basic question of discovering knowledge from the generated data is still pertinent. Data mining guides us in discovering patterns or rules. Rules are frequently identified by a technique known as rule induction, which is regarded as the benchmark technique in data mining, primarily developed to handle symbolic data. Real-life data often consist of numerical attributes, and hence, in order to fully utilize the power of rule induction, a preprocessing step known as discretization is involved, which converts numeric data into symbolic data.

We present two entropy-based discretization techniques, known as the dominant attribute and multiple scanning approaches, respectively. These approaches were implemented as two explicit algorithms in the C# programming language and applied to nine well-known numerical data sets. For every data set, the multiple scanning experiment was repeated with incremental scans until the interval counts were stable. Preliminary results suggest that the multiple scanning approach performs better than the dominant attribute approach in terms of producing comparatively smaller error rates.
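
A minimal sketch of the entropy criterion both approaches share: pick the cut point on an attribute that minimizes the weighted entropy of the two resulting intervals (a single scan of one attribute; function names are hypothetical):

import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut(values, labels):
    """Pick the cut point minimizing the weighted entropy of the
    two resulting intervals (one scan of a single attribute)."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    best = (np.inf, None)
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        cut = (v[i] + v[i - 1]) / 2
        w = i / len(v)
        h = w * entropy(y[:i]) + (1 - w) * entropy(y[i:])
        best = min(best, (h, cut))
    return best[1]

print(best_cut(np.array([1.0, 2.0, 3.0, 4.0]), np.array([0, 0, 1, 1])))  # -> 2.5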

 


SHADI PIR HOSSEINLOO

Supervised Speech Separation Based on Deep Neural Network

When & Where:


317 Nichols Hall

Committee Members:

Shannon Blunt, Chair
Jonathan Brumberg, Co-Chair
Erik Perrins
Dave Petr
John Hansen

Abstract

In real-world environments, the speech signals received by our ears are usually a combination of different sounds that include not only the target speech but also acoustic interference like music, background noise, and competing speakers. This interference has a negative effect on speech perception and degrades the performance of speech processing applications such as automatic speech recognition (ASR) and hearing aid devices. One way to solve this problem is to use source separation algorithms to separate the desired speech from the interfering sounds. Many source separation algorithms have been proposed to improve the performance of ASR systems and hearing aid devices, but it is still challenging for these systems to work efficiently in noisy and reverberant environments. On the other hand, humans have a remarkable ability to separate desired sounds and listen to a specific talker among noise and other talkers. Inspired by the capabilities of the human auditory system, a popular method known as auditory scene analysis (ASA) was proposed to separate different sources in a two-stage process of segmentation and grouping. The main goal of source separation in ASA is to estimate time-frequency masks that optimally match and separate noise signals from a mixture of speech and noise. Three major aims are proposed to improve upon source separation in noisy and reverberant acoustic signals. First, a simple and novel algorithm is proposed to increase the discriminability between two sound sources by magnifying the head-related transfer function of the interfering source. Experimental results show a significant increase in the quality of the recovered target speech. Second, a time-frequency masking-based source separation algorithm is proposed that can separate a male speaker from a female speaker in reverberant conditions by using the spatial cues of the sources. Furthermore, the proposed algorithm also has the ability to preserve the location of the sources after separation.

Finally, a supervised speech separation algorithm is proposed based on deep neural networks to estimate the time-frequency masks. Initial experiments show promising results for separating sources in noisy and reverberant conditions. Continued work is focused on identifying the network training features and network structure that are most robust to different types of noise, speakers, and reverberation. The main goal of the proposed algorithm is to increase the intelligibility and quality of the speech recovered from noisy environments, which has the potential to improve both speech processing applications and signal processing strategies for hearing aid technology.
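
A minimal sketch of time-frequency masking, computing an ideal ratio mask on a toy mixture; the IRM is one common supervised training target, and the talk's actual features and network may differ:

import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # toy "speech"
noise = 0.5 * rng.standard_normal(fs)
mix = speech + noise

_, _, S = stft(speech, fs)    # clean source spectrogram (training time only)
_, _, N = stft(noise, fs)
_, _, X = stft(mix, fs)

# ideal ratio mask: the supervised target a DNN learns to predict
irm = np.abs(S)**2 / (np.abs(S)**2 + np.abs(N)**2 + 1e-12)
_, sep = istft(irm * X, fs)   # masked mixture -> estimate of the speech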


CHENG GAO

Mining Incomplete Numerical Data Sets

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Bo Luo
Richard Wang
Tyrone Duncan
Xuemin Tu*

Abstract

Incomplete and numerical data are common in many application domains. There have been many approaches to handling missing data in statistical analysis and data mining. To deal with numerical data, discretization is crucial for many machine learning algorithms. However, little work has been done on discretization of incomplete data.

This research mainly focuses on the question of whether conducting discretization as preprocessing gives better results than using a data mining method alone. Multiple Scanning is an entropy-based discretization method. Previous research showed that it outperforms the commonly used Equal Width and Equal Frequency discretization methods. In this work, Multiple Scanning is tested with C4.5 and MLEM2 on incomplete numerical data sets. Results show that for some data sets the setup utilizing Multiple Scanning as preprocessing performs better, while for the other data sets C4.5 or MLEM2 should be used by themselves. Our secondary objective is to test which of the three known interpretations of missing attribute values is better when using MLEM2. Results show that running MLEM2 on data sets with attribute-concept values performs worse than on the other two types of missing values. Last, we compared error rates between C4.5 combined with Multiple Scanning (MS-C4.5) and MLEM2 combined with Multiple Scanning (MS-MLEM2) on data sets with different percentages of missing attribute values. Possible rules induced by MS-MLEM2 give a better result on data sets with "do-not-care" conditions. MS-C4.5 is preferred on data sets with lost values and attribute-concept values.

Our conclusion is that there are no universal optimal solutions for all data sets. Setup should be custom-made based on the data sets.

 


GOVIND VEDALA

Digital Compensation of Transmission Impairments in Multicarrier Fiber Optic Systems

When & Where:


246 Nichols Hall

Committee Members:

Ron Hui, Chair
Chris Allen
Erik Perrins
Alessandro Salandrino
Carey Johnson*

Abstract

Time and again, the fiber optic medium has proved to be the best means of transporting global data traffic, which is following an exponential growth trajectory. High-bandwidth applications based on cloud computing, virtual reality, and big data necessitate maximum effective utilization of the available fiber bandwidth. To this end, multicarrier superchannel transmission systems, aided by robust digital signal processing at both the transmitter and receiver, have proved to enhance spectral efficiency and achieve multi-terabit-per-second data rates.

With respect to transmission sources, laser technology too has made significant strides, especially in the domain of multiwavelength sources such as quantum dot passive mode-locked laser (QD-PMLL) based optical frequency combs. In the present research work, we characterize the phase dynamics of comb lines from a QD-PMLL based on a novel multiheterodyne coherent detection technique. The inherently broad linewidth of the comb lines, on the order of tens of MHz, makes it difficult for conventional digital phase noise compensation algorithms to track the large phase noise, especially for low-baud-rate subcarriers using higher-cardinality modulation formats. In the context of a multi-subcarrier, Nyquist pulse shaped superchannel transmission system with coherent detection, we demonstrate through measurements an efficient phase noise compensation technique called "digital mixing," which exploits the mutual phase coherence among the comb lines. For QPSK and 16-QAM modulation formats, digital mixing provided significant improvement in bit error rate (BER) performance. For short-reach data center and passive optical network-based applications, which adopt direct detection, a single optical amplifier is generally used to meet the power budget requirements to achieve the desired BER. The semiconductor optical amplifier (SOA), with its small form factor, is a low-cost power booster that can be designed to operate at any desired wavelength and, most importantly, can be integrated with the transmitter. However, saturated SOAs introduce nonlinear distortions on the amplified signal. Alongside the SOA, the photodiode also introduces nonlinear mixing in the form of signal-signal beat interference (SSBI). In this research, we study the impact of SOA nonlinearity on the effectiveness of SSBI compensation in a direct-detection OFDM-based transmission system. We experimentally demonstrate a digital compensation technique to undo the SOA nonlinearity effect by digitally back-propagating the received signal through a virtual SOA, thereby effectively eliminating the SSBI.
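
A minimal sketch of where SSBI comes from in direct detection, expanding the photodiode's square law on a toy complex field; the oracle subtraction below only shows the principle, since practical compensation estimates the field from the received data, and the virtual-SOA back-propagation is not modeled here:

import numpy as np

rng = np.random.default_rng(0)
s = 0.1 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))  # toy OFDM-like field
c = 1.0                                   # optical carrier amplitude

i_pd = np.abs(c + s)**2                   # square-law photodiode output
# expansion: |c|^2  +  2*Re(conj(c)*s)  +  |s|^2
#            DC        desired beat        SSBI

i_comp = i_pd - np.abs(s)**2              # oracle SSBI subtraction
assert np.allclose(i_comp, np.abs(c)**2 + 2 * np.real(np.conj(c) * s))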