Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to the presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 129 (Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

The RF spectrum is a precious, finite resource with ever-increasing demand. Consequently, the mandate to be a "good spectral neighbor" is in direct conflict with the requirements for high-performance sensing where correlation error is fundamentally limited. As such, matched-filter radar performance is often sidelobe-limited with estimation error being constrained by the time-bandwidth (TB) of the collective emission. The methods developed here seek to bridge this gap between idealized radar performance and practical utility via waveform design.    

Estimation error becomes more complex when employing pulse agility. In doing so, range-sidelobe modulation (RSM) spreads energy across Doppler, rendering traditional methods ineffective. To address this, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining subsets within a pulse-agile emission. In contrast to the majority of complementary signals, explored via phase coding, these Comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM (though design distortion is never completely avoided). Although Comp-FM addressed practicality via hardware amenability, CSC was localized to zero Doppler. This work expands the Comp-FM notion to a Doppler-generalized (DG) framework, extending the cancellation condition to an arbitrary span. The same framework can likewise be employed to jointly optimize an entire coherent processing interval (CPI) to minimize RSM within the radar point-spread function (PSF), thereby generalizing the notion of complementarity and introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.

Sensing with a single emitter is limited by self-inflicted error alone (e.g., clutter, sidelobes), while MIMO systems must additionally contend with the cross-responses from emitters operating concurrently (e.g., simultaneously, spatially proximate, in a shared spectrum), further degrading radar sensitivity. Total correlation error is then dictated by the overlapping TB (i.e., how coincident the signals are) and the number of operating emitters, compounding estimation difficulty if left unaddressed. As such, the determination of "orthogonal waveforms" comprises a large portion of the MIMO literature, though it remains a phenomenological misnomer for pulsed emissions. Here, the notion of complementary-FM is applied to a multi-emitter context in which transmitter-amenable quasi-orthogonal subsets, occupying the same spectral band, are produced via a similar gradient-based approach. To further improve the practicality of these MIMO-Comp-FM waveform subsets, the same "DG" approach described above, which addresses the otherwise-default Doppler-induced degradation of complementary signals, is applied. In doing so, Doppler-independent separability and complementarity greatly improve estimation sensitivity for multi-emitter systems.

This MIMO-Comp-FM framework is developed for standard matched filter processing. Coupling this framework with a "DG" form of the previously explored MIMO-MiCRFt is also investigated, illustrating the added benefit of pairing optimized subsets with similarly calibrated processing. 

Each of these methods is developed to address unique and increasingly complex sources of estimation error. All approaches are initially developed and evaluated via simulated analysis where ground-truth is known. Then, despite hardware-induced distortion being unavoidable, the MIMO-Comp-FM framework is confirmed via loopback measurements to preserve the majority of CSC that was observed in simulation. Finally, open-air demonstration of each approach validates practical utility on a radar system.


Hao Xuan

Toward an Integrated Computational Framework for Metagenomics: From Sequence Alignment to Automated Knowledge Discovery

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Metagenomic sequencing has become a central paradigm for studying complex microbial communities and their interactions with the host, with emerging applications in clinical prediction and disease modeling. In this work, we first investigate two representative application scenarios: predicting immune checkpoint inhibitor response in non-small cell lung cancer using gut microbial signatures, and characterizing host–microbiome interactions in neonatal systems. The proposed reference-free neural network captures both compositional and functional signals without reliance on reference genomes, while the neonatal study demonstrates how environmental and genetic factors reshape microbial communities and how probiotic intervention can mitigate pathogen-induced immune activation.

These studies highlight both the promise and the inherent difficulty of metagenomic analysis: transforming raw sequencing data into clinically actionable insights remains an algorithmically fragmented and computationally intensive process. This challenge arises from two key limitations: the lack of a unified algorithmic foundation for sequence alignment and the absence of systematic approaches for selecting and organizing analytical tools. Motivated by these challenges, we present a unified computational framework for metagenomic analysis that integrates complementary algorithmic and systems-level solutions.

First, to resolve fragmentation at the alignment level, we develop the Versatile Alignment Toolkit (VAT), a unified algorithmic system for biological sequence alignment across diverse applications. VAT introduces an asymmetric multi-view k-mer indexing scheme that integrates multiple seeding strategies within a single architecture and enables dynamic seed-length adjustment via longest common prefix (LCP)–based inference without re-indexing. A flexible seed-chaining mechanism further supports diverse alignment scenarios, including collinear, rearranged, and split alignments. Combined with a hardware-efficient in-register bitonic sorting algorithm and dynamic index-loading strategy, VAT achieves high efficiency and broad applicability across read mapping, homology search, and whole-genome alignment. Second, to address the challenge of tool selection and pipeline construction, we develop SNAIL, a natural language processing system for automated recognition of bioinformatics tools from large-scale and rapidly growing scientific literature. By integrating XGBoost and Transformer-based models such as SciBERT, SNAIL enables structured extraction of analytical tools and supports automated, reproducible pipeline construction.

Together, this work establishes a unified framework that is grounded in real-world applications and addresses key bottlenecks in metagenomic analysis, enabling more efficient, scalable, and clinically actionable workflows.


Pramil Paudel

Learning Without Seeing: Privacy-Preserving and Adversarial Perspectives in Lensless Imaging

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Fengjun Li, Chair
Alex Bardas
Bo Luo
Cuncong Zhong
Haiyang Chao

Abstract

Conventional computer vision relies on spatially resolved, human-interpretable images, which inherently expose sensitive information and raise privacy concerns. In this study, we explore an alternative paradigm based on lensless imaging, where scenes are captured as diffraction patterns governed by the point spread function (PSF). Although unintelligible to humans, these measurements encode structured, distributed information that remains useful for computational inference. 

We propose a unified framework for privacy-preserving vision that operates directly on lensless sensor measurements by leveraging their frequency-domain and phase-encoded properties. The framework is developed along two complementary directions. First, we enable reconstruction-free inference by exploiting the intrinsic obfuscation of lensless data. We show that semantic tasks such as classification can be performed directly on diffraction patterns using models tailored to non-local, phase-scrambled representations. We further design lensless-aware architectures and integrate them into practical pipelines, including a Swin Transformer-based steganographic framework (DiffHide) for secure and imperceptible information embedding. To assess robustness, we formalize adversarial threat models and develop defenses against learning-based reconstruction attacks, particularly GAN-driven inversion. Second, we investigate the limits of privacy by studying the reconstructability of lensless measurements without explicit knowledge of the forward model. We develop learning-based reconstruction methods that approximate the inverse mapping and analyze conditions under which sensitive information can be recovered. Our results demonstrate that lensless measurements enable effective vision tasks without reconstruction, while providing a principled framework to evaluate and mitigate privacy risks. 


Sharmila Raisa

Digital Coherent Optical System: Investigation and Monitoring

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Erik Perrins
Alessandro Salandrino
Jie Han

Abstract

Coherent wavelength-division multiplexed (WDM) optical fiber systems have become the primary transmission technology for high-capacity data networks, driven by the explosive bandwidth demand of cloud computing, streaming services, and large-scale artificial intelligence training infrastructure. This dissertation investigates two fundamental aspects of digital coherent fiber optic systems under the unifying theme of source and monitoring: the design of multi-wavelength optical sources compatible with high-order coherent detection, and the leveraging of fiber Kerr-effect nonlinearity at the coherent receiver to perform physical-layer link health monitoring and to assess inherent security vulnerabilities — both achieved through digital signal processing of the received complex optical field without dedicated hardware.

We begin by addressing the multi-wavelength transmitter challenge in WDM coherent systems. Existing quantum-dot, quantum-dash, and quantum-well based optical frequency comb (OFC) sources share a common limitation: individual comb line linewidths in the tens of MHz range caused by low output power levels of 1–20 mW, making them incompatible with high-order coherent detection. We demonstrate coherent system application of a single-section InGaAsP QW Fabry-Perot laser diode with greater than 120 mW optical power at the fiber pigtail and 36.14 GHz mode spacing. The high optical power per mode produces Lorentzian equivalent linewidths below 100 kHz — compatible with 16-QAM carrier phase recovery without optical phase locking. Experimental results obtained using a commercial Ciena WaveLogic-Ai coherent transceiver demonstrate 20-channel WDM transmission over 78.3 km of standard single-mode fiber with all channels below the HD-FEC threshold of 3.8 × 10⁻³ at 30 GBaud differential-coded 16-QAM, corresponding to an aggregate capacity of 2.15 Tb/s from a single laser device.

After investigating the QW Fabry-Perot laser as a multi-wavelength source for coherent WDM transmission, we leverage the coherent receiver DSP to exploit fiber Kerr-effect nonlinearity for longitudinal power profile estimation, enabling reconstruction of the signal power distribution P(z) along the full multi-span link without dedicated hardware or traffic interruption. We propose a modified enhanced regular perturbation (ERP) method that corrects two independent physical error sources of the standard RP1 least-squares baseline: the accumulated nonlinear phase rotation, and the dispersion-mediated phase-to-intensity conversion — a second bias source not addressed by prior methods. The RP1 method produces mean absolute error (MAE) that scales quadratically with span count, growing to 1.656 dB at 10 spans and 3 dBm. The modified ERP reduces this to 0.608 dB — an improvement that grows consistently with link length, confirming increasing advantage in the long-haul regime. Extension to WDM through an XPM-aware per-channel formulation achieves MAE of 0.113–0.419 dB across 150–500 km link lengths.

In addition to its role in enabling DSP-based longitudinal power profile estimation, the fiber Kerr-effect nonlinearity is shown to give rise to an inherent physical-layer security vulnerability in coherent WDM systems. We show that an eavesdropper co-tenanting a shared fiber — transmitting a continuous-wave probe at a wavelength adjacent to the legitimate signal — can capture the XPM-induced waveform at the fiber output and apply a bidirectional gated recurrent unit neural network, trained on split-step Fourier method simulation data, to reconstruct the transmitted symbol sequence without physical fiber access and without perturbing the legitimate signal. This eavesdropping mechanism is experimentally validated using a commercial Ciena WaveLogic-Ai coherent transceiver for ASK, BPSK, QPSK, and 16-QAM modulation formats at 4.26 GBaud and 8.56 GBaud over one- and two-span 75 km fiber systems, achieving zero symbol errors under high-OSNR conditions. Noise-aware training over OSNR from 20 to 60 dB maintains symbol error rate below 10⁻² for OSNR above 25–30 dB.

Together, these three contributions demonstrate that the coherent fiber optic system is a versatile physical instrument extending well beyond its role as a data transmission medium. The coherent receiver infrastructure — deployed for high-order modulation and data recovery — simultaneously enables the high-power OFC laser to serve as a practical multi-wavelength transmitter source, and provides the complex field measurement capability through which fiber Kerr-effect nonlinearity can be exploited constructively for distributed link monitoring and, as a direct consequence, reveals an inherent physical-layer security exposure in shared fiber infrastructure. This unified perspective on the coherent system as both a transmission platform and a general-purpose measurement instrument has direct relevance to the design of spectrally efficient, self-monitoring, and physically secure optical interconnects for next-generation AI computing networks.


Past Defense Notices


Nishil Parmar

A Comparison of Quality of Rules Induced using Single Local Probabilistic Approximations vs Concept Probabilistic Approximations

When & Where:


1415A LEEP2

Committee Members:

Jerzy Grzymala-Busse, Chair
Prasad Kulkarni
Bo Luo


Abstract

This project report presents results of experiments on rule induction from incomplete data using probabilistic approximations. Mining incomplete data using probabilistic approximations is a well-established technique. The main goal of this report is to present a comparison of two different approaches to mining incomplete data using probabilistic approximations: single local probabilistic approximations and concept probabilistic approximations. These approaches were implemented in the Python programming language, and experiments were carried out on incomplete data sets with two interpretations of missing attribute values: lost values and "do not care" conditions. Our main objective was to compare concept and single local approximations in terms of the error rate, computed using the double hold-out method for validation. For our experiments we used seven incomplete data sets with many missing attribute values. The best results were accomplished by concept probabilistic approximations for five data sets and by single local probabilistic approximations for the remaining two data sets.


Victor Berger da Silva

Probabilistic graphical techniques for automated ice-bottom tracking and comparison between state-of-the-art solutions

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
John Paden
Guanghui Wang


Abstract

Multichannel radar depth sounding systems are able to produce two-dimensional and three-dimensional imagery of the internal structure of polar ice sheets. One of the relevant features typically present in this imagery is the ice-bedrock interface, which is the boundary between the bottom of the ice-sheet and the bedrock underneath. Crucial information regarding the current state of the ice sheets, such as the thickness of the ice, can be derived if the location of the ice-bedrock interface is extracted from the imagery. Due to the large amount of data collected by the radar systems employed, we seek to automate the extraction of the ice-bedrock interface and allow for efficient manual corrections when errors occur in the automated method. We present improvements made to previously proposed solutions which pose feature extraction in polar radar imagery as an inference problem on a probabilistic graphical model. The improvements proposed here are in the form of novel image pre-processing steps and empirically-derived cost functions that allow for the integration of further domain-specific knowledge into the models employed. Along with an explanation of our modifications, we demonstrate the results obtained by our proposed models and algorithms, including significantly decreased mean error measurements such as a 47% reduction in average tracking error in the case of three-dimensional imagery. We also present the results obtained by several state-of-the-art ice-interface tracking solutions, and compare all automated results with manually-corrected ground-truth data. Furthermore, we perform a self-assessment of tracking results by analyzing the differences found between the automatically extracted ice-layers in cases where two separate radar measurements have been made at the same location.


Dain Vermaak

Visualizing and Analyzing Student Progress on Learning Maps

When & Where:


1 Eaton Hall, Dean's Conference Room

Committee Members:

James Miller, Chair
Man Kong
Suzanne Shontz
Guanghui Wang
Bruce Frey

Abstract

A learning map is an unweighted directed graph containing relationships between discrete skills and concepts with edges defining the prerequisite hierarchy. They arose as a means of connecting student instruction directly to standards and curriculum and are designed to assist teachers in lesson planning and evaluating student response. As learning maps gain popularity there is an increasing need for teachers to quickly evaluate which nodes have been mastered by their students. Psychometrics is a field focused on measuring student performance and includes the development of processes used to link a student's response to multiple choice questions directly to their understanding of concepts. This dissertation focuses on developing modeling and visualization capabilities to enable efficient analysis of data pertaining to student understanding generated by psychometric techniques.

Such analysis naturally includes that done by classroom teachers. Visual solutions to this problem clearly indicate the current understanding of a student or classroom in such a way as to make suggestions that can guide future learning. In response to these requirements we present various experimental approaches which augment the original learning map design with targeted visual variables.

As well as looking forward, we also consider ways in which data visualization can be used to evaluate and improve existing teaching methods. We present several graphics based on modelling student progression as information flow. These methods rely on conservation of data to increase edge information, reducing the load carried by the nodes and encouraging path comparison.

In addition to visualization schemes and methods, we present contributions made to the field of Computer Science in the form of algorithms developed over the course of the research project in response to gaps in prior art. These include novel approaches to simulation of student response patterns, ranked layout of weighted directed graphs with variable edge widths, and enclosing certain groups of graph nodes in envelopes.

Finally, we present a final design which combines the features of key experimental approaches into a single visualization tool capable of meeting both predictive and validation requirements along with the methods used to measure the effectiveness and correctness of the final design.


Priyanka Saha

Complexity of Rule Sets Induced from Incomplete Data with Lost Values and Attribute-Concept Values

When & Where:


2001 B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Taejoon Kim
Cuncong Zhong


Abstract

Data is a very rich source of knowledge and information. However, special techniques need to be implemented in order to extract interesting facts and discover patterns in large data sets. This is achieved using the technique called Data Mining. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal to extract information from a data set and transform the information into a comprehensible structure for further use. Rule induction is a Data Mining technique in which formal rules are extracted from a set of observations. The rules induced may represent a full scientific model of the data, or merely represent local patterns in the data.

The data sets, however, are not always complete and might contain missing values. Data mining also provides techniques to handle missing values in a data set. In this project, we implemented the lost value and attribute-concept value interpretations of incomplete data. Experiments were conducted on 176 data sets using three types of approximations (lower, middle, and upper) of the concept, and the Modified Learning from Examples Module, version 2 (MLEM2) rule induction algorithm was used to induce rule sets.

The goal of the project was to show that the complexity of rule sets derived from data sets with missing attribute values is lower for the attribute-concept value interpretation than for the lost value interpretation. The size of the rule set was always smaller for the attribute-concept value interpretation. As a secondary objective, we also explored which type of approximation provides the smallest rule sets.


Mohanad Al-Ibadi

Array Processing Techniques for Estimating and Tracking of an Ice-Sheet Bottom

When & Where:


317 Nichols Hall

Committee Members:

Shannon Blunt, Chair
John Paden
Christopher Allen
Erik Perrins
James Stiles

Abstract

Ice bottom topography layers are an important boundary condition required to model the flow dynamics of an ice sheet. In this work, using low-frequency multichannel radar data, we locate the ice bottom using two types of automatic trackers.

First, we use the multiple signal classification (MUSIC) beamformer to determine the pseudo-spectrum of the targets at each range bin. The result is passed into a sequential tree-reweighted message passing belief-propagation algorithm to track the bottom of the ice in the 3D image. This technique is successfully applied to process data collected over the Canadian Arctic Archipelago ice caps and to produce digital elevation models (DEMs) for 102 data frames. We perform crossover analysis to self-assess the generated DEMs, where flight paths cross over each other and two measurements are made at the same location. The tracked results are also compared before and after manual corrections. We found a good match between the overlapping DEMs: the mean error of the crossover DEMs is 38±7 m, which is small relative to the average ice thickness, while the average absolute mean error of the automatically tracked ice bottom, relative to the manually corrected ice bottom, is 10 range bins.

Second, a direction-of-arrival (DOA)-based tracker is used to estimate the DOA of the backscatter signals sequentially from range bin to range bin using two methods: a sequential maximum a posteriori probability (S-MAP) estimator and one based on the particle filter (PF). A dynamic flat-earth transition model is used to model the flow of information between range bins. A simulation study is performed to evaluate the performance of these two DOA trackers. The results show that the PF-based tracker handles low-quality data better than S-MAP but, unlike S-MAP, saturates quickly with increasing numbers of snapshots. S-MAP is also successfully applied to track the ice bottom of several data frames collected over Russell Glacier, and the results are compared against those generated by the beamformer-based tracker. The results of the DOA-based techniques are the final tracked surfaces, so there is no need for an additional tracking stage as there is with the beamformer technique.


Jason Gevargizian

MSRR: Leveraging dynamic measurement for establishing trust in remote attestation

When & Where:


246 Nichols Hall

Committee Members:

Prasad Kulkarni, Chair
Arvin Agah
Perry Alexander
Bo Luo
Kevin Leonard

Abstract

Measurers are critical to a remote attestation (RA) system to verify the integrity of a remote untrusted host. Runtime measurers in a dynamic RA system sample the dynamic program state of the host to form evidence in order to establish trust by a remote system (appraisal system). However, existing runtime measurers are tightly integrated with specific software. Such measurers need to be generated anew for each software, which is a manual process that is both challenging and tedious. 

In this paper we present a novel approach to decouple application-specific measurement policies from the measurers tasked with performing the actual runtime measurement. We describe the MSRR (MeaSeReR) Measurement Suite, a system of tools designed with the primary goal of reducing the high degree of manual effort required to produce measurement solutions on a per-application basis.

The MSRR suite prototypes a novel general-purpose measurement system, the MSRR Measurement System, that is agnostic of the target application. Furthermore, we describe a robust high-level measurement policy language, MSRR-PL, that can be used to write per-application policies for the MSRR Measurer. Finally, we provide a tool to automatically generate MSRR-PL policies for target applications by leveraging state-of-the-art static analysis tools.

In this work, we show how the MSRR suite can be used to significantly reduce the time and effort spent designing measurers anew for each application. We describe MSRR's robust querying language, which allows the appraisal system to accurately specify what to measure, and when and how to measure it. We describe the capabilities and limitations of our measurement policy generation tool. We evaluate MSRR's overhead and demonstrate its functionality through real-world case studies, showing that MSRR has acceptable overhead on a range of applications with various measurement workloads.


Surya Nimmakayala

Heuristics to predict and eagerly translate code in DBTs

When & Where:


2001 B Eaton Hall

Committee Members:

Prasad Kulkarni, Chair
Perry Alexander
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

Dynamic Binary Translators (DBTs) have a variety of uses, such as instrumentation, profiling, security, and portability. In order for an application to run with these enhanced additional features (not originally part of its design), it must be run under the control of a dynamic binary translator. The application can be thought of as the guest, run within the controlled environment of the translator, which acts as the host application. That way, the intended application execution flow can be enforced by the translator, thereby inducing the desired behavior in the application on the host platform (the combination of operating system and hardware).

However, the translator incurs a run-time overhead when performing the additional tasks needed to run the guest application in a controlled fashion. This overhead has limited the large-scale use of DBTs in settings where response times can be critical. There is often a trade-off between the benefits of using a DBT and the overall application response time. There is therefore a need to explore ways to achieve faster application execution through DBTs (given their large code base).

With the evolution of multi-core and GPU hardware architectures, multiple concurrent threads can get more work done through parallelization. Properly designing parallel applications, or parallelizing parts of existing serial code, can improve application run times through hardware architecture support.

We explore the possibility of improving the performance of a DBT named DynamoRIO. The basic idea is to speed up the process of guest code translation by having multiple threads translate multiple pieces of code concurrently. In the ideal case, all the code blocks required for application execution are readily available ahead of time without any stalls. Efficient eager translation also requires heuristics that better predict the next code block to be executed, which could reduce the number of unproductive code translations at run time. The goal is to achieve application speed-up through eager translation and block-prediction heuristics, with execution time approaching that of a native run.


FARHAD MAHMOOD

Modeling and Analysis of Energy Efficiency in Wireless Handset Transceiver Systems

When & Where:


Apollo Room, Nichols Hall

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Victor Frost
Lingjia Liu
Bozenna Pasik-Duncan

Abstract

As wireless communication devices take a significant part in our daily life, research toward making these devices even faster and smarter is accelerating rapidly. The main limiting factors are energy and power consumption. Many techniques are used to increase battery capacity (ampere-hours), but these come at the cost of raised safety concerns. The other way to extend battery life is to decrease the energy consumption of the devices. In this work, we analyze energy-efficient communications for wireless devices based on an advanced energy consumption model that takes into account a broad range of parameters. The developed model captures relationships between transmission power, transceiver distance, modulation order, channel fading, power amplifier (PA) effects, power control, multiple antennas, and other circuit components in the radio frequency (RF) transceiver. Based on the developed model, we are able to identify the optimal modulation order in terms of energy efficiency under different conditions (e.g., different transceiver distances, PA classes and efficiencies, and pulse shapes). Furthermore, we capture the impact of the system level and the network level on PA energy via the peak-to-average ratio (PAR) and power control. We are also able to identify the impact of multiple antennas at the handset on energy consumption and transmitted bit rate, for both few and many antennas at the base station (conventional multiple-input multiple-output (MIMO) and massive MIMO). This work provides an important framework for analyzing energy-efficient communications for wireless systems ranging from cellular networks to the wireless Internet of Things.


DANA HEMMINGSEN

Waveform Diverse Stretch Processing

When & Where:


Apollo Room, Nichols Hall

Committee Members:

Shannon Blunt, Chair
Christopher Allen
James Stiles


Abstract

Stretch processing with a wideband LFM transmit waveform is a commonly used technique, its popularity owing in large part to the large time-bandwidth product that provides the fine range resolution certain applications require. It allows pulse compression of echoes at a much lower sampling bandwidth without sacrificing range resolution. Previously, this technique has been restrictive in terms of waveform diversity, because the literature shows that the LFM is the only waveform that results in a tone after stretch processing. However, the literature also contains many examples of compensating for distortions from the ideal LFM structure caused by various hardware components in the transmitter and receiver. That idea of compensating for variations is borrowed here, and the use of nonlinear FM (NLFM) waveforms is proposed to permit greater variety in the wideband waveforms usable with stretch processing. A compensation transform that accommodates these NLFM waveforms replaces the final fast Fourier transform (FFT) stage of the stretch processing configuration, while the rest of the RF receive chain remains the same. This modification to the receive processing structure makes waveform diversity possible for legacy radar systems that already employ stretch processing. Similarly, using the same concept of compensating for distortions to the LFM structure, along with the observation that a Fourier transform is essentially the matched filter bank for an LFM waveform mixed with an LFM reference, a least-squares-based mismatched filter (MMF) scheme is proposed. This MMF can likewise replace the final FFT stage and can also bring NLFM waveforms to legacy radar systems.
The efficacy of these filtering approaches (the compensation transform and the least-squares-based MMF) is demonstrated in simulation and experimentally with open-air measurements, applied to different NLFM waveform scenarios to assess the results and provide a means of comparison between the two techniques.
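The core property the abstract relies on, that mixing a delayed LFM echo with a reference LFM collapses it to a constant-frequency tone, can be shown in a few lines. The bandwidth, pulse width, sample rate, and delay below are illustrative, not values from the work:

```python
import numpy as np

B, T, fs = 100e6, 100e-6, 20e6           # bandwidth, pulse width, ADC rate
k = B / T                                # LFM chirp rate (Hz/s)
t = np.arange(int(T * fs)) / fs
tau = 2e-6                               # round-trip delay of one scatterer

ref = np.exp(1j * np.pi * k * t ** 2)            # reference LFM
echo = np.exp(1j * np.pi * k * (t - tau) ** 2)   # delayed LFM echo
beat = echo * np.conj(ref)                       # dechirp (mix with reference)

# The quadratic phase cancels, leaving a tone at -k*tau = -2 MHz.
# Note |k*tau| << B, so a 20 MHz ADC handles a 100 MHz waveform.
spec = np.fft.fftshift(np.fft.fft(beat * np.hanning(beat.size)))
freqs = np.fft.fftshift(np.fft.fftfreq(beat.size, 1 / fs))
f_peak = freqs[np.argmax(np.abs(spec))]
```

For an NLFM echo the residual phase after this mix is no longer linear, which is why the work replaces the final FFT with a compensation transform or a least-squares MMF rather than changing the receive chain itself.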


DANIEL GOMEZ GARCIA ALVESTEGUI

Scattering Analysis and Ultra-Wideband Radar for High-Throughput Phenotyping of Wheat Canopies

When & Where:


317 Nichols Hall

Committee Members:

Carl Leuschen, Chair
Christopher Allen
Ron Hui
Fernando Rodriguez-Morales
David Braaten

Abstract

Raising the yield of wheat crops is essential to meet projected future consumption, and most yield increases are expected to come from improvements in biomass accumulation. Cultivars whose canopy architectures focus light interception where photosynthetic capacity is greatest achieve larger biomass accumulation rates. Identifying varieties with improved traits can be done with modern breeding methods, such as genomic selection, which depend on genotype-phenotype mappings. A non-destructive sensor capable of efficiently phenotyping wheat-canopy architecture parameters, such as height and the vertical distribution of projected leaf area density, would be useful for developing architecture-related genotype-phenotype maps of wheat cultivars. In this presentation, new scattering analysis tools and a new 2-18 GHz radar system are presented for efficiently phenotyping the architecture of wheat canopies.
The radar system was designed to measure the RCS profile of wheat canopies at close range. The frequency range (2-18 GHz), topology (frequency-modulated continuous-wave, FMCW), and other radar parameters were chosen to meet that goal. Phase noise of self-interference signals is the main source of coherent and incoherent noise in FMCW radars. A new comprehensive noise analysis is presented that predicts the power spectral density of the noise components at the output of an FMCW radar, including those related to phase noise. The new 2-18 GHz chirp generator is based on a phase-locked loop designed with a large loop bandwidth to suppress the phase noise of the chirp. Additionally, the radar RF front-end was designed to achieve low levels of LO leakage and antenna feed-through, which are the main self-interference signals in FMCW radars.
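The standard FMCW relationships give a feel for why a 2-18 GHz sweep suits close-range canopy profiling: a target at range R produces a beat tone at f_b = 2BR/(cT), and the range resolution is c/(2B). The sweep duration below is an assumption for illustration; only the 16 GHz bandwidth comes from the abstract:

```python
c = 299_792_458.0        # speed of light (m/s)
B = 16e9                 # swept bandwidth, 2-18 GHz
T = 1e-3                 # assumed sweep duration (s)

def beat_freq(R):
    """Beat-tone frequency (Hz) for a point target at range R (m)."""
    return 2 * B * R / (c * T)

# Range resolution c/(2B) is just under 1 cm, fine enough to separate
# vertical layers within a wheat canopy; a 2 m standoff maps to a
# beat tone of roughly 213 kHz with the assumed sweep time.
dr = c / (2 * B)
```

Because close-range targets land at such low beat frequencies, the self-interference terms the abstract names (LO leakage and antenna feed-through) fall in the same band as the signal, which is why their phase noise dominates the noise analysis.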
In addition to the radar system, a new efficient radar simulator was developed to predict the RCS waveforms collected from wheat canopies over the 2-18 GHz frequency range. The coherent radar simulator is based on novel geometric and fully polarimetric scattering models of wheat canopies. The scattering models of the canopy components (leaves with arbitrary orientation and curvature, stems, and heads) were validated using a full-wave commercial simulator and measurements. The radar simulator was used to derive retrieval algorithms for canopy height and projected leaf area density from RCS profiles, which were tested with field-collected measurements.