Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defense through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the end-users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring distinctive phenomena that impact its overall security and usability. Such factors include (i) hidden code clones--which may stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, (v) package compromise via malicious updates, (vi) spammers disseminating phishing links within package metadata, and (vii) abuse of cryptocurrency protocols designed to reward the creators of high-impact packages. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
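As context for item (ii), npm runs lifecycle scripts named preinstall, install, and postinstall automatically during package installation. The dissertation's tooling is not reproduced in this notice; the snippet below is only a minimal Python sketch of the idea, flagging locally installed dependencies that declare such scripts (the node_modules scan and output format are illustrative assumptions).

import json
import pathlib

# npm lifecycle hooks that run automatically at install time.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def flag_install_scripts(package_dir):
    """Return any install-time scripts declared in a package's package.json."""
    manifest = pathlib.Path(package_dir) / "package.json"
    if not manifest.is_file():
        return {}
    scripts = json.loads(manifest.read_text()).get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

if __name__ == "__main__":
    # Scan direct dependencies installed under node_modules/ (illustrative path).
    for pkg in sorted(pathlib.Path("node_modules").glob("*")):
        if pkg.is_dir():
            hooks = flag_install_scripts(pkg)
            if hooks:
                print(f"{pkg.name}: {hooks}")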


Alfred Fontes

Optimization and Trade-Space Analysis of Pulsed Radar-Communication Waveforms using Constant Envelope Modulations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Dual function radar communications (DFRC) is a method of co-designing a single radio frequency system to perform radar sensing and communications simultaneously. DFRC is ultimately a compromise between radar sensing performance and communications data throughput due to the conflicting requirements of the sensing and information-bearing signals.

A novel waveform-based DFRC approach is phase attached radar communications (PARC), where a communications signal is embedded onto a radar pulse via phase modulation between the two signals. The PARC framework is used here in a new waveform design technique that shapes the radar component of a PARC signal so that the expected power spectral density (PSD) of the combined PARC DFRC waveform matches a desired spectral template. This provides better control over the PARC signal spectrum, which mitigates the degradation of PARC radar performance caused by spectral growth due to the communications signal.
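The notice does not include the optimization procedure itself; as a generic illustration of spectral-template shaping under a constant-envelope constraint, the following Python sketch alternates between projecting a waveform's spectrum onto a template magnitude and re-imposing unit modulus. It is not the PARC design technique, and the template values are arbitrary.

import numpy as np

def shape_to_template(template_psd, n_iter=200, seed=0):
    """Generic alternating-projections sketch: nudge a constant-envelope
    waveform's spectrum toward a desired PSD template (illustrative only)."""
    rng = np.random.default_rng(seed)
    n = len(template_psd)
    mag = np.sqrt(template_psd)                 # target spectral magnitude
    x = np.exp(1j * 2 * np.pi * rng.random(n))  # random-phase, unit-modulus start
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))      # project onto the spectral template
        x = np.fft.ifft(X)
        x = np.exp(1j * np.angle(x))            # re-impose the constant envelope
    return x

# Example: a flat in-band template with a notch cut into it.
template = np.ones(512)
template[200:230] = 1e-4                        # spectral notch
waveform = shape_to_template(template)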

The characteristics of optimized PARC waveforms are then analyzed to establish a trade-space between radar and communications performance within a PARC DFRC scenario. This is done by sampling the DFRC trade-space continuum with waveforms that contain a varying degree of communications bandwidth, from a pure radar waveform (no embedded communications) to a pure communications waveform (no radar component). Radar performance, which is degraded by range sidelobe modulation (RSM) from the communications signal randomness, is measured from the PARC signal variance across pulses; data throughput is established as the communications performance metric. Comparing the values of these two measures as a function of communications symbol rate explores the trade-offs in performance between radar and communications with optimized PARC waveforms.


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Different Pumping Configurations

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Morteza Hashemi
Rachel Jarvis
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To maintain higher data rates along with maximizing the spectral efficiency of multi-level modulated signals, a higher Optical Signal-to-Noise Ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity. Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems.

Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path, and it offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. DRA also provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. A forward (FW) Raman pumping configuration can be adopted to further improve DRA performance, as it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. A dual-order FW pumping scheme helps to reduce the nonlinear effect on the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.
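For readers unfamiliar with distributed Raman gain, the textbook small-signal relations below (effective interaction length and on-off gain under an undepleted pump) give a feel for the numbers. This is a generic Python sketch with illustrative parameter values, not the analysis performed in this work.

import numpy as np

# Textbook undepleted-pump estimate of on-off Raman gain; all values below
# are illustrative assumptions, not measured system parameters.
g_R   = 0.4e-13       # Raman gain coefficient, m/W
A_eff = 80e-12        # fiber effective area, m^2
alpha = 0.046 / 1e3   # pump attenuation, 1/m (~0.2 dB/km)
L     = 80e3          # span length, m
P0    = 0.5           # launched pump power, W

L_eff = (1 - np.exp(-alpha * L)) / alpha       # effective interaction length
G_onoff = np.exp(g_R * P0 * L_eff / A_eff)     # on-off gain (linear)
print(f"L_eff = {L_eff/1e3:.1f} km, on-off gain = {10*np.log10(G_onoff):.1f} dB")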

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of system performance in FW DRA systems at the receiver.

As the performance of DRA with backward pumping is well understood, with relatively low impact of RIN transfer, our research focuses on the FW pumping configuration and is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st- and 2nd-order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd-order pump RIN to signal phase noise transfer can be more than two orders of magnitude higher than that from the 1st-order pump. The performance of dual-order FW Raman configurations is then compared with that of single-order Raman pumping to understand trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd-order Raman pump. Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st-order incoherent pump together with a 2nd-order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st-order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd-order pump has a much higher impact than that of the 1st-order pump, a more stringent requirement should be placed on the RIN of the 2nd-order pump laser when a dual-order FW pumping scheme is used for DRA in fiber-optic communication systems. The system performance analysis also reveals that higher-baud-rate systems, such as those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristic of the transfer of pump RIN to signal phase noise.


Audrey Mockenhaupt

Using Dual Function Radar Communication Waveforms for Synthetic Aperture Radar Automatic Target Recognition

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jon Owen


Abstract

Pending.


Rich Simeon

Delay-Doppler Channel Estimation for High-Speed Aeronautical Mobile Telemetry Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Erik Perrins, Chair
Shannon Blunt
Morteza Hashemi
Jim Stiles
Craig McLaughlin

Abstract

The next generation of digital communications systems aims to operate in high-Doppler environments such as high-speed trains and non-terrestrial networks that utilize satellites in low-Earth orbit. Current-generation systems use Orthogonal Frequency Division Multiplexing (OFDM) modulation, which is known to suffer from inter-carrier interference (ICI) when different channel paths have dissimilar Doppler shifts.

A new Orthogonal Time Frequency Space (OTFS) modulation (also known as Delay-Doppler modulation) is proposed as a candidate modulation for 6G networks that is resilient to ICI. To date, OTFS demodulation designs have focused on the use cases of popular urban terrestrial channel models where path delay spread is a fraction of the OTFS symbol duration. However, wireless wide-area networks that operate in the aeronautical mobile telemetry (AMT) space can have large path delay spreads due to reflections from distant geographic features. This presents problems for existing channel estimation techniques which assume a small maximum expected channel delay, since data transmission is paused to sound the channel by an amount equal to twice the maximum channel delay. The dropout in data contributes to a reduction in spectral efficiency.

Our research addresses OTFS limitations in the AMT use case. We start with an exemplary OTFS framework with parameters optimized for AMT. Following system design, we focus on two distinct areas to improve OTFS performance in the AMT environment. First, we propose a new channel estimation technique using a pilot signal superimposed over data that can measure large delay spread channels with no penalty in spectral efficiency. A successive interference cancellation algorithm is used to iteratively improve channel estimates and jointly decode data. A second aspect of our research aims to equalize in delay-Doppler space. In the delay-Doppler paradigm, the rapid channel variations seen in the time-frequency domain are transformed into a sparse, quasi-stationary channel in the delay-Doppler domain. We propose to use machine learning, specifically Gaussian Process Regression, to take advantage of the sparse and stationary channel and learn the channel parameters, compensating for the effects of fractional Doppler that simpler channel estimation techniques cannot mitigate. Both areas of research can advance the robustness of OTFS across all communications systems.
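As a rough illustration of the Gaussian Process Regression step, the Python sketch below fits a smooth function of (delay, Doppler) to noisy synthetic channel-gain samples with scikit-learn and predicts the response on a finer grid. The kernel choice, grid, and toy channel are assumptions for illustration only, not the AMT system design.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic (delay, Doppler) samples of a toy channel response plus noise.
rng = np.random.default_rng(1)
obs = rng.uniform([0.0, -0.5], [4.0, 0.5], size=(60, 2))    # (delay, Doppler)
gain = np.exp(-obs[:, 0]) * np.cos(2 * np.pi * obs[:, 1])   # toy channel gain
gain += 0.05 * rng.standard_normal(len(gain))               # measurement noise

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2),
                               normalize_y=True)
gpr.fit(obs, gain)

# Predict the channel on a finer delay-Doppler grid, with uncertainty.
grid = np.stack(np.meshgrid(np.linspace(0, 4, 40),
                            np.linspace(-0.5, 0.5, 40)), axis=-1).reshape(-1, 2)
pred, std = gpr.predict(grid, return_std=True)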


Mohammad Ful Hossain Seikh

AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino


Abstract

This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
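AAFIYA's own API is not reproduced in this notice. As a hedged illustration of the data-ingestion step it automates, the sketch below uses the open-source scikit-rf package to read a Touchstone file and derive S11 in dB and input impedance; the filename and the -10 dB match criterion are placeholder assumptions.

import numpy as np
import skrf as rf

# Read a one-port Touchstone measurement ("antenna.s1p" is a placeholder).
ant = rf.Network("antenna.s1p")

freq_mhz = ant.f / 1e6                              # frequency axis in MHz
s11_db = 20 * np.log10(np.abs(ant.s[:, 0, 0]))      # return loss / match
z_in = ant.z[:, 0, 0]                               # complex input impedance

# Report the band where the antenna meets a simple -10 dB match criterion.
matched = freq_mhz[s11_db < -10.0]
print(f"-10 dB match from {matched.min():.0f} to {matched.max():.0f} MHz"
      if matched.size else "no -10 dB match in band")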

Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11- and S21-related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.

AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle.  The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while deep learning approaches (LSTM models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
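As an illustration of per-label threshold adjustment for minority classes, the Python sketch below trains a simple one-vs-rest classifier on toy multi-label text data and searches, per label, for the decision threshold that maximizes F1. The TF-IDF features, toy comments, and label names are stand-in assumptions; the study itself uses word-level embeddings and the Kaggle corpus.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier

def tune_thresholds(probs, Y_val, grid=np.linspace(0.1, 0.9, 17)):
    """Pick, per label, the decision threshold that maximizes validation F1."""
    return [max(grid, key=lambda t: f1_score(Y_val[:, j],
                                             (probs[:, j] >= t).astype(int),
                                             zero_division=0))
            for j in range(Y_val.shape[1])]

# Toy stand-in comments with two labels (e.g., toxic, threat).
texts = ["you are awful", "have a nice day", "I will hurt you",
         "great work", "terrible and rude", "I will find you"]
labels = np.array([[1, 0], [0, 0], [1, 1], [0, 0], [1, 0], [1, 1]])

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, labels)

# In practice the thresholds are tuned on a held-out validation split.
thresholds = tune_thresholds(clf.predict_proba(X), labels)
print(thresholds)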


Past Defense Notices


Blake Douglas Bryant

Building Better with Blocks – A Novel Secure Multi-Channel Internet Memory Information Control (S-MIMIC) Protocol for Complex Latency Sensitive Applications

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hossein Saiedian, Chair
Arvin Agah
Perry Alexander
Bo Luo
Reza Barati

Abstract

Multimedia networking is the area of study associated with the delivery of heterogeneous data including, but not limited to, imagery, video, audio, and interactive content. Multimedia and communication network researchers have continually struggled to devise solutions for addressing the three core challenges in multimedia delivery: security, reliability, and performance. Solutions to these challenges typically exist in a spectrum of compromises achieving gains in one aspect at the cost of one or more of the others. Networked videogames represent the pinnacle of multimedia presented in a real-time interactive format. Continual improvements to multimedia delivery have led to tools such as buffering, redundant coupling of low-resolution alternative data streams, congestion avoidance, and forced in-order delivery of best-effort service; however, videogames cannot afford to pay the latency tax of these solutions in their current state.

I developed the Secure Multi-Channel Internet Memory Information Control (S-MIMIC) protocol as a novel solution to address these challenges. The S-MIMIC protocol leverages recent developments in blockchain and distributed ledger technology, coupled with creative enhancements to data representation and a novel data model. The S-MIMIC protocol also implements various novel algorithms for create, read, update, and delete (CRUD) interactions with distributed ledger and blockchain technologies. For validation, the S-MIMIC protocol was integrated with an open-source First-Person Shooter (FPS) videogame to demonstrate its ability to transfer complex data structures under extreme network latency demands. The S-MIMIC protocol demonstrated improvements in confidentiality, integrity, availability, and data read operations under all test conditions. Data write performance of S-MIMIC is slightly below traditional TCP-based networking in unconstrained networks, but matches that performance in networks exhibiting 150 milliseconds of delay or more.
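The S-MIMIC algorithms themselves are not described in this notice. As a generic illustration of the hash-linked, append-only record keeping that distributed-ledger designs build on, the Python sketch below chains game-state updates with SHA-256 and verifies the chain; it is not the S-MIMIC protocol or its data model.

import hashlib
import json
import time

def make_block(payload, prev_hash):
    """Append-only block: payload plus a link to the previous block's hash."""
    block = {"time": time.time(), "payload": payload, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute each hash and check the back-links for tampering."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Toy game-state updates chained together and verified.
chain = [make_block({"player": 1, "pos": [0, 0]}, prev_hash="genesis")]
chain.append(make_block({"player": 1, "pos": [3, 4]}, chain[-1]["hash"]))
assert verify(chain)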

Though the S-MIMIC protocol was evaluated for use in networked videogames, its potential uses are far reaching with promising applicability to medical information, legal documents, financial transactions, information security threat feeds and many other use cases that require security, reliability and performance guarantees.


Zeyan Liu

Towards Robust Deep Learning Systems against Stealthy Attacks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Zijun Yao
John Symons

Abstract

Deep neural network (DNN) models are the core components of machine learning solutions. However, their wide adoption in real-world applications raises increasing security concerns. Various attacks have been proposed against DNN models, such as evasion and backdoor attacks. Attackers utilize adversarially altered samples, which are supposed to be stealthy and imperceptible to human eyes, to fool the targeted model into misbehaving. This could result in severe consequences, such as self-driving cars ignoring traffic signs or colliding with pedestrians.

In this work, we aim to investigate the security and robustness of deep learning systems against stealthy attacks. To do this, we start by reevaluating the stealthiness assumptions made by the state-of-the-art attacks through a comprehensive study. We implement 20 representative attacks on six benchmark datasets. We evaluate the visual stealthiness of the attack samples using 24 metrics for image similarity or quality and over 30,000 annotations in a user study. Our results show that the majority of the existing attacks introduce non-negligible perturbations that are not stealthy. Next, we propose a novel model-poisoning neural Trojan, namely LoneNeuron, which introduces only minimal modification to the host neural network with a single neuron after the first convolution layer. LoneNeuron responds to feature-domain patterns that transform into invisible, sample-specific, and polymorphic pixel-domain watermarks. With high attack specificity, LoneNeuron achieves a 100% attack success rate and does not compromise the primary task performance. Additionally, its unique watermark polymorphism further improves watermark randomness, stealth, and resistance to Trojan detection.


Jonathan Owen

Real-Time Cognitive Sense-and-Notch Radar

When & Where:


Nichols Hall, Room 129, Ron Evans Apollo Auditorium

Committee Members:

Shannon Blunt, Chair
Chris Allen
Carl Leuschen
James Stiles
Zsolt Talata

Abstract

Spectrum sensing and transmit waveform frequency notching is a form of cognitive radar that seeks to reduce mutual interference with other spectrum users in a cohabitated band. With the reality of increasing radio frequency (RF) spectral congestion, radar systems capable of dynamic spectrum sharing are needed. The cognitive sense-and-notch (SAN) emission strategy is experimentally demonstrated as an effective way to reduce the interference that the spectrum sharing radar causes to other in-band users. The physical radar emission is based on a random FM waveform structure possessing attributes that are inherently robust to range-Doppler sidelobes. To contend with dynamic interference, the transmit notch may be required to move during the coherent processing interval (CPI), which introduces a nonstationarity effect that results in increased residual clutter after cancellation. The nonstationarity effect is characterized and compensated for using computationally efficient processing methods. The steps from initial analysis of cognitive system performance to implementation of sense-and-notch radar spectrum sharing in real-time are discussed.
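As a simplified illustration of the sensing step, the Python sketch below thresholds a measured power spectrum against its median noise floor to build a notch mask that a waveform designer could then honor. The threshold rule and synthetic spectrum are assumptions, not the system described here.

import numpy as np

def notch_mask(spectrum_db, margin_db=10.0):
    """Flag frequency bins whose sensed power exceeds the noise floor by
    margin_db; the waveform designer would place spectral notches there."""
    noise_floor = np.median(spectrum_db)
    return spectrum_db > noise_floor + margin_db

# Synthetic sensed spectrum: noise plus one strong in-band user.
rng = np.random.default_rng(0)
psd_db = 10 * np.log10(rng.exponential(1.0, 1024))
psd_db[400:430] += 30                      # other spectrum user
mask = notch_mask(psd_db, margin_db=15.0)
print(f"{mask.sum()} bins flagged, spanning "
      f"{np.flatnonzero(mask).min()}..{np.flatnonzero(mask).max()}")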


Nick Kellerman

A MISO Frequency Diverse Array Implementation

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Patrick McCormick, Chair
Chris Allen
Shannon Blunt
James Stiles

Abstract

Estimating the spatial angle of arrival for a received radar signal traditionally entails measurements across multiple antenna elements. Spatially diverse Multiple Input Multiple Output (MIMO) emission structures, such as the Frequency Diverse Array (FDA), provide waveform separability to achieve spatial estimation without the need for multiple receive antenna elements. A low-complexity Multiple Input Single Output (MISO) radar system, leveraging the FDA emission structure coupled with a Linear Frequency Modulated Continuous Wave (LFMCW) waveform, is experimentally demonstrated to estimate range, Doppler, and spatial angle information of the illuminated scene using a single receive antenna element. In comparison to well-known spatially diverse emission structures (i.e., Doppler Division Multiple Access (DDMA) and Time Division Multiple Access (TDMA)), LFMCW-FDA is shown to retain the full unambiguous range and Doppler spaces at the cost of a reduced range resolution. To combat the degraded range performance, an adaptive algorithm is introduced, with initial results showing the ability to improve separability of closely spaced scatterers in range and angle. With the persistent illumination achieved by the emission structure, demonstrated performance, and low-complexity architecture, the LFMCW-FDA system is shown to have attractive features for use in a low-resolution search radar context.


Christian Jones

Robust and Efficient Structure-Based Radar Receive Processing

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Shannon Blunt, Chair
Chris Allen
Suzanne Shontz
James Stiles
Zsolt Talata

Abstract

Legacy radar systems largely rely on repeated emission of a linear frequency modulated (LFM) or chirp waveform to ascertain scattering information from an environment. The prevalence of these chirp waveforms largely stems from their simplicity to generate and process and from the general robustness they provide against hardware effects. However, this traditional design philosophy often lacks the flexibility and dimensionality needed to address the dynamic “complexification” of the modern radio frequency (RF) environment or to achieve current operational requirements, where unprecedented degrees of sensitivity, maneuverability, and adaptability are necessary.

Over the last couple of decades, analog-to-digital and digital-to-analog technologies have advanced exponentially, resulting in tremendous design degrees of freedom and arbitrary waveform generation (AWG) capabilities that enable sophisticated design of emissions to better suit operational requirements. However, radar systems typically require high-power amplifiers (HPAs) to contend with the two-way propagation. Thus, transmitter-amenable waveforms are effectively constrained to be both spectrally contained and constant amplitude, resulting in a non-convex, NP-hard design problem.

While determining the globally optimal waveform can be intractable for even modest time-bandwidth products (TB), locally optimal, transmitter-amenable solutions that are “good enough” are often readily available. However, traditional matched filtering may not satisfy operational requirements for these sub-optimal emissions. Using knowledge of the transmitter-receiver chain, a discrete linear model can be formed to express the relationship between observed measurements and the complex scattering of the environment. This structured representation then enables more sophisticated least-squares and adaptive estimation techniques to better satisfy operational needs, improve estimate fidelity, and extend dynamic range.
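As a toy illustration of this structured approach, the Python sketch below builds the discrete linear model y = A x + n, with delayed copies of a waveform as the columns of A, then compares the matched filter with a regularized least-squares estimate of the scattering profile. The waveform, scene, and regularization value are arbitrary assumptions, not those used in this work.

import numpy as np

rng = np.random.default_rng(2)
n_samp, n_range = 128, 64
s = np.exp(1j * np.pi * 0.9 * (np.arange(32) ** 2) / 32)    # toy chirp-like pulse

# Structured model: each column of A is the waveform delayed to one range bin.
A = np.zeros((n_samp, n_range), dtype=complex)
for k in range(n_range):
    A[k:k + len(s), k] = s

x_true = np.zeros(n_range, dtype=complex)
x_true[[10, 11, 40]] = [1.0, 0.3, 0.05]                     # a few scatterers
noise = 0.01 * (rng.standard_normal(n_samp) + 1j * rng.standard_normal(n_samp))
y = A @ x_true + noise

x_mf = A.conj().T @ y                                       # matched filter (sidelobes)
lam = 1e-2                                                  # regularization weight
x_ls = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_range), A.conj().T @ y)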

However, radar dimensionality can be enormous and brute force implementations of these techniques may have unwieldy computational burden on even cutting-edge hardware. Additionally, a discrete linear representation is fundamentally an approximation of the dynamic continuous physical reality and model errors may induce bias, create false detections, and limit dynamic range. As such, these structure-based approaches must be both computationally efficient and robust to reality.

Here several generalized discrete radar receive models and structure-based estimation schemes are introduced. Modifications and alternative solutions are then proposed to improve estimate fidelity, reduce computational complexity, and provide further robustness to model uncertainty.


Archana Chalicheemala

A Machine Learning Study using Gene Expression Profiles to Distinguish Patients with Non-Small Cell Lung Cancer

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Early diagnosis enables effective treatment of non-small cell lung cancer (NSCLC). Lung cancer cells usually have altered gene expression patterns compared to normal cells, which can be utilized to predict cancer through gene expression tests. This study analyzed gene expression values measured from a 15,227-probe microarray of 290 patients, consisting of cancer and control groups, to find relations between gene expression features and lung cancer. The study explored k-means clustering, statistical tests, and deep neural networks to obtain optimal feature representations, achieving a highest accuracy of 82%. Furthermore, a bipartite graph was built using the BioGRID database and gene expression values, where the probe-to-probe relationship based on gene relevance was leveraged to enhance prediction performance.
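As a rough illustration of the statistical-test route to feature selection, the scikit-learn sketch below ranks probes by ANOVA F-score, keeps the top k, and cross-validates a classifier. The synthetic matrix merely mirrors the 290-patient by 15,227-probe dimensions and is not the study's data or pipeline.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the expression matrix: 290 patients x 15,227 probes.
rng = np.random.default_rng(0)
X = rng.standard_normal((290, 15227))
y = rng.integers(0, 2, 290)                       # cancer vs. control labels

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=200),   # keep top-200 probes
                      LogisticRegression(max_iter=2000))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")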


Yoganand Pitta

Insightful Visualization: An Interactive Dashboard Uncovering Disease Patterns in Patient Healthcare Data

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

As Electronic Health Records (EHRs) become more available, there is increasing interest in discovering hidden disease patterns by leveraging cutting-edge data visualization techniques, such as graph-based knowledge representation and interactive graphical user interfaces (GUIs). In this project, we have developed a web-based interactive EHR analytics and visualization tool to provide healthcare professionals with valuable insights that can ultimately improve the quality and cost-efficiency of patient care. Specifically, we have developed two visualization panels: one providing intelligence on individual patients and the other showing the relevance among diseases. For individual patients, we capture the similarity between them by linking them based on their relatedness in diagnosis. By constructing a graph representation of patients based on this similarity, we can identify patterns and trends in patient data that may not be apparent through traditional methods. For disease relationships, we provide an ontology graph for a specific diagnosis (ICD-10 code), which helps to identify the ancestors and predecessors of a particular diagnosis. Through the demonstration of this dashboard, we show that this approach can provide valuable insights for better understanding patient outcomes with an informative and user-friendly web interface.
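As a minimal illustration of the patient-similarity idea, the Python sketch below links patients whose ICD-10 code sets overlap under a Jaccard measure using networkx. The codes and the 0.5 threshold are illustrative assumptions, not the dashboard's implementation.

from itertools import combinations
import networkx as nx

# Toy patients, each with a set of ICD-10 diagnosis codes.
patients = {
    "p1": {"E11.9", "I10", "N18.3"},
    "p2": {"E11.9", "I10"},
    "p3": {"J45.909"},
}

def jaccard(a, b):
    """Overlap of two diagnosis-code sets."""
    return len(a.intersection(b)) / len(a.union(b))

G = nx.Graph()
G.add_nodes_from(patients)
for p, q in combinations(patients, 2):
    sim = jaccard(patients[p], patients[q])
    if sim >= 0.5:                          # similarity threshold is an assumption
        G.add_edge(p, q, weight=sim)

print(list(G.edges(data=True)))             # e.g., an edge between p1 and p2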



Brandon Ravenscroft

Spectral Cohabitation and Interference Mitigation via Physical Radar Emissions

When & Where:


Nichols Hall, Room 129, Ron Evans Apollo Auditorium

Committee Members:

Shannon Blunt, Chair
Chris Allen
Erik Perrins
James Stiles
Chris Depcik

Abstract

Auctioning of frequency bands to support growing demand for high bandwidth 5G communications is driving research into spectral cohabitation strategies for next generation radar systems. The loss of radio frequency (RF) spectrum once designated for radar operation is forcing radar systems to either learn how to coexist in these frequency spectrum bands, without causing mutual interference, or move to other bands of the spectrum, the latter being the more undesirable choice. Two methods of spectral cohabitation are presented in this work, each taking advantage of recent developments in non-repeating, random FM (RFM) waveforms. RFM waveforms are designed via one of many different optimization procedures to have favorable radar waveform properties while also readily incorporating agile spectrum notches. The first method of spectral cohabitation uses these spectral notches to avoid narrow-band RF interference (RFI) in the form of other spectrum users residing in the same band as the radar system, allowing both to operate while minimizing mutual interference. The second method of spectral cohabitation uses spectral notches, along with an optimization procedure, to embed a communications signal into a dual-function radar/communications (DFRC) emission, allowing one waveform to serve both functions simultaneously. Results of simulation and open-air experimentation with physically realized, spectrally notched and DFRC emissions are shown which demonstrate the efficacy of these two methods of spectral cohabitation.


Divya Harshitha Challa

Crop Prediction Based on Soil Classification using Machine Learning with Classifier Ensembling

When & Where:


Zoom Meeting, please email jgrisafe@ku.edu for defense link.

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hongyang Sun


Abstract

Globally, agriculture is one of the most significant sectors, the backbone of many countries' economies, and an emerging field of research. There are many different types of soil, and each type has different characteristics for crops. Different methods and models are used daily in this domain to increase yields. The macronutrient and micronutrient content of the soil, along with climatic conditions such as rainfall, humidity, and temperature and the soil's pH, is largely responsible for crop growth. Consequently, farmers often cannot select the appropriate crops based on environmental and soil factors, and manual approaches to predicting the appropriate crops for a piece of land have frequently failed. We use machine learning techniques in this system to recommend crops based on soil classification or soil series. A comparative analysis of several popular classification algorithms, including K-Nearest Neighbors (KNN), Random Forest (RF), Decision Tree (DT), Support Vector Machines (SVM), Gaussian Naive Bayes (GNB), Gradient Boosting (GB), Extreme Gradient Boosting (XGBoost), and Voting Ensemble classifiers, is carried out in this work to assist in recommending the cultivable crop(s) that are most suitable for a particular piece of land depending on the characteristics of the soil and environment. To achieve our goal, we collected and preprocessed a large dataset of crop yield and environmental data from multiple sources. Our results show that the voting ensemble classifier outperforms the other classifiers in terms of prediction accuracy, achieving an accuracy of 94.67%. Feature importance analysis reveals that weather conditions such as temperature and rainfall, and fertilizer usage are the most critical factors in predicting crop yield.
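As an illustration of the ensembling step, the scikit-learn sketch below combines KNN, Random Forest, Gaussian Naive Bayes, and Gradient Boosting in a soft-voting classifier on synthetic data; the feature set and printed accuracy are stand-ins, not the reported 94.67% result.

from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for soil/weather features and crop labels.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

vote = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("gnb", GaussianNB()),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft")                          # average predicted probabilities
vote.fit(X_tr, y_tr)
print(f"hold-out accuracy: {vote.score(X_te, y_te):.2f}")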


Oluwanisola Ibikunle

DEEP LEARNING ALGORITHMS FOR RADAR ECHOGRAM LAYER TRACKING

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Shannon Blunt, Chair
John Paden (Co-Chair)
Carl Leuschen
Jilu Li
James Stiles

Abstract

The accelerated melting of ice sheets in the polar regions of the world, specifically in Greenland and Antarctica, due to contemporary climate warming is contributing to global sea level rise. To understand and quantify this phenomenon, airborne radars have been deployed to create echogram images that map snow accumulation patterns in these regions. Using advanced radar systems developed by the Center for Remote Sensing and Integrated Systems (CReSIS), a significant amount (1.5 petabytes) of climate data has been collected. However, the process of extracting ice phenomenology information, such as accumulation rate, from the data is limited. This is because the radar echograms require tracking of the internal layers, a task that is still largely manual and time-consuming. Therefore, there is a need for automated tracking.

Machine learning and deep learning algorithms are well-suited for this problem given their near-human performance on optical images. Moreover, the significant overlap between classical radar signal processing and machine learning techniques suggests that fusing concepts from both fields can lead to optimized solutions for the problem. However, supervised deep learning algorithms suffer from a circular problem: they first require large amounts of labeled training data, which do not currently exist.

In this work, we propose custom algorithms, including supervised, semi-supervised, and self-supervised approaches, to deal with the limited annotated data problem and achieve accurate tracking of radiostratigraphic layers in echograms. First, we propose an iterative multi-class classification algorithm, called “Row Block,” which sequentially tracks internal layers from the top to the bottom of an echogram given the surface location. We aim to use the trained iterative model in an active learning paradigm to progressively increase the labeled dataset. We also investigate various deep learning semantic segmentation algorithms by casting the echogram layer tracking problem as a binary and multiclass classification problem. These require post-processing to create the desired vector-layer annotations; hence, we propose a custom connected-component algorithm as a post-processing routine. Additionally, we propose end-to-end algorithms that avoid the post-processing and directly create annotations as vectors. Furthermore, we propose semi-supervised algorithms using weakly-labeled annotations and unsupervised algorithms that can learn the latent distribution of echogram snow layers while reconstructing echogram images from a sparse embedding representation.
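As a minimal illustration of connected-component post-processing, the Python sketch below converts a binary layer-pixel mask into one row-index vector per layer using scipy.ndimage.label; it is a generic routine, not the custom algorithm proposed in this work.

import numpy as np
from scipy import ndimage

def mask_to_layer_vectors(mask):
    """Convert a binary layer-pixel mask into one row-index vector per layer,
    taking the mean row of each connected component in every column."""
    labeled, n_layers = ndimage.label(mask)
    layers = np.full((n_layers, mask.shape[1]), np.nan)
    for comp in range(1, n_layers + 1):
        rows, cols = np.nonzero(labeled == comp)
        for c in np.unique(cols):
            layers[comp - 1, c] = rows[cols == c].mean()
    return layers

# Toy echogram mask with two horizontal layers.
mask = np.zeros((20, 30), dtype=bool)
mask[5, :] = True
mask[12, 3:27] = True
print(mask_to_layer_vectors(mask)[:, :5])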

A concurrent objective of this work is to provide the deep learning and science community with a large fully-annotated dataset. To achieve this, we propose synchronizing radar data with outputs from a regional climate model to provide a dataset with overlapping measurements that can enhance the performance of the trained models.