Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Ye Wang

Deceptive Signals: Unveiling and Countering Sensor Spoofing Attacks on Cyber Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Rongqing Hui
Bo Luo
Haiyang Chao

Abstract

In modern computer systems, sensors play a critical role in enabling a wide range of functionalities, from navigation in autonomous vehicles to environmental monitoring in smart homes. Acting as an interface between physical and digital worlds, sensors collect data to drive automated functionalities and decision-making. However, this reliance on sensor data introduces significant potential vulnerabilities, leading to various physical, sensor-enabled attacks such as spoofing, tampering, and signal injection. Sensor spoofing attacks, where adversaries manipulate sensor input or inject false data into target systems, pose serious risks to system security and privacy.

In this work, we have developed two novel sensor spoofing attack methods that significantly enhance both efficacy and practicality. The first method employs physical signals that are imperceptible to humans but detectable by sensors. Specifically, we target deep learning-based facial recognition systems using infrared lasers. By leveraging advanced laser modeling, simulation-guided targeting, and real-time physical adjustments, our infrared laser-based physical adversarial attack achieves high success rates with practical real-time guarantees, surpassing the limitations of prior physical perturbation attacks. The second method embeds physical signals, which are inherently present in the system, into legitimate patterns. In particular, we integrate trigger signals into standard operational patterns of actuators on mobile devices to construct remote logic bombs, which are shown to evade all existing detection mechanisms. Achieving a zero false-trigger rate with high success rates, this novel sensor bomb is highly effective and stealthy.

Our study on emerging sensor-based threats highlights the urgent need for comprehensive defenses against sensor spoofing. Along this direction, we design and investigate two defense strategies to mitigate these threats. The first strategy involves filtering out physical signals identified as potential attack vectors. The second strategy is to leverage beneficial physical signals to obfuscate malicious patterns and reinforce data integrity. For example, side channels targeting the same sensor can be used to introduce cover signals that prevent information leakage, while environment-based physical signals serve as signatures to authenticate data. Together, these strategies form a comprehensive defense framework that filters harmful sensor signals and utilizes beneficial ones, significantly enhancing the overall security of cyber systems.


Sravan Reddy Chintareddy

Combating Spectrum Crunch with Efficient Machine-Learning Based Spectrum Access and Harnessing High-frequency Bands for Next-G Wireless Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Erik Perrins
Dongjie Wang
Shawn Keshmiri

Abstract

The number of wireless devices is already over 14 billion and is expected to grow to 40 billion by 2030. In addition, we are witnessing an unprecedented proliferation of applications and technologies with wireless connectivity requirements, such as unmanned aerial vehicles, connected health, and radars for autonomous vehicles. The advent of new wireless technologies and devices will only worsen the spectrum crunch that service providers and wireless operators are already experiencing. In this PhD study, we address these challenges through the following research thrusts, in which we consider two emerging applications aimed at advancing spectrum efficiency and high-frequency connectivity solutions.

 

First, we focus on effectively utilizing the existing spectrum resources for emerging applications such as networked UAVs operating within the Unmanned Traffic Management (UTM) system. In this thrust, we develop a coexistence framework for UAVs to share spectrum with traditional cellular networks by using machine learning (ML) techniques so that networked UAVs act as secondary users without interfering with primary users. We propose federated learning (FL) and reinforcement learning (RL) solutions to establish a collaborative spectrum sensing and dynamic spectrum allocation framework for networked UAVs. In the second part, we explore the potential of millimeter-wave (mmWave) and terahertz (THz) frequency bands for high-speed data transmission in urban settings. Specifically, we investigate THz-based midhaul links for 5G networks, where a network's central units (CUs) connect to distributed units (DUs). Through numerical analysis, we assess the feasibility of using 140 GHz links and demonstrate the merits of high-frequency bands to support high data rates in midhaul networks for future urban communications infrastructure. Overall, this research is aimed at establishing frameworks and methodologies that contribute toward the sustainable growth and evolution of wireless connectivity.


Agraj Magotra

Data-Driven Insights into Sustainability: An Artificial Intelligence (AI) Powered Analysis of ESG Practices in the Textile and Apparel Industry

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Prasad Kulkarni
Zijun Yao


Abstract

The global textile and apparel (T&A) industry is under growing scrutiny for its substantial environmental and social impact, producing 92 million tons of waste annually and contributing to 20% of global water pollution. In Bangladesh, one of the world's largest apparel exporters, the integration of Environmental, Social, and Governance (ESG) practices is critical to meet international sustainability standards and maintain global competitiveness. This master's study leverages Artificial Intelligence (AI) and Machine Learning (ML) methodologies to comprehensively analyze unstructured corporate data related to ESG practices among LEED-certified Bangladeshi T&A factories. 

Our study employs advanced techniques, including Web Scraping, Natural Language Processing (NLP), and Topic Modeling, to extract and analyze sustainability-related information from factory websites. We develop a robust ML framework that utilizes Non-Negative Matrix Factorization (NMF) for topic extraction and a Random Forest classifier for ESG category prediction, achieving an 86% classification accuracy. The study uncovers four key ESG themes: Environmental Sustainability, Social: Workplace Safety and Compliance, Social: Education and Community Programs, and Governance. The analysis reveals that 46% of factories prioritize environmental initiatives, such as energy conservation and waste management, while 44% emphasize social aspects, including workplace safety and education. Governance practices are significantly underrepresented, with only 10% of companies addressing ethical governance, healthcare provisions, and employee welfare.
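The NMF topic-extraction step mentioned above can be sketched in plain Python with classical multiplicative updates; this is an illustrative toy (the study's actual pipeline uses library implementations and adds a Random Forest classifier on top, and all matrix values below are assumed):

```python
import random

def nmf(V, k, iters=300, seed=0):
    """Factor a nonnegative matrix V (list of rows) as V ~ W @ H
    using Lee-Seung multiplicative updates (squared-error form)."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]
    eps = 1e-9  # avoids division by zero

    def product():
        return [[sum(W[i][t] * H[t][j] for t in range(k)) for j in range(m)]
                for i in range(n)]

    for _ in range(iters):
        WH = product()
        for t in range(k):          # H <- H * (W^T V) / (W^T W H)
            for j in range(m):
                num = sum(W[i][t] * V[i][j] for i in range(n))
                den = sum(W[i][t] * WH[i][j] for i in range(n)) + eps
                H[t][j] *= num / den
        WH = product()
        for i in range(n):          # W <- W * (V H^T) / (W H H^T)
            for t in range(k):
                num = sum(V[i][j] * H[t][j] for j in range(m))
                den = sum(WH[i][j] * H[t][j] for j in range(m)) + eps
                W[i][t] *= num / den
    return W, H

# Toy document-term matrix with two obvious "topics" (columns 0/2 vs 1/3).
V = [[2, 0, 1, 0], [4, 0, 2, 0], [0, 1, 0, 2], [0, 2, 0, 4]]
W, H = nmf(V, k=2)
# Each document's dominant topic is the column of W with the largest weight;
# for this rank-2 matrix the reconstruction error approaches zero.
```

In practice each row of W gives a document's topic weights and each row of H gives a topic's term weights, which is what makes the extracted themes interpretable.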

To deepen our understanding of the ESG themes, we conducted a Centrality Analysis to identify the most influential keywords within each category, using measures such as degree, closeness, and eigenvector centrality. Furthermore, our analysis reveals that higher certification levels, like Platinum, are associated with a more balanced emphasis on environmental, social, and governance practices, while lower levels focus primarily on environmental efforts. These insights highlight key areas where the industry can improve and inform targeted strategies for enhancing ESG practices. Overall, this ML framework provides a data-driven, scalable approach for analyzing unstructured corporate data and promoting sustainability in Bangladesh’s T&A sector, offering actionable recommendations for industry stakeholders, policymakers, and global brands committed to responsible sourcing.


Shalmoli Ghosh

High-Power Fabry-Perot Quantum-Well Laser Diodes for Application in Multi-Channel Coherent Optical Communication Systems

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Jim Stiles


Abstract

Wavelength Division Multiplexing (WDM) is essential for managing rapid network traffic growth in fiber optic systems. Each WDM channel demands a narrow-linewidth, frequency-stabilized laser diode, leading to complexity and increased energy consumption. Multi-wavelength laser sources, generating optical frequency combs (OFC), offer an attractive solution, enabling a single laser diode to provide numerous equally spaced spectral lines for enhanced bandwidth efficiency.

Quantum-dot and quantum-dash OFCs provide phase-synchronized lines with low relative intensity noise (RIN), while Quantum Well (QW) OFCs offer higher power efficiency but exhibit higher RIN in the low-frequency region up to 2 GHz. In both quantum-dot/dash and QW-based OFCs, however, individual spectral lines exhibit high phase noise, limiting coherent detection. Output power levels of these OFCs range between 1-20 mW, where the power of each spectral line is typically less than -5 dBm. As a result, these OFCs require excessive optical amplification; they also possess relatively broad linewidths for each spectral line, due to the inverse relationship between optical power and linewidth given by the Schawlow-Townes formula. This constraint hampers their applicability in coherent detection systems, highlighting a challenge for achieving high-performance optical communication.
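The inverse power-linewidth scaling can be illustrated numerically; the power and linewidth values below are assumptions chosen only to show the proportionality, not measured values from this work:

```python
# Schawlow-Townes scaling sketch: the fundamental laser linewidth scales
# inversely with output power, delta_nu ~ 1/P (all numbers illustrative).

def dbm_to_mw(p_dbm):
    """Convert optical power from dBm to mW."""
    return 10 ** (p_dbm / 10)

def scaled_linewidth(ref_linewidth_hz, ref_power_mw, power_mw):
    """Scale a reference linewidth to a new power level via delta_nu ~ 1/P."""
    return ref_linewidth_hz * ref_power_mw / power_mw

# A hypothetical comb line at -5 dBm with a 1 MHz linewidth...
p_low = dbm_to_mw(-5)   # ~0.316 mW
lw_low = 1e6            # 1 MHz (assumed)

# ...narrows when the per-line power rises to +5 dBm (10x more power).
p_high = dbm_to_mw(5)   # ~3.16 mW
print(scaled_linewidth(lw_low, p_low, p_high))  # ~1e5 Hz (10x narrower)
```

This is why raising per-line power, as the FP laser in this work does, directly relaxes the linewidth constraint on coherent detection.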

In this work, coherent system application of a single-section Quantum-Well Fabry-Perot (FP) laser diode is demonstrated. This laser delivers over 120 mW optical power at the fiber pigtail with a mode spacing of 36.14 GHz. In an experimental setup, 20 spectral lines from a single laser transmitter carry 30 GBaud 16-QAM signals over 78.3 km single-mode fiber, achieving significant data transmission rates. With the potential to support a transmission capacity of 2.15 Tb/s (4.3 Tb/s for dual polarization) per transmitter, including Forward Error Correction (FEC) and maintenance overhead, it offers a promising solution for meeting the escalating demands of modern network traffic efficiently.


Anissa Khan

Privacy Preserving Biometric Matching

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Perry Alexander, Chair
Prasad Kulkarni
Fengjun Li


Abstract

Biometric matching is a process by which distinct features are used to identify an individual. Doing so privately is important because biometric data, such as fingerprints or facial features, cannot easily be changed or updated if put at risk. In this study, we perform a piece of the biometric matching process in a privacy-preserving manner by using secure multiparty computation (SMPC). Using SMPC allows the identifying biological data, called a template, to remain stored by the data owner during the matching process. This provides security guarantees to the biological data while it is in use and therefore reduces the chance that the data is stolen. In this study, we find that performing biometric matching using SMPC is just as accurate as performing the same match in plaintext.
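As a minimal sketch of the general idea, not the protocol used in this work, additive secret sharing lets two non-colluding servers jointly score a match while neither one holds the full template; the feature vectors and field modulus below are toy assumptions:

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative choice)

def share_vector(vec):
    """Split each template element into two additive shares mod P."""
    s1 = [random.randrange(P) for _ in vec]
    s2 = [(x - r) % P for x, r in zip(vec, s1)]
    return s1, s2

def partial_dot(probe, share):
    """Each server locally computes its share of <probe, template>."""
    return sum(p * s for p, s in zip(probe, share)) % P

template = [1, 0, 1, 1, 0]   # stored biometric feature vector (toy)
probe    = [1, 0, 1, 0, 0]   # features presented at match time (toy)

# Neither share alone reveals anything about the template.
s1, s2 = share_vector(template)

# Summing the two local results reconstructs the match score exactly.
# In this simplified sketch the probe is public to both servers; full SMPC
# matching (e.g. with Beaver triples) would also hide the probe.
score = (partial_dot(probe, s1) + partial_dot(probe, s2)) % P
assert score == sum(p * t for p, t in zip(probe, template))
```

The assertion at the end mirrors the study's finding in miniature: the shared computation produces exactly the same score as the plaintext match.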

 


Bryan Richlinski

Prioritize Program Diversity: Enumerative Synthesis with Entropy Ordering

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Sankha Guria, Chair
Perry Alexander
Drew Davidson
Jennifer Lohoefener

Abstract

Program synthesis is a popular way to create a correct-by-construction program from a user-provided specification. Term enumeration is a leading technique to systematically explore the space of programs by generating terms from a formal grammar. These terms are treated as candidate programs which are tested/verified against the specification for correctness. In order to prioritize candidates more likely to satisfy the specification, enumeration is often ordered by program size or other domain-specific heuristics. However, domain-specific heuristics require expert knowledge, and enumeration by size often leads to terms comprised of frequently repeating symbols that are less likely to satisfy a specification. In this thesis, we build a heuristic that prioritizes term enumeration based on the variability of individual symbols in the program, i.e., the information entropy of the program. We use this heuristic to order programs in both top-down and bottom-up enumeration. We evaluated our work on a subset of the PBE-String track of the 2017 SyGuS competition benchmarks and compared against size-based enumeration. In top-down enumeration, our entropy heuristic shortens runtime in ~56% of cases and tests fewer programs in ~80% of cases before finding a valid solution. For bottom-up enumeration, our entropy heuristic improves the number of enumerated programs in ~30% of cases before finding a valid solution, without improving the runtime. Our findings suggest that using entropy to prioritize program enumeration is a promising step forward for faster program synthesis.
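The entropy ordering can be sketched as follows; the grammar symbols are hypothetical and the thesis's actual scoring details may differ:

```python
import math
from collections import Counter

def symbol_entropy(term):
    """Shannon entropy (bits) of the symbol distribution in a candidate term."""
    counts = Counter(term)
    n = len(term)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

candidates = [
    ["concat", "x", "x", "x"],       # repeated symbols -> low entropy
    ["concat", "substr", "x", "y"],  # varied symbols   -> high entropy
]

# Enumerate higher-entropy (more symbol-diverse) candidates first.
ordered = sorted(candidates, key=symbol_entropy, reverse=True)
print([symbol_entropy(t) for t in ordered])  # ~[2.0, 0.81]
```

A term of four distinct symbols reaches the maximum entropy of 2 bits, while a term dominated by one repeating symbol scores much lower, which is exactly the bias against repetitive candidates described above.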


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

 

This research provides a deep dive into the npm-centric software supply chain, exploring various facets and phenomena that impact the security of this software supply chain. Such factors include (i) hidden code clones--which obscure provenance and can stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, and (v) package compromise via malicious updates. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 


Jagadeesh Sai Dokku

Intelligent Chat Bot for KU Website: Automated Query Response and Resource Navigation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

This project introduces an intelligent chatbot designed to improve user experience on our university website by providing instant, automated responses to common inquiries. Navigating a university website can be challenging for students, applicants, and visitors who seek quick information about admissions, campus services, events, and more. To address this challenge, we developed a chatbot that simulates human conversation using Natural Language Processing (NLP), allowing users to find information more efficiently.

The chatbot is powered by a Bidirectional Long Short-Term Memory (BiLSTM) model, an architecture well-suited for understanding complex sentence structures. This model captures contextual information from both directions in a sentence, enabling it to identify user intent with high accuracy. We trained the chatbot on a dataset of intent-labeled queries, enabling it to recognize specific intentions such as asking about campus facilities, academic programs, or event schedules.

The NLP pipeline includes steps like tokenization, lemmatization, and vectorization. Tokenization and lemmatization prepare the text by breaking it into manageable units and standardizing word forms, making it easier for the model to recognize similar word patterns. The vectorization process then translates this processed text into numerical data that the model can interpret.

Flask is used to manage the backend, allowing seamless communication between the user interface and the BiLSTM model. When a user submits a query, Flask routes the input to the model, processes the prediction, and delivers the appropriate response back to the user interface.

This chatbot demonstrates a successful application of NLP in creating interactive, efficient, and user-friendly solutions. By automating responses, it reduces reliance on manual support and ensures users can access relevant information at any time. This project highlights how intelligent chatbots can transform the way users interact with university websites, offering a faster and more engaging experience.
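The preprocessing steps described above can be sketched in miniature; the suffix-stripping rule below is a crude stand-in for true lemmatization (a real pipeline would use a full NLP library), and the vocabulary is hypothetical:

```python
import re
from collections import Counter

def tokenize(text):
    """Break text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def lemmatize(token):
    """Toy rule-based suffix stripper standing in for true lemmatization."""
    for suffix in ("ing", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def vectorize(text, vocabulary):
    """Bag-of-words count vector over a fixed vocabulary of lemmas."""
    counts = Counter(lemmatize(t) for t in tokenize(text))
    return [counts[w] for w in vocabulary]

vocab = ["event", "admission", "program", "schedule"]  # hypothetical intents
print(vectorize("When are admissions events happening?", vocab))  # [1, 1, 0, 0]
```

A vector like this, rather than raw text, is what a model such as the BiLSTM consumes; standardizing "admissions" and "events" to their base forms is what lets different phrasings of the same question land on the same features.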

 


Anahita Memar

Optimizing Protein Particle Classification: A Study on Smoothing Techniques and Model Performance

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Hossein Saiedian
Prajna Dhar


Abstract

This thesis investigates the impact of smoothing techniques on enhancing classification accuracy in protein particle datasets, focusing on both binary and multi-class configurations across three datasets. By applying methods including Averaging-Based Smoothing, Moving Average, Exponential Smoothing, Savitzky-Golay, and Kalman Smoothing, we sought to improve performance in Random Forest, Decision Tree, and Neural Network models. Initial baseline accuracies revealed the complexity of multi-class separability, while clustering analyses provided valuable insights into class similarities and distinctions, guiding our interpretation of classification challenges.

Our results indicate that Averaging-Based Smoothing and Moving Average techniques are particularly effective in enhancing classification accuracy, especially in configurations with marked differences in surfactant conditions. Feature importance analysis identified critical metrics, such as IntMean and IntMax, which played a significant role in distinguishing classes. Cross-validation validated the robustness of our models, with Random Forest and Neural Network consistently outperforming others in binary tasks and showing promising adaptability in multi-class classification. This study not only highlights the efficacy of smoothing techniques for improving classification in protein particle analysis but also offers a foundational approach for future research in biopharmaceutical data processing and analysis.
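Two of the smoothing techniques named above reduce, in miniature, to a few lines each; the series values are illustrative, not data from the study:

```python
def moving_average(series, window=3):
    """Trailing moving average; early points average over the values
    available so far, so the output length matches the input."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    return out

def exponential_smoothing(series, alpha=0.5):
    """s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

noisy = [1.0, 5.0, 2.0, 6.0, 3.0]
print(moving_average(noisy))         # ~[1.0, 3.0, 2.67, 4.33, 3.67]
print(exponential_smoothing(noisy))  # [1.0, 3.0, 2.5, 4.25, 3.625]
```

Both filters damp point-to-point noise before the features reach a classifier, which is the mechanism behind the accuracy gains reported above.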


Yousif Dafalla

Web-Armour: Mitigating Reconnaissance and Vulnerability Scanning with Injecting Scan-Impeding Delays in Web Deployments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
ZJ Wang

Abstract

Scanning hosts on the internet for vulnerable devices and services is a key step in numerous cyberattacks. Previous work has shown that scanning is a widespread phenomenon on the internet and commonly targets web application/server deployments. Given that automated scanning is a crucial step in many cyberattacks, it would be beneficial to make it more difficult for adversaries to perform such activity.

In this work, we propose Web-Armour, a mitigation approach to adversarial reconnaissance and vulnerability scanning of web deployments. The proposed approach relies on injecting scan-impeding delays into infrequently or rarely used portions of a web deployment. Web-Armour has two goals: First, increase the cost for attackers to perform automated reconnaissance and vulnerability scanning; Second, introduce minimal to negligible performance overhead to benign users of the deployment. We evaluate Web-Armour on live environments, operated by real users, and on different controlled (offline) scenarios. We show that Web-Armour can effectively thwart reconnaissance and internet-wide scanning.
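The core idea can be sketched as follows; this is a hypothetical illustration, not Web-Armour's implementation, and the threshold and delay parameters are assumptions:

```python
from collections import Counter

# Hypothetical sketch: delay responses only for rarely requested paths, so a
# scanner sweeping obscure endpoints is slowed while popular pages that benign
# users actually visit are served without any added latency.

hits = Counter()  # per-path request counts observed so far

def delay_for(path, threshold=10, max_delay_s=5.0):
    """Return a response delay in seconds; popular paths get none."""
    hits[path] += 1
    if hits[path] > threshold:
        return 0.0  # frequently used portion of the deployment
    # Rarely used paths: longer delay the less often they are seen.
    return max_delay_s * (1 - hits[path] / threshold)

print(delay_for("/backup.zip"))  # 4.5 s on a first-time probe of a rare path
```

A server would sleep for the returned duration before responding; because scanners probe thousands of such paths, the per-request delays compound into a large cost for the attacker while staying invisible on well-trafficked pages.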


Past Defense Notices


TJ Barclay

Proof-Producing Translation from Gallina to CakeML

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Users of theorem provers often desire to extract their verified code to a more efficient, compiled language. Coq's current extraction mechanism provides this facility but does not provide a formal guarantee that the extracted code has the same semantics as the logic it is extracted from. Providing such a guarantee requires a formal semantics for the target code. The CakeML project, implemented in HOL4, provides a formally defined syntax and semantics for a subset of SML and includes a proof-producing translator from higher-order logic to CakeML. We use the CakeML definition to develop a certifying extractor to CakeML from Gallina using the translation and proof techniques of the HOL4 CakeML translator. We also address how differences between HOL4 (higher-order logic) and Coq (calculus of constructions) affect the implementation details of the Coq translator.


Kabir Panahi

A Security Analysis of the Integration of Biometric Technology in the 2019 Afghan Presidential Election

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo

Abstract

Afghanistan deployed Biometric Voter Verification (BVV) technology nationally for the first time in the 2019 presidential election to address the systematic fraud in prior elections. Through semi-structured interviews with 18 key national and international stakeholders who had an active role in this election, this study investigates the gap between the intended outcomes of the BVV technology—focused on voter enfranchisement, fraud prevention, and public trust—and the reality on election day and beyond within the unique socio-political and technical landscape of Afghanistan.

Our findings reveal that while BVV technology initially promised a secure and transparent election, various technical and implementation challenges emerged, including threats to voters, staff, and officials. We found that the BVVs both supported and violated electoral goals: while they helped reduce fraud, they inadvertently disenfranchised some voters and caused delays that affected public trust. Technical limitations, usability issues, and administrative misalignments contributed to these outcomes. This study recommends critical lessons for future implementations of electoral technologies, emphasizing the importance of context-aware technological solutions and the need for robust administrative and technical frameworks to fully realize the potential benefits of election technology in fragile democracies.


Hara Madhav Talasila

Radiometric Calibration of Radar Depth Sounder Data Products

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Christopher Allen
James Stiles
Jilu Li
Leigh Stearns

Abstract

Although the Center for Remote Sensing of Ice Sheets (CReSIS) performs several radar calibration steps to produce Operation IceBridge (OIB) radar depth sounder data products, these datasets are not radiometrically calibrated and the swath array processing uses ideal (rather than measured [calibrated]) steering vectors. Any errors in the steering vectors, which describe the response of the radar as a function of arrival angle, will lead to errors in positioning and backscatter that subsequently affect estimates of basal conditions, ice thickness, and radar attenuation. Scientific applications that estimate physical characteristics of surface and subsurface targets from the backscatter are limited with the current data because it is not absolutely calibrated. Moreover, changes in instrument hardware and processing methods for OIB over the last decade affect the quality of inter-seasonal comparisons. Recent methods which interpret basal conditions and calculate radar attenuation using CReSIS OIB 2D radar depth sounder echograms are forced to use relative scattering power, rather than absolute methods.

As an active target calibration is not possible for past field seasons, a method that uses natural targets will be developed. Unsaturated natural target returns from smooth sea-ice leads or lakes are imaged in many datasets and have known scattering responses. The proposed method forms a system of linear equations with the recorded scattering signatures from these known targets, scattering signatures from crossing flight paths, and the radiometric correction terms. A least squares solution to optimize the radiometric correction terms is calculated, which minimizes the error function representing the mismatch in expected and measured scattering. The new correction terms will be used to correct the remaining mission data. The radar depth sounder data from all OIB campaigns can be reprocessed to produce absolutely calibrated echograms for the Arctic and Antarctic. A software simulator will be developed to study calibration errors and verify the calibration software. The software for processing natural targets and crossovers will be made available in CReSIS’s open-source polar radar software toolbox. The OIB data will be reprocessed with new calibration terms, providing to the data user community a complete set of radiometrically calibrated radar echograms for the CReSIS OIB radar depth sounder for the first time.
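The least-squares step can be illustrated with a toy two-parameter version; all numbers are assumed, and the actual system involves many correction terms, crossover constraints, and scattering signatures:

```python
def lstsq_2x2(A, b):
    """Solve min ||A c - b||^2 for a 2-element correction vector c via the
    normal equations (A^T A) c = A^T b, expanded by hand at this tiny size."""
    n = len(A)
    ata = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(2)]
           for i in range(2)]
    atb = [sum(A[r][i] * b[r] for r in range(n)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    c0 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    c1 = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return [c0, c1]

# Toy rows: a known natural target seen by segment 0, one seen by segment 1,
# and a crossover constraining the difference between the two segments; b
# holds the expected-minus-measured scattering mismatch (dB, illustrative).
A = [[1, 0], [0, 1], [1, -1]]
b = [2.0, 1.0, 1.0]
print(lstsq_2x2(A, b))  # [2.0, 1.0]: corrections consistent with all rows
```

The real solver works the same way at scale: each known target or crossing flight path contributes one row, and the least-squares solution distributes the mismatch consistently across all correction terms.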


Daniel Herr

Information Theoretic Waveform Design with Application to Physically Realizable Adaptive-on-Transmit Radar

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Committee Members:

James Stiles, Chair
Christopher Allen
Carl Leuschen
Chris Depcik

Abstract

The fundamental task of a radar system is to utilize the electromagnetic spectrum to sense a scattering environment and generate some estimate from this measurement. This task can be posed as a Bayesian estimation problem of random parameters (the scattering environment) through an imperfect sensor (the radar system). From this viewpoint, metrics such as error covariance and estimator precision (or information) can be leveraged to evaluate and improve the performance of radar systems. Here, physically realizable radar waveforms are designed to maximize the Fisher information (FI) (specifically, a derivative of FI known as marginal Fisher information (MFI)) extracted from a scattering environment thereby minimizing the expected error covariance about an estimation parameter space. This information theoretic framework, along with the high-degree of design flexibility afforded by fully digital transmitter and receiver architectures, creates a high-dimensionality design space for optimizing radar performance.

First, the problem of joint-domain range-Doppler estimation utilizing a pulse-agile radar is posed from an estimation theoretic framework, and the minimum mean square error (MMSE) estimator is shown to suppress the range-sidelobe modulation (RSM) induced by pulse agility, which may improve the signal-to-interference-plus-noise ratio (SINR) in signal-limited scenarios. A computationally efficient implementation of the range-Doppler MMSE estimator is developed as a series of range-profile estimation problems, under specific modeling and statistical assumptions. Next, a transformation of the estimation parameterization is introduced which ameliorates the high noise-gain typically associated with traditional MMSE estimation by sacrificing the super-resolution achieved by the MMSE estimator. Then, coordinate descent and gradient descent optimization methods are developed for designing MFI optimal waveforms with respect to either the original or transformed estimation space. These MFI optimal waveforms are extended to provide pulse-agility, which produces high-dimensionality radar emissions amenable to non-traditional receive processing techniques (such as MMSE estimation). Finally, informationally optimal waveform design and optimal estimation are extended into a cognitive radar concept capable of adaptive and dynamic sensing. The efficacy of the MFI waveform design and MMSE estimation is demonstrated via open-air hardware experimentation, where their performance is compared against traditional techniques.


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions to distinct and specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capabilities modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers through a volume-of-interest. Conversely, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar methodologies that seek to illuminate all space on transmit, while forming separate but simultaneous, directive beams on receive.

Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation.  In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally-continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error minimization techniques. First, a gradient-based optimization of the Space-Frequency Template Error (SFTE) is applied to a high-fidelity model for a wideband array’s far-field emission. Second, a more efficient optimization is considered, based on the SFTE for narrowband arrays. Finally, a suboptimal solution, based on alternating projections, is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars employing pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived that manages to suppress both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. The proposed waveforms and filters are implemented in hardware to demonstrate performance, validate robustness, and reflect real-world application to the degree possible with laboratory experimentation.


Anjana Lamsal

Self-homodyne Coherent Lidar System for Range and Velocity Detection

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Alessandro Salandrino
James Stiles


Abstract

Lidar systems are gaining popularity due to their benefits, including high resolution, accuracy, and scalability. A frequency-modulated continuous-wave (FMCW) lidar based on a self-homodyne coherent detection technique is used for range and velocity measurement with a phase-diverse coherent receiver. In self-homodyne detection, the local oscillator (LO) signal is derived directly from the same laser source as the transmitted signal and carries the same linear chirp, thereby minimizing phase noise. A coherent receiver extracts the in-phase and quadrature components of the photocurrent and performs de-chirping: because the LO has the same chirp as the transmitted signal, the mixing process in the photodiodes cancels the frequency modulation of the received signal. The spectrum of the de-chirped complex waveform then determines the range and velocity of the target. This lidar system simplifies signal processing by using the photodetectors themselves for de-chirping; moreover, the de-chirped signal has a much narrower bandwidth than the original chirp, so subsequent processing can be performed at lower frequencies.
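The de-chirping principle can be illustrated with a small numerical model: mixing a delayed copy of a linear chirp with the chirped LO leaves a constant beat tone whose frequency is proportional to range. The chirp bandwidth, duration, and target range below are illustrative assumptions, not the system's actual parameters, and the Doppler term is omitted for brevity (a moving target would simply offset the beat frequency, which is why comparing up- and down-chirps separates range from velocity).

```python
import numpy as np

c = 3e8                          # speed of light (m/s)
B, T = 1.0e9, 10e-6              # chirp bandwidth (Hz) and duration (s) -- illustrative
k = B / T                        # chirp rate (Hz/s)
fs = 100e6                       # sample rate of the de-chirped signal (Hz)
t = np.arange(int(round(T * fs))) / fs

R_true = 15.0                    # target range (m) -- illustrative
tau = 2 * R_true / c             # round-trip delay

tx = np.exp(1j * np.pi * k * t**2)           # transmitted chirp (complex baseband)
rx = np.exp(1j * np.pi * k * (t - tau)**2)   # delayed echo (Doppler omitted)

# self-homodyne mixing: the LO carries the same chirp as the transmitted
# signal, so conjugate multiplication cancels the frequency modulation,
# leaving a single beat tone at f_beat = k * tau
beat = rx * np.conj(tx)

spec = np.abs(np.fft.fft(beat))
f = np.fft.fftfreq(len(beat), 1 / fs)
f_beat = abs(f[np.argmax(spec)])
R_est = f_beat * c / (2 * k)                 # range recovered from the beat frequency
```

Note that the beat tone occupies only tens of MHz even though the chirp spans 1 GHz, which is the bandwidth-reduction benefit the abstract describes.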


Michael Neises

VERIAL: Verification-Enabled Runtime Integrity Attestation of Linux

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Drew Davidson
Cuncong Zhong
Matthew Moore
Michael Murray

Abstract

Runtime attestation is a way to gain confidence in the current state of a remote target, and layered attestation extends that confidence from one component to another. Introspective solutions for layered attestation require strict isolation, and the seL4 microkernel is uniquely well-suited to provide kernel properties sufficient to achieve it. I design, implement, and evaluate introspective measurements and the layered runtime attestation of a Linux kernel hosted by seL4. VERIAL can detect Diamorphine-style rootkits with a performance cost comparable to previous work.

Ibikunle Oluwanisola

Towards Generalizable Deep Learning Algorithms for Echogram Layer Tracking

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Shannon Blunt, Chair
Carl Leuschen
James Stiles
Christopher Depcik

Abstract

The accelerated melting of ice sheets in Greenland and Antarctica, driven by climate warming, is significantly contributing to global sea level rise. To better understand this phenomenon, airborne radars have been deployed to create echogram images that map snow accumulation patterns in these regions. Utilizing advanced radar systems developed by the Center for Remote Sensing and Integrated Systems (CReSIS), around 1.5 petabytes of climate data have been collected. However, extracting ice-related information, such as accumulation rates, remains limited due to the largely manual and time-consuming process of tracking internal layers in radar echograms. This highlights the need for automated solutions.

Machine learning and deep learning algorithms are well-suited for this task, given their near-human performance on optical images. The overlap between classical radar signal processing and machine learning techniques suggests that combining concepts from both fields could lead to optimized solutions.

In this work, we developed custom deep learning algorithms for automatic layer tracking (both supervised and self-supervised) to address the challenge of limited annotated data and achieve accurate tracking of radiostratigraphic layers in echograms. We introduce an iterative multi-class classification algorithm, termed “Row Block,” which sequentially tracks internal layers from the top to the bottom of an echogram based on the surface location. This approach was used in an active learning framework to expand the labeled dataset. We also developed deep learning segmentation algorithms by framing the echogram layer tracking problem as a binary segmentation task, followed by post-processing to generate vector-layer annotations using a connected-component 1-D layer-contour extractor.
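The abstract's connected-component 1-D layer-contour extraction step can be sketched in a simplified form: label each connected blob of foreground pixels in the binary segmentation mask, then reduce each blob to one row index per echogram column. The function below is an illustrative pure-Python assumption of how such a post-processing pass might look, not the dissertation's actual implementation.

```python
import numpy as np
from collections import deque

def extract_layers(mask):
    """Turn a binary segmentation mask into per-layer row contours.

    Each 8-connected component of foreground pixels is treated as one
    internal layer; its contour is the mean row index in every column
    the component touches (NaN where it is absent).
    """
    rows, cols = mask.shape
    labels = np.zeros((rows, cols), dtype=int)
    n = 0
    # breadth-first flood fill to label 8-connected components
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and labels[r, c] == 0:
                n += 1
                labels[r, c] = n
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            yy, xx = y + dy, x + dx
                            if (0 <= yy < rows and 0 <= xx < cols
                                    and mask[yy, xx] and labels[yy, xx] == 0):
                                labels[yy, xx] = n
                                q.append((yy, xx))
    # collapse each labeled component to a 1-D contour (vector annotation)
    contours = []
    for lab in range(1, n + 1):
        contour = np.full(cols, np.nan)
        for c in range(cols):
            ys = np.nonzero(labels[:, c] == lab)[0]
            if ys.size:
                contour[c] = ys.mean()
        contours.append(contour)
    return contours
```

For example, a mask containing two horizontal bands of foreground pixels yields two contours, one per tracked layer.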

Additionally, we aimed to provide the deep learning and scientific communities with a large, fully annotated dataset. This was achieved by synchronizing radar data with outputs from a regional climate model, creating what are currently the two largest machine-learning-ready Snow Radar datasets available, with 10,000 and 50,000 echograms, respectively.


Durga Venkata Suraj Tedla

AI DIETICIAN

When & Where:


Zoom Defense, please email jgrisafe@ku.edu for defense link.

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Jennifer Lohoefener


Abstract

The AI Dietician web application uses artificial intelligence to offer individualized nutritional guidance and assistance. It applies machine learning algorithms and natural language processing to provide users with personalized nutritional advice and help with meal planning, benefiting anyone who wants to improve their eating habits. Through interactive conversations, the system collects relevant data about users' dietary choices and caloric intake, provides insights into body mass index (BMI) and basal metabolic rate (BMR), and produces tailored recommendations. To enhance its predictive capacity, several classification methods, including naive Bayes, neural networks, random forests, and support vector machines, were trained and evaluated. Following this analysis, the most effective model, a random forest, was selected for incorporation into the application. This study emphasizes the significance of the AI Dietician web application as a versatile and intelligent tool that encourages healthy eating habits and empowers users to make informed decisions about their dietary requirements.


Mohammed Atif Siddiqui

Understanding Soccer Through Data Science

When & Where:


Learned Hall, Room 2133

Committee Members:

Zijun Yao, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

Data science is revolutionizing the world of sports by uncovering hidden patterns and providing profound insights that enhance performance, strategy, and decision-making. This project, "Understanding Soccer Through Data Science," exemplifies the transformative power of data analytics in sports. By leveraging Graph Neural Networks (GNNs), this project delves deep into the intricate passing dynamics within soccer teams. 

A key innovation of this project is the development of a novel metric called PassNetScore, which aims to contextualize and provide meaningful insights into passing networks—a popular application of graph theory in soccer. Utilizing the StatsBomb Event Data, which captures every event during a soccer match, including passes, shots, fouls, and substitutions, this project constructs detailed passing network graphs. Each player is represented as a node, and each pass as an edge, creating a comprehensive representation of team interactions on the pitch. The project harnesses the power of Spektral, a Python library for graph deep learning, to build and analyze these graphs. Key node features include players' average positions, total passes, and expected threat of passes, while edges encapsulate the passing interactions and pass counts.
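The node-and-edge representation described above can be made concrete with a small sketch: count completed passes between every pair of players to form a weighted adjacency matrix. The player names and pass tuples below are invented for illustration, and the real pipeline would derive them from StatsBomb event records.

```python
import numpy as np

def passing_network(players, passes):
    """Build a weighted adjacency matrix from completed-pass events.

    players: list of player identifiers (graph nodes)
    passes:  list of (passer, receiver) tuples (graph edges)
    Entry [i, j] counts passes from player i to player j.
    """
    idx = {p: i for i, p in enumerate(players)}
    adj = np.zeros((len(players), len(players)), dtype=int)
    for passer, receiver in passes:
        adj[idx[passer], idx[receiver]] += 1
    return adj

players = ["GK", "CB", "ST"]                          # hypothetical roster
passes = [("GK", "CB"), ("CB", "ST"), ("GK", "CB")]   # hypothetical events
adj = passing_network(players, passes)
binary_adj = (adj > 0).astype(int)   # presence/absence form, as in the basic GNN model
```

The binary form corresponds to the first model's adjacency input, while the raw counts can serve as edge weights in the feature-enriched variants.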

The project explores two distinct models to calculate PassNetScore by predicting match outcomes. The first model is a basic GNN that employs a binary adjacency matrix to represent the presence or absence of passes between players. This model captures the fundamental structure of passing networks, highlighting key players and connections within the team. There are three variations of this model, each building on the binary model by adding new features to nodes or edges. The second model integrates a GNN with Long Short-Term Memory (LSTM) networks to account for temporal dependencies in passing sequences. This advanced model provides deeper insights into how passing patterns evolve over time and how these dynamics impact match outcomes. To evaluate the effectiveness of these models, a suite of graph theory metrics is employed. These metrics illuminate the dynamics of team play and the influence of individual players, offering a comprehensive assessment of the PassNetScore metric.

Through this innovative approach, the project demonstrates the powerful application of GNNs in sports analytics and offers a novel metric for evaluating passing networks based on match outcomes. This project paves the way for new strategies and insights that could revolutionize how teams analyze and improve their gameplay, showcasing the profound impact of data science in sports.