Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Zhaohui Wang

Enhancing Security and Privacy of IoT Systems: Uncovering and Resolving Cross-App Threats

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to our daily lives, enabling users to customize automation rules and develop IoT apps to meet their specific needs. However, as IoT devices interact with multiple apps across various platforms, users are exposed to complex security and privacy risks. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats.

In this work, we introduce two innovative approaches to uncover and address these concealed threats in IoT environments. The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app chains that are formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores to evaluate the risks based on inferences. Additionally, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks. To systematically detect cross-app interference threats, we propose to apply principles of logical fallacies to formalize conflicts in rule interactions. We identify and categorize cross-app interference by examining relations between events in IoT apps. We define new risk metrics for evaluating the severity of these interferences and use optimization techniques to resolve interference threats efficiently. This approach ensures comprehensive coverage of cross-app interference, offering a systematic solution compared to the ad hoc methods used in previous research.

To enhance forensic capabilities within IoT, we integrate blockchain technology to create a secure, immutable framework for digital forensics. This framework enables the identification, tracing, storage, and analysis of forensic information to detect anomalous behavior. Furthermore, we developed a large-scale, manually verified, comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. Each of these approaches has been evaluated using our dataset of real-world apps, collectively offering valuable insights and tools for enhancing IoT security and privacy against cross-app threats.


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow a large amount of information to be processed simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, most of them based on variational quantum circuits. However, existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations: low accuracy, high execution times, and poor scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

 In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our algorithm using the multidimensional Poisson equation as a case study. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. We will also focus on extending these techniques to PDEs relevant to computational fluid dynamics and financial modeling, further bridging the gap between theoretical quantum algorithms and practical applications.
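The finite-difference starting point shared by both variants can be illustrated classically (a sketch, not the authors' implementation): discretize the 1-D Poisson equation -u''(x) = f(x) into a tridiagonal linear system, the kind of system the C2Q encoding step would then map onto qubits.

```python
import numpy as np

# 1-D Poisson equation -u''(x) = f(x) on (0,1) with u(0)=u(1)=0,
# discretized by the finite-difference method (FDM).
def poisson_fdm_solve(f, n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)  # interior grid points
    # Tridiagonal FDM matrix: (2 on the diagonal, -1 off-diagonal) / h^2
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

# Manufactured solution u(x) = sin(pi x)  =>  f(x) = pi^2 sin(pi x)
x, u = poisson_fdm_solve(lambda x: np.pi**2 * np.sin(np.pi * x), 127)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Against the manufactured solution, the maximum error shrinks quadratically as the grid is refined, reflecting the second-order accuracy of the FDM stencil.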


Venkata Sai Krishna Chaitanya Addepalli

A Comprehensive Approach to Facial Emotion Recognition: Integrating Established Techniques with a Tailored Model

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Facial emotion recognition has become a pivotal application of machine learning, enabling advancements in human-computer interaction, behavioral analysis, and mental health monitoring. Despite its potential, challenges such as data imbalance, variation in expressions, and noisy datasets often hinder accurate prediction.

 This project presents a novel approach to facial emotion recognition by integrating established techniques like data augmentation and regularization with a tailored convolutional neural network (CNN) architecture. Using the FER2013 dataset, the study explores the impact of incremental architectural improvements, optimized hyperparameters, and dropout layers to enhance model performance.

 The proposed model effectively addresses issues related to data imbalance and overfitting while achieving enhanced accuracy and precision in emotion classification. The study underscores the importance of feature extraction through convolutional layers and optimized fully connected networks for efficient emotion recognition. The results demonstrate improvements in generalization, setting a foundation for future real-time applications in diverse fields. 


Ye Wang

Deceptive Signals: Unveiling and Countering Sensor Spoofing Attacks on Cyber Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Committee Members:

Fengjun Li, Chair
Drew Davidson
Rongqing Hui
Bo Luo
Haiyang Chao

Abstract

In modern computer systems, sensors play a critical role in enabling a wide range of functionalities, from navigation in autonomous vehicles to environmental monitoring in smart homes. Acting as an interface between physical and digital worlds, sensors collect data to drive automated functionalities and decision-making. However, this reliance on sensor data introduces significant potential vulnerabilities, leading to various physical, sensor-enabled attacks such as spoofing, tampering, and signal injection. Sensor spoofing attacks, where adversaries manipulate sensor input or inject false data into target systems, pose serious risks to system security and privacy.

In this work, we have developed two novel sensor spoofing attack methods that significantly enhance both efficacy and practicality. The first method employs physical signals that are imperceptible to humans but detectable by sensors. Specifically, we target deep learning based facial recognition systems using infrared lasers. By leveraging advanced laser modeling, simulation-guided targeting, and real-time physical adjustments, our infrared laser-based physical adversarial attack achieves high success rates with practical real-time guarantees, surpassing the limitations of prior physical perturbation attacks. The second method embeds physical signals, which are inherently present in the system, into legitimate patterns. In particular, we integrate trigger signals into standard operational patterns of actuators on mobile devices to construct remote logic bombs, which are shown to be able to evade all existing detection mechanisms. Achieving a zero false-trigger rate with high success rates, this novel sensor bomb is highly effective and stealthy.

Our study on emerging sensor-based threats highlights the urgent need for comprehensive defenses against sensor spoofing. Along this direction, we design and investigate two defense strategies to mitigate these threats. The first strategy involves filtering out physical signals identified as potential attack vectors. The second strategy is to leverage beneficial physical signals to obfuscate malicious patterns and reinforce data integrity. For example, side channels targeting the same sensor can be used to introduce cover signals that prevent information leakage, while environment-based physical signals serve as signatures to authenticate data. Together, these strategies form a comprehensive defense framework that filters harmful sensor signals and utilizes beneficial ones, significantly enhancing the overall security of cyber systems.


SM Ishraq-Ul Islam

Quantum Circuit Synthesis using Genetic Algorithms Combined with Fuzzy Logic

When & Where:


LEEP2, Room 1420

Committee Members:

Esam El-Araby, Chair
Tamzidul Hoque
Prasad Kulkarni


Abstract

Quantum computing emerges as a promising direction for high-performance computing in the post-Moore era. Leveraging quantum mechanical properties, quantum devices can theoretically provide significant speedup over classical computers in certain problem domains. Quantum algorithms are typically expressed as quantum circuits composed of quantum gates, or as unitary matrices. Execution of quantum algorithms on physical devices requires translation to machine-compatible circuits -- a process referred to as quantum compilation or synthesis.

Quantum synthesis is a challenging problem. Physical quantum devices support a limited number of native basis gates, requiring synthesized circuits to be composed of only these gates. Moreover, quantum devices typically have specific qubit topologies, which constrain how and where gates can be applied. Consequently, logical qubits in input circuits and unitaries may need to be mapped to and routed between physical qubits on the device.

Current Noisy Intermediate-Scale Quantum (NISQ) devices present additional constraints through their gate errors and high susceptibility to noise. NISQ devices are vulnerable to errors during gate application, and their short decoherence times lead to qubits rapidly succumbing to accumulated noise, possibly corrupting computations. Therefore, circuits synthesized for NISQ devices need a low gate count to reduce gate errors and short execution times to avoid qubit decoherence.

The problem of synthesizing device-compatible quantum circuits, while optimizing for low gate count and short execution times, can be shown to be computationally intractable using analytical methods. Therefore, interest has grown towards heuristics-based compilation techniques, which are able to produce approximations of the desired algorithm to a required degree of precision. In this work, we investigate using Genetic Algorithms (GAs) -- a proven gradient-free optimization technique based on natural selection -- for circuit synthesis. In particular, we formulate the quantum synthesis problem as a multi-objective optimization (MOO) problem, with the objectives of minimizing the approximation error, number of multi-qubit gates, and circuit depth. We also employ fuzzy logic for runtime parameter adaptation of GA to enhance search efficiency and solution quality of our proposed quantum synthesis method.
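The selection/crossover/mutation loop at the heart of any GA can be sketched on a toy bit-string problem (a generic single-objective illustration; the actual method is multi-objective, operates on quantum circuits, and adapts its parameters with fuzzy logic):

```python
import random

# Generic GA sketch: evolve bit-strings toward a toy "ideal" encoding.
rng = random.Random(1)
TARGET = [1] * 16  # toy target, standing in for a desired circuit encoding

def fitness(ind):  # higher = closer to target
    return sum(a == b for a, b in zip(ind, TARGET))

def evolve(pop, generations=60):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]  # truncation selection (elitist)
        children = []
        while len(children) < len(pop) - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = p1[:cut] + p2[cut:]      # one-point crossover
            i = rng.randrange(len(child))
            child[i] ^= rng.random() < 0.2   # occasional bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(30)]
best = evolve(pop)
```

Because the parent half of the population is carried over each generation, the best fitness never decreases, which is the elitism that makes simple GAs reliable on toy landscapes like this one.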


Sravan Reddy Chintareddy

Combating Spectrum Crunch with Efficient Machine-Learning Based Spectrum Access and Harnessing High-frequency Bands for Next-G Wireless Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Erik Perrins
Dongjie Wang
Shawn Keshmiri

Abstract

The number of wireless devices is already over 14 billion and is expected to grow to 40 billion by 2030. In addition, we are witnessing an unprecedented proliferation of applications and technologies with wireless connectivity requirements, such as unmanned aerial vehicles, connected health, and radars for autonomous vehicles. The advent of new wireless technologies and devices will only worsen the current spectrum crunch that service providers and wireless operators are already experiencing. In this PhD study, we address these challenges through the following research thrusts, in which we consider two emerging applications aimed at advancing spectrum efficiency and high-frequency connectivity solutions.

 

First, we focus on effectively utilizing the existing spectrum resources for emerging applications such as networked UAVs operating within the Unmanned Traffic Management (UTM) system. In this thrust, we develop a coexistence framework for UAVs to share spectrum with traditional cellular networks by using machine learning (ML) techniques so that networked UAVs act as secondary users without interfering with primary users. We propose federated learning (FL) and reinforcement learning (RL) solutions to establish a collaborative spectrum sensing and dynamic spectrum allocation framework for networked UAVs. In the second part, we explore the potential of millimeter-wave (mmWave) and terahertz (THz) frequency bands for high-speed data transmission in urban settings. Specifically, we investigate THz-based midhaul links for 5G networks, where a network's central units (CUs) connect to distributed units (DUs). Through numerical analysis, we assess the feasibility of using 140 GHz links and demonstrate the merits of high-frequency bands to support high data rates in midhaul networks for future urban communications infrastructure. Overall, this research is aimed at establishing frameworks and methodologies that contribute toward the sustainable growth and evolution of wireless connectivity.
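The aggregation step underlying federated learning can be sketched directly (toy weight vectors, not the proposed FL solution): each UAV trains locally and shares only model parameters, never raw spectrum measurements, and the server averages them.

```python
# Toy Federated Averaging: element-wise mean of client model weights.
def fed_avg(client_weights):
    n = len(client_weights)
    return [sum(w[i] for w in client_weights) / n
            for i in range(len(client_weights[0]))]

# Illustrative local model weights from three UAV clients.
uav_models = [[0.2, 0.4], [0.4, 0.6], [0.6, 0.8]]
global_model = fed_avg(uav_models)
```

The privacy appeal is visible even in this caricature: the server sees only the averaged parameters, while the sensing data that produced them stays on each UAV.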


Arnab Mukherjee

Attention-Based Solutions for Occlusion Challenges in Person Tracking

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Sumaiya Shomaji
Hongyang Sun
Jian Li

Abstract

Person tracking and association is a complex task in computer vision applications. Even with a powerful detector, a highly accurate association algorithm is necessary to match and track the correct person across all frames. This method has numerous applications in surveillance, and its complexity increases with the number of detected objects and their movements across frames. A significant challenge in person tracking is occlusion, which occurs when an individual being tracked is partially or fully blocked by another object or person. This can make it difficult for the tracking system to maintain the identity of the individual and track them effectively.

In this research, we propose a solution to the occlusion problem by utilizing an occlusion-aware spatial attention transformer. We have divided the entire tracking association process into two scenarios: occlusion and no-occlusion. When a detected person with a specific ID suddenly disappears from a frame for a certain period, we employ advanced methods such as Detector Integration and Pose Estimation to ensure the correct association. Additionally, we implement a spatial attention transformer to differentiate these occluded detections, transform them, and then match them with the correct individual using the Cosine Similarity Metric.
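The re-association step based on the Cosine Similarity Metric can be sketched as follows (the feature vectors are toy values, not transformer embeddings): compare an occluded detection's features against stored identity embeddings and pick the closest one.

```python
import math

# Cosine similarity between two feature vectors.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy gallery of identity embeddings and one occluded detection.
gallery = {"id_7": [0.9, 0.1, 0.3], "id_12": [0.1, 0.8, 0.2]}
occluded = [0.85, 0.15, 0.25]
best_id = max(gallery, key=lambda k: cosine_similarity(occluded, gallery[k]))
```

Cosine similarity ignores vector magnitude and compares only direction, which is why it is a common choice for matching appearance embeddings across frames.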

The features extracted from the attention transformer provide a robust baseline for detecting people, enhancing the algorithm's adaptability and addressing key challenges associated with existing approaches. This improved method reduces the number of misidentifications and instances of ID switching while also enhancing tracking accuracy and precision.


Agraj Magotra

Data-Driven Insights into Sustainability: An Artificial Intelligence (AI) Powered Analysis of ESG Practices in the Textile and Apparel Industry

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Prasad Kulkarni
Zijun Yao


Abstract

The global textile and apparel (T&A) industry is under growing scrutiny for its substantial environmental and social impact, producing 92 million tons of waste annually and contributing to 20% of global water pollution. In Bangladesh, one of the world's largest apparel exporters, the integration of Environmental, Social, and Governance (ESG) practices is critical to meet international sustainability standards and maintain global competitiveness. This master's study leverages Artificial Intelligence (AI) and Machine Learning (ML) methodologies to comprehensively analyze unstructured corporate data related to ESG practices among LEED-certified Bangladeshi T&A factories. 

Our study employs advanced techniques, including Web Scraping, Natural Language Processing (NLP), and Topic Modeling, to extract and analyze sustainability-related information from factory websites. We develop a robust ML framework that utilizes Non-Negative Matrix Factorization (NMF) for topic extraction and a Random Forest classifier for ESG category prediction, achieving an 86% classification accuracy. The study uncovers four key ESG themes: Environmental Sustainability, Social: Workplace Safety and Compliance, Social: Education and Community Programs, and Governance. The analysis reveals that 46% of factories prioritize environmental initiatives, such as energy conservation and waste management, while 44% emphasize social aspects, including workplace safety and education. Governance practices are significantly underrepresented, with only 10% of companies addressing ethical governance, healthcare provisions, and employee welfare.
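The NMF topic-extraction step can be illustrated with the classic multiplicative-update rules on a tiny random term-document matrix (illustrative data, not the study's scraped corpus): the matrix V is factored into nonnegative term-topic weights W and topic-document weights H.

```python
import numpy as np

# Minimal NMF via Lee-Seung multiplicative updates (Frobenius objective).
rng = np.random.default_rng(0)
V = rng.random((8, 6))         # toy 8-"term" x 6-"document" matrix
k = 2                          # number of topics
W = rng.random((8, k)) + 0.1   # term-topic weights (kept nonnegative)
H = rng.random((k, 6)) + 0.1   # topic-document weights (kept nonnegative)

eps = 1e-9
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(V - W @ H)  # reconstruction error after factoring
```

In a text pipeline, the columns of W with the largest weights per topic give the topic's top keywords, which is the form in which NMF topics are usually read.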

To deepen our understanding of the ESG themes, we conducted a Centrality Analysis to identify the most influential keywords within each category, using measures such as degree, closeness, and eigenvector centrality. Furthermore, our analysis reveals that higher certification levels, like Platinum, are associated with a more balanced emphasis on environmental, social, and governance practices, while lower levels focus primarily on environmental efforts. These insights highlight key areas where the industry can improve and inform targeted strategies for enhancing ESG practices. Overall, this ML framework provides a data-driven, scalable approach for analyzing unstructured corporate data and promoting sustainability in Bangladesh’s T&A sector, offering actionable recommendations for industry stakeholders, policymakers, and global brands committed to responsible sourcing.


Samyoga Bhattarai

Pro-ID: A Secure Face Recognition System using Locality Sensitive Hashing to Protect Human ID

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

Face recognition systems are widely used in various applications, from mobile banking apps to personal smartphones. However, these systems often store biometric templates in raw form, posing significant security and privacy risks. Pro-ID addresses this vulnerability by incorporating SimHash, a Locality Sensitive Hashing (LSH) algorithm, to create secure and irreversible hash codes of facial feature vectors. Unlike traditional methods that leave raw data exposed to potential breaches, SimHash transforms the feature space into high-dimensional hash codes, safeguarding user identity while preserving system functionality. 
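The hashing idea can be sketched in a few lines (a generic SimHash illustration with made-up parameters, not the Pro-ID implementation): project the feature vector onto random hyperplanes and keep only the sign bits, so similar faces yield nearby bit-strings while the original features cannot be read back from the bits.

```python
import random

# SimHash: one bit per random hyperplane, set by the sign of the projection.
def simhash(vec, planes):
    return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0 else 0
                 for plane in planes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(42)
planes = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(64)]  # 64-bit hash

face = [0.9, -0.2, 0.4, 0.1, -0.7, 0.3, 0.0, 0.5]  # toy enrolled template
probe = [v + 0.05 for v in face]                    # same person, slight noise
other = [-v for v in face]                          # very different vector

d_same = hamming(simhash(face, planes), simhash(probe, planes))
d_diff = hamming(simhash(face, planes), simhash(other, planes))
```

Matching then reduces to comparing Hamming distances between hash codes, so the raw template never needs to leave storage in recoverable form.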

The proposed system creates a balance between two aspects: security and the system’s performance. Additionally, the system is designed to resist common attacks, including brute force and template inversion, ensuring that even if the hashed templates are exposed, the original biometric data cannot be reconstructed.  

A key challenge addressed in this project is minimizing the trade-off between security and performance. Extensive evaluations demonstrate that the proposed method maintains competitive accuracy rates comparable to traditional face recognition systems while significantly enhancing security metrics such as irreversibility, unlinkability, and revocability. This innovative approach contributes to advancing the reliability and trustworthiness of biometric systems, providing a secure framework for applications in face recognition systems. 


Shalmoli Ghosh

High-Power Fabry-Perot Quantum-Well Laser Diodes for Application in Multi-Channel Coherent Optical Communication Systems

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
Jim Stiles


Abstract

Wavelength Division Multiplexing (WDM) is essential for managing rapid network traffic growth in fiber optic systems. Each WDM channel demands a narrow-linewidth, frequency-stabilized laser diode, leading to complexity and increased energy consumption. Multi-wavelength laser sources, generating optical frequency combs (OFC), offer an attractive solution, enabling a single laser diode to provide numerous equally spaced spectral lines for enhanced bandwidth efficiency.

Quantum-dot and quantum-dash OFCs provide phase-synchronized lines with low relative intensity noise (RIN), while quantum-well (QW) OFCs offer higher power efficiency but exhibit higher RIN in the low-frequency region up to 2 GHz. In both quantum-dot/dash and QW-based OFCs, however, individual spectral lines exhibit high phase noise, limiting coherent detection. Output power levels of these OFCs range between 1 and 20 mW, and the power of each spectral line is typically less than -5 dBm. As a result, these OFCs require excessive optical amplification, and each spectral line has a relatively broad linewidth because of the inverse relationship between optical power and linewidth given by the Schawlow-Townes formula. This constraint hampers their applicability in coherent detection systems, highlighting a challenge for achieving high-performance optical communication.
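For reference, the inverse power-linewidth relationship invoked above is captured (up to order-unity prefactors that differ between derivations) by the modified Schawlow-Townes limit

    \Delta\nu_{laser} \approx \frac{\pi h \nu (\Delta\nu_c)^2}{P_{out}}

where h is Planck's constant, \nu the optical frequency, \Delta\nu_c the cold-cavity linewidth, and P_{out} the output power; doubling the per-line output power thus roughly halves the quantum-limited linewidth, which is why low-power comb lines are at a disadvantage in coherent detection.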

In this work, coherent system application of a single-section Quantum-Well Fabry-Perot (FP) laser diode is demonstrated. This laser delivers over 120 mW optical power at the fiber pigtail with a mode spacing of 36.14 GHz. In an experimental setup, 20 spectral lines from a single laser transmitter carry 30 GBaud 16-QAM signals over 78.3 km single-mode fiber, achieving significant data transmission rates. With the potential to support a transmission capacity of 2.15 Tb/s (4.3 Tb/s for dual polarization) per transmitter, including Forward Error Correction (FEC) and maintenance overhead, it offers a promising solution for meeting the escalating demands of modern network traffic efficiently.


Anissa Khan

Privacy Preserving Biometric Matching

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Perry Alexander, Chair
Prasad Kulkarni
Fengjun Li


Abstract

Biometric matching is a process by which distinct features are used to identify an individual. Doing so privately is important because biometric data, such as fingerprints or facial features, is not something that can be easily changed or updated if put at risk. In this study, we perform a piece of the biometric matching process in a privacy preserving manner by using secure multiparty computation (SMPC). Using SMPC allows the identifying biological data, called a template, to remain stored by the data owner during the matching process. This provides security guarantees to the biological data while it is in use and therefore reduces the chances the data is stolen. In this study, we find that performing biometric matching using SMPC is just as accurate as performing the same match in plaintext.
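The intuition behind SMPC-based matching can be shown with a toy additive secret-sharing scheme (a deliberately simplified illustration; real SMPC protocols, including the one used in this study, are far more involved): the template is split into random shares, each party computes on its share alone, and only the combined result reveals the match score.

```python
import random

rng = random.Random(7)

# Split a value into two additive shares that sum back to it.
def share(x):
    r = rng.randint(-1000, 1000)
    return r, x - r

template = [3, 1, 4]   # toy biometric template, held by the data owner
probe = [3, 0, 4]      # toy probe presented for matching (public here)

# Party A and Party B each hold one share of every template entry.
shares = [share(t) for t in template]
# Each party computes a partial dot product against the probe...
partial_a = sum(a * p for (a, _), p in zip(shares, probe))
partial_b = sum(b * p for (_, b), p in zip(shares, probe))
# ...and only the sum of the partials equals the true match score.
score = partial_a + partial_b
```

Neither partial value alone reveals the template entries, which is the sense in which the data stays protected while in use.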

 


Bryan Richlinski

Prioritize Program Diversity: Enumerative Synthesis with Entropy Ordering

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Sankha Guria, Chair
Perry Alexander
Drew Davidson
Jennifer Lohoefener

Abstract

Program synthesis is a popular way to create a correct-by-construction program from a user-provided specification. Term enumeration is a leading technique to systematically explore the space of programs by generating terms from a formal grammar. These terms are treated as candidate programs which are tested/verified against the specification for correctness. In order to prioritize candidates more likely to satisfy the specification, enumeration is often ordered by program size or other domain-specific heuristics. However, domain-specific heuristics require expert knowledge, and enumeration by size often leads to terms comprised of frequently repeating symbols that are less likely to satisfy a specification. In this thesis, we build a heuristic that prioritizes term enumeration based on the variability of individual symbols in the program, i.e., the information entropy of the program. We use this heuristic to order programs in both top-down and bottom-up enumeration. We evaluated our work on a subset of the PBE-String track of the 2017 SyGuS competition benchmarks and compared against size-based enumeration. In top-down enumeration, our entropy heuristic shortens runtime in ~56% of cases and tests fewer programs in ~80% of cases before finding a valid solution. For bottom-up enumeration, our entropy heuristic reduces the number of enumerated programs in ~30% of cases before finding a valid solution, without improving the runtime. Our findings suggest that using entropy to prioritize program enumeration is a promising step forward for faster program synthesis.
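The entropy ordering can be illustrated with a toy scorer (the example terms are made up, not drawn from the SyGuS grammar): rank each candidate term by the Shannon entropy of its symbol distribution, so terms with varied symbols come before terms dominated by one repeating symbol.

```python
import math
from collections import Counter

# Shannon entropy (bits) of the symbol distribution of a term.
def symbol_entropy(term):
    counts = Counter(term)
    n = len(term)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive term vs. one with more symbol variety.
candidates = ["concat(x,x,x,x)", "concat(x,y,substr(x,0,1))"]
ranked = sorted(candidates, key=symbol_entropy, reverse=True)
```

A term built from one repeated symbol has entropy near zero, so under this ordering it is deprioritized exactly as the thesis argues size-ordered enumeration fails to do.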


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

 

This research provides a deep dive into the npm-centric software supply chain, exploring various facets and phenomena that impact the security of this software supply chain. Such factors include (i) hidden code clones--which obscure provenance and can stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, and (v) package compromise via malicious updates. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains. 
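The install-time attack surface in item (ii) can be made concrete with a small check (a toy illustration with a fabricated manifest, not the tooling presented in this research): npm runs lifecycle scripts such as preinstall and postinstall automatically during installation, so flagging their presence in a package manifest is a natural first step.

```python
import json

# Lifecycle scripts that npm executes automatically at install time.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def install_scripts(manifest_json):
    """Return the install-time script names declared in a package.json."""
    manifest = json.loads(manifest_json)
    scripts = manifest.get("scripts", {})
    return sorted(INSTALL_HOOKS & scripts.keys())

# Fabricated example manifest for illustration.
pkg = '{"name": "example-pkg", "scripts": {"postinstall": "node setup.js", "test": "jest"}}'
flagged = install_scripts(pkg)
```

Presence of such a script is not proof of malice, but it marks packages whose installation runs arbitrary code and therefore deserves closer review.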


Jagadeesh Sai Dokku

Intelligent Chat Bot for KU Website: Automated Query Response and Resource Navigation

When & Where:


Eaton Hall, Room 2001B

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

This project introduces an intelligent chatbot designed to improve user experience on our university website by providing instant, automated responses to common inquiries. Navigating a university website can be challenging for students, applicants, and visitors who seek quick information about admissions, campus services, events, and more. To address this challenge, we developed a chatbot that simulates human conversation using Natural Language Processing (NLP), allowing users to find information more efficiently.

The chatbot is powered by a Bidirectional Long Short-Term Memory (BiLSTM) model, an architecture well-suited for understanding complex sentence structures. This model captures contextual information from both directions in a sentence, enabling it to identify user intent with high accuracy. We trained the chatbot on a dataset of intent-labeled queries, enabling it to recognize specific intentions such as asking about campus facilities, academic programs, or event schedules. The NLP pipeline includes steps like tokenization, lemmatization, and vectorization. Tokenization and lemmatization prepare the text by breaking it into manageable units and standardizing word forms, making it easier for the model to recognize similar word patterns. The vectorization process then translates this processed text into numerical data that the model can interpret.

Flask is used to manage the backend, allowing seamless communication between the user interface and the BiLSTM model. When a user submits a query, Flask routes the input to the model, processes the prediction, and delivers the appropriate response back to the user interface. This chatbot demonstrates a successful application of NLP in creating interactive, efficient, and user-friendly solutions. By automating responses, it reduces reliance on manual support and ensures users can access relevant information at any time. This project highlights how intelligent chatbots can transform the way users interact with university websites, offering a faster and more engaging experience.
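The tokenize/vectorize/classify flow can be caricatured in a few lines (a deliberately simplified stand-in: the real system uses lemmatization and a trained BiLSTM, and the intents below are made up): tokenize the query, then pick the intent whose keyword set it overlaps most.

```python
# Crude tokenizer: lowercase, strip question marks, split on whitespace.
def tokenize(text):
    return text.lower().replace("?", "").split()

# Illustrative intent definitions (keyword bags, not trained weights).
intents = {
    "admissions": "how do i apply admission application deadline",
    "events": "what events are happening this week schedule calendar",
}

def classify(query):
    q = set(tokenize(query))
    # Nearest intent by keyword overlap stands in for the BiLSTM's prediction.
    return max(intents, key=lambda name: len(q & set(tokenize(intents[name]))))

label = classify("What is the application deadline?")
```

In the deployed system this classification happens behind a Flask route, with the BiLSTM replacing the overlap score, but the request-to-intent shape of the pipeline is the same.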

 


Anahita Memar

Optimizing Protein Particle Classification: A Study on Smoothing Techniques and Model Performance

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Prasad Kulkarni, Chair
Hossein Saiedian
Prajna Dhar


Abstract

This thesis investigates the impact of smoothing techniques on enhancing classification accuracy in protein particle datasets, focusing on both binary and multi-class configurations across three datasets. By applying methods including Averaging-Based Smoothing, Moving Average, Exponential Smoothing, Savitzky-Golay, and Kalman Smoothing, we sought to improve performance in Random Forest, Decision Tree, and Neural Network models. Initial baseline accuracies revealed the complexity of multi-class separability, while clustering analyses provided valuable insights into class similarities and distinctions, guiding our interpretation of classification challenges.

These results indicate that Averaging-Based Smoothing and Moving Average techniques are particularly effective in enhancing classification accuracy, especially in configurations with marked differences in surfactant conditions. Feature importance analysis identified critical metrics, such as IntMean and IntMax, which played a significant role in distinguishing classes. Cross-validation validated the robustness of our models, with Random Forest and Neural Network consistently outperforming others in binary tasks and showing promising adaptability in multi-class classification. This study not only highlights the efficacy of smoothing techniques for improving classification in protein particle analysis but also offers a foundational approach for future research in biopharmaceutical data processing and analysis.
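Of the smoothers studied, the Moving Average is the simplest to show (toy signal and window size; not the thesis datasets): each point is replaced by the mean of its neighbors, damping high-frequency noise before classification.

```python
# Centered moving average with edge handling by window truncation.
def moving_average(signal, window):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

raw = [1.0, 9.0, 1.0, 9.0, 1.0, 9.0]   # toy noisy signal
smooth = moving_average(raw, 3)
```

The smoothed series has a visibly compressed range, which is the property that made averaging-based smoothers helpful before feeding features to the classifiers.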


Past Defense Notices

Dates

Abdul Baseer Mohammed

Enhancing Parameter-Efficient Fine-Tuning of Large Language Models with Alignment Adapters and LoRA

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni


Abstract

Large Language Models (LLMs) have become integral to natural language processing, involving initial broad pretraining on generic data followed by fine-tuning for specific tasks or domains. While advancements in Parameter-Efficient Fine-Tuning (PEFT) techniques have made strides in reducing the resource demands of LLM fine-tuning, each technique has its own constraints. This project addresses the challenges posed by PEFT in the context of the transformer architecture for sequence-to-sequence tasks by integrating two pivotal techniques: Low-Rank Adaptation (LoRA) for computational efficiency and adaptive layers for task-specific customization. To overcome the limitations of LoRA, we introduce a simple yet effective hyper-alignment adapter that leverages a hypernetwork to generate decoder inputs based on encoder outputs, thereby serving as a crucial bridge that improves alignment between the encoder and the decoder. This fusion strikes a balance between fine-tuning complexity and task performance, mitigating the individual drawbacks of each technique while improving encoder-decoder alignment. As a result, we achieve more precise and contextually relevant sequence generation. The proposed solution improves the overall efficiency and effectiveness of LLMs in sequence-to-sequence tasks, leading to better alignment and more accurate output generation.
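The LoRA idea the project builds on can be made concrete with a toy sketch. This is not the project's implementation: the shapes, values, and scaling convention below are illustrative. The key point is that the frozen weight matrix W is never updated; only a low-rank correction B @ A (rank r far below the matrix dimensions) is trained, so the number of trainable parameters drops from d*k to r*(d + k).

```python
# Hedged sketch of the LoRA low-rank update: W_eff = W + (alpha / r) * B @ A.
def matmul(X, Y):
    # Plain-Python matrix multiply for the toy example.
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_effective_weight(W, A, B, alpha=1.0):
    # W: d x k (frozen); B: d x r and A: r x k (trainable); r << min(d, k).
    r = len(A)
    delta = matmul(B, A)
    return [[W[i][j] + (alpha / r) * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 weight
B = [[1.0], [0.0]]             # 2x1 trainable factor
A = [[0.5, 0.5]]               # 1x2 trainable factor, rank r = 1
print(lora_effective_weight(W, A, B))  # -> [[1.5, 0.5], [0.0, 1.0]]
```

The hyper-alignment adapter described in the abstract is a separate component layered on top of this scheme, generating decoder inputs from encoder outputs rather than modifying the weight update itself.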


Laurynas Lialys

Engineering Laser Beams for Particle Trapping, Lattice Formation and Microscopy

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Shima Fardad, Chair
Morteza Hashemi
Rongqing Hui
Alessandro Salandrino
Xinmai Yang

Abstract

Having control over the position of nano- and micro-sized objects inside a suspension is crucial in many applications, such as trapping and manipulating microscopic objects, sorting particles and living microorganisms, and building microscopic 3D crystal structures and lattices. This control can be achieved by judiciously engineering optical forces and light-matter interactions inside colloidal suspensions that result in optical trapping. However, current techniques for confining and transporting particles in 3D require high-NA (numerical aperture) optics. This in turn leads to several disadvantages, such as alignment complications, a narrow field of view, low stability, and undesirable thermal effects. Hence, here we study a novel optical trapping method, which we call asymmetric counter-propagating beams, in which optical forces are engineered to overcome the aforementioned limitations of existing methods. This system is significantly easier to align because it uses much lower-NA optics in combination with engineered beams, creating a very flexible manipulation system. The approach allows the trapping and manipulation of objects of different shapes, ranging in size from tens of nanometers to hundreds of micrometers, by exploiting asymmetric optical fields with high stability. In addition, the technique allows for significantly larger particle-trapping volumes. As a result, we can apply this method to trapping much larger particles and microorganisms that have never been trapped optically before, as well as to building 3D lattices and crystal structures of microscopic-sized particles. Finally, this approach allows for the integration of a variety of spectroscopy and microscopy techniques, such as light-sheet fluorescence microscopy, to extract time-sensitive information and acquire images with detailed features from trapped entities.


Elise McEllhiney

Self-Training Autonomous Driving System Using An Advantage-Actor-Critic Model

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Victor Frost, Chair
Prasad Kulkarni
Bo Luo


Abstract

We describe an autonomous driving system that uses reinforcement learning to train a car to drive without collecting training input from human drivers. We achieve this with the Advantage Actor-Critic reinforcement learning method, which continuously adapts the model to minimize the penalty the car receives. A penalty is incurred when the car crosses the borders of the track on which it is driving. We show the resilience of the proposed autonomously trained system to noisy sensor inputs and variations in the shape of the track.
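The learning signal at the core of Advantage Actor-Critic can be sketched in a few lines. This is an illustrative one-step TD formulation with made-up values, not the thesis implementation; the reward structure (negative reward for leaving the track) mirrors the penalty described above.

```python
# Hedged sketch of the A2C advantage: A = r + gamma * V(s') - V(s).
# The critic's value estimate V(s) is subtracted from the bootstrapped return,
# so the actor is pushed toward actions that did better than expected.
def advantage(reward, value_s, value_next, gamma=0.99, done=False):
    # No bootstrapping past a terminal state (e.g., leaving the track).
    bootstrap = 0.0 if done else gamma * value_next
    return reward + bootstrap - value_s

# Car stays on track: small positive reward, advantage near zero.
print(advantage(reward=0.1, value_s=1.0, value_next=1.0))
# Car crosses the track border: penalty, episode ends.
print(advantage(reward=-1.0, value_s=1.0, value_next=0.0, done=True))  # -> -2.0
```

In training, this advantage weights the policy-gradient update for the actor while the critic is regressed toward the bootstrapped return.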


Shravan Kaundinya

Design, development, and calibration of a high-power UHF radar with a large multichannel antenna array

When & Where:


Nichols Hall, Room 317 (Richard K. Moore Conference Room)

Committee Members:

Carl Leuschen, Chair
Chris Allen
John Paden
James Stiles
Richard Hale

Abstract

The Center for Oldest Ice Exploration (COLDEX) is an NSF-funded multi-institution collaboration to explore Antarctica for the oldest possible continuous ice record. It comprises exploration and modelling teams that use instruments such as radars, lidars, gravimeters, and magnetometers to select candidate locations for collecting a continuous 1.5-million-year ice core. To assist in this search for old ice, the Center for Remote Sensing and Integrated Systems (CReSIS) at the University of Kansas developed a new airborne, higher-power version of the 600-900 MHz Accumulation Radar with a much larger multichannel cross-track antenna array. The fuselage portion of the antenna array is a 64-element, 0.9 m by 3.8 m array with 4 elements in along-track and 16 elements in cross-track. Each element is a dual-polarized microstrip antenna, and each column of 4 elements is power-combined into a single channel, resulting in 16 cross-track channels. Power is transmitted across the 4 cross-track channels on either side of the fuselage array in alternation to produce a total peak power of 6.4 kW (before losses). Three additional antennas are integrated on each wing to lengthen the antenna aperture. A novel receiver concept is developed using limiters to compress the dynamic range, allowing the strong ice-surface and weak ice-bottom returns to be captured simultaneously. This system was flown on a Basler aircraft at the South Pole during the 2022-2023 Austral Summer season and will be flown again during the upcoming 2023-2024 season for repeat interferometry. This work describes the current radar system design and proposes to develop improvements to the compact high-power divider and large multichannel polarimetric array used by the radar. It then proposes to develop and implement a systems-engineering perspective on the calibration of this multi-pass imaging radar.


Bahozhoni White

Alternative “Bases” for Gradient Based Optimization of Parameterized FM Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles

Abstract

Even for a fixed time-bandwidth product there are infinitely many spectrally shaped random FM (RFM) waveforms one could generate, because such waveforms are phase-continuous. Moreover, certain RFM classes rely on an imposed basis-like structure scaled by underlying parameters that can be optimized (e.g., gradient descent and greedy search have been demonstrated). Because these structures must include oversampling with respect to the 3-dB bandwidth to account for sufficient spectral roll-off (necessary for the waveform to be physically realizable in hardware), they are not true bases (i.e., not square). Therefore, any individual structure cannot represent all possible waveforms, and the waveforms generated by a given structure tend to possess similar attributes. An exception is over-coded polyphase-coded FM (PCFM), which increases the number of elements in the parameter vector while maintaining the relationship between waveform samples and the time-bandwidth product; this presents the potential for a true basis, provided some constraint, either explicit or implicit, limits the spectrum. Here we examine waveforms possessing different attributes, as well as the potential for a true basis, which may inform their selection for given radar applications.


Michael Talaga

A Computer Vision Application for Vehicle Collision and Damage Detection

When & Where:


Zoom Meeting, please email jgrisafe@ku.edu for defense link.

Committee Members:

Hongyang Sun, Chair
David Johnson, Co-Chair
Zijun Yao


Abstract

During the car insurance claims process after an accident, a vehicle must be assessed manually by a claims adjuster. This process takes time and often results in discrepancies between what a customer is paid and what the damage actually costs. Separately, companies like KBB and Carfax rely on previous claims records or untrustworthy user input to determine a car’s damage and valuation. Part of this process can be automated to determine where exterior damage exists on a vehicle.

In this project, a deep-learning approach is taken, using the Mask R-CNN model trained on a dataset for instance segmentation. The model can then outline and label instances in images where vehicles have dents, scratches, cracks, broken glass, broken lamps, and flat tires. The results show that broken glass, flat tires, and broken lamps are much easier to locate than the remaining categories, which tend to be smaller in size. These predictions are ultimately intended as input for damage cost prediction.


Alice Chen

Dynamic Selective Protection for Sparse Iterative Solvers

When & Where:


Eaton Hall, Room 2001B

Committee Members:

Hongyang Sun, Chair
Sumaiya Shomaji
Suzanne Shontz


Abstract

Soft errors occur frequently in large-scale computing platforms, primarily due to the growing size and complexity of high-performance computing (HPC) systems. To safeguard scientific applications against such errors, diverse resilience approaches have been introduced, encompassing techniques like checkpointing, Algorithm-Based Fault Tolerance (ABFT), and replication, each operating at a distinct tier of defense. Notably, system-level replication often necessitates duplicating or triplicating the entire computation, yielding substantial resilience costs. This project introduces a method for dynamic selective protection of sparse iterative solvers, with a focus on the Preconditioned Conjugate Gradient (PCG) solver, aiming to mitigate system-level resilience overhead. In this method, we leverage machine learning (ML) to predict the impact of soft errors that strike different elements of a key computation (i.e., sparse matrix-vector multiplication) at different iterations of the solver. Based on the prediction, we design a dynamic strategy that selectively protects those elements that would cause a large performance degradation if struck by soft errors. Experimental assessment validates the efficacy of our dynamic protection strategy in curbing resilience overhead compared to existing algorithms.
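The selective-protection idea can be illustrated with a toy sketch. This is not the thesis method: the CSR layout, the row-level granularity, and protection by duplicate computation are simplifying assumptions chosen only to show how protecting a subset of a sparse matrix-vector product (SpMV) works, in place of the ML-driven element selection described above.

```python
# Hedged sketch: SpMV in CSR form where rows flagged as critical are computed
# twice and compared, a cheap stand-in for duplication-based protection.
def spmv_selective(indptr, indices, data, x, protected_rows):
    y = []
    for row in range(len(indptr) - 1):
        def dot():
            # Dot product of one sparse row with the dense vector x.
            return sum(data[k] * x[indices[k]]
                       for k in range(indptr[row], indptr[row + 1]))
        val = dot()
        # Only protected rows pay the cost of a redundant recomputation.
        if row in protected_rows and dot() != val:
            raise RuntimeError(f"soft error detected in row {row}")
        y.append(val)
    return y

# 2x2 sparse matrix [[2, 0], [1, 3]] in CSR form.
indptr, indices, data = [0, 1, 3], [0, 0, 1], [2.0, 1.0, 3.0]
print(spmv_selective(indptr, indices, data, [1.0, 1.0], protected_rows={1}))
```

The dynamic strategy in the abstract would choose `protected_rows` per iteration from the ML model's predicted impact, rather than fixing it up front.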


Grace Young

A Quantum Polynomial-Time Reduction for the Dihedral Hidden Subgroup Problem

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Committee Members:

Perry Alexander, Chair
Esam El-Araby
Matthew Moore
Cuncong Zhong
KC Kong

Abstract

The last century has seen incredible growth in the field of quantum computing. Quantum computation offers the opportunity to find efficient solutions to certain computational problems which are intractable on classical computers. One class of problems that seems to benefit from quantum computing is the Hidden Subgroup Problem (HSP). The HSP includes, as special cases, the problems of integer factoring, discrete logarithm, shortest vector, and subset sum - making the HSP incredibly important in various fields of research.

The presented research examines the HSP for Dihedral groups with order 2^n and proves a quantum polynomial-time reduction to the so-called Codomain Fiber Intersection Problem (CFIP). The usual approach to the HSP relies on harmonic analysis in the domain of the problem and the best-known algorithm using this approach is sub-exponential, but still super-polynomial. The algorithm we will present deviates from the usual approach by focusing on the structure encoded in the codomain and uses this structure to direct a “walk” down the subgroup lattice terminating at the hidden subgroup.

Though the algorithm presented here is specifically designed for the DHSP, it has potential applications to many other types of the HSP. It is hypothesized that any group with a sufficiently structured subgroup lattice could benefit from the analysis developed here. As this approach diverges from the standard approach to the HSP it could be a promising step in finding an efficient solution to this problem.