Defense Notices


All students and faculty are welcome to attend the final defense of EECS graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Abhishek Doodgaon

Photorealistic Synthetic Data Generation for Deep Learning-based Structural Health Monitoring of Concrete Dams

When & Where:


LEEP2, Room 1415A

Committee Members:

Zijun Yao, Chair
Caroline Bennett
Prasad Kulkarni
Remy Lequesne

Abstract

Regular inspections are crucial for identifying and assessing the wide range of damage states that can develop in concrete dams. Manual inspections of dams are often constrained by cost, time, safety, and inaccessibility. Automating dam inspections using artificial intelligence has the potential to improve the efficiency and accuracy of data analysis. Computer vision and deep learning models have proven effective at detecting a variety of damage features in images, but their success relies on the availability of high-quality and diverse training data. This is because supervised learning, a common machine-learning approach for classification problems, relies on labeled examples in which each training data point pairs features (damage images) with a corresponding label (pixel-level annotation). Unfortunately, public datasets of annotated images of concrete dam surfaces are scarce and inconsistent in quality, quantity, and representation.

To address this challenge, we present a novel approach that synthesizes a realistic environment from a 3D model of a dam. By overlaying this model with synthetically created photorealistic damage textures, we can render images to generate large, realistic datasets with high-fidelity annotations. Our pipeline uses NX and Blender for 3D model generation and assembly, Substance 3D Designer and the Substance Automation Toolkit for texture synthesis and automation, and Unreal Engine 5 for creating a realistic environment and rendering images. The generated synthetic data is then used to train deep learning models in subsequent steps. The proposed approach offers several advantages. First, it allows generation of the large quantities of data essential for training accurate deep learning models. Second, the texture synthesis ensures high-fidelity ground truths (annotations), which are crucial for making accurate detections. Lastly, the automation capabilities of the software applications used in this process provide the flexibility to generate data with varied texture elements, colors, lighting conditions, and image quality, overcoming time constraints. Thus, the proposed approach can improve the automation of dam inspection by improving the quality and quantity of training data. A rough illustrative sketch of such a data-generation loop follows.
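The following is a minimal sketch, not the author's actual pipeline, of how a data-generation loop of this kind could be orchestrated in Python. The helper functions (synthesize_crack_texture, render_view) are hypothetical stand-ins for the Substance Automation Toolkit and Unreal Engine 5 steps described above; here they only record the randomized parameters so the example runs on its own.

# Minimal sketch of a synthetic-data generation loop (illustrative only).
# The two helpers below are hypothetical stand-ins for the texture-synthesis
# and rendering tools named in the abstract.
import json
import random
from pathlib import Path

def synthesize_crack_texture(width_mm: float, roughness: float, seed: int) -> dict:
    """Stand-in for a texture-synthesis call that would write an albedo/normal
    map pair plus a pixel-accurate damage mask to disk."""
    return {"albedo": f"tex/crack_{seed}.png", "mask": f"tex/crack_{seed}_mask.png",
            "width_mm": width_mm, "roughness": roughness}

def render_view(texture: dict, sun_angle_deg: float, camera_dist_m: float, out_path: str) -> None:
    """Stand-in for rendering the dam model with the damage texture applied;
    here it just records the render parameters as a JSON file."""
    Path(out_path).parent.mkdir(parents=True, exist_ok=True)
    Path(out_path).write_text(json.dumps({"texture": texture,
                                          "sun_angle_deg": sun_angle_deg,
                                          "camera_dist_m": camera_dist_m}))

rng = random.Random(42)
manifest = []
for i in range(100):  # vary texture, lighting, and viewpoint per sample
    tex = synthesize_crack_texture(width_mm=rng.uniform(0.5, 10.0),
                                   roughness=rng.uniform(0.1, 0.9), seed=i)
    render_view(tex, sun_angle_deg=rng.uniform(10, 80),
                camera_dist_m=rng.uniform(2, 30),
                out_path=f"renders/sample_{i:04d}.json")
    manifest.append({"image": f"renders/sample_{i:04d}.json", "annotation": tex["mask"]})
Path("manifest.json").write_text(json.dumps(manifest, indent=2))

The point illustrated is that every rendered image is paired with the damage mask produced during texture synthesis, which is what yields pixel-accurate annotations at scale.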


Sana Awan

Towards Robust and Privacy-preserving Federated Learning

When & Where:


Zoom defense; please email jgrisafe@ku.edu for the defense link.

Committee Members:

Fengjun Li, Chair
Alex Bardas
Cuncong Zhong
Mei Liu
Haiyang Chao

Abstract

Machine Learning (ML) has revolutionized various fields, from disease prediction to credit risk evaluation, by harnessing abundant data scattered across diverse sources. However, transporting data to a trusted server for centralized ML model training is not only costly but also raises privacy concerns, particularly under legislative standards such as HIPAA. In response to these challenges, Federated Learning (FL) has emerged as a promising solution. FL trains a collaborative model across a network of clients, each retaining its own private data. By conducting training locally on the participating clients, this approach eliminates the need to transfer entire training datasets while harnessing the clients' computational capabilities. However, FL introduces unique privacy risks, security concerns, and robustness challenges. First, FL is susceptible to malicious actors who may tamper with local data, manipulate the local training process, or intercept the shared model or gradients to implant backdoors that affect the robustness of the joint model. Second, due to the statistical and system heterogeneity within FL, substantial differences exist between the distribution of each local dataset and the global distribution, causing clients' local objectives to deviate greatly from the global optimum and resulting in drift in local updates. Addressing such vulnerabilities and challenges is crucial before deploying FL systems in critical infrastructures.
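For readers unfamiliar with the training pattern the abstract refers to, the following toy federated-averaging loop is a numpy sketch under simplifying assumptions, not the dissertation's implementation: clients fit a shared linear model on their private data, and the server only ever sees their weight updates.

# Toy FedAvg-style loop illustrating "train locally, share only model updates".
# Linear-regression clients with synthetic private data; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n=50):
    # Each client holds its own private (x, y) data; only weights leave the client.
    x = rng.normal(size=(n, 2))
    return x, x @ true_w + rng.normal(scale=0.1, size=n)

clients = [make_client() for _ in range(5)]
global_w = np.zeros(2)
for _ in range(20):                          # communication rounds
    local_ws = []
    for x, y in clients:                     # local training on private data
        w = global_w.copy()
        for _ in range(10):                  # a few local gradient steps
            w -= 0.05 * (2 * x.T @ (x @ w - y) / len(y))
        local_ws.append(w)
    global_w = np.mean(local_ws, axis=0)     # server averages client models
print("learned weights:", global_w)          # should approach true_w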

In this dissertation, we present a multi-pronged approach to addressing the privacy, security, and robustness challenges in FL. This involves designing innovative privacy protection mechanisms and robust aggregation schemes to counter attacks during the training process. To address the privacy risk due to model or gradient interception, we present the design of a reliable and accountable blockchain-enabled privacy-preserving federated learning (PPFL) framework that leverages homomorphic encryption to protect individual client updates. The blockchain is adopted to support provenance of model updates during training, so that malformed or malicious updates can be identified and traced back to their source.
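The following is a minimal sketch of the two ingredients described above, under assumptions of my own choosing rather than the framework's actual design: the python-paillier (phe) library stands in for the additively homomorphic encryption, and a simple hash-chained list stands in for the blockchain provenance log. The server adds ciphertexts without ever seeing an individual client's update.

# Illustrative only: encrypted aggregation of scalar updates plus a
# tamper-evident, hash-chained provenance log (not the dissertation's protocol).
import hashlib
import json
from phe import paillier  # additively homomorphic (Paillier) encryption

pubkey, privkey = paillier.generate_paillier_keypair(n_length=1024)

client_updates = [0.12, -0.05, 0.30]          # toy scalar model updates
ledger, prev_hash = [], "0" * 64
encrypted_sum = pubkey.encrypt(0.0)
for cid, update in enumerate(client_updates):
    enc = pubkey.encrypt(update)              # client-side encryption
    encrypted_sum += enc                      # server adds ciphertexts only
    block = {"client": cid, "ciphertext": str(enc.ciphertext())[:16], "prev": prev_hash}
    prev_hash = hashlib.sha256(json.dumps(block).encode()).hexdigest()
    ledger.append({**block, "hash": prev_hash})  # one provenance entry per update

avg = privkey.decrypt(encrypted_sum) / len(client_updates)
print("aggregated update:", avg)
print("ledger head:", ledger[-1]["hash"])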

We study the challenges in FL due to heterogeneous data distributions and find that existing FL algorithms often suffer from slow and unstable convergence and are vulnerable to poisoning attacks, particularly in extreme non-independent and identically distributed (non-IID) settings. We propose a robust aggregation scheme, named CONTRA, to mitigate data poisoning attacks and preserve model accuracy even under attack. This defense strategy identifies malicious clients by evaluating the cosine similarity of their gradient contributions and subsequently removes them from FL training. Finally, we introduce FL-GMM, an algorithm designed to tackle data heterogeneity while prioritizing privacy. It iteratively constructs a personalized classifier for each client while aligning local and global feature representations. By aligning local distributions with global semantic information, FL-GMM minimizes the impact of data diversity. Moreover, FL-GMM enhances security by transmitting derived model parameters via secure multiparty computation, thereby avoiding the vulnerability to reconstruction attacks observed in other approaches.
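The following simplified sketch illustrates the cosine-similarity filtering idea in the spirit of the CONTRA description above; it is not the full CONTRA algorithm. Gradients that are suspiciously well aligned with another client's gradient (coordinated poisoned updates tend to point in the same direction) are excluded before averaging. The threshold and scoring rule here are assumptions for illustration.

# Illustrative cosine-similarity filtering of client gradients (not CONTRA itself).
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_and_aggregate(grads: np.ndarray, threshold: float = 0.8):
    """Drop clients whose gradient is nearly parallel to some other client's
    gradient, then average the remaining gradients."""
    n = len(grads)
    scores = np.array([max(cosine(grads[i], grads[j])
                           for j in range(n) if j != i) for i in range(n)])
    keep = scores < threshold                 # True = kept, False = flagged
    return grads[keep].mean(axis=0), keep

rng = np.random.default_rng(1)
honest = rng.normal(size=(8, 20))                                   # diverse honest gradients
poisoned = np.ones((3, 20)) + rng.normal(scale=0.01, size=(3, 20))  # tightly aligned attackers
agg, kept = filter_and_aggregate(np.vstack([honest, poisoned]))
print("kept clients:", kept)                  # honest clients True, attackers False
print("aggregated gradient shape:", agg.shape)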


Past Defense Notices


MICHAEL HUGHES

Determination of Glacial Ice Temperature Profiles Using Radar and an Antenna Gain Estimation Technique

When & Where:


250 Nichols Hall

Committee Members:

Kenneth Demarest, Chair
Chris Allen
Carl Leuschen


Abstract


GARRIN KIMMELL

System Synthesis from a Monadic Functional Language

When & Where:


250 Nichols Hall

Committee Members:

Perry Alexander, Chair
Andy Gill
Craig Huneke
Gary Minden
Bill Harrison

Abstract


MUTHUKUMARAN PITCHAIMANI

Adaptive Cognitive Networks

When & Where:


246 Nichols Hall

Committee Members:

Joseph Evans, Chair
Gunes Ercal-Ozkaya
Christine Jensen Sundstrom
Victor Frost
Prasad Kulkarni

Abstract


DANIEL FOKUM

Optimal Communications Systems and Network Design for Cargo Monitoring

When & Where:


250 Nichols Hall

Committee Members:

Victor Frost, Chair
Joseph Evans
Tyrone Duncan
Gary Minden
David Petr

Abstract


SANDHYA BELDONA (GABBUR)

Reputation based Buyer Strategies for Seller Selection in Electronic Markets

When & Where:


246 Nichols Hall

Committee Members:

Arvin Agah, Chair
Costas Tsatsoulis
Gunes Ercal-Ozkaya
Prakash Shenoy
Prasad Kulkarni

Abstract


JOHN LEDFORD

Development of an 8 Channel Waveform Generator for Beam-forming Applications

When & Where:


317 Nichols Hall

Committee Members:

Chris Allen, Chair
Carl Leuschen
Sarah Seguin


Abstract


SUPRIYA VASUDEVAN

Handling Missing Attribute Values in Decision Tables Using Valued Tolerance Approach

When & Where:


2001B Eaton Hall

Committee Members:

Jerzy Grzymala-Busse, Chair
Jun Huan
Prasad Kulkarni


Abstract


AARON SMALTER

Kernel Functions for Graph Classification

When & Where:


250 Nichols Hall

Committee Members:

Jun Huan, Chair
Xue-Wen Chen
Gerald Lushington
Mahesh Visvanathan

Abstract


DAVID JOHNSON

Human Robot Interaction Through Semantic Integration of Multiple Modalities, Dialog Management, and Contexts

When & Where:


250 Nichols Hall

Committee Members:

Arvin Agah, Chair
Swapan Chakrabarti
Xue-Wen Chen
Brian Potetz
Sara Wilson

Abstract


ANDREW KANNENBERG

A Streamlined, Cost Effective Database Approach to Managing Requirements Traceability

When & Where:


155 Regnier Hall

Committee Members:

Hossein Saiedian, Chair
Gunes Ercal-Ozkaya
Prasad Kulkarni


Abstract