Towards Robust and Privacy-preserving Federated Learning


Student Name: Sana Awan
Defense Date:
Location: Zoom defense; please email jgrisafe@ku.edu for the defense link.
Chair: Fengjun Li

Alex Bardas

Cuncong Zhong

Mei Liu

Haiyang Chao

Abstract:

Machine Learning (ML) has revolutionized various fields, from disease prediction to credit risk evaluation, by harnessing abundant data scattered across diverse sources. However, transporting data to a trusted server for centralized ML model training is not only costly but also raises privacy concerns, particularly under legislative standards such as HIPAA. In response to these challenges, Federated Learning (FL) has emerged as a promising solution. FL trains a collaborative model across a network of clients, each retaining its own private data. Because training is conducted locally on the participating clients, this approach eliminates the need to transfer entire training datasets while harnessing the clients' computational capabilities. However, FL introduces unique privacy risks, security concerns, and robustness challenges. First, FL is susceptible to malicious actors who may tamper with local data, manipulate the local training process, or intercept the shared model or gradients to implant backdoors that compromise the robustness of the joint model. Second, due to the statistical and system heterogeneity within FL, substantial differences exist between the distribution of each local dataset and the global distribution, causing clients' local objectives to deviate greatly from the global optimum and resulting in a drift in local updates. Addressing these vulnerabilities and challenges is crucial before FL systems can be deployed in critical infrastructures.
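The FL training loop described above can be illustrated with a minimal federated averaging sketch. The toy linear model, the local update rule, and the client datasets below are illustrative placeholders, not the dissertation's actual setup; the point is only the shape of the protocol, in which clients train locally and the server aggregates their models weighted by dataset size.

```python
# One simplified FL workflow: each client runs local gradient descent on its
# private data, and the server averages the resulting models without ever
# seeing the raw training samples.

def local_update(weights, data, lr=0.1):
    """One local gradient step for a scalar model y ≈ w * x."""
    grad = sum(2 * (weights * x - y) * x for x, y in data) / len(data)
    return weights - lr * grad

def fedavg(weights, client_datasets, rounds=50):
    for _ in range(rounds):
        # Each client trains locally; only model parameters leave the client.
        updates = [local_update(weights, d) for d in client_datasets]
        # The server averages updates, weighted by each client's dataset size.
        total = sum(len(d) for d in client_datasets)
        weights = sum(w * len(d) for w, d in zip(updates, client_datasets)) / total
    return weights

# Two clients hold disjoint samples from y = 3x; the joint model recovers w ≈ 3.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(0.5, 1.5), (3.0, 9.0)]]
w = fedavg(0.0, clients)
print(round(w, 3))  # ≈ 3.0
```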

In this dissertation, we present a multi-pronged approach to address the privacy, security, and robustness challenges in FL. This involves designing innovative privacy protection mechanisms and robust aggregation schemes to counter attacks during the training process. To address the privacy risk posed by model or gradient interception, we present the design of a reliable and accountable blockchain-enabled privacy-preserving federated learning (PPFL) framework that leverages homomorphic encryption to protect individual client updates. The blockchain supports provenance tracking of model updates during training, so that malformed or malicious updates can be identified and traced back to their source.
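The aggregation pattern that homomorphic encryption enables can be sketched with a toy Paillier cryptosystem. The tiny primes and integer "updates" below are for illustration only (real deployments use ~2048-bit keys and quantized model parameters), and this is a generic additively homomorphic construction rather than the PPFL framework's exact scheme; it shows how a server can combine encrypted client updates and learn only their sum.

```python
import math
import random

# Toy Paillier cryptosystem: multiplying ciphertexts sums the plaintexts,
# so the aggregator never decrypts an individual client's update.
p, q = 47, 59                      # demo primes only; insecure at this size
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)               # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Each client encrypts a (quantized) model update; the server aggregates
# ciphertexts by modular multiplication and can decrypt only the sum.
updates = [5, 11, 7]
aggregate = 1
for u in updates:
    aggregate = (aggregate * encrypt(u)) % n2
print(decrypt(aggregate))  # 23, the sum of all updates
```

Tracing these encrypted updates on a blockchain, as the framework proposes, then provides accountability without exposing plaintext gradients.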

We then study the challenges that heterogeneous data distributions pose for FL and find that existing FL algorithms often suffer from slow and unstable convergence and are vulnerable to poisoning attacks, particularly in extreme non-independent and identically distributed (non-IID) settings. We propose a robust aggregation scheme, named CONTRA, to mitigate data poisoning attacks and provide an accuracy guarantee even under attack. This defense identifies malicious clients by evaluating the cosine similarity of their gradient contributions and removes them from FL training. Finally, we introduce FL-GMM, an algorithm designed to tackle data heterogeneity while prioritizing privacy. It iteratively constructs a personalized classifier for each client while aligning local and global feature representations. By aligning local distributions with global semantic information, FL-GMM minimizes the impact of data diversity. Moreover, FL-GMM enhances security by transmitting derived model parameters via secure multiparty computation, thereby avoiding the reconstruction attacks to which other approaches are vulnerable.
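The cosine-similarity idea behind the defense can be sketched as follows. Colluding poisoners tend to submit closely aligned gradients, so clients whose updates are unusually similar to their nearest peers are flagged and excluded from aggregation. The scoring rule, threshold, and example gradients below are simplified stand-ins, not CONTRA's exact algorithm.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two gradient vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def flag_suspects(grads, k=2, threshold=0.9):
    """Flag client i if its average similarity to its k closest peers is high."""
    suspects = []
    for i, gi in enumerate(grads):
        sims = sorted(
            (cosine(gi, gj) for j, gj in enumerate(grads) if j != i),
            reverse=True,
        )
        if sum(sims[:k]) / k > threshold:
            suspects.append(i)
    return suspects

# Three honest clients with diverse gradients, plus three colluders
# submitting near-identical poisoned updates.
grads = [
    np.array([1.0, 0.2]),
    np.array([0.1, 1.0]),
    np.array([-1.0, 0.3]),
    np.array([5.0, 5.0]),
    np.array([5.1, 5.0]),
    np.array([5.0, 5.1]),
]
print(flag_suspects(grads))  # the colluders: [3, 4, 5]
```

Flagged clients are then simply dropped from the aggregation step, so their gradients never reach the joint model.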

Degree: PhD Dissertation Defense (EE)
Degree Type: PhD Dissertation Defense
Degree Field: Electrical Engineering