Understanding and Mitigating Security Risks towards Trustworthy Deep Learning Systems
Drew Davidson
Bo Luo
Zijun Yao
Xinmai Yang
Deep learning is widely deployed in healthcare, finance, and other critical domains, which raises serious concerns about system trustworthiness. In particular, deep learning models and data face three types of critical attacks: model theft, identity impersonation, and abuse of AI-generated content (AIGC). To address model theft, homomorphic encryption has been explored for privacy-preserving inference, but it remains highly inefficient. To counter identity impersonation, prior work focuses on detection, disruption, and tracing, yet it fails to protect source and target images simultaneously. To prevent AIGC abuse, methods such as evaluation, watermarking, and machine unlearning exist, but text-driven image editing remains largely unprotected.
This report addresses the above challenges through three key designs. First, to enable efficient privacy-preserving inference under homomorphic encryption, we propose PrivDNN, which selectively encrypts only the most critical model parameters, significantly reducing the number of encrypted operations. We design a selection score to evaluate neuron importance and use a greedy algorithm to iteratively secure the most impactful neurons. Across four models and datasets, PrivDNN reduces encrypted operations by 85%–98% and cuts inference time and memory usage by over 97% while preserving accuracy and privacy. Second, to counter identity impersonation in deepfake face-swapping, where a victim's photo can be exploited as either the source or the target, we introduce PhantomSeal, which embeds invisible perturbations that encode a hidden “cloak” identity. When a protected image is used as the target, the resulting content displays visible artifacts; when it is used as the source, the generated deepfake is altered to resemble the cloak identity. Evaluations across two generations of deepfake face-swapping models show that PhantomSeal reduces the attack success rate from 97% to 0.8%, with 95% of outputs recognized as the cloak identity, providing robust protection against manipulation. Third, to better understand abuse risks in AI-generated content, we construct a comprehensive dataset, perform a large-scale human evaluation, and establish a benchmark for detecting AI-generated artwork. Building on this direction, we propose Protecting Copyright against Image Editing (PCIE) to address copyright infringement in text-driven image editing. PCIE embeds an invisible copyright mark into the original image; after text-driven editing, the mark transforms into a visible watermark that automatically reveals ownership upon unauthorized modification.
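As a rough illustration of the greedy selection idea behind PrivDNN, the Python sketch below picks the highest-scoring neurons to encrypt until a fixed budget is exhausted. The function names, score dictionary, and budget are hypothetical placeholders; PrivDNN's actual selection score and iterative procedure are more involved than this toy version.

```python
# Minimal sketch of greedy neuron selection for partial encryption.
# Scores and budget here are illustrative assumptions, not PrivDNN's real scoring.
from typing import Dict, List, Tuple

Neuron = Tuple[int, int]  # (layer_idx, neuron_idx)

def select_neurons_to_encrypt(scores: Dict[Neuron, float], budget: int) -> List[Neuron]:
    """Greedily pick the most impactful neurons until the encryption budget is used up."""
    selected: List[Neuron] = []
    remaining = dict(scores)
    while remaining and len(selected) < budget:
        # Pick the currently highest-scoring (most impactful) neuron.
        best = max(remaining, key=remaining.get)
        selected.append(best)
        del remaining[best]
        # In the full system, the remaining scores could be re-evaluated here,
        # since neuron importance may shift after each selection.
    return selected

# Toy usage: two layers, two neurons each, budget of two encrypted neurons.
toy_scores = {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.7, (1, 1): 0.4}
print(select_neurons_to_encrypt(toy_scores, budget=2))  # [(0, 0), (1, 0)]
```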
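Similarly, the following self-contained sketch illustrates the general idea of embedding a bounded, low-visibility perturbation that steers a face embedding toward a cloak identity. The stand-in encoder `toy_embed`, the cosine loss, the optimizer, and the perturbation bound are all illustrative assumptions for exposition; they are not PhantomSeal's actual encoder or optimization procedure.

```python
# Hypothetical sketch of identity cloaking via a bounded adversarial perturbation.
# toy_embed stands in for a real face encoder; all names and settings are illustrative.
import torch

def toy_embed(image: torch.Tensor) -> torch.Tensor:
    """Stand-in face encoder: a fixed random projection of the flattened image."""
    torch.manual_seed(0)  # fixed seed so the projection is the same every call
    proj = torch.randn(image.numel(), 128)
    return image.flatten() @ proj

def protect(image: torch.Tensor, cloak_embedding: torch.Tensor,
            steps: int = 100, lr: float = 0.01, eps: float = 0.03) -> torch.Tensor:
    """Add a small perturbation pulling the image's embedding toward the cloak identity."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = toy_embed((image + delta).clamp(0, 1))
        # Maximize similarity to the cloak identity's embedding.
        loss = 1 - torch.nn.functional.cosine_similarity(emb, cloak_embedding, dim=0)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small (ideally invisible)
    return (image + delta).detach().clamp(0, 1)

# Toy usage: cloak a random "photo" toward the embedding of another random image.
victim = torch.rand(3, 32, 32)
cloak_id = toy_embed(torch.rand(3, 32, 32))
protected = protect(victim, cloak_id, steps=20)
```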