【Paper Reading】An FL Privacy-Preserving Framework Combining Homomorphic Encryption, Differential Privacy, and Secure Multiparty Computation — Efficient and Privacy-Enhanced Federated Learning for Industrial Artificial Intelligence

Source paper: Efficient and Privacy-Enhanced Federated Learning for Industrial Artificial Intelligence

The paper proposes a privacy-preserving framework for federated learning (FL) that combines homomorphic encryption, differential privacy, and secure multiparty computation.

Contents

  • FL
  • Related work
  • Preliminaries
    • Differential privacy for deep learning
  • Methodology

FL

Federated learning itself is well known by now, so I won't repeat the basics here.

Related work

The first privacy-preserving framework for FL:
Shokri and Shmatikov [18] proposed the first privacy-preserving distributed learning system, where participants selectively share small part of the gradients to ensure the privacy of training data.
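As a toy illustration (not the paper's or [18]'s actual code), "selectively sharing a small part of the gradients" can be sketched as uploading only the largest-magnitude fraction of the gradient vector and keeping the rest local. The function name and parameters below are hypothetical:

```python
def select_gradients(grad, fraction):
    """Return the indices and values of the top-`fraction` gradient
    entries by magnitude; only these would be uploaded to the server."""
    k = max(1, int(len(grad) * fraction))
    # Sort coordinate indices by absolute gradient value, descending.
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] for i in idx}

# Share only half of a 4-dimensional gradient:
shared = select_gradients([0.1, -2.0, 0.5, 0.05], 0.5)
# shared == {1: -2.0, 2: 0.5}
```

The point of the scheme is that the unshared coordinates never leave the client, which limits (but, as later attacks showed, does not eliminate) leakage about the training data.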

1. Secure multiparty computation (SMC):
By exploiting secure multiparty computation (SMC), Bonawitz et al. [20] presented a privacy-preserving protocol to support secure aggregation in FL.

Problem: SMC is not suitable for the paper's solution, since it requires a noninteractive protocol to perform secure aggregation.
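The core idea of [20]'s secure aggregation is pairwise masking: each pair of clients agrees on a shared random mask, which one adds and the other subtracts, so all masks cancel in the server's sum while individual updates stay hidden. A minimal sketch, with a plain PRNG standing in for the key-agreement-derived masks:

```python
import random

def masked_updates(updates, seed=0):
    """Toy pairwise-masked aggregation: client i adds mask m_ij, client j
    subtracts it, so the masks cancel in sum(masked) but each individual
    masked value looks random to the server."""
    rng = random.Random(seed)  # stands in for pairwise key agreement
    masked = list(updates)
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.random()  # shared mask between clients i and j
            masked[i] += m
            masked[j] -= m
    return masked

updates = [0.5, 1.25, -0.75]
masked = masked_updates(updates)
# sum(masked) equals sum(updates), yet each masked[i] differs from updates[i].
```

The "multiple communication rounds" objection comes from exactly this structure: clients must interactively agree on masks (and recover them for dropouts) in every aggregation round.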

2. Homomorphic encryption (HE):
Zhang et al. [21] also solved the same problem by combining the threshold secret sharing and homomorphic encryption (HE) scheme.

Problem: However, this is vulnerable if multiple entities collude during training.
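To see the secret-sharing ingredient of [21]'s construction, here is a toy additive sharing scheme over a prime field (the HE layer is omitted; all names and the modulus are illustrative). Each share alone is uniformly random, so only a coalition holding every share — the collusion case criticized above — can recover the secret:

```python
import random

P = 2**61 - 1  # illustrative prime modulus

def share(secret, n, rng):
    """Split `secret` into n additive shares mod P; any n-1 of them
    are uniformly random and reveal nothing on their own."""
    parts = [rng.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(shares):
    # Summing all shares mod P recovers the secret exactly.
    return sum(shares) % P

rng = random.Random(7)
shares = share(12345, 3, rng)
# reconstruct(shares) == 12345
```

A threshold variant (e.g. Shamir sharing) tolerates missing shares, which is why [21] uses threshold sharing rather than this plain additive form.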

However, both approaches above require multiple communication rounds in each aggregation, which makes the communication cost high.

3. Besides the two approaches above, there is also differential privacy (DP):

Problem: However, traditional DP [23] may not be suitable for FL, since a trusted third party is required to add noise to the statistical result. This is contrary to the reality that the cloud server (CS) is generally considered dishonest.
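The "trusted third party" in traditional (central) DP is a curator who sees the exact statistic and releases it with calibrated noise — e.g. the Laplace mechanism, which adds Lap(sensitivity/ε) noise for ε-DP. A minimal sketch (inverse-CDF sampling; names are illustrative):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value + Laplace(0, sensitivity/epsilon) noise,
    the classic epsilon-DP mechanism run by a trusted curator."""
    u = rng.random() - 0.5          # uniform in [-0.5, 0.5)
    b = sensitivity / epsilon       # Laplace scale parameter
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

In FL this curator role would fall to the CS, which the threat model assumes is dishonest — hence the paper's objection, and the motivation for combining DP with cryptographic aggregation instead.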

Preliminaries

Differential privacy for deep learning

(Figure 1 from the original post; image not available.)
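The standard recipe for differential privacy in deep learning (DP-SGD, in the style of Abadi et al.) clips each per-example gradient to L2 norm C, adds Gaussian noise with standard deviation σC to the sum, and then averages. A minimal pure-Python sketch under that assumption, not the paper's exact algorithm:

```python
import math
import random

def clip(grad, C):
    """Scale a per-example gradient so its L2 norm is at most C."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, C / norm) if norm > 0 else 1.0
    return [g * scale for g in grad]

def dp_sgd_aggregate(grads, C, sigma, rng):
    """Sum clipped gradients, add N(0, (sigma*C)^2) noise per coordinate,
    then average over the batch — one noisy DP-SGD step's update."""
    clipped = [clip(g, C) for g in grads]
    d = len(grads[0])
    noisy_sum = [sum(g[i] for g in clipped) + rng.gauss(0.0, sigma * C)
                 for i in range(d)]
    return [s / len(grads) for s in noisy_sum]
```

Clipping bounds each example's influence (the sensitivity), which is what makes the added Gaussian noise yield a DP guarantee accounted over training steps.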

Methodology

(Figures 2 and 3 from the original post; images not available.)
