Privacy Computing -- 35 -- Federated Learning Security Defense: Homomorphic Encryption

1. The Paillier Partially Homomorphic Encryption Algorithm

Homomorphic encryption comes in three forms: fully homomorphic, somewhat homomorphic, and partially homomorphic encryption. Because of performance and other constraints, industry today mainly relies on partially homomorphic schemes. Paillier is one such scheme: it supports additive homomorphism but not multiplicative homomorphism. Although Paillier is not fully homomorphic, it is far more computationally efficient than fully homomorphic encryption (FHE), which is why it is widely used in practice.

We write x for a plaintext and [[x]] for its corresponding ciphertext. The Paillier partially homomorphic scheme then satisfies additive homomorphism, [[u + v]] = [[u]] + [[v]], and also allows a ciphertext to be multiplied by a plaintext scalar, [[k·u]] = k·[[u]].
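
A minimal sketch of these two homomorphic properties, assuming the python-paillier (phe) library is installed:

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

u, v, k = 3.5, 2.25, 4          # two plaintexts and a plaintext scalar
enc_u = public_key.encrypt(u)   # [[u]]
enc_v = public_key.encrypt(v)   # [[v]]

enc_sum = enc_u + enc_v         # homomorphic addition: [[u + v]]
enc_scaled = k * enc_u          # plaintext scalar multiplication: [[k * u]]

print(private_key.decrypt(enc_sum))     # 5.75
print(private_key.decrypt(enc_scaled))  # 14.0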

In federated logistic regression, the loss function is first approximated by a polynomial (Taylor) expansion so that it can be evaluated under Paillier. The gradient of the approximated loss L with respect to the parameter $\theta$ is:

$$\frac{\partial L}{\partial \theta} = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{1}{4}\theta^{\top} x_i - \frac{1}{2} y_i \right) x_i$$

The corresponding encrypted gradient is:

$$\left[\left[ \frac{\partial L}{\partial \theta} \right]\right] = \frac{1}{n} \sum_{i=1}^{n} \left( \frac{1}{4} \left[\left[ \theta^{\top} \right]\right] x_i + \frac{1}{2} [[-1]]\, y_i \right) x_i$$

This expression involves only additions and multiplications by plaintext values, which is exactly what Paillier supports, so Paillier is well suited to computing gradients of a loss function that has been polynomially approximated.
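
For context, the plaintext gradient above follows from the second-order Taylor expansion of the logistic loss around $\theta^{\top}x = 0$, with labels $y_i \in \{-1, +1\}$ (a standard derivation, not spelled out in the original post):

$$\ell_i(\theta) = \log\left(1 + e^{-y_i \theta^{\top} x_i}\right) \approx \log 2 - \frac{1}{2} y_i \theta^{\top} x_i + \frac{1}{8} \left(\theta^{\top} x_i\right)^2$$

using $y_i^2 = 1$, and therefore

$$\frac{\partial \ell_i}{\partial \theta} \approx \frac{1}{4}\left(\theta^{\top} x_i\right) x_i - \frac{1}{2} y_i x_i = \left( \frac{1}{4}\theta^{\top} x_i - \frac{1}{2} y_i \right) x_i$$

Averaging over the $n$ samples gives the formula for $\partial L / \partial \theta$ above.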

2. Implementing the Homomorphic Encryption Defense

2.1 Defining the Model

We first define a custom model class, LR_Model, to make encryption and decryption of the weights convenient. The code is annotated with comments; see the code for details.

models.py

import numpy as np

# encrypt every element of a 1-D vector with the Paillier public key
def encrypt_vector(public_key, x):
	return [public_key.encrypt(i) for i in x]

# encrypt a matrix row by row
def encrypt_matrix(public_key, x):
	ret = []
	for r in x:
		ret.append(encrypt_vector(public_key, r))
	return ret

# decrypt every element of a 1-D vector with the Paillier private key
def decrypt_vector(private_key, x):
	return [private_key.decrypt(i) for i in x]

# decrypt a matrix row by row
def decrypt_matrix(private_key, x):
	ret = []
	for r in x:
		ret.append(decrypt_vector(private_key, r))
	return ret
		
class LR_Model(object):
	def __init__(self, public_key, w_size=None, w=None, encrypted=False):
		# w_size: number of weight parameters (used to initialize random weights)
		# w: existing weights passed in directly; only one of w and w_size needs to be given
		# encrypted: whether the weights passed in are plaintext or already encrypted
		self.public_key = public_key
		if w is not None:
			self.weights = w
		else:
			self.weights = np.random.uniform(-0.5, 0.5, (w_size,))
		
		if encrypted==False:
			self.encrypt_weights = encrypt_vector(public_key, self.weights)
		else:
			self.encrypt_weights = self.weights	
			
	# update the encrypted weight vector element by element
	def set_encrypt_weights(self, w):
		for id, e in enumerate(w):
			self.encrypt_weights[id] = e 
		
	# update the plaintext weight vector element by element
	def set_raw_weights(self, w):
		for id, e in enumerate(w):
			self.weights[id] = e 
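
For reference, a minimal sketch of how LR_Model could be used together with the python-paillier (phe) library; in this project the key pair is generated by the Server class in Section 2.3, so the standalone key generation here is only illustrative:

from phe import paillier
import models

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

# a plaintext model for 5 features plus 1 bias term; the weights are encrypted on construction
m = models.LR_Model(public_key=pub, w_size=6)
print(m.weights)                                        # random plaintext weights
print(models.decrypt_vector(priv, m.encrypt_weights))   # decrypting recovers the same values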
			

2.2 Local Model Training (Client Side)

During local training on the client, the model weights stay encrypted the whole time; the procedure is shown below. The code is annotated with comments; see the code for details.

client.py

import models
import numpy as np

class Client(object):
	def __init__(self, conf, public_key, weights, data_x, data_y):
		self.conf = conf
		self.public_key = public_key
		self.local_model = models.LR_Model(public_key=self.public_key, w=weights, encrypted=True)
		self.data_x = data_x
		self.data_y = data_y
		
	def local_train(self, weights):
		# keep the global weights sent by the server so the update (delta) can be computed at the end
		original_w = weights
		# overwrite the local model's encrypted weights with the global weights
		self.local_model.set_encrypt_weights(weights)
		# encrypted constant -1, needed because the labels enter the encrypted gradient with a minus sign
		neg_one = self.public_key.encrypt(-1)
		
		for e in range(self.conf["local_epochs"]):
			print("start epoch ", e)
			# in each epoch, randomly sample a batch of batch_size training examples
			idx = np.arange(self.data_x.shape[0])
			batch_idx = np.random.choice(idx, self.conf['batch_size'], replace=False)
			x = self.data_x[batch_idx]
			# append a constant-1 column so that the last weight acts as the bias term
			x = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
			y = self.data_y[batch_idx].reshape((-1, 1))
			# compute the gradient directly on the encrypted weights (only additions and plaintext multiplications)
			batch_encrypted_grad = x.transpose() * (0.25 * x.dot(self.local_model.encrypt_weights) + 0.5 * y.transpose() * neg_one)
			encrypted_grad = batch_encrypted_grad.sum(axis=1) / y.shape[0]
			
			for j in range(len(self.local_model.encrypt_weights)):
				self.local_model.encrypt_weights[j] -= self.conf["lr"] * encrypted_grad[j]

		# return the encrypted model update: trained local weights minus the original global weights
		weight_accumulators = []
		for j in range(len(self.local_model.encrypt_weights)):
			weight_accumulators.append(self.local_model.encrypt_weights[j] - original_w[j])
		
		return weight_accumulators
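
The encrypted-gradient line above works because numpy will broadcast arithmetic over lists or arrays of phe EncryptedNumber objects: EncryptedNumber supports addition with other ciphertexts and multiplication by plaintext numbers, which is all the formula needs. A small illustrative sketch of this behaviour (the numbers are arbitrary):

from phe import paillier
import numpy as np

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

x = np.array([[1.0, 2.0], [3.0, 4.0]])          # plaintext features
enc_w = [pub.encrypt(0.5), pub.encrypt(-0.25)]  # encrypted weight vector

enc_scores = x.dot(enc_w)                       # each entry is an EncryptedNumber
print([priv.decrypt(s) for s in enc_scores])    # [0.0, 0.5]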

2.3 Generating the Public and Private Keys (Server Side)

The server generates the Paillier key pair, aggregates the encrypted updates uploaded by the clients, and decrypts the global weights only when evaluating the model. The code is annotated with comments; see the code for details.

server.py

import models
import numpy as np
# assuming the python-paillier package (pip install phe); if the project ships its own paillier.py module, keep "import paillier" instead
from phe import paillier

class Server(object):
	# generate the Paillier key pair as class attributes: the public key encrypts, the private key decrypts
	public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
	def __init__(self, conf, eval_dataset):
		self.conf = conf 
		# feature_num + 1 weights: the extra weight is the bias for the constant-1 column appended to the features
		self.global_model = models.LR_Model(public_key=Server.public_key, w_size=self.conf["feature_num"]+1)
		self.eval_x = eval_dataset[0]
		self.eval_y = eval_dataset[1]
		
	def model_aggregate(self, weight_accumulator):
		# add each encrypted client update to the encrypted global weights, scaled by conf["lambda"]
		for id, data in enumerate(self.global_model.encrypt_weights):
			update_per_layer = weight_accumulator[id] * self.conf["lambda"]
			self.global_model.encrypt_weights[id] = self.global_model.encrypt_weights[id] + update_per_layer
	
	def model_eval(self):
		correct = 0
		dataset_size = 0
		batch_num = int(self.eval_x.shape[0]/self.conf["batch_size"])
		# evaluation runs in plaintext: decrypt the global weights with the private key
		self.global_model.weights = models.decrypt_vector(Server.private_key, self.global_model.encrypt_weights)
		print(self.global_model.weights)
	
		for batch_id in range(batch_num):
			x = self.eval_x[batch_id*self.conf["batch_size"] : (batch_id+1)*self.conf["batch_size"]]
			x = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
			y = self.eval_y[batch_id*self.conf["batch_size"] : (batch_id+1)*self.conf["batch_size"]].reshape((-1, 1))

			dataset_size += x.shape[0]
			wxs = x.dot(self.global_model.weights)
			# sigmoid outputs, thresholded at 0.5 and mapped to the label set {-1, +1}
			pred_y = [1.0 / (1 + np.exp(-wx)) for wx in wxs]
			pred_y = np.array([1 if pred > 0.5 else -1 for pred in pred_y]).reshape((-1, 1))
			correct += np.sum(y == pred_y)

		acc = 100.0 * (float(correct) / float(dataset_size))
		return acc
        
	# re-encrypt a weight vector:
	# first decrypt it with the private key, then encrypt it again with the public key
	@staticmethod
	def re_encrypt(w):
		return models.encrypt_vector(Server.public_key, models.decrypt_vector(Server.private_key, w))
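
The original post stops at the three modules above; to see how they fit together, here is a minimal driver sketch. The conf keys match the ones referenced in the code, but the dataset loading (load_data), the number of clients, and the number of global rounds are illustrative assumptions:

main.py (illustrative sketch)

import numpy as np
import models
from server import Server
from client import Client

conf = {
	"feature_num": 30,     # assumed feature count; must match the training data
	"batch_size": 32,
	"local_epochs": 2,
	"lr": 0.1,
	"lambda": 0.5,
}

# hypothetical helper: returns training data and an evaluation set, with labels in {-1, +1}
train_x, train_y, eval_x, eval_y = load_data()

server = Server(conf, (eval_x, eval_y))
# a single client for brevity; pass a copy of the encrypted global weights so the client does not alias the server's list
clients = [Client(conf, Server.public_key, list(server.global_model.encrypt_weights), train_x, train_y)]

for e in range(3):  # 3 global rounds, chosen arbitrarily
	for c in clients:
		# local training on encrypted weights; the client returns an encrypted update
		diff = c.local_train(server.global_model.encrypt_weights)
		server.model_aggregate(diff)
	# optionally refresh the ciphertexts between rounds (decrypt and re-encrypt on the server)
	server.global_model.encrypt_weights = Server.re_encrypt(server.global_model.encrypt_weights)
	print("round %d, acc = %.2f%%" % (e, server.model_eval()))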
