[Paper Reading Notes] Attack-Resistant Federated Learning with Residual-based Reweighting

Personal reading notes; corrections are welcome if you spot any errors.

arXiv 2019        [1912.11464] Attack-Resistant Federated Learning with Residual-based Reweighting (arxiv.org)

Problem:

        Federated learning is vulnerable to backdoor attacks.

Contributions:

        Proposes a residual-based reweighting aggregation algorithm.

        The aggregation algorithm combines repeated median regression with the reweighting scheme of weighted least squares.

Method:

[Image 1]

        1) Fit the regression line y=\beta_{n0}+\beta_{n1}x with the repeated median estimator.
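Step 1 can be sketched as follows. This is Siegel's repeated median estimator: the slope is the median, over each point, of the median slope to every other point, which makes the fit robust to a large fraction of outliers. The helper name `repeated_median_line` is mine, not the paper's:

```python
import numpy as np

def repeated_median_line(x, y):
    """Siegel's repeated median fit of y = beta0 + beta1 * x.
    Robust to outliers: tolerates up to ~50% contaminated points."""
    n = len(x)
    med_slopes = [
        np.median([(y[j] - y[i]) / (x[j] - x[i]) for j in range(n) if j != i])
        for i in range(n)
    ]
    beta1 = np.median(med_slopes)          # median of per-point median slopes
    beta0 = np.median(y - beta1 * x)       # robust intercept
    return beta0, beta1
```

With four points on the line y = 1 + 2x and one gross outlier, the fit still recovers slope 2 and intercept 1.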

[Image 2]

        2) Compute the residual r_n of the n-th parameter across the local models. Since r_n is not directly comparable across different parameters, standardize it.
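A rough sketch of this standardization, using a median-based scale (MAD) as a stand-in for the paper's exact normalization (which also involves the leverage term h_{kk} from step 3); `normalized_residuals` is a hypothetical name:

```python
import numpy as np

def normalized_residuals(x, y, beta0, beta1):
    """Residuals of parameter values y against the fitted line,
    standardized by a robust scale so residuals of different
    parameters become comparable. The MAD-based scale here is an
    assumption, not the paper's exact formula."""
    r = y - (beta0 + beta1 * x)                  # vertical distances
    mad = np.median(np.abs(r - np.median(r)))    # robust spread estimate
    return r / (1.4826 * mad + 1e-12)            # 1.4826: MAD -> std for Gaussians
```

A parameter far off the fitted line then stands out with a normalized residual orders of magnitude larger than the inliers'.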

        3) Parameter confidence. Here w_n^{(k)} is the n-th parameter of local model M^{(k)}, and \Psi(x)=max(-Z,min(Z,x)) with Z=\lambda\sqrt{2/K}, where \lambda is a hyperparameter (set to 2 in the paper) that adjusts the size of the confidence interval; h_{kk} is the k-th diagonal element of H_n.
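The clipping function \Psi as written in this step is straightforward to express in code:

```python
import math

def psi(x, K, lam=2.0):
    """Psi(x) = max(-Z, min(Z, x)) with Z = lam * sqrt(2 / K),
    where lam is the hyperparameter (2 in the paper) and K is the
    number of participating clients."""
    Z = lam * math.sqrt(2.0 / K)
    return max(-Z, min(Z, x))
```

For example, with K = 8 and lam = 2, Z = 2 * sqrt(2/8) = 1, so inputs are clamped to [-1, 1].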

[Image 3]

        4) Extreme value correction. A sufficiently large malicious value can still threaten the global model even after being multiplied by a very small weight, so a threshold \delta is introduced: if a parameter's confidence falls below \delta, it is corrected with the equation below.

        5) Local model weights. Aggregation is performed by measuring the standard deviation of each parameter.

[Image 4]

        6) Global model.

[Image 5]

        Overall idea: each parameter is reweighted according to its vertical distance (residual) to the fitted regression line, and each local model's weight is then estimated by accumulating the confidences of the parameters in that local model.
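The overall flow above can be sketched end to end. This is a simplified sketch under stated assumptions, not the authors' exact algorithm: it uses rank positions as the regression x-axis, a MAD scale in place of the paper's normalization with h_{kk}, and omits the extreme-value correction of step 4; all function names are hypothetical:

```python
import numpy as np

def repeated_median_line(x, y):
    # Step 1: robust line fit (Siegel's repeated median).
    n = len(x)
    meds = [np.median([(y[j] - y[i]) / (x[j] - x[i]) for j in range(n) if j != i])
            for i in range(n)]
    b1 = np.median(meds)
    return np.median(y - b1 * x), b1

def reweighted_aggregate(params, lam=2.0):
    """params: (K, d) array, one row of flattened parameters per client.
    Per parameter: fit a repeated-median line over the sorted client
    values, clip normalized residuals into a confidence in (0, 1],
    then take the confidence-weighted average (steps 1-3, 5-6)."""
    K, d = params.shape
    Z = lam * np.sqrt(2.0 / K)
    conf = np.zeros((K, d))
    for n in range(d):
        order = np.argsort(params[:, n])
        x = np.arange(1, K + 1, dtype=float)     # rank positions (assumption)
        y = params[order, n]
        b0, b1 = repeated_median_line(x, y)
        r = y - (b0 + b1 * x)                    # step 2: residuals
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        e = np.abs(r) / scale                    # standardized residuals
        conf[order, n] = np.where(e <= Z, 1.0, Z / e)  # Psi-style clipping
    w = conf / conf.sum(axis=0, keepdims=True)   # normalize per parameter
    return (w * params).sum(axis=0)              # step 6: weighted average
```

With four honest clients near 1.0 and one poisoned value of 10.0, the reweighted aggregate stays near 1.0, whereas the plain mean would be 2.8.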

[Image 6]

Experiments:

[Images 7-11: experimental results]

Takeaways:

        Strengths:

                Can defend against backdoor attacks that amplify gradients.

                Can defend against label-flipping attacks.

        Limitations:

                Sacrifices some privacy and is incompatible with secure aggregation strategies.
