Paper: Model-Protected Multi-Task Learning
Keywords: Task analysis; Covariance matrices; Privacy; Security; Data models; Resource management; Multi-task learning; model protection; differential privacy; covariance matrix; low-rank subspace learning
Machine learning
Multi-task learning
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together.
In contrast, in single-task learning (STL) each individual task is learned independently.
MTL often yields better-trained models because it can leverage the commonalities among related tasks.
However, because MTL algorithms can “leak” information between the models of different tasks, MTL poses a potential security risk.
Specifically, an adversary may participate in the MTL process through one task and thereby acquire the model information for another task.
Previously proposed privacy-preserving MTL methods protect data instances rather than models, and some of them may underperform STL methods.
In this paper, we propose a privacy-preserving MTL framework that prevents each model's information from leaking to the other models, based on a perturbation of the covariance matrix of the model matrix.
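To make the covariance-perturbation idea concrete, here is a minimal sketch of what such a step could look like: the per-task models are stacked into a model matrix, their covariance is formed, and symmetric noise calibrated to a privacy budget is added before the covariance is shared across tasks. The Gaussian-style noise, the sensitivity bound, and the function name perturb_model_covariance are illustrative assumptions rather than the paper's exact mechanism.

```python
import numpy as np

def perturb_model_covariance(W, epsilon, delta, sensitivity=1.0, rng=None):
    """Release a noisy covariance of the model matrix for sharing across tasks.

    W             : (d, m) array whose m columns are the per-task model vectors.
    epsilon/delta : privacy budget for this release.
    sensitivity   : assumed bound on the norm of any single task's model column.

    Sketch only: uses Gaussian-mechanism-style noise on W @ W.T; the paper's
    exact noise distribution and calibration may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = W.shape[0]

    cov = W @ W.T  # d x d covariance of the stacked task models

    # Noise scale for a Gaussian-style mechanism, assuming (illustratively)
    # that replacing one task's column changes the covariance by at most
    # about sensitivity**2 in Frobenius norm.
    sigma = sensitivity ** 2 * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

    noise = rng.normal(scale=sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0  # keep the released matrix symmetric
    return cov + noise
```

A downstream MTL step would then consume only the perturbed covariance, for example by projecting each task's model onto the top eigen-subspace of the noisy matrix to encourage a shared low-rank structure, so that no task ever observes another task's unperturbed model.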
We instantiate the framework with two popular MTL approaches, namely learning the low-rank and the group-sparse patterns of the model matrix.
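For readers unfamiliar with these two patterns, the formulations below are the objectives conventionally meant by them, with the model matrix W = [w_1, ..., w_m] stacking the m task models column-wise and L_i denoting task i's empirical loss; the abstract does not spell out the paper's exact losses or regularizers, so take these as the standard forms.

```latex
% Low-rank pattern: couple the tasks through a trace (nuclear) norm on W.
\min_{W} \; \sum_{i=1}^{m} \mathcal{L}_i(w_i) \;+\; \lambda \,\lVert W \rVert_{*}

% Group-sparse pattern: shared feature selection via the \ell_{2,1} norm,
% i.e. the sum of the Euclidean norms of the rows of W.
\min_{W} \; \sum_{i=1}^{m} \mathcal{L}_i(w_i) \;+\; \lambda \,\lVert W \rVert_{2,1}
```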
Our algorithms can be guaranteed not to underperform STL methods.
We build our methods upon tools from differential privacy; privacy guarantees and utility bounds are provided, and heterogeneous privacy budgets are considered.
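For reference, the guarantee referred to here is the standard (epsilon, delta)-differential-privacy definition; reading it in the model-protection setting, where neighboring inputs differ in a single task's contribution and each task may carry its own budget, is an interpretation of the abstract rather than a statement from the paper.

```latex
% A randomized mechanism M is (\epsilon, \delta)-differentially private if,
% for all neighboring inputs D, D' and every measurable set of outputs S,
\Pr\!\left[ M(D) \in S \right] \;\le\; e^{\epsilon}\, \Pr\!\left[ M(D') \in S \right] + \delta
% Under heterogeneous budgets, each task i would be protected with its own
% pair (\epsilon_i, \delta_i).
```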
The experiments demonstrate that, on the proposed model-protection problem, our algorithms outperform the baseline methods constructed from existing privacy-preserving MTL methods…
Authors: Jian Liang, Ziqi Liu, Jiayu Zhou, Xiaoqian Jiang, Changshui Zhang, Fei Wang