[Paper Reading] QUEEN: Query Unlearning against Model Extraction (2024)


Abstract

Model extraction attacks currently pose a non-negligible threat to the security and privacy of deep learning models. By querying the model with a small dataset and using the query results as ground-truth labels, an adversary can steal a piracy model whose performance is comparable to that of the original model. Two key issues cause this threat: on the one hand, the adversary can obtain accurate and unlimited query results; on the other hand, the adversary can aggregate the query results to train the piracy model step by step. The existing defenses usually…
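To make the attack surface concrete, below is a minimal sketch (not from the paper) of the extraction loop the abstract describes: the adversary queries the victim model with its own small dataset, treats the returned predictions as ground-truth labels, and aggregates them to train a surrogate step by step. The names `victim_model`, `surrogate`, and `query_loader` are hypothetical placeholders, and hard top-1 labels are assumed for simplicity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def extract_model(victim_model: nn.Module,
                  surrogate: nn.Module,
                  query_loader: torch.utils.data.DataLoader,
                  epochs: int = 10,
                  lr: float = 1e-3,
                  device: str = "cpu") -> nn.Module:
    """Train a piracy (surrogate) model using the victim's answers as labels (illustrative sketch)."""
    victim_model.to(device).eval()
    surrogate.to(device).train()
    optimizer = torch.optim.Adam(surrogate.parameters(), lr=lr)

    for _ in range(epochs):
        for inputs, _ in query_loader:   # adversary's own inputs; true labels are never used
            inputs = inputs.to(device)

            # 1. Query the victim and take its top-1 prediction as the "ground truth".
            with torch.no_grad():
                pseudo_labels = victim_model(inputs).argmax(dim=1)

            # 2. Aggregate the query results to train the surrogate step by step.
            optimizer.zero_grad()
            loss = F.cross_entropy(surrogate(inputs), pseudo_labels)
            loss.backward()
            optimizer.step()

    return surrogate
```

In this sketch the two key issues from the abstract map directly onto the code: step 1 relies on accurate, unlimited query results, and step 2 is the gradual aggregation that QUEEN's query unlearning aims to disrupt.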
