[Paper Reading] APMSA: Adversarial Perturbation Against Model Stealing Attacks (2023)


Abstract

Training a Deep Learning (DL) model requires proprietary data and computing-intensive resources. To recoup their training costs, a model provider can monetize DL models through Machine Learning as a Service (MLaaS). Generally, the model is deployed in the cloud, while a publicly accessible Application Programming Interface (API) is provided for paid queries. However, model stealing attacks have posed a security threat to this monetization scheme, as they steal the model and thus avoid paying for future extensive queries. Specifically, an adversary queries the targeted model to obtain input-output pairs.
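The query-based stealing process described above can be sketched as follows. This is a minimal illustrative simulation, not the paper's method: the "victim" model, its weights, and the substitute-training procedure (one-vs-rest least squares) are all hypothetical stand-ins for a real MLaaS endpoint and attacker pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "victim": the attacker can only query its API
# for predicted labels, never inspect the weights (here, a linear model).
W_victim = rng.normal(size=(2, 3))  # 2 input features -> 3 classes

def victim_api(x):
    """Simulated MLaaS endpoint: returns only hard class labels."""
    return np.argmax(x @ W_victim, axis=1)

# Step 1: the adversary issues queries and records input-output pairs.
queries = rng.normal(size=(2000, 2))
labels = victim_api(queries)

# Step 2: train a substitute model on the stolen pairs
# (least squares against one-hot targets, kept simple on purpose).
onehot = np.eye(3)[labels]
W_sub, *_ = np.linalg.lstsq(queries, onehot, rcond=None)

def substitute(x):
    return np.argmax(x @ W_sub, axis=1)

# Step 3: measure agreement with the victim on fresh inputs; high
# agreement means the model's functionality was effectively stolen,
# without paying for any further queries.
test = rng.normal(size=(1000, 2))
agreement = np.mean(substitute(test) == victim_api(test))
print(f"substitute/victim agreement: {agreement:.2%}")
```

Defenses such as APMSA aim to break exactly this loop: by perturbing the outputs returned at the API, the input-output pairs the adversary collects become less useful for training an accurate substitute.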
