Some reference articles and notes on adversarial examples

This post records the notes I took while watching Ian Goodfellow's videos on YouTube in preparation for writing a popular-science article introducing adversarial examples, along with links to technical blogs I came across while researching adversarial examples (mainly) and GANs.

The notes were jotted down casually.

Collected material

PhD defense

In his 2014 PhD defense, Ian concluded that 'generative models are useful for missing value problems' and for unsupervised learning.

Explaining maximum likelihood estimation:
A model assigns probabilities to events, and its parameters determine which probabilities it assigns to different events. MLE chooses the parameters that maximize the probability of the events observed in the dataset.
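The MLE idea above can be sketched numerically. This is a hypothetical toy (not from the talk): fitting the mean of a unit-variance Gaussian by scanning candidate parameters; the maximizer should match the closed-form answer, the sample mean.

```python
import numpy as np

# Hypothetical toy: MLE for the mean of a Gaussian with known variance 1.
# The log-likelihood of data under N(mu, 1) is maximized when mu equals
# the sample mean, so the numeric optimum should match np.mean(data).
data = np.array([1.2, 0.8, 1.5, 0.9, 1.1])

def neg_log_likelihood(mu, x):
    # Negative log-likelihood of x under N(mu, 1), dropping constants.
    return 0.5 * np.sum((x - mu) ** 2)

# Evaluate a grid of candidate parameters and pick the likelihood maximizer.
candidates = np.linspace(0.0, 2.0, 2001)
nlls = [neg_log_likelihood(mu, data) for mu in candidates]
mu_hat = candidates[int(np.argmin(nlls))]

print(mu_hat)       # close to the sample mean
print(data.mean())
```

The grid search stands in for the gradient-based optimization a real model would use; the point is only that MLE means "pick the parameter under which the observed data is most probable."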

Explaining gradient descent:
To minimize an objective/loss function: at a given point, if the derivative is positive, step in the direction opposite to the derivative, and repeat until the derivative is 0.
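A minimal sketch of that rule on the toy function f(x) = (x - 3)^2 (the function, starting point, and learning rate here are hypothetical choices for illustration):

```python
# Gradient descent on f(x) = (x - 3)^2, whose derivative is
# f'(x) = 2 * (x - 3). Repeatedly step against the derivative
# until it is (numerically) zero; the minimum is at x = 3.
def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0    # arbitrary starting point
lr = 0.1   # learning rate
for _ in range(200):
    x -= lr * grad(x)   # move opposite to the derivative

print(x)  # converges to 3.0
```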

At the end: 'over the next five years, hopefully we'll learn how to do well without needing so much data, and we'll be able to leverage the unsupervised learning techniques that have somewhat fallen out of fashion but haven't yet fully reached their potential.'

Udacity talk

Ian's part of the talk starts at 3:50.

Adversarial ML.
Current ML assumes the training data and test data come from the same distribution. This assumption implies that 'there's no opportunity for someone to interfere with the operation of the model'.

When no one interferes with the data, many applications are already good enough, even human-level. The goal is the harder problem: making ML work well even when someone is attacking it.

Introducing adversarial examples: they are pervasive (effective against many different models) and robust (they still work in the real world, even in photos taken by a camera under varying angles and lighting, or in images compressed to formats like JPEG).

Ian has spent a lot of time training ML systems (usually classifiers) to be harder for an attacker to break.

Proposing GANs from an interesting angle: train a classifier to decide whether an input is real or fake, then try to fool that classifier.

Equilibrium state: the GAN must generate images indistinguishable from the training data.

During training, this process also helps make the classifier more accurate.

Siraj to Ian: how did you come up with the idea for GANs (and how do you generate research ideas in general)? (11:00-)

Ian: the origin of GANs. Short story: an argument with a friend in a bar.

Long story: while discussing a speech synthesis contest with his advisor Yoshua, they wanted a way to evaluate each entry of the contest. One option was to write a function measuring how likely certain data points are; that is, a good speech synthesizer would assign high likelihood to the test data. Ian had an idea: use a discriminator network to judge whether the test data is real or fake. That introduced the 'versus' notion of fooling it, and at the time he happened to be reading a lot about adversarial examples. In the end they never ran the contest, but the idea stuck. What clicked in the bar was having D and G learn simultaneously (the original idea had been a fixed D).

General advice on getting research ideas: it's important to talk to a lot of different people and think about a lot of different things. 'I've never really accomplished anything all that great by just sitting down and putting my nose to the grindstone and doing my main project. It's important to do that main project because it sets you up for doing something great, but don't expect it. While you are working hard on your main project, you will learn a lot about the field and get a good idea of what all the difficult problems are. And later you will be talking to someone else about something completely unrelated, and you'll figure out how to solve the problem that comes up in that conversation.'

This talk has plenty of substance and is quite useful (the 30min+ portion covers NLP).

Talk with Andrew Ng

About GANs: some details of the origin story. That night in the bar, Ian was discussing the model with friends and suddenly said: you need to do this, this, and this, and it will definitely work. His friends didn't believe him, and besides, he was supposed to be writing the 'flower book' (the Deep Learning textbook) at the time. He went home and coded it in one night, and the first version just worked; luckily, no hyperparameter tuning was needed.

About a near-death experience: he thought he had a brain hemorrhage. While waiting for the MRI results, what came to mind was how much he hoped someone would carry out his ideas, and he realized that his machine learning research is one of the most important things in his life.

Future of GANs: application areas include semi-supervised learning, generative models for creative work, simulating scientific experiments, and so on.
'It can be more of an art than a science to really bring that performance out of them' (a bit like how people viewed DL ten years ago, when Boltzmann machines and deep belief nets were finicky; ReLU and batch norm are what made DL reliable). If GANs can be made as reliable as DL, we will keep seeing them applied in these areas; if GANs cannot be stabilized, other generative models will replace them.

On adversarial examples: ML security. It's important to build security into a new technology near the start of its development.

cs231n lecture

You need to understand gradient descent and backpropagation.

Ian mentioned a colleague's experiment: while trying to understand convnets, they tried to turn a car into an airplane by gradient ascent on the log probability that the input is an airplane, pushing the image toward the airplane class. (The gradient with respect to the input image can be computed by backpropagation; it's the same computation as the gradient with respect to the parameters.) They expected the image to change visibly, say the background turning blue or the car growing wings, and thereby to learn how convnets work; instead, the image looked essentially unchanged yet was classified as an airplane.
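The point that input gradients come from the same machinery as parameter gradients can be sketched with a toy linear score (the weights and input below are hypothetical stand-ins, not the actual convnet experiment):

```python
import numpy as np

# Toy sketch: for a linear score s(x) = w @ x, the gradient with respect
# to the *input* x is simply w, obtained by the same chain rule that
# yields gradients for parameters. Gradient ascent on the "airplane"
# score nudges the input toward that class.
rng = np.random.default_rng(0)
w_airplane = rng.normal(size=8)   # hypothetical class weights
x = rng.normal(size=8)            # hypothetical input "image"

score_before = w_airplane @ x
x_new = x + 0.1 * w_airplane      # ascend the score: d(score)/dx = w
score_after = w_airplane @ x_new

print(score_after > score_before)  # True: the score strictly increases
```

For a deep network, w would be replaced by the backpropagated gradient of the class log-probability with respect to the pixels, but the ascent step has exactly this shape.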

Not only can you fool neural nets; you can also fool logistic regression, softmax regression, SVMs, decision trees, and nearest neighbors.

Why does this happen?
The original hypothesis was overfitting: adversarial examples would be points the training set doesn't cover, and thus get misclassified. But that turns out to be wrong, because one adversarial example has the same effect on many different models.

The current view is that it's actually closer to underfitting: adversarial examples come from the model's linearity. (This makes it natural to see adversarial training as a form of regularization.)

How are adversarial examples constructed? A true adversarial example should be misclassified from its original class into another class while the change to the input stays small. The model's linearity can be used to build an attack: the Fast Gradient Sign Method (FGSM).
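A minimal FGSM sketch on a logistic-regression model with hypothetical (untrained) weights: the perturbation is epsilon times the elementwise sign of the input gradient of the loss, so each feature changes by at most epsilon while, for a linear model, the loss provably increases.

```python
import numpy as np

# FGSM sketch: x_adv = x + epsilon * sign(grad_x loss).
# For logistic regression with loss -log sigmoid(y * w @ x), the input
# gradient is grad_x = -y * sigmoid(-y * w @ x) * w, whose elementwise
# sign is sign(-y * w).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=16)          # hypothetical model weights
x = rng.normal(size=16)          # input example
y = 1.0 if w @ x > 0 else -1.0   # label = model's current prediction

grad_x = -y * sigmoid(-y * (w @ x)) * w   # loss gradient w.r.t. the input
x_adv = x + 0.25 * np.sign(grad_x)        # one FGSM step, epsilon = 0.25

# Each feature moved by at most epsilon, yet the margin y * (w @ x)
# shrank by exactly epsilon * sum(|w_i|), i.e. the loss went up.
print(y * (w @ x) > y * (w @ x_adv))  # True
```

This is exactly the linearity argument from the lecture: a tiny per-feature change, aligned with the sign of the gradient, accumulates across many dimensions into a large change in the score.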

Is the Clever Hans story a good opening?

How to defend?
Defense is hard. Generative models don't really solve it either; this part is subtle (1:00:00-). Adversarial training can resist attacks built on a particular objective function, but it doesn't generalize well to other attacks.
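A minimal sketch of the adversarial-training idea (hypothetical linearly separable data, logistic regression, FGSM as the inner attack; real adversarial training uses stronger attacks and deep models):

```python
import numpy as np

# Adversarial training sketch: at each step, perturb the batch with FGSM
# against the current model, then take the gradient step on the
# *perturbed* batch. This regularizes the model, but only hardens it
# against this particular attack.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])   # hypothetical ground truth
y = np.where(X @ true_w > 0, 1.0, -1.0)

w = np.zeros(4)
eps, lr = 0.1, 0.5
for _ in range(300):
    # FGSM per example: the input-gradient sign is sign(-y_i * w).
    X_adv = X + eps * np.sign(-y[:, None] * w[None, :])
    # Logistic-loss gradient w.r.t. w, averaged over the perturbed batch.
    m_adv = y * (X_adv @ w)
    grad_w = -(sigmoid(-m_adv) * y) @ X_adv / len(y)
    w -= lr * grad_w

acc = np.mean(np.where(X @ w > 0, 1.0, -1.0) == y)
print(acc)  # clean accuracy stays high despite training on perturbed data
```

For a linear model the perturbed margin works out to y(w @ x) - eps * ||w||_1, which makes the regularization interpretation from the lecture explicit: adversarial training penalizes large weights inside the margin.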

Virtual adversarial training: used for semi-supervised learning.

Promising: a universal engineering machine (model-based optimization). If we could solve adversarial examples, we could solve model-based optimization: write a function describing something that doesn't exist yet, then let gradient descent and a neural network design it automatically. New genes, drugs, chips, and so on. (elaborated at 1:16:00)

Summary
1. Attacking is much easier than defending.
2. Adversarial training can be used for regularization and for semi-supervised learning.
3.

References

Videos:
1. Ian Goodfellow PhD Defense Presentation
2. On Deep Learning with Ian Goodfellow, Andrew Trask, Kelvin Lwin, Siraj Raval and the Udacity Team
3. Heroes of Deep Learning: Andrew Ng interviews Ian Goodfellow
4. Lecture 16 | Adversarial Examples and Adversarial Training
5. Generative Adversarial Networks (GANs) - Computerphile
6. [Zhihu: fancy GAN applications]

Blogs:
1. 一文详解深度神经网络中的对抗样本与学习 (a detailed introduction to adversarial examples and learning in deep neural networks)
2. Attacking Machine Learning with Adversarial Examples
3. Breaking Linear Classifiers on ImageNet
4. Adversarial examples in deep learning
5. cleverhans-blog
6. Know Your Adversary: Understanding Adversarial Examples (Part 1/2)
7. The Modeler Strikes Back: Defense Strategies Against Adversarial Attacks (Part 2/2)
8. Image Completion with Deep Learning in TensorFlow
9. GAN: A Beginner’s Guide to Generative Adversarial Networks
10.
11. Glow: Better Reversible Generative Models
