[Machine Learning Notes] Interpretable Machine Learning (Deep Learning): Study Notes


Contents

  • [Machine Learning Notes] Interpretable Machine Learning (Deep Learning): Study Notes
    • Main References
    • 0 Prerequisites
      • 0.1 CV Basics
      • Visualize Convolutional Neural Networks (CNN)
    • Reference

Main References

  • Stanford CS231N, taught by Fei-Fei Li, Jiajun Wu, and Ruohan Gao
    • Course Notes
    • Course Slides
    • Course Work
    • YouTube Videos
    • Chinese video walkthrough (by 同济子豪兄)
    • Zhihu column
  • Interpretable Machine Learning
  • Interpretable Machine Learning Open Course by Datawhale
    • Bilibili videos

0 Prerequisites

0.1 CV Basics

Based on Lecture 1 of CS231N

  1. History of computer vision
    1. Cambrian explosion: the first eyes evolve
    2. Optics: from Mozi's pinhole imaging to da Vinci's camera obscura
    3. Hubel & Wiesel, Harvard, 1959: how the visual cortex responds to images
    4. Larry Roberts, 1963: Block World, edge detection
    5. MIT AI Group, 1966: the Summer Vision Project (Project MAC)
    6. Stages of Visual Representation, David Marr, 1970s
    7. Generalized Cylinder and Pictorial Structure (cylinders-and-springs models), Stanford, 1970s
    8. Normalized Cut, Shi & Malik: image segmentation
    9. Face detection with AdaBoost, Viola & Jones, 2001
    10. 2000s: hand-engineered features for decoding images
      1. SIFT features
      2. Spatial pyramid features
      3. Histogram of oriented gradients (HOG)
    11. 2010s: ImageNet
      1. 2012 AlexNet
      2. 2014 VGG → 2014 GoogLeNet → 2015 ResNet → now
  2. Applications of computer vision
    1. Algorithms + data + compute
    2. Semantic segmentation
    3. Image captioning
    4. Human pose estimation
    5. 3D reconstruction
    6. Medical imaging, VR/XR

Visualize Convolutional Neural Networks (CNN)

Based on Lecture 8 of CS231N

Visualizing what models have learned

  • Visualize Filters

    • visualize the learned filter weights directly (first-layer filters are small enough to view as image patches)
    • Krizhevsky, “One weird trick for parallelizing convolutional neural networks”, arXiv 2014
    • He et al, “Deep Residual Learning for Image Recognition”, CVPR 2016
    • Huang et al, “Densely Connected Convolutional Networks”, CVPR 2017
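The idea in code: a first-layer filter is just a small weight tensor, so visualizing it only requires rescaling each filter into the displayable [0, 1] range. A minimal numpy sketch, with random weights standing in for a trained network's first layer (the shapes are illustrative, not any specific architecture's):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for trained first-layer conv weights: 16 filters, 3 channels, 7x7
filters = rng.normal(size=(16, 3, 7, 7))

def normalize_for_display(w):
    """Rescale each filter independently to [0, 1] so it can be shown as an RGB patch."""
    out = w.copy()
    for i in range(out.shape[0]):
        lo, hi = out[i].min(), out[i].max()
        out[i] = (out[i] - lo) / (hi - lo + 1e-8)
    return out

tiles = normalize_for_display(filters)
print(tiles.shape, float(tiles.min()), float(tiles.max()))
```

Each normalized filter can then be tiled into the familiar grid-of-patches figure from the lecture slides.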
  • Visualizing final layer features

    • visualize representation space


    • Van der Maaten and Hinton, “Visualizing Data using t-SNE”, JMLR 2008
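t-SNE (cited above) is the standard nonlinear tool for this. As a dependency-free illustration of the same idea, projecting final-layer features into 2-D, here is the simplest linear analogue, PCA via SVD, on random stand-in features:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for final-layer CNN features: 200 images x 512 dimensions
feats = rng.normal(size=(200, 512))

# PCA via SVD: project onto the top-2 principal directions.
# (t-SNE, used in the lecture, is nonlinear; PCA is the linear analogue.)
centered = feats - feats.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T  # (200, 2): one point per image, ready to scatter-plot

print(coords.shape)
```

With real features, nearby points in this 2-D plot correspond to images the network represents similarly.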

  • Visualizing activations

    • Yosinski et al, “Understanding Neural Networks Through Deep Visualization”, ICML DL Workshop 2014.

Understanding input pixels

  • Identifying important pixels
    • visualize patches that maximally activate neurons
    • Mask part of the image before feeding it to the CNN, and check how much the predicted probabilities change
      • Zeiler and Fergus, “Visualizing and Understanding Convolutional Networks”, ECCV 2014
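The masking experiment can be sketched end-to-end with a toy "model" whose prediction depends only on the center of the image; the real experiment uses a CNN's predicted class probability in place of `predict_prob`:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))  # toy grayscale "image"

def predict_prob(x):
    """Hypothetical stand-in for a CNN's predicted class probability:
    it only responds to the brightness of the central region."""
    return x[12:20, 12:20].mean()

base = predict_prob(img)
patch = 8
heatmap = np.zeros((4, 4))
# Slide an 8x8 occluder over the image; record the probability drop at each position
for i in range(4):
    for j in range(4):
        masked = img.copy()
        masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0.0
        heatmap[i, j] = base - predict_prob(masked)

# The largest drops appear where the occluder covers the center -- the pixels
# this "model" actually relies on
print(np.unravel_index(np.argmax(heatmap), heatmap.shape))
```

The resulting heatmap is exactly the occlusion-sensitivity map from Zeiler & Fergus, just at toy scale.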
  • Saliency via backprop
    • Simonyan, Vedaldi, and Zisserman, “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps”, ICLR Workshop 2014.
    • Saliency Maps: Segmentation without supervision
      • Ribeiro et al, ““Why Should I Trust You?” Explaining the Predictions of Any Classifier”, ACM KDD 2016
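A vanilla saliency map is the absolute gradient of the class score with respect to the input pixels. For a toy linear classifier that gradient is available in closed form, which makes the idea concrete (a CNN gets the same quantity from one backward pass):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear "classifier": scores = W @ x over a flattened 8x8 image, 3 classes
W = rng.normal(size=(3, 64))
x = rng.random(64)

c = int(np.argmax(W @ x))  # predicted class
# For a linear model, d(score_c)/d(pixels) is exactly the row W[c];
# the saliency map is the absolute gradient, reshaped back to image shape
saliency = np.abs(W[c]).reshape(8, 8)

print(saliency.shape)
```

Thresholding such a map is what enables the "segmentation without supervision" trick mentioned above.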
  • Guided backprop to generate images
    • Zeiler and Fergus, “Visualizing and Understanding Convolutional Networks”, ECCV 2014

    • Springenberg et al, “Striving for Simplicity: The All Convolutional Net”, ICLR Workshop 2015


    • Intermediate features via (guided) backprop

    • Figure copyright Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller, 2015; reproduced with permission.
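The core of guided backprop is a modified ReLU backward rule: gradient is passed only where the forward input was positive (ordinary backprop) and the incoming gradient is positive (the "guided" part). A small numpy sketch of that rule:

```python
import numpy as np

def guided_relu_backward(grad_out, relu_input):
    """Guided-backprop rule for a ReLU: pass gradient only where
    (a) the forward input was positive AND (b) the incoming gradient is positive."""
    mask = (relu_input > 0) & (grad_out > 0)
    return np.where(mask, grad_out, 0.0)

x = np.array([-1.0, 2.0, 3.0, -4.0])   # forward inputs to the ReLU
g = np.array([0.5, -1.0, 2.0, 3.0])    # gradients arriving from the layer above
print(guided_relu_backward(g, x))      # only position 2 passes both checks
```

Applying this rule at every ReLU during the backward pass is what produces the cleaner visualizations in the Springenberg et al. figures.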

  • Gradient ascent to visualize features
    • Generate a synthetic image that maximally activates a neuron
      • $\arg\max_I \; S_c(I) - \lambda \lVert I \rVert_2^2$
        • $S_c(I)$ is the (unnormalized) score for class $c$
        • The $\lambda \lVert I \rVert_2^2$ term penalizes the L2 norm of the image; in practice the image is also regularized periodically during optimization (e.g. Gaussian blurring, clipping pixels with small values or small gradients to zero)
      • Initialize image to zeros
      • Repeat:
      1. Forward image to compute current scores
      2. Backprop to get gradient of neuron value with respect to image pixels
      3. Make a small update to the image
    • Simonyan, Vedaldi, and Zisserman, “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps”, ICLR Workshop 2014.
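The three-step loop above can be sketched with a toy linear scorer, where convergence can even be checked against the closed-form optimum; the real method backprops through a CNN to get the gradient instead:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear scorer so the gradient is analytic: S_c(I) = w . I
w = rng.normal(size=64)
lam, lr = 0.1, 0.5

I = np.zeros(64)                   # initialize the "image" to zeros
for _ in range(200):
    grad = w - 2 * lam * I         # gradient of S_c(I) - lam * ||I||^2 wrt I
    I += lr * grad                 # ascent step on the image

# For this quadratic objective the optimum is w / (2 * lam); the loop reaches it
print(np.allclose(I, w / (2 * lam)))
```

With a CNN in place of the linear scorer, the same loop yields the synthetic "maximally activating" images from the lecture.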

Adversarial perturbations

  • Fooling Images / Adversarial Examples
    • Moosavi-Dezfooli, Seyed-Mohsen, et al. “Universal adversarial perturbations.” Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
  • (1) Start from an arbitrary image
  • (2) Pick an arbitrary class
  • (3) Modify the image to maximize the score of that class
  • (4) Repeat until the network is fooled
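The four steps can be sketched against a toy linear softmax classifier; gradient ascent on the target class's log-probability plays the role of "modify the image to maximize the class":

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear softmax classifier over flattened 8x8 images, 3 classes
W = rng.normal(size=(3, 64))

def probs(x):
    s = W @ x
    e = np.exp(s - s.max())
    return e / e.sum()

x = rng.random(64)                    # (1) start from an arbitrary image
orig = int(np.argmax(probs(x)))
target = (orig + 1) % 3               # (2) pick an arbitrary wrong class

adv = x.copy()
for _ in range(5000):                 # (3)+(4) modify and repeat until fooled
    p = probs(adv)
    # gradient of log p_target wrt the input, for a linear softmax
    grad = W[target] - p @ W
    adv += 0.01 * grad
    if int(np.argmax(probs(adv))) == target:
        break

print(int(np.argmax(probs(adv))) == target, float(np.abs(adv - x).max()))
```

Against a deep network the striking point is that the perturbation needed to flip the prediction is usually imperceptibly small.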

Style transfer

  • Feature inversion
    • Mahendran and Vedaldi, “Understanding Deep Image Representations by Inverting Them”, CVPR 2015
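Feature inversion reconstructs an image whose features match a target's, typically by gradient descent on a feature-matching loss plus an image prior. A toy sketch with a linear "feature extractor" standing in for a CNN layer:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear "feature extractor" phi(x) = A @ x, standing in for a CNN layer
A = rng.normal(size=(16, 64))
x0 = rng.random(64)                   # original (flattened 8x8) image
target_feat = A @ x0                  # features we try to invert

# Gradient descent on ||phi(x) - target||^2 + lam * ||x||^2 (a simple image prior)
lam, lr = 1e-3, 0.003
x = np.zeros(64)
for _ in range(2000):
    diff = A @ x - target_feat
    grad = 2 * A.T @ diff + 2 * lam * x
    x -= lr * grad

# Features are matched closely; the image itself is recovered only up to the
# null space of A, mirroring the information loss in real feature inversion
print(float(np.linalg.norm(A @ x - target_feat)))
```

The same ambiguity explains the lecture's observation that inversions from deeper layers look progressively less like the original photo.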
  • Deep dream
    • Rather than synthesizing an image to maximize a specific neuron, amplify the neuron activations at some layer in the network
    • Choose an image and a layer in a CNN; repeat:
      1. Forward: compute activations at chosen layer
      2. Set gradient of chosen layer equal to its activation
      3. Backward: Compute gradient on image
      4. Update image
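The four-step loop maps directly to code. With a toy linear "layer", setting the layer's gradient equal to its own activation amounts to gradient ascent on the squared activation norm, which is exactly the "amplify what is already there" effect:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear "layer": activations a = A @ x stand in for a chosen CNN layer
A = rng.normal(size=(16, 64))

x = rng.random(64)                    # start from a chosen "image"
initial = np.linalg.norm(A @ x)
for _ in range(10):
    a = A @ x                         # 1. forward: compute activations at the layer
    grad_a = a                        # 2. set the layer's gradient to its activation
    grad_x = A.T @ grad_a             # 3. backward: gradient on the image
    x += 1e-3 * grad_x                # 4. update the image

final = np.linalg.norm(A @ x)
print(final > initial)                # existing activations were amplified
```

In DeepDream proper, the forward/backward passes run through a real CNN, so whatever patterns the layer already detects get exaggerated in the image.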
  • Texture synthesis
    • Wei and Levoy, “Fast Texture Synthesis using Tree-structured Vector Quantization”, SIGGRAPH 2000
    • Efros and Leung, “Texture Synthesis by Non-parametric Sampling”, ICCV 1999
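The neural descendant of these classical methods (Gatys et al.) represents a texture by the Gram matrix of a layer's feature maps, i.e. channel-wise inner products with the spatial layout discarded. A minimal sketch of that statistic, with random features standing in for CNN activations:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in feature maps from one CNN layer: C channels on an H x W grid
C, H, W = 8, 5, 5
feats = rng.normal(size=(C, H, W))

# Gram matrix: channel-by-channel inner products, spatial layout discarded --
# the texture statistic that neural texture synthesis and style transfer match
F = feats.reshape(C, H * W)
gram = F @ F.T / (H * W)

print(gram.shape)                     # (C, C), symmetric
```

Matching Gram matrices across layers is also what defines the "style" loss in neural style transfer below.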
  • Neural style transfer
    • Problem: Style transfer requires many forward / backward passes through VGG; very slow!
    • Solution: Train another neural network to perform style transfer for us!
      1. Train a feedforward network for each style
      2. Use a pretrained CNN to compute the same losses as before
      3. After training, stylize images using a single forward pass
    • Johnson, Alahi, and Fei-Fei, “Perceptual Losses for Real-Time Style Transfer and Super-Resolution”, ECCV 2016

Reference

  • Stanford CS231N, taught by Fei-Fei Li, Jiajun Wu, and Ruohan Gao
    • Course Notes
    • Course Slides
    • Course Work
    • YouTube Videos
    • Chinese video walkthrough (by 同济子豪兄)
    • Zhihu column
  • Interpretable Machine Learning
  • Interpretable Machine Learning Open Course by Datawhale
    • Bilibili videos
