Visualizing Wheel Steering Angles for Keras-Based Self-Driving

This post is about understanding how a self-driving deep learning network decides how to steer the wheel.

NVIDIA published a very interesting paper (https://arxiv.org/pdf/1604.07316.pdf) that describes how a deep learning network can be trained to steer a wheel, given a 200x66 RGB image from the front of a car.
This repository (https://github.com/SullyChen/Nvidia-Autopilot-TensorFlow) shares a TensorFlow implementation of the network described in the paper, and (thankfully!) a dataset of image/steering-angle pairs collected from a human driving a car.
The dataset is quite small, and there are much larger datasets available, like the one from the Udacity challenge.
However, it is great for quickly experimenting with this kind of network, and visualizing when the network is overfitting is also interesting.
I ported the code to Keras, trained a (heavily overfitting) network based on the NVIDIA paper, and made visualizations.
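For reference, here is a minimal sketch of that architecture in Keras. The layer sizes follow the NVIDIA paper; the input normalization, optimizer, and loss are assumptions, not necessarily the exact training setup used here.

```python
from tensorflow.keras import layers, models

def build_model():
    """PilotNet-style steering regression network from the NVIDIA paper (sketch)."""
    model = models.Sequential([
        layers.Input(shape=(66, 200, 3)),                  # 200x66 RGB frame
        layers.Lambda(lambda x: x / 127.5 - 1.0),          # scale pixels to [-1, 1] (assumed)
        layers.Conv2D(24, 5, strides=2, activation='relu'),
        layers.Conv2D(36, 5, strides=2, activation='relu'),
        layers.Conv2D(48, 5, strides=2, activation='relu'),
        layers.Conv2D(64, 3, activation='relu'),
        layers.Conv2D(64, 3, activation='relu'),
        layers.Flatten(),
        layers.Dense(1164, activation='relu'),
        layers.Dense(100, activation='relu'),
        layers.Dense(50, activation='relu'),
        layers.Dense(10, activation='relu'),
        layers.Dense(1),                                   # the regressed steering angle
    ])
    model.compile(optimizer='adam', loss='mse')            # assumed training config
    return model
```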

I think that if this kind of network eventually finds its way into a real-world self-driving car, being able to debug it and understand its output will be crucial.
Otherwise, the first time the network decides to make a very wrong turn, critics will say that it is just a black box we don't understand, and that it should be replaced!

First attempt: treating the network as a black box - occlusion maps
[Figure 1]
[Figure 2]
The first thing we will try won't require any knowledge about the network; in fact we won't peek inside the network at all, just look at its output.
We'll create an occlusion map for a given image: we take many windows in the image, mask them out, run the network, and see how the regressed angle changed.
If the angle changed a lot, that window contains information that was important for the network's decision.
We can then assign each window a score based on how much the angle changed!

We need to take many windows, with different sizes, since we don't know in advance the sizes of the important features in the image.
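A rough sketch of that procedure in Python follows. The function and parameter names are illustrative (this is not the author's actual code), and it assumes `model` is the Keras regression network above and `image` is a 66x200x3 array preprocessed the same way as the training data.

```python
import numpy as np

def occlusion_map(model, image, window_sizes=(10, 20, 40), stride=5):
    """Score each region by how much masking it changes the predicted angle."""
    base_angle = float(model.predict(image[None], verbose=0)[0, 0])
    heat = np.zeros(image.shape[:2], dtype=np.float32)
    counts = np.zeros(image.shape[:2], dtype=np.float32)
    for w in window_sizes:                  # several window sizes, since we don't
        for y in range(0, image.shape[0] - w + 1, stride):  # know the feature scale
            for x in range(0, image.shape[1] - w + 1, stride):
                occluded = image.copy()
                occluded[y:y + w, x:x + w] = 0      # mask this window out
                angle = float(model.predict(occluded[None], verbose=0)[0, 0])
                # a big change in the regressed angle => an important window
                heat[y:y + w, x:x + w] += abs(angle - base_angle)
                counts[y:y + w, x:x + w] += 1
    return heat / np.maximum(counts, 1)             # average score per pixel
```

Calling `model.predict` once per window is slow but simple; batching all the occluded images into a few large predict calls would be much faster.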

Now we can create nice effects, like filtering the occlusion map and displaying the focused area on top of a blurred image:
[Figure 3: focused area displayed on top of a blurred image]
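One hypothetical way to produce that effect with OpenCV, assuming `heat` is the occlusion map from the sketch above (the kernel sizes are arbitrary choices):

```python
import cv2
import numpy as np

def focus_overlay(image, heat):
    """Show high-scoring regions sharp, everything else blurred."""
    heat = cv2.GaussianBlur(heat, (15, 15), 0)            # filter the occlusion map
    mask = (heat - heat.min()) / (np.ptp(heat) + 1e-8)    # normalize to [0, 1]
    blurred = cv2.GaussianBlur(image, (21, 21), 0)        # blurred background
    mask3 = mask[..., None]                               # broadcast over channels
    return (mask3 * image + (1.0 - mask3) * blurred).astype(np.uint8)
```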

Link (requires a VPN in mainland China):
http://jacobcv.blogspot.jp/2016/10/visualizations-for-regressing-wheel.html

Code:
https://github.com/jacobgil/keras-steering-angle-visualizations

Original link:
http://weibo.com/5501429448/EeBRKc9pl?ref=collection&type=comment

Posted on 2017-02-16 15:15 by sonictl

Reposted from: https://www.cnblogs.com/sonictl/p/6405867.html
