1. [Blog] Playing a toy poker game with Reinforcement Learning
Summary:
Reinforcement learning (RL) has had some high-profile successes lately, e.g. AlphaGo, but the basic ideas are fairly straightforward. Let’s try RL on our favorite toy problem: the heads-up no limit shove/fold game. This is a pedagogical post rather than a research write-up, so we’ll develop all of the ideas (and code!) more or less from scratch. Follow along in a Python3 Jupyter notebook!
Link: http://willtipton.com/coding/poker/2017/06/06/shove-fold-with-reinforcement-learning.html
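The post develops everything step by step in a notebook. As a taste of the approach, here is a minimal, self-contained sketch (not the post's actual code) that learns a shove threshold for a crude one-number version of the game with tabular Q-learning. The hand-strength model, stack size, payoffs, and the fixed calling threshold are all simplified assumptions for illustration.

```python
import numpy as np

# Toy shove/fold: "hands" are strengths in [0, 1]; higher strength wins
# showdowns. The 10bb stack, payoffs, and fixed caller are illustrative
# assumptions, not the blog post's actual game model.
rng = np.random.default_rng(0)
N_BUCKETS, STACK, CALL_THRESHOLD = 20, 10.0, 0.6
ALPHA, EPS, EPISODES = 0.05, 0.1, 200_000

Q = np.zeros((N_BUCKETS, 2))  # actions: 0 = fold, 1 = shove

for _ in range(EPISODES):
    hero, villain = rng.random(), rng.random()
    s = int(hero * N_BUCKETS)            # bucket the hand strength
    a = rng.integers(2) if rng.random() < EPS else int(np.argmax(Q[s]))
    if a == 0:                           # fold: lose the small blind
        r = -0.5
    elif villain < CALL_THRESHOLD:       # shove, villain folds: win the big blind
        r = 1.0
    else:                                # shove, called: showdown for the stack
        r = STACK if hero > villain else -STACK
    Q[s, a] += ALPHA * (r - Q[s, a])     # one-step bandit update (no next state)

print("shove with buckets:", np.nonzero(np.argmax(Q, axis=1))[0])
```

Because each deal is a one-step episode, the update reduces to a running average of reward per (bucket, action); the learned policy ends up shoving only the strongest buckets, as the pot odds suggest it should.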
2. [Paper] SuperSpike: Supervised learning in multi-layer spiking neural networks
Summary:
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in-vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in-silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
Link: https://arxiv.org/pdf/1705.11146.pdf
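As the abstract describes, the core ingredients are a surrogate derivative of the spiking nonlinearity and a three-factor update combining an error signal, the surrogate gradient of the membrane voltage, and a presynaptic trace. Below is a minimal single-neuron sketch of that idea; the fast-sigmoid surrogate reflects the paper's general approach, but the LIF dynamics, filters, and constants here are simplified assumptions, not the paper's exact setup.

```python
import numpy as np

# Minimal single-output sketch of a SuperSpike-style three-factor update:
# dw ~ error * surrogate'(U) * presynaptic trace. Parameters and dynamics
# below are simplified assumptions for illustration.
rng = np.random.default_rng(1)
T, N_IN, DT = 500, 50, 1e-3
TAU_MEM, TAU_SYN, BETA, LR = 10e-3, 5e-3, 10.0, 1e-3

w = rng.normal(0, 0.1, N_IN)
pre = (rng.random((T, N_IN)) < 0.02).astype(float)   # Poisson-ish input spikes
target = (rng.random(T) < 0.05).astype(float)        # desired output spike train

def surrogate_grad(u, beta=BETA):
    """Derivative of a fast sigmoid of the membrane potential (threshold at 0)."""
    return 1.0 / (1.0 + beta * np.abs(u)) ** 2

u, trace = 0.0, np.zeros(N_IN)
for t in range(T):
    trace += DT / TAU_SYN * (-trace) + pre[t]        # filtered presynaptic spikes
    u += DT / TAU_MEM * (-u) + w @ pre[t]            # leaky membrane potential
    spike = float(u > 1.0)
    err = target[t] - spike                          # output error signal
    w += LR * err * surrogate_grad(u - 1.0) * trace  # three-factor update
    if spike:
        u = 0.0                                      # reset after spiking
```

For hidden layers the paper propagates the output error through feedback weights; the uniform, symmetric, and random variants compared in the abstract differ only in how that error term is projected back to hidden units.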
3. [Blog] What Can't Deep Learning Do?
Summary:
1/ What can’t deep learning do? Worth putting together a list of known failures to guide algorithmic development.
2/ Deep learning methods are known to be brittle to small jitters of the input. Think object recognition breaking when colors are swapped. (A toy illustration follows after the link below.)
3/ Gradient-based learning is quite slow. It takes many, many gradient-descent steps to pick up patterns, which is tough for high-dimensional prediction.
4/ Deep learning methods are terrible at handling constraints. Unlike linear programming, there is no way to find solutions guaranteed to satisfy constraints.
5/ Training of complex models is quite unstable. Neural Turing machines and GANs often don’t train well, with heavy dependence on the random seed.
......
Link: http://rbharath.github.io/what-cant-deep-learning-do/
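Point 2/ is easy to demonstrate even without a deep network. The toy below (my own construction, not from the post) trains a numpy logistic regression and then pushes its prediction toward the wrong class with a small gradient-sign perturbation of the input, FGSM-style; the data, model, and step size are all illustrative assumptions.

```python
import numpy as np

# Train a tiny logistic regression, then jitter one input in the direction
# of the loss gradient's sign and watch the predicted probability move.
rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)       # linearly separable toy labels

w = np.zeros(d)
for _ in range(500):                     # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

x = X[0]
pred = 1 / (1 + np.exp(-x @ w))
grad_x = (pred - y[0]) * w               # gradient of the loss w.r.t. the input
x_adv = x + 0.25 * np.sign(grad_x)       # small per-feature jitter
pred_adv = 1 / (1 + np.exp(-x_adv @ w))
print(f"label={y[0]:.0f}  clean p={pred:.3f}  jittered p={pred_adv:.3f}")
```

The perturbation is small per feature, yet it shifts the logit by roughly epsilon times the L1 norm of the weights, which is exactly the kind of input jitter that breaks much larger models.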
4. [Blog] 8 Benefits of Customer Service Chatbots
Summary:
We have all experienced the benefits and convenience of getting things done with just a tap on our phones. In today’s on-demand economy, consumer expectations are higher than ever. If we don’t find answers or a resolution to our problems right away, we can easily move on to the next brand. As a result, customer service departments play a key role in client retention and customer brand loyalty.
Link: https://blog.azumo.co/8-benefits-of-customer-service-chatbots-8c1b32e04096
5. [Blog] Real-Time Stable Style Transfer for Videos
Summary:
The paper A Neural Algorithm of Artistic Style by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge presents a technique for learning a style and applying it to other images. When used frame-by-frame on movies, the resulting stylized animations are of low quality: they suffer from extensive “popping”, by which we mean stylization features that are inconsistent from frame to frame. The stylized features (lines, strokes, colors) are present in one frame but gone in the next. The ‘artistic style transfer for videos’ video clearly shows this popping.
Link: https://elementai.github.io/research/2017/04/05/stable-style-transfer.html
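The usual way to attack popping is to make the training objective itself penalize frame-to-frame changes that the underlying video does not explain. Below is a minimal sketch of such a temporal-consistency loss, with the flow-based warp and occlusion mask taken as given inputs; this is a generic formulation for illustration, not necessarily the exact loss used in the post.

```python
import numpy as np

# Temporal-consistency ("anti-popping") loss between consecutive stylized
# frames: penalize pixels that change where the flow says nothing moved.
# The warped previous frame and the occlusion mask are assumed inputs here
# (e.g. computed from optical flow upstream).
def temporal_loss(stylized_t, warped_prev, mask):
    """Mean squared change on pixels the flow says should match.

    stylized_t:  (H, W, 3) stylized frame at time t
    warped_prev: (H, W, 3) stylized frame t-1 warped into frame t's coordinates
    mask:        (H, W) 1 where the flow is valid, 0 at occlusions
    """
    diff = (stylized_t - warped_prev) ** 2
    return np.sum(mask[..., None] * diff) / (np.sum(mask) * 3 + 1e-8)

# Toy usage with random data standing in for real frames:
rng = np.random.default_rng(0)
h, w = 64, 64
frame_t = rng.random((h, w, 3))
warped = frame_t + 0.01 * rng.normal(size=(h, w, 3))  # nearly consistent
mask = np.ones((h, w))
print(f"temporal loss: {temporal_loss(frame_t, warped, mask):.6f}")
```

Masking out occluded pixels matters: where the flow is invalid, the stylization is allowed to change freely, so the penalty only suppresses popping on content that genuinely persists between frames.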