1, Story time: a brief history of neural networks: http://blog.sina.cn/dpool/blog/s/blog_71329a960102v1eo.html?type=-1
A few excerpts:
Ever since Turing posed the question of "machines and intelligence", there have been two schools of thought. One holds that artificial intelligence must be built with logic and symbol systems; this school looks at the problem top-down. The other holds that artificial intelligence can be reached by imitating the brain; this school is bottom-up, convinced that if you could build a machine that simulates the neural networks in the brain, that machine would be intelligent. I would describe the former as "think of it and it appears", and the latter as "you are what you eat".
The Hebb rule: if two cells are always activated together, there is some association between them, and the higher the probability of their being activated together, the stronger the association.
In other words, "you are what you eat".
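The Hebb rule above can be written as a one-line weight update: strengthen a connection in proportion to the co-activation of the two units. This is just a minimal sketch; the learning rate and the toy activity traces are my own illustrative choices, not from the article.

```python
# Minimal Hebbian update: when units x and y are active together,
# the weight between them grows; otherwise it is left unchanged.
def hebb_update(w, x, y, lr=0.1):
    """Increase weight w in proportion to the co-activation x * y."""
    return w + lr * x * y

# Two units that often fire together end up more strongly connected
# than two units that rarely co-activate.
w_often, w_rarely = 0.0, 0.0
often  = [(1, 1), (1, 1), (1, 1), (1, 1)]   # co-activate 4 times
rarely = [(1, 0), (0, 1), (1, 1), (0, 0)]   # co-activate once
for (x1, y1), (x2, y2) in zip(often, rarely):
    w_often = hebb_update(w_often, x1, y1)
    w_rarely = hebb_update(w_rarely, x2, y2)

print(w_often, w_rarely)
```

The point of the toy run is only the ordering: the frequently co-activated pair accumulates a larger weight, which is exactly the "associated more strongly" in Hebb's statement.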
My own reading: this is the split between the symbolists and the connectionists. The current approach of imitating connections and neural networks tests out very well in practice, so it has become the de facto method in industry. Also, Hinton proposed deep learning in 2006; measured against the time I picked my original research direction, that's not much of a gap, so I can still catch up, hahaha;
2, A line of thinking shared by Leslie Valiant. Just yesterday a good friend and I had a wild brainstorming chat about much of this, from observations and guesses about the mechanisms of the human brain to comparing the macro-level representation of the whole Internet against the brain. Still very interesting, so recording it here: http://www.asianscientist.com/2016/01/features/biologial-evolution-machine-learning-similar-turing-award-winner-leslie-valiant/
First, a doubt about the speed of evolution: constructing a single individual organism out of the 20 known amino acids requires an astronomical number of steps; how this process gets quantified and completed so quickly is, per Valiant, unknown, or at least unknown to him; that is, either genuinely unknown or simply not yet understood;
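To get a feel for why "astronomical" is the right word, here is a back-of-the-envelope count of protein sequence space: with 20 amino acids, a chain of length n has 20^n possible sequences. The length 300 below is my own illustrative choice (a typical protein length), not a figure from the article.

```python
# Back-of-the-envelope: how many possible sequences exist for a
# protein chain of length n over an alphabet of 20 amino acids?
import math

n = 300                                     # hypothetical protein length
digits = math.floor(n * math.log10(20)) + 1  # decimal digits in 20**n

print(digits)  # 20**300 is a number with this many decimal digits
```

So even for a single modest protein, the raw search space has hundreds of digits; any account of evolution's speed has to explain how that space is never searched exhaustively.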
"For Valiant, who is well known in the field of artificial intelligence for developing the probably approximately correct (PAC) model of machine learning, the complexities of biology have striking parallels to his own discipline. In fact, he believes that looking at the theory through the lens of machine learning may even finally place evolution on a quantitative footing. "
The most direct statement of it: “What I want to do now is to persuade you that evolution is more similar to machine learning than one would have thought. The analogy I’ll make is very simple: the genome is the hypothesis, and the examples are experiences,” Valiant said. “As the algorithm evolves, it generates new hypotheses in the next generation, which can be thought of as offspring with random mutations in their DNA. ”
And: “Although evolution seems very unsupervised, it also has the notion of correctness just like machine learning. The supervision is survival, providing feedback on whether each organism survives or not. ”
But I'm not sure what exactly "bad/good" refers to here; still, it works: “But the fact that it is bad will be fed back into the evolutionary process and your evolutionary algorithm will be learnt. Put simply, our genomes will learn from our experiences, ”
Hahaha, this summary is hilarious: “The summary is that biological evolution is just a type of machine learning and the only problem is that the training data has been lost, ”
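Valiant's analogy can be sketched as a toy mutate-and-select loop. This is my own illustration, not his code: the "genome" is a hypothesis (a bit string), offspring are copies with random mutations in their "DNA", and survival (a fitness score against the environment) is the supervision signal. The target string, mutation rate, and population size are all arbitrary illustrative choices.

```python
# Toy evolutionary loop in the shape of a learning algorithm:
# hypothesis = genome, feedback = survival, update = mutate-and-select.
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical environment optimum

def fitness(genome):
    """Survival feedback: how well the genome matches the environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Offspring: a copy of the parent with random mutations."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random genome and run generations of mutate-and-select;
# keeping the parent in the pool makes fitness monotone non-decreasing.
genome = [random.randint(0, 1) for _ in TARGET]
for generation in range(200):
    offspring = [mutate(genome) for _ in range(10)]
    genome = max(offspring + [genome], key=fitness)

print(fitness(genome))
```

The "training data has been lost" joke maps cleanly onto this sketch: we can see the final genome, but the long sequence of survival feedback that shaped it is gone.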