Session with Yoshua Bengio: Two Answers on Deep Learning

Neil Zhu (ID: Not_GOD), founder & Chief Scientist of University AI, is dedicated to advancing the adoption of artificial intelligence worldwide. He sets and implements UAI's medium- and long-term growth strategy and goals, and has led the team to grow rapidly into one of the most professional forces in the AI field.

Original source: Quora

Yoshua Bengio: Where is deep learning research headed?

Research is by definition exploratory, which means that (a) we do not know what will work and (b) we need to explore many paths; we need a lot of diversity of research directions in the scientific community. So I can only tell you about my current gut feelings and visions of where I see important challenges and opportunities that appeal to my personal aesthetics and instincts. Here are some elements of this:

  • Unsupervised learning is crucial, and we do not do it right yet (there are many arguments that I and others have written and talked about to justify this).
  • Deep learning research is likely to continue its expansion from traditional pattern recognition jobs to full-scale AI tasks involving symbolic manipulation, memory, planning and reasoning. This will be important for reaching a full understanding of natural language and dialogue with humans (i.e., passing the Turing test). Similarly, we are seeing deep learning expand into the territories of reinforcement learning, control and robotics, and that is just the beginning.
  • For AI, we probably still have a lot to gain from a better understanding of the brain and from trying to find machine learning explanations for what brains are doing.
  • Maximum likelihood can be improved upon; it is not necessarily the best objective when learning in complex high-dimensional domains (as arises in unsupervised learning and structured output scenarios).
  • The quest for AI based on deep learning (and not just consumer products) will greatly benefit from substantial increases in computational capabilities, which probably means specialized hardware; this is because AI requires lots of knowledge about the world (and reasoning about it), which requires large models trained over very large datasets, and all of this requires much more computing power than we currently use.

See also my answers to the "open research areas" question.
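
For reference, the "maximum likelihood" objective in the bullet above is the standard average log-likelihood of the data under the model; a minimal formulation in standard notation (not tied to any particular model class Bengio has in mind) is:

$$
\theta^{*} \;=\; \arg\max_{\theta} \; \frac{1}{N} \sum_{i=1}^{N} \log p_{\theta}\!\left(x^{(i)}\right)
\;=\; \arg\min_{\theta} \; \mathbb{E}_{x \sim \hat{p}_{\mathrm{data}}}\!\left[-\log p_{\theta}(x)\right].
$$

Maximizing this criterion is equivalent to minimizing the KL divergence $\mathrm{KL}(\hat{p}_{\mathrm{data}} \,\|\, p_{\theta})$ from the empirical data distribution to the model, which is one concrete sense in which it is just one possible training objective rather than a necessary choice, especially for high-dimensional unsupervised and structured-output models.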

Yoshua Bengio: On the importance of two-way understanding between brain mechanisms and deep learning

Like many of those who did research on neural networks in the early days (including my colleagues Geoff Hinton and Yann LeCun), I believe that we have a beautiful opportunity to learn something useful for building AI when we consider what is known about the brain, and this becomes more and more true as neuroscientists collect more and more data about the brain.

This belief is associated with the reverse idea: that in order to really understand the core reasons why brains allow us to be intelligent, we need to construct a "machine learning" interpretation of what is happening in a brain, meaning a computational and mathematical explanation of how our brains can learn such complex things and perform such successful credit assignment.

To validate this interpretation, we should be able to run a machine learning algorithm that operates according to the same fundamental principles, abstracting out the elements of neurobiology that are not necessary to understand these principles (but which may be necessary to implement them in the brain, or to provide the additional innate knowledge that we are born with).

As far as I know, we do not have a credible machine learning interpretation of how brains could do something that backprop apparently does very well, i.e., figure out how the synapses of internal neurons should change so as to make the brain as a whole produce a better understanding of the world and better behaviour. It is one of the topics that most often occupies my mind these days.
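
To make "credit assignment, as backprop does it" concrete, here is a minimal editor's sketch in numpy (not part of Bengio's answer; the tiny network, the random data, and names such as W1 and W2 are all hypothetical). It shows how the chain rule sends an error signal back to the internal (hidden-layer) weights, so that each one is told how to change in order to reduce the network's overall loss:

```python
# Minimal illustration of gradient-based credit assignment in a two-layer net.
# Everything here (shapes, names, data) is hypothetical and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))               # 32 examples, 10 input features
y = rng.normal(size=(32, 1))                # regression targets

W1 = rng.normal(scale=0.1, size=(10, 16))   # input -> hidden ("internal neurons")
W2 = rng.normal(scale=0.1, size=(16, 1))    # hidden -> output
lr = 0.01

for step in range(200):
    # Forward pass.
    h = np.tanh(x @ W1)                     # hidden activations
    y_hat = h @ W2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: the chain rule assigns credit (blame) to every weight.
    d_yhat = 2 * (y_hat - y) / len(x)       # dLoss / dy_hat
    dW2 = h.T @ d_yhat                      # credit for the output weights
    d_h = d_yhat @ W2.T                     # error signal routed back to hidden units
    d_hpre = d_h * (1 - h ** 2)             # through the tanh nonlinearity
    dW1 = x.T @ d_hpre                      # credit for the *internal* weights

    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"final training loss: {loss:.4f}")
```

Note that the step which routes the error signal backwards reuses the transposed forward weights (W2.T); how a biological circuit could implement or approximate that kind of backward signalling is essentially the open question Bengio points to above.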
