Google AI chief on machine learning trends for 2020: big breakthroughs ahead in multitask and multimodal learning

At the NeurIPS conference held last week in Vancouver, Canada, machine learning took center stage.

Roughly 13,000 researchers from around the world gathered to discuss topics such as neuroscience, how to interpret the outputs of neural networks, and how artificial intelligence can help solve major real-world problems.

During the conference, Google AI chief Jeff Dean sat down for an interview with VentureBeat and shared his views on machine learning trends for 2020. Dean believes:

In 2020, machine learning will see big breakthroughs in multitask learning and multimodal learning, and newly emerging devices will let machine learning models do their work more effectively.

Excerpts from the interview follow.

1. On AI chips

VentureBeat: What do you think are some of the things that in a post-Moore's Law world people are going to have to keep in mind?


Jeff Dean:Well I think one thing that’s been shown to be pretty effective is specialization of chips to do certain kinds of computation that you want to do that are not completely general purpose, like a general-purpose CPU. So we’ve seen a lot of benefit from more restricted computational models, like GPUs or even TPUs, which are more restricted but really designed around what ML computations need to do. And that actually gets you a fair amount of performance advantage, relative to general-purpose CPUs. And so you’re then not getting the great increases we used to get in sort of the general fabrication process improving your year-over-year substantially. But we are getting significant architectural advantages by specialization.


2. On machine learning

VentureBeat:You also got a little into the use of machine learning for the creation of machine learning hardware. Can you talk more about that?


Jeff Dean: Basically, right now in the design process you have design tools that can help do some layout, but you have human placement and routing experts work with those design tools to kind of iterate many, many times over. It's a multi-week process to actually go from the design you want to actually having it physically laid out on a chip with the right constraints in area and power and wire length and meeting all the design rules of whatever fabrication process you're doing.

So it turns out that we have early evidence in some of our work that we can use machine learning to do much more automated placement and routing. And we can essentially have a machine learning model that learns to play the game of ASIC placement for a particular chip.

According to Dean, this approach has already produced good results on some chips Google has been experimenting with internally.
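Dean's framing of "playing the game of ASIC placement" can be illustrated with a toy sketch: treat placement as a sequential decision problem where each move drops one block onto a grid cell and the score is negative total wirelength. Everything here (the netlist, grid size, and one-step greedy policy) is invented for illustration; Google's actual work trains a learned policy at far larger scale.

```python
from itertools import product

# Toy netlist: each edge connects two blocks that want to sit close together.
NETS = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")]
BLOCKS = ["a", "b", "c", "d"]
GRID = list(product(range(3), range(3)))  # 3x3 grid of legal cells

def wirelength(placement):
    """Total Manhattan distance over all nets whose endpoints are placed."""
    total = 0
    for u, v in NETS:
        if u in placement and v in placement:
            (x1, y1), (x2, y2) = placement[u], placement[v]
            total += abs(x1 - x2) + abs(y1 - y2)
    return total

def greedy_place(blocks, grid):
    """Play the 'placement game' with a one-step greedy policy: each move
    drops the next block on the free cell that adds the least wirelength.
    A learned model would replace this hand-written policy."""
    placement = {}
    for b in blocks:
        free = [c for c in grid if c not in placement.values()]
        placement[b] = min(free, key=lambda c: wirelength({**placement, b: c}))
    return placement

layout = greedy_place(BLOCKS, GRID)
print(layout, "wirelength:", wirelength(layout))
```

The point of the sketch is the game structure (state, legal moves, score), not the policy: a reinforcement learning agent optimizing the same score can discover placements a greedy rule misses.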

3. On challenges for Google

VentureBeat: What do you feel are some of the technical or ethical challenges for Google in the year ahead?


Jeff Dean:In terms of AI or ML, we’ve done a pretty reasonable job of getting a process in place by which we look at how we’re using machine learning in different product applications and areas consistent with the AI principles. That process has gotten better-tuned and oiled with things like model cards and things like that. I’m really happy to see those kinds of things. So I think those are good and emblematic of what we should be doing as a community.

And then I think in the areas of many of the principles, there [are] real open research directions. Like, we have kind of the best known practices for helping with fairness and bias and machine learning models or safety or privacy. But those are by no means solved problems, so we need to continue to do longer-term research in these areas to progress the state of the art while we currently apply the best known state-of-the-art techniques to what we do in an applied setting.


4. On AI trends

VentureBeat: What are some of the trends you expect to emerge, or milestones you think may be surpassed in 2020 in AI?


Jeff Dean:I think we’ll see much more multitask learning and multimodal learning, of sort of larger scales than has been previously tackled. I think that’ll be pretty interesting.
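The core multitask idea Dean is pointing at — one shared representation feeding several task-specific heads, so every task's training signal improves the shared features — can be sketched minimally. The dimensions, weights, and two hypothetical tasks below are arbitrary placeholders, not any particular Google model.

```python
import math, random

random.seed(0)

def linear(x, w, b):
    """Dense layer: w is a list of rows, b a list of biases."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

# One shared encoder, two task-specific heads. In training, gradients
# from both tasks would flow into W_shared -- that is the multitask idea.
D_IN, D_HID = 4, 3
W_shared = [[random.gauss(0, 0.5) for _ in range(D_IN)] for _ in range(D_HID)]
b_shared = [0.0] * D_HID
W_taskA = [[random.gauss(0, 0.5) for _ in range(D_HID)]]   # scalar regression head
b_taskA = [0.0]
W_taskB = [[random.gauss(0, 0.5) for _ in range(D_HID)] for _ in range(2)]  # 2-class head
b_taskB = [0.0, 0.0]

def forward(x):
    h = [math.tanh(v) for v in linear(x, W_shared, b_shared)]  # shared features
    yA = linear(h, W_taskA, b_taskA)[0]    # task A: one scalar prediction
    logitsB = linear(h, W_taskB, b_taskB)  # task B: class scores
    return yA, logitsB

yA, logitsB = forward([1.0, 0.5, -0.2, 0.3])
print("task A output:", yA, "task B logits:", logitsB)
```

Multimodal learning extends the same pattern in the other direction: separate encoders for, say, text and images feed one fused representation.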

And I think there’s going to be a continued trend to getting more interesting on-device models — or sort of consumer devices, like phones or whatever — to work more effectively.
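One common way to make models "work more effectively" on phones is post-training quantization: storing weights as int8 plus a scale and zero point instead of float32. The sketch below shows only the affine quantization arithmetic, with made-up example weights; real on-device stacks such as TensorFlow Lite apply this per tensor with more care.

```python
def quantize(values, num_bits=8):
    """Map floats to unsigned ints with an affine scale/zero-point scheme."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard against all-equal inputs
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale + zero_point))) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.3, 0.0, 0.7, 2.5]       # illustrative weight values
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print("int8:", q, "max reconstruction error:", max_err)
```

The payoff is a 4x smaller weight tensor and integer arithmetic that mobile hardware executes far faster, at the cost of a bounded reconstruction error (at most half the scale per weight).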

I think obviously AI-related principles-related work is going to be important. We’re a big enough research organization that we actually have lots of different thrusts we’re doing, so it’s hard to call out just one. But I think in general [we’ll be] progressing the state of the art, doing basic fundamental research to advance our capabilities in lots of important areas we’re looking at, like NLP or language models or vision or multimodal things. But also then collaborating with our colleagues and product teams to get some of the research that is ready for product application to allow them to build interesting features and products. And [we’ll be] doing kind of new things that Google doesn’t currently have products in but are sort of interesting applications of ML, like the chip design work we’ve been doing.


Link to the original English interview:

https://venturebeat.com/2019/...
