A Curious Theory About the Consciousness Debate in AI

I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

I was recently having a debate about strong vs. weak AI with one of my favorite new thinkers in this market and it reminded me of something that I wrote over a year ago. So I decided to dust it off and restructure those thoughts in a new article.

With all the technological hype around artificial intelligence (AI), I find it sometimes healthy to go back to its philosophical roots. Of all the philosophical debates surrounding AI, none is more important than the weak vs. strong AI problem. From a technological standpoint, I subscribe to the idea that we are one or two breakthroughs away from achieving some form of strong or general AI. However, from a philosophical standpoint, there are still several challenges that need to be reconciled. Many of those challenges can be explained by an obscure theory pioneered by an Austro-Hungarian mathematician in the last century, and by one of the leading areas of research in neuroscience.

In AI theory, weak AI is often associated with the ability of systems to appear intelligent while strong AI is linked to the ability of machines to think. By thinking I mean really thinking and not just simulated thinking. This dilemma is often referred to as the “Strong AI Hypothesis”.

In a world exploding with digital assistants and algorithms beating Go world champions and Dota 2 teams, the question of whether machines can act intelligently seems silly. In constrained environments (e.g., medical research, Go, travel) we have been able to build plenty of AI systems that can act as if they were intelligent. While most experts agree that weak AI is definitely possible, there is still tremendous skepticism when it comes to strong AI.

Can Machines Think?

These questions have haunted computer scientists and philosophers since the publication of Alan Turing’s famous paper “Computing Machinery and Intelligence” in 1950. The question also seems a bit unfair when most scientists can’t even agree on a formal definition of thinking.

To illustrate the confusion around the strong AI hypothesis, we can borrow some humor from the well-known computer scientist Edsger Dijkstra, who in a 1984 paper compared the question of whether machines can think with questions such as “can submarines swim?” or “can airplanes fly?”. While those questions seem similar, most English speakers will agree that airplanes can, in fact, fly but submarines can’t swim. Why is that? I’ll leave that debate to you and the dictionary ;) The meta-point of this comparison is that without a universal definition of thinking, it seems irrelevant to obsess about whether machines can think.

One of the main counterarguments to strong AI states that, essentially, it might be impossible to determine whether a machine can really think. This argument has its basis in one of the most famous mathematical theorems of all time.

Gödel’s Incompleteness Theorem

When we talk about the greatest mathematical theorems in history that have had a broad impact on our way of thinking, we need to reserve a place for Gödel’s incompleteness theorem. In 1931, mathematician Kurt Gödel demonstrated that deduction has its limits by proving his famous incompleteness theorem. Gödel’s theorem states that in any formal theory strong enough to do arithmetic (such as the formal systems underlying AI), there are true statements that have no proof within that theory.

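The paragraph above compresses the theorem quite a bit; stated in its standard textbook form (added here for reference, not from the original article):

```latex
% Gödel's first incompleteness theorem, standard formulation.
\textbf{Theorem (G\"odel, 1931).}
Let $T$ be a consistent, effectively axiomatizable formal theory that
interprets elementary arithmetic. Then there exists a sentence $G_T$
in the language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \lnot G_T ,
\]
yet $G_T$ is true in the standard model of arithmetic.
```

The point relevant to the AI debate is that any agent whose reasoning can be captured as such a formal system inherits these unprovable-but-true blind spots.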
The incompleteness theorem has long been used as an objection to strong AI. The proponents of this objection argue that strong AI agents won’t be able to really think because they are limited by the incompleteness theorem, while human thinking apparently is not. That argument has sparked a lot of controversy and has been rejected by many strong AI practitioners. The most common counterargument from the strong AI school is that it is impossible to determine whether human thinking is subject to Gödel’s theorem, because any proof would require formalizing human knowledge, which we know to be impossible.

The Consciousness Argument

My favorite argument in the strong AI debate is about consciousness. Can machines really think, or just simulate thinking? If machines are able to think in the future, that means they will need to be conscious (meaning aware of their state and actions), since consciousness is the cornerstone of human thinking.

The skepticism about strong AI has sparked arguments ranging from classic mathematical theory, such as Gödel’s incompleteness theorem, to pure technical limitations of AI platforms. However, the main area of debate remains at the intersection of biology, neuroscience, and philosophy, and has to do with the consciousness of AI systems.

What is Consciousness?

There are many definitions and debates about consciousness. Certainly enough to dissuade most sane people from pursuing the argument about its role in AI systems ;) Most definitions of consciousness involve self-awareness, or the ability of an entity to be aware of its mental states. Yet, when it comes to AI, self-awareness and mental states are not clearly defined either, so we can quickly start going down a rabbit hole.

In order to be applicable to AI, a theory of consciousness needs to be more pragmatic and technical and less, let’s say, philosophical. My favorite definition of consciousness that follows these principles comes from the physicist Michio Kaku, professor of theoretical physics at the City College of New York and one of the co-founders of string field theory. A few years ago, Dr. Kaku presented what he called the “space-time theory of consciousness” to bring together the definitions of consciousness from fields such as biology and neuroscience. In his theory, Dr. Kaku defines consciousness as follows:

“Consciousness is the process of creating a model of the world using multiple feedback loops in various parameters (ex: temperature, space, time, and in relation to others), in order to accomplish a goal (ex: find mates, food, shelter)”

The space-time definition of consciousness is directly applicable to AI because it is based on the ability of the brain to create models of the world based not only on space (like animals) but also in relation to time (backwards and forwards). From that perspective, Dr. Kaku defines human consciousness as “a form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future.” In other words, human consciousness is directly related to our ability to plan for the future.

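As a loose computational analogy for “evaluating the past to simulate the future” (my own illustration; the linear-trend model and the temperature example are invented, not part of Kaku’s theory):

```python
# Toy analogy for the "simulate the future" aspect of consciousness:
# build a model from past observations, then run it forward in time.

def simulate_future(past, steps):
    """Fit a naive linear trend to past observations and extrapolate."""
    if len(past) < 2:
        return [past[-1]] * steps
    trend = (past[-1] - past[0]) / (len(past) - 1)  # average change per step
    return [past[-1] + trend * (i + 1) for i in range(steps)]

# Past temperature readings; the forward simulation is the "planning" step.
print(simulate_future([10, 12, 14, 16], 3))  # [18.0, 20.0, 22.0]
```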
In addition to its core definition, the space-time theory of consciousness includes several types of consciousness:

· Level 0: Organisms such as plants, with limited mobility, which create a model of their space using a handful of parameters such as temperature.

· Level 1: Organisms such as reptiles, which are mobile and have a nervous system. These organisms use many additional parameters to form a model of their space.

· Level 2: Organisms such as mammals, which create models of the world based not only on space but also in relation to others.

· Level 3: Humans, who understand their relation to time and have a unique ability to imagine the future.
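The four-level taxonomy above can be sketched as a tiny classifier (a toy encoding of my own; the boolean predicates are a simplification, not part of Kaku’s theory):

```python
# Toy encoding of Dr. Kaku's space-time consciousness levels.

def consciousness_level(mobile: bool, models_others: bool, models_time: bool) -> int:
    """Classify an entity on the 0-3 scale described above.

    Level 0: stationary, models space with few parameters (plants).
    Level 1: mobile, with a nervous system (reptiles).
    Level 2: also models relationships to others (mammals).
    Level 3: also simulates the future (humans).
    """
    if models_time:
        return 3
    if models_others:
        return 2
    if mobile:
        return 1
    return 0

print(consciousness_level(mobile=False, models_others=False, models_time=False))  # 0 (plant)
print(consciousness_level(mobile=True, models_others=True, models_time=True))     # 3 (human)
```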

Are AI Systems Conscious?

Consciousness is one of the most passionate subjects of debate in the AI community. By AI consciousness, we are referring to the ability of an AI agent to be self-aware of its “mental state”. The previous part of this essay introduced a framework pioneered by the well-known physicist Dr. Michio Kaku to evaluate consciousness at four different levels.

In Dr. Kaku’s theory, Level 0 consciousness describes organisms such as plants that evaluate their reality based on a handful of parameters such as temperature. Reptiles and insects exhibit Level 1 consciousness as they create models of the world using new parameters including space. Level 2 consciousness involves creating models of the world based on emotions and the relationship to other species. Mammals are the main group associated with Level 2 consciousness. Finally, we have humans that can be classified at Level 3 consciousness based on models of the world that involve simulations of the future.

Based on Dr. Kaku’s consciousness framework, we can evaluate the level of consciousness of the current generation of AI technologies. Most experts agree that AI agents today can be classified at Level 1 or very early Level 2 consciousness. Ranking AI agents at Level 1 involves many factors, including mobility. Many AI agents today have been able to achieve mobility and develop models of their environment based on the space around them. However, most AI agents have a lot of difficulty operating outside their constrained environment.

Space evaluation is not the only factor placing AI agents at Level 1 consciousness. The number of feedback loops used to create models is another very important factor to consider. Let’s use image analysis as an example. Even the most advanced vision AI algorithms use a relatively small number of feedback loops to recognize objects. If we compare those models with the cognitive abilities of insects and reptiles, they seem rather unsophisticated. So yes, the current generation of AI technologies has the level of consciousness of an insect ;)

Getting to Level 2

Steadily, some AI technologies have been exhibiting characteristics of Level 2 consciousness. Several factors are contributing to that evolution. AI technologies are getting better at understanding and simulating emotions, as well as at perceiving the emotional reactions around them.

In addition to the evolution of emotion-based AI techniques, AI agents are getting more efficient at operating in group environments in which they need to collaborate or compete with each other in order to survive. In some cases, group collaboration has even resulted in the creation of new cognitive skills. For some recent examples of AI agents that have exhibited Level 2 consciousness, we can refer to the work of companies such as DeepMind and OpenAI.

Recently, DeepMind conducted experiments in which AI agents needed to live in an environment with limited resources. The AI agents behaved differently when resources were abundant than when they were scarce, and their behavior changed as the agents interacted with each other. Another interesting example can be found in a recent OpenAI simulation experiment in which AI agents were able to create their own language, using a small number of symbols, in order to better coexist in their environment. Pretty cool, huh?

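The scarcity-driven behavior shift described above can be illustrated with a toy simulation (entirely my own sketch; the scarcity threshold and consumption mechanics are invented, and this is not DeepMind’s actual experiment):

```python
import random

# Toy multi-agent resource environment: agents cooperate while resources
# are abundant and switch to competing as scarcity rises.

class Agent:
    def __init__(self, name):
        self.name = name

    def act(self, resources_left, total_resources):
        scarcity = 1 - resources_left / total_resources
        # Hypothetical policy: compete once over half the resources are gone.
        return "compete" if scarcity > 0.5 else "cooperate"

def run_episode(n_agents=4, total_resources=100, steps=20, seed=0):
    random.seed(seed)
    agents = [Agent(f"agent{i}") for i in range(n_agents)]
    resources = total_resources
    history = []
    for _ in range(steps):
        for agent in agents:
            history.append(agent.act(resources, total_resources))
            resources = max(0, resources - random.randint(1, 3))  # consumption
    return history

history = run_episode()
print(history[0], history[-1])  # cooperate compete
```

With 80 consumption events of at least one unit each, the pool is more than half depleted before the episode ends, so the first recorded action is cooperative and the last is competitive.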
These are still very early days for mainstream AI solutions, but enhancing the level of consciousness of AI agents is one of the most important goals of the current generation of AI technology stacks. Level 2 consciousness is the next frontier!

Getting to Level 3

At the moment, Level 3 consciousness in AI systems is still an active subject of debate. However, recent systems such as OpenAI Five or DeepMind’s Quake III agents have clearly shown the ability of AI agents to plan long term and collaborate, so we might not be that far off.

Are AI Systems Conscious?

The short, and maybe surprising, answer is YES. Applying Dr. Kaku’s space-time theory of consciousness to AI systems, it is obvious that AI agents can exhibit some basic forms of consciousness. Factoring in the capabilities of the current generation of AI technologies, I would place the consciousness of AI agents at Level 1 (reptiles) or basic Level 2.

Translated from: https://medium.com/swlh/a-curious-theory-about-the-consciousness-debate-in-ai-ad0bcf5c8e81
