What AI research is about
by Elena Nisioti
Artificial Intelligence is moving fast. The vibe is all around. Facts are beginning to sound like science fiction movies, and science fiction movies like a version of reality (with better graphics). It may be that AI has finally achieved the level of maturity it pursued for decades and was stubbornly denied, a denial that made parts of its community, and the whole world, suspicious of its feasibility.
Frankenstein may contain parallels relevant to the present day. Mary Shelley’s Gothic novel contains a discussion of the consequences of creating and introducing an artificial being into society. The Creature puzzles us with its inhuman atrocity and yet its human manifestations of weakness, need for companionship and existential crisis.
One could say that we should focus on the future and the consequences of our discoveries. But how can one focus on the chaos created by injecting an army of Creatures into a system as complicated as contemporary society? One could also focus on the achievements, the success stories that made these ideas sound veracious. But how can one discriminate, ex post, between correct intuition and luck?
It takes self-restraint, and wisdom, to set aside for a while the branches of your work and evaluate the firmness of its roots. A blooming tree can be distracting.
Whether you are tracing the rules of logical thinking in the ancient Greek philosophers, the formulation of reasoning in the work of Arabic mathematicians or the power of mathematical knowledge among 19th-century intellectuals, one unsettling notion becomes clear: the questions are deeper than the networks you can design (even taking Moore’s law into account).
“I believe that what we become depends on what our fathers teach us at odd moments, when they aren’t trying to teach us. We are formed by little scraps of wisdom.”
Umberto Eco
The rest of the discussion will emerge from the history of AI. Not the history of achievements, but the history of questions, arguments and beliefs of some significant individuals. Most of the events revolve around the ‘60s, the era in which AI acquired its official definition, its purpose, its scientific community and its opponents.
In 1950 Alan Turing attempts to answer this purposely simplistically expressed question, “Can machines think?”, in his seminal paper Computing Machinery and Intelligence. Acknowledging its ambiguity and the limits it imposes on understanding AI, he proceeds by formulating a thought experiment, also known as the Turing test:
Player A is a man, player B is a woman and player C is of either sex. C plays the role of the interrogator and is unable to see either player, but can communicate with them by means of impersonal notes. By asking questions to A and B, C tries to determine which of the two is the man and which is the woman. A’s role is to trick the interrogator into making the wrong decision, while B attempts to assist the interrogator in making the right one.
The reformulated question is then:
What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often as he does when the game is played between two humans?
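To make the structure of the game concrete, here is a minimal sketch in Python (my own illustration, not Turing’s; the players’ answering strategies are placeholder stand-ins). The machine “passes” exactly when the interrogator can do no better than chance:

```python
import random

def human_player(question: str) -> str:
    # Player B: answers as itself.
    return f"my honest answer to {question!r}"

def machine_player(question: str) -> str:
    # The machine takes the part of A: it tries to answer as a human would.
    return f"my honest answer to {question!r}"

def interrogator(player_a, player_b, questions) -> str:
    # Player C: questions both players through impersonal notes, then decides.
    for q in questions:
        if player_a(q) != player_b(q):   # a crude tell: any detectable difference
            return "A is the machine"
    return random.choice(["A is the machine", "B is the machine"])  # pure guess

print(interrogator(machine_player, human_player, ["Do you dream?", "What is 2+2?"]))
```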
Turing’s approach seems to follow the doctrine of the duck test: If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
His attitude when it comes to “human” aspects of intelligence, such as consciousness, is that you can’t blame someone (or something) for not possessing a characteristic that you have yet to define. Thus, consciousness is irrelevant in our quest for AI.
Gödel’s incompleteness theorems were an obstacle to anyone’s attempt to talk about AI. According to them, no formal system rich enough to express arithmetic can be both complete and consistent; thus machines that learn by manipulating such a system, as is the case in AI, are expected to fail at establishing some truths. Turing’s answer to this is fairly disarming: how do you know that human intellect does not also come with its limitations?
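For reference, here is a standard modern statement of the first incompleteness theorem (my paraphrase; the result is discussed informally in the texts above): for any consistent, effectively axiomatised theory $T$ that can express basic arithmetic, there is a sentence $G_T$ such that

$$T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T,$$

so $T$ can neither prove nor refute $G_T$, even though $G_T$ is true in the standard model of arithmetic.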
Turing’s paper is lavish both in its arguments and in its clear, dialectical structure, yet it remains constrained by having to speculate about technologies that had yet to be invented.
Marvin Minsky was one of the fathers of AI as a research field. In the dusty album of AI family photos, Minsky would be the old man who brings some uneasiness to a family dinner: “Old uncle Minsky. He was charmingly peculiar and always had something interesting to say”.
Minsky was one of the organisers of the Dartmouth Conference in 1956, where Artificial Intelligence was first defined as a term and a field. He is mostly remembered for the vigour of his belief that AI is feasible and for his disparagement of pursuing it with the wrong means.
Let’s see what Minsky had to say in 1961, in his paper Steps Toward Artificial Intelligence, about the progress of AI up to that point.
Should we ask what intelligence “really is”? My own view is that this is more of an aesthetic question, or one of sense of dignity, than a technical matter! To me “intelligence” seems to denote little more than the complex of performances which we happen to respect, but do not understand. So it is, usually, with the question of “depth” in mathematics. Once the proof of a theorem is really understood, its content seems to become trivial.
Acknowledging the inherent difficulties in defining AI, and thus in pursuing it, Minsky begins by setting out its building pillars. According to him, these are search, pattern-recognition, learning, planning, and induction.
If the ultimate purpose of the program is to search for and find solutions of its own, then pattern-recognition can help it recognise the appropriate tools, learning can help it improve through experience, and planning can lead to more efficient exploration. As regards the possibility of making a machine with inductive abilities, and thus reasoning, Minsky has this to say:
Now [according to Gödel’s incompleteness theorem], there can be no system for inductive inference that will work well in all possible universes. But given a universe [our world], or an ensemble of universes, and a criterion of success, this (epistemological) problem for machines becomes technical rather than philosophical.
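Returning to the pillars themselves, a toy sketch helps show how they can interlock (an illustration of the general idea, not code from Minsky): search explores candidate solutions, a heuristic stands in for what pattern-recognition or learning would supply, and the one-step look-ahead is a crude form of planning.

```python
import heapq

def best_first_search(start, goal, neighbours, heuristic):
    frontier = [(heuristic(start), start)]     # priority queue ordered by promise
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            return state                       # solution found by search
        for nxt in neighbours(state):          # planning: expand one step ahead
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None

# Example: reach a target number via +1 / *2 moves, guided by distance-to-goal.
goal = 21
result = best_first_search(
    1, goal,
    neighbours=lambda n: [n + 1, n * 2],
    heuristic=lambda n: abs(goal - n),
)
print(result)  # 21
```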
The rest of the text returns repeatedly to the insistence that the pursuit of AI should be conducted through complex, hierarchical architectures. For this reason he questions the perceptron approach, as it will fail on moderately difficult problems. And frankly, we can’t expect reality to be simplistic.
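The canonical “moderately difficult” problem is XOR, which a single-layer perceptron provably cannot separate. The sketch below is the standard example, not taken from Minsky’s text:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                       # XOR labels

w, b = np.zeros(2), 0.0
for epoch in range(1000):                        # perceptron learning rule
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += (yi - pred)

# However long we train, at least one point stays misclassified, because no
# single line separates XOR's classes; a hidden layer (a deeper architecture)
# removes the limitation.
errors = sum(int(w @ xi + b > 0) != yi for xi, yi in zip(X, y))
print(f"misclassified: {errors} of 4")
```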
Minsky can be held responsible for discouraging research on perceptrons, which probably delayed the bloom of deep learning. The realisation that, even using simple building blocks, one can solve complicated problems by going into deep architectures seems to have escaped his nevertheless ingenious insight.
Yet his remarks can be seen as ultimately constructive criticism, as they helped the community explore the weaknesses of the original approaches. Also, deep learning may be the best we have so far (and how marvellous the applications are), but it should not be regarded unconditionally as the Holy Grail of AI.
In 1980 John Searle got angry. Although he probably got angry earlier, this is the moment he decided to publicise his disagreement with strong AI, in the paper Minds, Brains, and Programs. Indeed, even the title sounds sarcastic. I feel like Searle is grabbing me by the collar and, vigorously waving his finger, saying: “Let me help you make some fundamental distinctions, young lad”.
“One gets the impression that people in AI who write this sort of thing think they can get away with it because they don’t really take it seriously, and they don’t think anyone else will either. I propose, for a moment at least, to take it seriously.”
Searle is solely attacking the notion of strong AI, which he identifies as the capability of a computer to practice any human-like behaviour. He translates this into the ability of a machine to demonstrate consciousness, which he disputes by analogy. His famous thought experiment, the Chinese room, goes like this:
You are a monolingual English speaker locked in a room with the following things: a large batch of Chinese writing (called a script), another large batch of Chinese writing (called a story) and a set of English rules instructing you how to match Chinese symbols of the second batch to the first (called a program). Then, you are given another batch of Chinese writing (this time called questions) and another set of English instructions with rules that match the questions to the other two batches. Congratulations, you just learned Chinese!
This is the Chinese Room experiment, introduced by Searle in 1980. A thought experiment is not an experiment per se, as its goal is not to be conducted, but to explore the potential consequences of an idea. The oldest and most famous is, probably, Galileo’s Leaning Tower of Pisa experiment (did you also think Galileo was actually dropping apples from the tower?).
Searle’s point is that being able to produce Chinese answers to Chinese questions does not mean that you understand Chinese, if this ability was created by following rules in another language. As a consequence, a machine that gives the expected output after being given the appropriate algorithm should not be considered a ‘thinking’ entity.
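Rendered as code, the room’s “program” is nothing more than a lookup over symbols (a toy illustration of the argument; the rulebook entries are invented):

```python
# The occupant follows rules written in English (here, Python); no step of
# the procedure involves understanding Chinese.
RULEBOOK = {
    "你好吗?": "我很好, 谢谢。",      # rule: when you see these symbols,
    "你会说中文吗?": "当然会。",      # hand back those symbols
}

def chinese_room(question: str) -> str:
    return RULEBOOK.get(question, "请再说一遍。")  # default: "please repeat"

print(chinese_room("你好吗?"))   # fluent output, zero understanding
```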
What Searle does not dispute is the ability of a program to ‘think’, in the sense of some functional reasoning. He accuses current AI researchers of being behaviouristic and operationalistic, as they attempt to equate a program with a mind (which is true), while setting aside the importance of the brain.
According to him, consciousness comes only from biological operations, and since a program is totally independent of its implementation (it can run on any hardware), it cannot exhibit consciousness.
Reading the original text, one gets the feeling that he is attacking an immature community of computer scientists that has not bothered to reach a consensus on what intelligence is, yet attempts to simulate it, guided by teleological approaches and speculation.
Minsky’s response to Searle, and philosophical approaches in general, is as nihilistic as it gets: “they misunderstand, and should be ignored”.
And you should not make them feel bad about it. This paper, Elephants Don’t Play Chess, written by Rodney A. Brooks in 1990, is an attempt by a Nouvelle AI evangelist to persuade us, employing both arguments and his robotic fleet, that the classical approach to AI should leave some space for his own.
To get a feeling for that era: AI was experiencing its second winter. Funding was cut as companies and governments realised that the community had set expectations too high.
So, time for introspection. When something fundamentally fails, there are two ways to go at it: either it’s impossible to achieve or your approach is flawed.
Brooks suggested that AI’s stagnation was due to its dogma of functional representations. The symbol system hypothesis is a long-standing view on how intelligence operates. According to it, the world involves entities, like people, cars and cosmic love, so it is natural to match them to symbols and feed machines with them. If this hypothesis is correct, then you have provided the machine with all the necessary information for it to “come up” with intelligence.
Although this assumption does not seem problematic, it has some far-reaching consequences that might account for the bad performance of AI:
The symbol system is not adequate to describe the world. According to the frame problem, it is a logical fallacy to assume anything that is not explicitly stated. On this point, Brooks charmingly suggests: why not take the world as its own model?
Brooks’ counterproposal is the physical grounding hypothesis. That is, allow Artificial Intelligence to directly interact with the world and use it as its own representation. This certainly changes the standard practice of AI: instead of learning that requires immense computational resources, guidance from experts and a never-satisfied appetite for training data, Brooks suggests equipping physical entities with cheap hardware and unleashing them in the world. But does this underestimate the problem?
Brooks sees intelligence arising from collective behaviour, not from sophisticated parts. Perhaps the most profound observation from his experiments regards how “goal-directed behaviour emerges from the interactions of simpler non goal-directed behaviours”. There need not exist a predetermined coordination pattern, as an intelligent machine should draw its own strategies to interact optimally with the world.
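A toy sketch in the spirit of Brooks’ subsumption architecture makes the point (the behaviours, world and parameters are all invented for illustration): neither layer encodes a goal, yet prioritised arbitration yields a robot that explores without crashing.

```python
import random

def avoid(pos, heading, obstacles):
    # Highest-priority layer: turn away if the next cell is blocked.
    ahead = (pos[0] + heading[0], pos[1] + heading[1])
    return "turn" if ahead in obstacles else None   # None = layer stays silent

def wander(pos, heading, obstacles):
    # Lowest-priority layer: mostly go forward, occasionally turn at random.
    return "turn" if random.random() < 0.1 else "forward"

LAYERS = [avoid, wander]                            # higher layers subsume lower

def step(pos, heading, obstacles):
    for layer in LAYERS:                            # first non-silent layer wins
        action = layer(pos, heading, obstacles)
        if action is not None:
            break
    if action == "turn":
        return pos, (-heading[1], heading[0])       # rotate 90 degrees in place
    return (pos[0] + heading[0], pos[1] + heading[1]), heading

pos, heading, obstacles = (0, 0), (1, 0), {(2, 0), (0, 3)}
for _ in range(20):
    pos, heading = step(pos, heading, obstacles)
print("ended at", pos)
```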
Brooks’ argument from evolution goes a long way towards persuading us of the importance of the physical grounding hypothesis: humans are the most common and closest example of intelligence we have. Thus, in our attempt to re-create this characteristic, isn’t it natural to observe evolution, the slow, adaptive process that gradually led to human civilisation? Now, if one considers the time it took us to evolve skills such as interacting, reproducing and surviving, in contrast to our still-young abilities of using a language or playing chess, one may reach the conclusion that the former are the hardest skills to develop. So, why not focus on them?
Although ecstatic about the practicality of his approach, Brooks acknowledges its theoretical limitations, which can be attributed to the fact that we have yet to develop a complete understanding of the dynamics of interacting populations. Once more, the disregard of an engineer towards philosophical objections is evident:
“At least if our strategy does not convince the arm chair philosophers, our engineering approach will have radically changed the world we live in.”
Despite floating in a sea of questions, AI manifests something we cannot dispute: progress. Nevertheless, stripping away from current applications the effects of technological advancement and heuristic advantages, in order to perceive accurately the quality of current research, is a tedious task.
Will deep learning prove a worthy tool for satisfying our ever-demanding criteria of intelligence? Or is this another interglacial period before AI reaches winter again?
What’s more, the concerns and questions have shifted from purely philosophical to social, as the consequences of AI in everyday life are becoming more obvious and pressing than the need for understanding consciousness, God and intelligence. Yet this may be an even more difficult question to answer, and it urges us to dig even deeper.
When Wittgenstein wrote the Tractatus, he was confronted with the danger of a fundamental fallacy: his arguments fell victim to the doctrine of his own work. That is, if one accepted his doctrine as true, his arguments were illogical, and thus his doctrine should be false. But Wittgenstein thought differently:
“My propositions are elucidatory in this way: he who understands me finally recognises them as senseless, when he has climbed out through them, on them, over them.”
To understand the truth behind a complicated idea, we need to evolve. We must stand firm on our previous step and be willing to abandon it. Not every step has to be correct, but it has to be understood. When later confronted with this argument, Wittgenstein said that he did not need a ladder, as he was capable of approaching the truth directly.
We may still need it.
Translated from: https://www.freecodecamp.org/news/deeper-ai-a104cf1bd04a/