Can Computers Think? (Map 1 Issue 1) 计算机能思考吗?(图1问题1)

== Foreword by the translator ==

It's an extraordinarily wonderful collection of charts that demonstrates the history and current status of the discussion on whether computers can think (particularly the digital algorithmic machines we use today, which are, I guess, the only computing structures human beings can so far build in a sophisticated manner). I'll read it through as I translate it and hope I will also get some ideas about this topic along the way.

The discussion is divided into 7 parts (maps) that comprehensively cover this issue through different sub-topics and from different perspectives, each with a huge diagram that depicts the relationships (normally either supporting or disputing) between the arguments presented in that map.

Be aware that the disciplines covered by the discussion of such a subject (which I guess might be one of the very few fundamental problems universe-wide) naturally include, but are not limited to, computing theory, systems science, mathematics, physics (very advanced), biology, and philosophy (of different types concerning very fundamental issues), at a variety of depths and levels of complexity. So it's absolutely all right to stop anywhere you feel uncomfortable.

I first saw this work on the posters on the 2nd floor of the ISE building at the University of Canberra, and it instantly raised my curiosity, as it concerns a question I have long been thinking about. One great thing about it is that it also demonstrates how to quote appropriately and how important that is, which is mandated and valued by Western academia and matches what I was struggling to get used to in my first semester.

Comments by the translator may be attached to related arguments, but they are in English only.

== End of foreword ==


Issue Area: Can computers have free will

问题领域: 计算机能否具有自由意志


Link to the corresponding part of the map: http://www.macrovu.com/CCTWeb/CCT1/CCTMap1FreeWill.html


1. Alan Turing 1950 (widely recognized as the father of modern computers; each map in this collection starts with his opinion to pay tribute to him -- translator)

阿兰·图灵 1950

Yes, machines can (or will be able to) think. A computational system can possess all important elements of human thinking or understanding.

"I believe that at the end of the century ... one will be able to speak of machines thinking without expecting to be contradicted." (From Turing's point of view, I guess, 'without being contradicted' means being able to meet his criteria for determining whether a machine can think/behave as a human does -- translator)

是的,机器能够(或将能够)思考。一个计算系统能够具有所有的人类思考或理解(过程或所需)的重要属性。

“我相信,在这个世纪末(二十世纪——编者)……人们将能够谈论机器思考,而不会预期遭到反驳。”

2. (disputing 1)  Computers can't have free will. Machines only do what they have been designed and programmed to do. They lack free will, but free will is necessary for thought. Therefore, computers can't think.

Free will: The ability to make voluntary, unconstrained decisions. Freely made decisions are independent of the influence of such deterministic factors as genetics (nature) and conditioning (nurture).

(驳斥1) 计算机不能具有自由意志。机器只能做它们被设计和编程要做的事。它们缺乏自由意志,而自由意志是思考的必要条件。因此,计算机不能思考。

自由意志: 做出自愿、不受约束的决定的能力。自由作出的决定是不依赖于决定性因素如遗传基因(自然)和后天调节(培养)的。

3. (disputing 2) Humans also lack free will. Whether or not computers have free will is irrelevant to the issue of whether computers can think. People can think, and they don't have free will. People are just as deterministic as machines are. So machines may yet be able to think.

(驳斥2) 人类也没有自由意志。计算机是否具有自由意志与计算机能否思考这个问题无关。人能够思考,但他们不具有自由意志。人和机器一样(至少类似——编者)都是受决定性因素支配的。因此机器仍有可能会思考。

4. (supporting 3) Ninian Smart, 1964, Humans are programmed. If you accept determinism, then you accept that nature has programmed you in certain ways and in certain contexts, even though that programming is subtler than the programming a computer receives.

(支持3) Ninian Smart,1964,人类也是被编程的。如果你接受决定论,那么你就会接受自然也以某种方式、在某些特定环境下对你实施了“编程”,即使这种编程要比计算机所接受的编程精妙(复杂)得多。

5. (supporting 3) Free will is an illusion of experience. We may think we are free, but that is just an illusion of experience. Actually, we are determined to do what we do by our underlying  neural machinery.

"According to the modern scientific view, there is simply no room at all for 'freedom of the human will'" (Marvin Minsky, 1986, p. 306).

"Human beings are slaves of brute matter, compelled to act in particular ways by virtue of biochemical and neuronal factors. What we see is the illusory nature of free will" (Geoff Simons, 1985, p. 109).

(支持3) 自由意志是经验的幻觉。我们可能觉得我们是自由的,然而这只是经验的幻觉。事实上,是我们底层的神经机制决定了我们会做什么。

根据现代科学观点,“人类的意志自由”根本没有任何存在的余地。(Marvin Minsky,1986,306页)

人类是纯粹物质(brute matter)的奴隶,被生物化学和神经因素驱使而以特定方式行动。我们所见到的是自由意志的虚幻本质。(Geoff Simons,1985,109页)

6. (disputing 2) Philip Johnson-Laird, 1988a, Free will results from a multilevel representational structure. A multilevel representational structure is capable of producing free will. The system must have levels for:
- representing options for action (e.g., go to dinner, read, take a walk);
- representing the grounds for deciding which option to take (e.g., choose the one that makes me happy, choose by flipping a coin);
- representing a method for deciding which decision-making process to follow (e.g. follow the most 'rational' method, follow the fastest method).
Computers that have been programmed with such multilevel structures can exhibit free will.

(反驳2) Philip Johnson-Laird,1988a,自由意志产生于一个多层表示结构。一个多层表示结构有能力产生自由意志。这个系统必须含有以下层:
- 表示行动选项 (例如:去吃饭,看书,散步);
- 表示决定选择哪个选项的考量 (例如:选择使我高兴的选项,或掷个硬币);
- 表示选择哪个决策过程的方法 (例如:选择最“理性”的方式,或选择最快的方式)。
被编程为具有这种多层结构的计算机能够展现出自由意志。
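
(Translator's aside: Johnson-Laird's three levels can be caricatured in a few lines of code. The sketch below is only a toy illustration in Python -- the option names, grounds, and selection "method" are all invented and far simpler than anything Johnson-Laird describes -- but it shows the layering he has in mind.)

```python
import random

# Level 1: options for action
options = ["go to dinner", "read", "take a walk"]

# Level 2: grounds for deciding which option to take
grounds = {
    "what makes me happiest": lambda opts: max(opts, key=len),  # crude stand-in criterion
    "flip a coin":            lambda opts: random.choice(opts),
}

# Level 3: a method for deciding which decision-making process to follow
def pick_ground(available):
    """Here the 'method' is simply: prefer the coin flip if it is available."""
    return available.get("flip a coin") or next(iter(available.values()))

chooser = pick_ground(grounds)   # the system decides how it will decide
print(chooser(options))          # ...and then decides what to do
```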

7. (Disputing 2) Geoff Simons, 1985, Free will is a decision-making process. Free will is a decision-making process characterized by selection of options, discrimination between clusters of data, and choice between alternatives. Because computers already make such choices, they possess free will. (Too naive? -- translator)

(反驳2) Geoff Simons,1985,自由意志是一个决策过程。自由意志是一个以选项选择、数据簇区分以及在多个可选项之间作出选择为特征的决策过程。由于计算机已经能做这些选择,它们具有自由意志。

8. (Supporting 7) Geoff Simons, 1985, Conditional jumps constitute free will. The ability of a system to perform conditional jumps when confronted with changing information gives it the potential to make free decisions. For example, a computer may or may not "jump" when it interprets the instruction "proceed to address 9739 if the contents of register A are less than 10." The decision making that results from this ability frees the machine from being a mere puppet of the programmer.

(支持7) Geoff Simons,1985,条件跳转构成自由意志。系统在遇到变化的信息时执行条件跳转的能力使它具有自由决策的潜能。例如,一个计算机在解释指令“如果寄存器A的内容小于10,则转到地址9739”的时候,可能跳转也可能不跳转。由这种能力所产生的决策使机器不再只是程序员的提线木偶。
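
(Translator's aside: the conditional jump in Box 8 is easy to make concrete. Below is a minimal sketch of a toy register machine in Python; the instruction encoding and addresses are purely illustrative and not taken from any real instruction set.)

```python
# A toy register machine: the same program "jumps" or falls through
# depending on the data in register A, not on anything decided at run
# time by the programmer. (Illustrative only; not a real instruction set.)

def run(program, registers):
    pc = 0                              # program counter
    trace = []
    while pc in program:
        op, *args = program[pc]
        if op == "JUMP_IF_LT":          # e.g. "go to 9739 if A < 10"
            reg, limit, target = args
            if registers[reg] < limit:
                trace.append(f"{pc}: {reg}={registers[reg]} < {limit}, jump to {target}")
                pc = target
                continue
            trace.append(f"{pc}: {reg}={registers[reg]} >= {limit}, fall through")
        elif op == "HALT":
            trace.append(f"{pc}: halt")
            break
        pc += 1
    return trace

program = {
    0:    ("JUMP_IF_LT", "A", 10, 9739),
    1:    ("HALT",),
    9739: ("HALT",),
}

print(run(program, {"A": 3}))    # takes the jump to address 9739
print(run(program, {"A": 42}))   # falls through to address 1
```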

9. (Disputing 2) Alan Turing, 1951, Machines can exhibit free will by way of random selection. Free will can be produced in a machine that generates random values, for example, by sampling random noise. (That's what I used to think and still half believe -- translator)

(反驳2) 阿兰·图灵,1951,机器能够通过随机选择实现自由意志。自由意志能通过一个能产生随机数的机器制造出来,例如,通过采样随机噪声。
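
(Translator's aside: a minimal sketch of what "sampling random noise" might look like in practice. Here I use the operating system's entropy pool via Python's os.urandom as a stand-in for a physical noise source; this is my illustration, not Turing's design.)

```python
import os

def choose_at_random(options):
    """Pick an option using bytes from the OS entropy pool,
    standing in for Turing's idea of sampling physical random noise."""
    noise = os.urandom(4)                    # 4 bytes of "noise"
    value = int.from_bytes(noise, "big")
    return options[value % len(options)]

print(choose_at_random(["go to dinner", "read", "take a walk"]))
```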

10. (Supporting 9) Jack Copeland, 1993, Free will arises from random selection of alternatives in nil preference situations. When an otherwise deterministic system makes a random choice in a nil preference situation, that system exhibits free will. A nil preference situation is one in which an agent must choose between a variety of equally preferred alternatives (for example, whether to eat one orange or another from a bag of equally good oranges). The available alternatives may have arisen from deterministic factors, but "when the dice roll", the choice is made freely.

(支持9) Jack Copeland,1993,自由意志从均偏好情形的随机选择中产生。当一个在其他方面是决定性的系统在均偏好情形中作出随机选择的时候,这个系统就展示出了自由意志。在均偏好情形中,一个工作体必须从一组偏好完全相同的选项中做出选择(例如从一袋同样好的橘子中选择吃哪一个)。这些可选选项可能由决定性因素产生,但“一旦掷了骰子”,这个选择就是自由作出的。

11. (Disputing 9) Randomization sacrifices responsibility. Machines that make decisions based on random choices have no responsibility for their actions, because it is then a matter of chance that they act one way rather than another. Because responsibility is necessary for free will, such machines lack free will.

(反驳9) 随机化牺牲了责任。基于随机选择做决策的机器不对它们的行为负责,因为它们以这种而非那种方式行动只是偶然的结果。因为责任是自由意志的必要条件,这样的机器不具备自由意志。

12. (Supporting 11) A. J. Ayer, 1954, Free will is necessary for moral responsibility. Randomness and moral responsibility are incompatible. We cannot be responsible for what happens randomly any more than we can be responsible for what is predetermined. Because any adequate account of moral responsibility should be grounded in the notion of free will, randomness cannot adequately characterize free will.

(支持11) A. J. Ayer,1954,自由意志是道德责任所必需。随机性和道德责任是不相容的。正如我们不能对预先决定的事情负责一样,我们也不能对随机发生的事情负责。因为任何恰当的道德责任阐述都必须基于自由意志概念,随机性不能恰当地表征自由意志。

13. (Disputing 11) Jack Copeland, 1993, Random choice and responsibility are compatible. An agent that chooses randomly in a nil preference situation (one in which all choices are equally preferred) is still responsible for its actions. A gunman can randomly choose to kill 1 of 5 hostages. He chooses at random, but he is still responsible for killing the person whom he picks, because he was responsible for taking the people hostage in the first place. Random choice only revokes responsibility if the choice is between alternatives of differing ethical value.

(反驳11) Jack Copeland,1993,随机选择和责任是兼容的。一个在均偏好情形中随机选择的工作体(所有选择具有相同的偏好)仍旧要对其行为负责。一个枪手可以从5个人质中随机选择一个射杀。他随机选择,但他仍旧要对射杀他所选中的人负责,因为他首先要对将这些人扣为人质(并可能杀害他们——编者)这个行为负责。随机选择只有在具有不同道德价值的选项之间作出时,才会免除责任。

14. (Disputing 9) The helplessness argument. When agents (human or machine) make choices at random, they lack free will, because their choices are then beyond their control. As J. A. Shaffer (1968) puts it, the agent is "at the helpless mercy of these eruptions within him which control his behavior."

(反驳9) “无助观点”。当工作体(人或机器)随机选择时,他们不具有自由意志,因为他们的选择不为他们控制。如J. A. Shaffer(1968)指出,工作体无助地处在它体内控制其行为的随机爆发的掌控之下。

15. (Disputing 14) Jack Copeland, 1993, The Turing randomizer is only a tiebreaker. The helplessness argument is misleading, because it implies that random processes control all decision making -- for example, the decision of whether to wait at the curb or jump out in front of an oncoming truck. All the Turing randomizer does is determine what a machine will do in those situations in which options are equally preferred.

(反驳14) Jack Copeland,1993,图灵随机发生器只是一个平局突破器。“无助观点”是有误导性的,因为它暗示随机过程控制了所有的决策——例如,决定是待在路边还是跳到迎面驶来的卡车前面这样的问题。图灵随机发生器所做的,只是在各个选项偏好完全相同的情形下决定机器将做什么。
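
(Translator's aside: Copeland's "randomizer as tiebreaker" point can be sketched in a few lines. In the toy Python example below -- option names and preference scores are invented for illustration -- the choice is fully deterministic whenever preferences differ, and the random draw is used only to break exact ties.)

```python
import random

def decide(options, score):
    """Deterministic choice by preference; random draw only to break exact ties."""
    best = max(score(o) for o in options)
    top = [o for o in options if score(o) == best]
    if len(top) == 1:
        return top[0]              # ordinary deterministic decision
    return random.choice(top)      # nil-preference situation: tiebreaker only

# Strong preference: randomness never enters the decision.
print(decide(["wait at the curb", "jump in front of the truck"],
             {"wait at the curb": 100, "jump in front of the truck": -100}.get))

# Equally good oranges: the randomizer merely breaks the tie.
print(decide(["orange #1", "orange #2", "orange #3"], lambda o: 1))
```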

16. (Disputing 2) Jack Copeland, 1993, Being a deterministic machine is compatible with having free will. Humans and computers are both deterministic systems, but this is compatible with their being free. Actions caused by an agent's beliefs, desires, inclinations, and so forth are free, because if those factors had been different, the agent might have acted differently.

(反驳2) Jack Copeland,1993,作为决定性机器与拥有自由意志是兼容的。人类和计算机都是决定性系统,但这与它们是自由的相兼容。由一个工作体的信念、欲望、倾向(性格、偏好——编者)等产生的行为是自由的,因为假设这些因素不同,那么工作体的行为也可能会不同。

17. (Supporting 2) Computers only exhibit the free will of their programmers. Computers can't have free will because they cannot act except as they are determined to by their designers and programmers.

(支持2) 计算机只能呈现其程序员的自由意志。计算机不能具有自由意志,因为它们只能按照设计者和程序员所决定的方式行动。

18. (Disputing 17) Geoff Simons, 1985, Some computers can program themselves. Automatic programming systems (APs) write computer programs by following some of the same heuristics that human programmers use. They specify the task that the program is to perform, choose a language to write the program in, articulate the problem area the program will be applied to, and make use of information about various programming strategies. Programs written by such APs are not written by humans, and so computers that run those programs do not just mirror the free will of humans.

(反驳17) Geoff Simons,1985,有些计算机能自己编程。自动编程系统(APs)通过遵循一些人类程序员所用的启发式方法来编写计算机程序。它们能够指定程序要执行的任务,选择编写该程序的语言,描述该程序适用的问题域,并利用关于不同编程策略的信息。由这种系统编写的程序不是由人编写的,因此运行这些程序的计算机并不只是反映人类的自由意志。
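
(Translator's aside: to give the flavour of "a program that writes a program" -- and only the flavour, since real automatic programming systems are vastly more sophisticated -- here is a toy Python sketch. The specification format and function names are invented for illustration.)

```python
# A program that writes, compiles, and runs another (tiny) program.

def generate_program(spec):
    """Emit Python source text for a small function from a tiny declarative spec."""
    name, op = spec["name"], spec["op"]
    return f"def {name}(x, y):\n    return x {op} y\n"

spec = {"name": "combine", "op": "+"}           # invented specification format
source = generate_program(spec)
print(source)                                   # the program that was "written"

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)   # run the generated code
print(namespace["combine"](2, 3))               # -> 5
```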

19. (Supporting 17) Paul Ziff, 1959, Preprogrammed robots can't have psychological states. Because they are programmed, robots have no psychological states of their own. They may act as if they have psychological states, but only because their programmers have psychological states and have programmed the robots to act accordingly.

(支持17) Paul Ziff,1959,预编程的机器人没有心理状态。因为它们是被编程的,机器人不具有自己的心理状态。它们可能会表现得像具有心理状态那样,但这只是因为它们的程序员具有心理状态,并把机器人编程为相应地行动。

20. (Disputing 19) Ninian Smart, 1964, Preprogrammed humans have psychological states. If determinism is true, then humans are programmed by nature and yet have psychological states. Thus, if determinism is true, we have a counterexample to the claim that preprogrammed entities can't have psychological states. Supported by "Humans are programmed" (Box 4).

de-ter-min-ism: The belief that all actions and events are determined by the influences of nature and history. Human actions result from strict causal laws that describe the brain and its relation to the world. Free will is an illusion.

(反驳19) Ninian Smart,1964,预编程的人具有心理状态。如果决定论是正确的,那么人类是由自然编程的,却也具有心理状态。因此,如果决定论是正确的,我们就有了一个反例,以驳斥预编程的实体不能具有心理状态的说法。该论点受到第4框“人类是被编程的”的支持。

决定论:认为所有行动和事件都是由自然和历史的影响所决定的观点。人类行为产生于描述大脑及其与世界关系的严格因果定律。自由意志是一个幻象。

21. (Supporting 19) The record player argument. A robot 'plays' its behavior in the same way that a phonograph plays a record. It is just programmed to behave in certain ways. For example, "When we laugh at the joke of a robot, we are really appreciating the wit of a human programmer, and not the wit of the robot" (Putnam, 1964, p. 679). (This argument is not so appealing to me, as I know at least some of the advanced things a modern computer can do -- translator)

(支持19) 留声机观点。一个机器人“播放”它的行为,就像一台留声机播放唱片。它只是被编程而做出某些行为。例如,“当我们对一个机器人的笑话大笑的时候,我们实际上是在欣赏人类程序员的机智,而不是机器人的机智”(Putnam,1964,679页)。

22. (Disputing 21) Hilary Putnam, 1964, The robot learning response. A robot could be programmed to produce new behavior by learning in the same way humans do. For example, a program that learned to tell new jokes would not simply be repeating jokes the programmer had entered into its memory, but would be inventing jokes in the same way humans do. (But how? More details needed -- translator)

(反驳21) Hilary Putnam,1964,机器人的学习反馈。机器人能够被编程为像人类一样通过学习产生新的行为。例如,一个能够学着说新笑话的程序将不会简单重复程序员预先设定的笑话,而会像人类一样发明新的笑话。

23. (Supporting 19) Paul Ziff, 1959, The reprogramming argument. Humans can't be reprogrammed in the arbitrary way that robots can be. For instance, a robot can be programmed to act tired no matter what its physical state is, whereas a human normally becomes tired only after some kind of exertion. The actions of the robot depend entirely on the whims of the programmer, whereas human behavior is self-determined. (To me, this, the physical link, is a very insightful and promising argument -- translator)

(支持19) Paul Ziff,1959,重编程观点。人类不能被像对机器人那样以任意方式重编程。例如,一个机器人能够被编程为无论在任何身心(物理)状况下都觉得累的样子,但人类通常只会在劳作之后才感到疲惫。机器人的行为完全依赖于程序员的奇想,而人类的行为则是自决定的。

24. (Disputing 23) Hilary Putnam, 1964, Reprogramming is consistent with free will. The reprogramming argument fails to show that robots lack free will, for the following reasons.
- Humans can be reprogrammed without affecting their free will. For example, a criminal might be reprogrammed into a good citizen via a brain operation, but he could still make free decisions (perhaps, for example, deciding to become a criminal once again).
- Robots cannot always be arbitrarily reprogrammed in the way that the reprogramming argument suggests. For instance, if a robot is psychologically isomorphic to a human, it cannot be arbitrarily reprogrammed. (Interesting, structurally the same. This one is very strong, but it assumes such a robot exists -- translator)
- Even if robots can be arbitrarily reprogrammed, this does not exclude them from having free will. Such a robot may still produce spontaneous and unpredictable behavior. (My suggestion is that we need to be very cautious about the use of the word 'unpredictable', especially in what context and for what purpose it is used, as far as this entire discussion is concerned -- translator)

"Look, that robot's been reprogrammed but it still acts spontaneously and unpredictably"

(反驳23) Hilary Putnam,1964,重编程和自由意志不矛盾。重编程观点未能证明机器人不具有自由意志,理由如下:
- 人类能在不影响他们自由意志的情形下被“重编程”。例如,一个罪犯可能通过脑手术被重编程成为一个好公民,但他仍能作自由决定(或许,例如,决定再次成为罪犯)。
- 机器人并非总能像重编程观点所说的那样被随意重编程。例如,如果一个机器人在心理上与人类同构(本人很喜欢这个概念,代数里的这个概念和这里的意义或许极相近——编者),它就不能被随意重编程。
- 即使机器人能被随意重编程,这也不能排除它们拥有自由意志。这样的机器人仍可能产生自发且不可预测的行为。

“看哪,那个机器人已经被重编程,但它仍旧自发地、不可预测地行动”

25. (Supporting 2) L. Jonathan Cohen, 1955, Computers do not choose their own rules. We refer to people as "having no mind of their own" when they only follow the rules or commands of others. Computers are in a similar situation. They are programmed with rules and follow commands without conscious choice. Therefore, computers lack free will. (So powerful that no dispute to it is given in the map. One more word from the translator: roughly speaking, the rules a computer follows at run time have to be the same as those in its program; otherwise the run-time rule is either ill-implemented (i.e., a bug), erroneously altered while running, or making no sense -- translator)

(支持2) L. Jonathan Cohen,1955,计算机不选择它们自己的准则。当人们只是遵从别人的规则或命令时,我们会说他们“没有自己的主见”。计算机也处于类似的处境。它们被编入规则,并在没有自觉选择的情况下遵从命令(一种总体原则——编者)。因此,计算机不具有自由意志。

26. (Supporting 2) Joseph Rychlak, 1991, Computers can't do otherwise. An agent's actions are free if the agent can do otherwise than perform them. This means that an agent is free only if it can change its goals. But only dialectical reasoning allows an agent to change its goals and thereby act freely. Because machines are not capable of that kind of thinking, they are not free. (Again a nice point, but lacking in clarity -- translator)
Note: Also, see the 'Can physical symbol systems think dialectically?' arguments on Map 3.

(支持2) Joseph Rychlak,1991,计算机不能以其他方式行事(只能以单一方式做事情——编者)。只有当工作体能够不执行这些行动而去做别的事时,它的行动才是自由的。这意味着,只有当工作体能够改变其目标时,它才是自由的。然而,只有辩证推理才能让工作体改变其目标并因此自由行动。因为机器不能进行这种思考,它们不是自由的。

27. (Supporting 2) Selmer Bringsjord, 1992, Free will yields an infinitude that finite machines can't reproduce. Unlike deterministic machines (e.g., Turing machines), persons can be in an infinite number of states in a finite period of time. That infinite capacity allows persons to make decisions that machines could never make. (Again too powerful and too close to the fundamental level to be challenged; a more detailed proof still needs to be given, but that is not as hard as for some of the arguments above -- translator)

Note: Bringsjord's argument is fleshed out in the 'Can automata think?' arguments on Map 7. Also, see 'Can computers be persons?' arguments on this map.

(支持2) Selmer Bringsjord,1992,自由意志产生有限机器无法复现的无限性。与决定性机器(例如图灵机)不同,人能够在有限时间内处于无限多个状态。这种无限的能力使人能够作出机器永远无法作出的决定。

注:Bringsjord的论证在第七图“自动机能否思考?”中有更详细的讨论。也请参见本图中“计算机能否成为人?”部分的讨论。

(Just finished less than 1/3 of the entire Map 1. To be continued, possibly in different posts. 本章节未完待续)

(The remaining maps are to be translated and published in separate posts. 未完章节待续)
