Can Computers Think? Map 1, Topic 2: "Can Computers Have Emotions?"


Can Computers Think? The History and Status of the Debate - Map 1 of 7


Issue Area: Can computers have emotions?



Link to the part of the map this discussion covers: http://www.macrovu.com/CCTWeb/CCT1/CCTMap1Emotions.html


1. Alan Turing, 1950, Yes, machines can (or will be able to) think. A computational system can possess all important elements of human thinking or understanding.

I believe that at the end of the century ... one will be able to speak of machines thinking without expecting to be contradicted.

(Same as in the first installment -- translator)

28. (Disputing 1) Machines can't have emotions. Machines can never be in emotional states (they can never be angry, joyous, fearful, etc.). Emotions are necessary for thought. Therefore, computers can't think.


29. (Supporting 28) Paul Ziff, 1959, The concept of feeling only applies to living organisms. Because robots are mechanistic artifacts, not organisms, they cannot have feelings.


30. (Disputing 29) J.J.C. Smart, 1964, Having feelings does not imply being a living organism. Although we haven't yet come across any nonliving entities with feelings, perhaps in the future we will. There is no logical contradiction in the idea of a nonliving being that has feelings.

Rocks: nonliving and no feelings. Check!
Salt: nonliving and no feelings. Check!
Staplers ... check!
Water ... check! (We may need to check this one again with the respected Buddhist teacher Jingkong and the Japanese author Masaru Emoto he cites -- translator)
Well, no nonliving things with feelings ... yet.


31. (Disputing 29) J.J.C. Smart, 1964, We can imagine artifacts that have feelings. Several cases show that artifacts could have feelings. (1) If the biblical account of creation in Genesis were true, then humans would be both living creatures and artifacts created by God. (Again the cliché of humans being treated as machines -- translator) (2) We could imagine self-replicating mechanisms whose offspring would manifest small random alterations, allowing them to evolve. Such mechanisms might be considered living and at the same time artifacts. (Just think of Conway's Game of Life -- translator)



'They are artifacts of God as well as living creatures.'

32. (Disputing 29) Hilary Putnam, 1964 'Alive' is not definitionally based on structure. Because the definition of 'alive' is not based on structure, it allows for nonhuman robot physiologies. Robots made up of cogs and transistors instead of neurons and blood vessels might have feelings because they might actually be alive.


33. (Supporting 28) Georges Rey, 1980 Machines lack the physiological components of emotion. Machines lack the human physiology that is essential to emotions, for example, the ability to secrete hormones and neuroregulators. Because machines can't reproduce such a physiology through abstract computational processes, they can't process emotions.


34. (Disputing 33) Aaron Sloman, 1987 Physiology is not essential to emotion. Human emotion can be implemented on a computer because the relevant features can be modeled (the emotion's interaction with cognitive states, motivations, etc.). The physiological aspects of emotion (which include biochemistry, behavior, and proprioception) are evolutionary remnants; they are not essential. (This generally makes sense to me, though whether it is actually so is another, unknown matter -- translator)


35. (Supporting 28) Joseph F. Rychlak, 1991 Machines can't think dialectically, and dialectical thinking is necessary for emotions. Emotions are experienced in complicated dialectical circumstances, which require the ability to make judgments about others and gauge oppositions. Machines can't reason in that way, so machines can't experience emotions. Supported by "Symbol Systems Cannot Think Dialectically," Map 3, Box 25. (This needs clarification of what dialectical thinking is and does in this context, and more detailed evidence that computers are unable to do it -- translator)


36. (Supporting 28) Emotions are necessary for thought. Only systems that can be in emotional states can be said to think.


37. (Supporting 36) Geoffrey Jefferson, 1949 Emotional experience is necessary for thought. The only entities that can possess human abilities are entities that can act on the basis of felt emotions. No mechanism can feel anything. Therefore, machines can't possess human abilities, in particular, the ability to think.
Note: Also, see "Mechanisms Can't Possess Human Consciousness." Map 6, Box 10.


38. (Supporting 36) David Gelernter, 1994 Computers must be capable of emotional association to think. In order to think, a computer must be capable of a full spectrum of thought. Computers may be capable of high-end thinking, which is focused, analytic, and goal-oriented. But in order to think as humans do, they must also be capable of low-end thinking, which is diffuse, analogical, and associative. For example, a flower and a flowered dress might be associated in low-end thought by a diffuse set of emotionally charged linkages. (Definitely an excellent point, though it is still not sufficient to rule out the possibility that such linkages could be made -- translator)


39. (Supporting 36) Tom Stonier, 1992 Emotional machines need limbic systems. Emotional machines need the machine equivalent of the human limbic system. The limbic system subserves emotional states, fosters drives, and motivates behavior. It is also responsible for the pleasure-pain principle, which guides the activities of all higher animals. Through the development of artificial limbic systems, emotional machines will be attainable in 20 to 50 years.


40. (Supporting 39) Hans Moravec, 1988 Artificial minds should mimic animal evolution. The fastest progress in AI research can be made by imitating the capabilities of animals, starting near the bottom of the phylogenetic scale and working upward toward animals with more complex nervous systems. (One way of applying bionics to AI; does this argument mean to imply that evolution is an essential part of creating AI? -- translator)


41. (Disputing 28) Michael Scriven, 1960, as articulated by Arthur Danto, 1960 If a robot can honestly talk about its feelings, it has feelings. We can determine whether a robot has feelings once we configure it to (1) use English the way humans do, (2) distinguish truth from falsehood, and (3) answer questions honestly. We then simply ask, 'Are you conscious of your feelings?' If it says 'yes,' then it has feelings.


'Are you conscious of your feelings?'
'Yes'
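Box 41's proposal is, in effect, a three-condition protocol followed by a single question. A minimal sketch in Python (the Robot class and its boolean capability flags are hypothetical stand-ins for conditions (1)-(3), not anything Scriven or Danto specified) makes the protocol, and its weakness, concrete:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    # Conditions (1)-(3) from box 41, simply assumed here as given capabilities.
    uses_english: bool = True
    distinguishes_truth: bool = True
    answers_honestly: bool = True

    def ask(self, question: str) -> str:
        # A stand-in reply; whether such a reply could settle anything
        # is exactly what box 42 disputes.
        return "yes"

def has_feelings(robot: Robot) -> bool:
    """Box 41's test: verify the configuration, then simply ask."""
    if not (robot.uses_english and robot.distinguishes_truth
            and robot.answers_honestly):
        raise ValueError("robot not configured per conditions (1)-(3)")
    return robot.ask("Are you conscious of your feelings?") == "yes"

print(has_feelings(Robot()))  # True -- which is precisely what Danto questions
```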

42. (Disputing 41) Arthur Danto, 1960 The robot's dilemma. Once an advanced robot is built, the way we talk about robots, machines, and feelings will either change or it will not. This poses a dilemma. (At first I felt it was nonsensical, but later I decided the following was worth typing in -- translator)
Either English will not change, in which case we will be forced to say the robot is not conscious, because English speakers do not use "conscious" as a predicate for machines.
Or English will change, in which case it can evolve in one of two ways.
  Either we simply decide to call robots "conscious," in which case we have made an arbitrary and hence unwarranted change to the language.
  Or we construct a special language that applies exclusively to machines, for example, a language that uses the suffix "-m" to indicate that mentalistic terms like "knows" and "conscious" apply to physical events in machines ("knows-m," "conscious-m"), in which case words like "conscious-m" would be used for the robot in the same situations in which "conscious" is used for humans. But a lack of knowledge about how human consciousness corresponds to robot consciousness is precisely the issue at hand.
  In either case, no means is provided to tell whether a robot is conscious; at best the question is pushed back.
In either case, simply asking the machine whether it has conscious feelings will not help us determine whether it does. (It seems to me this is the result of a mismatch between the introduction of robots and the development of the language -- translator)



43. (Disputing 28 -- shouldn't this actually be "Supporting"? Similar doubts arise for other tags in this and the other maps -- translator) Paul Weiss, 1960 Machines cannot love or be loved. Machines, which are mere collections of parts, cannot love or be loved. Only unified wholes that govern their parts, such as humans, have the capacity to love what is lovable or to be loved by those who love. Machines fail on both counts, so they are subhuman and lack minds. (What about the romances depicted in science fiction movies? -- translator)


44. (Disputing 28) Margaret Boden, 1977 Emotions are cognitive schemata. What is essential to emotion is the schema of cognitive evaluation that determines the relationship between the emotion and the subject's other cognitive states. In order for machines to have emotions, they must model the complex interactions involved in the use of such concepts as pride, shame, and so forth. Furthermore, these concepts must be (partially) responsible for the behavior of the system. (Does 'responsible' in this context have any particular implication? -- translator)


45. (Disputing 28) Daniel Dennett, 1978 Our intuitions about pain are incoherent. At present, it's easy to criticize the possibility of robot pain, but only because our everyday understanding of pain is incoherent and self-contradictory. For example, morphine is sometimes described as preventing the generation of pain, and sometimes as just blocking pain that already exists. But those are inconsistent descriptions. Once we have a coherent theory of pain, a robot could in principle be constructed to instantiate that theory and thereby feel pain.


"Morphine prevents the generation of pain"
"No! Morphine just blocks the pain that already exists."
"Once we clear up these confusions, we can implement a theory of pain on a computer."

46. (Disputing 28) Michael Dyer, 1987 Emotions can be modeled by describing their relations to cognitive states. Modeling emotions involves two tasks: (1) the schematic task of programming a system to understand emotions, and (2) the functional/behavioral task of programming a system to behave emotionally through the interaction of emotional states and other cognitive states, such as planning, learning, and recall.
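Dyer's two tasks can be made concrete with a toy sketch. The Python below is purely illustrative (the EmotionSchema and Agent classes and all their fields are assumptions, not Dyer's code): task (1) shows up as declarative schemas for understanding emotions, task (2) as an emotional state that is set by appraisal and then biases planning and recall.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionSchema:
    """Task (1): declarative knowledge for understanding an emotion."""
    name: str
    typical_cause: str       # e.g. "goal failure"
    typical_response: str    # e.g. "revise plan, seek help"

@dataclass
class Agent:
    """Task (2): emotional state interacting with planning and recall."""
    schemas: dict[str, EmotionSchema]
    mood: str = "neutral"
    memory: list[str] = field(default_factory=list)

    def appraise(self, event: str, goal_satisfied: bool) -> None:
        # The emotional state is set by a cognitive evaluation of the event.
        self.mood = "joy" if goal_satisfied else "frustration"
        self.memory.append(f"{event} -> {self.mood}")

    def plan(self, goal: str) -> str:
        # The current emotional state biases subsequent planning and recall.
        if self.mood == "frustration" and self.memory:
            return f"revise approach to '{goal}' (recalling: {self.memory[-1]})"
        return f"continue pursuing '{goal}'"

agent = Agent(schemas={"joy": EmotionSchema("joy", "goal success", "persist")})
agent.appraise("experiment failed", goal_satisfied=False)
print(agent.plan("finish the paper"))  # the mood changes the plan
```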


47. (Supporting 46) Implemented Model BORIS. BORIS is a narrative reader designed to understand descriptions of the emotional states of narrative characters. BORIS can predict the emotional responses of characters and interpret those responses by tracing them back to their probable causes.


48. (Supporting 46) Implemented Model OpEd. OpEd is an editorial reader that deals with non-narrative editorials -- for example, critical book reviews. The program tracks the beliefs of the writer as well as the beliefs the writer ascribes to his or her critics. Unlike BORIS, OpEd is able to deal with non-narrative texts, in which "The writer explicitly supports one set of beliefs while attacking another."


49. (Supporting 46) Implemented Model DAYDREAMER. DAYDREAMER is a stream-of-thought generator that specifies how representations of emotional states affect other forms of cognitive processing. It does this by concocting "daydreams" of possible outcomes and reactions and then using those daydreams to represent the stream of consciousness of the system.
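As summarized above, DAYDREAMER's core move is generative: imagine possible outcomes, color them with the current emotional state, and treat the resulting sequence as a stream of consciousness. A toy sketch under that reading follows; the generation rule and the labels are invented for illustration and are not taken from the actual program.

```python
import itertools

def daydream(event: str, emotions: list[str]) -> list[str]:
    """Concoct imagined outcomes of an event, colored by emotional state."""
    outcomes = ["it goes well", "it goes badly"]
    return [f"imagine '{event}', {outcome} (felt with {emotion})"
            for outcome, emotion in itertools.product(outcomes, emotions)]

# The "stream of consciousness" is just the running sequence of daydreams.
for thought in daydream("job interview", ["hope", "anxiety"]):
    print(thought)
```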


50. (Disputing 28) Aaron Sloman, 1987 Emotions are the solution to a design problem. Emotions (both in organic creatures and in artificial creations) are the solution to a design problem -- how to cope intelligently with a rapidly changing environment, given established goals and limited processing resources. In both humans and machines the problem is solved with intelligent computational strategies.


51. (Disputing 28) Nico Frijda and Jaap Swagerman, 1987 Emotions are manifestations of concern realization. Emotional states result from a "concern realization system" that matches internal representations against actual circumstances in order to cope with an uncertain environment. Computers that implement the concern realization system go through emotional states.
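The description suggests a simple architecture: concerns are internal representations of what matters to the agent, each is matched against the perceived circumstances, and the match or mismatch surfaces as an emotional state. A toy sketch of that reading (the particular concerns, world variables, and thresholds are assumptions):

```python
# Concerns: internal representations of what matters, each expressed as a
# predicate over the currently perceived circumstances ("world").
concerns = {
    "safety": lambda world: world["threat_level"] < 0.3,
    "companionship": lambda world: world["allies_nearby"] > 0,
}

def concern_realization(world: dict) -> dict:
    """Match each concern against actual circumstances (box 51's system)."""
    return {name: ("contentment" if satisfied(world) else "distress")
            for name, satisfied in concerns.items()}

print(concern_realization({"threat_level": 0.8, "allies_nearby": 2}))
# {'safety': 'distress', 'companionship': 'contentment'}
```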


52. (Disputing 28) Emotions are cognitive evaluations. Emotions are determined by the structure, content, and organization of knowledge representations and the processes that operate on them. A machine equipped with the correct knowledge-handling mechanisms, which result in appropriate behavior, will have emotions.


53. (Disputing 52) Michael Arbib, 1992 Emotions color perception and action. Cognitive appraisal, in the form of knowledge representation plus appropriate behavior, is not enough to convert bare information processing into emotion. Such a theory does not account for the fact that emotions can color one's perceptions and actions. For example, the perception of a winning touchdown in a football game could be computationally modeled as knowledge representation plus appropriate behavior. But this doesn't account for the differently colored perceptions of fans of the opposing teams. (If I haven't misunderstood, I can't see the problem with this, or why this couldn't be attained by machines -- translator)


54. (Disputing 28) Philip Johnson-Laird, 1988a Feelings are information signals in a cognitive system. Feelings are needs and emotions, which correspond to information signals of two kinds: (1) needs, which arise from lower-level distributed processors that monitor certain internal aspects of the body; and (2) emotions, which also arise from lower-level distributed processors but originate as cognitive interpretations of external events, especially social events. A robot could have feelings if its computational structure implemented those two kinds of signals. (Sounds quite sensible to me; again the image comes to mind of a future in which humans and machines step toward each other from opposite directions to close the gap -- translator)
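The two signal kinds lend themselves to a small sketch: needs as threshold checks by low-level monitors of internal bodily variables, and emotions as cognitive interpretations of external (especially social) events. Everything concrete below (variables, thresholds, event labels) is assumed for illustration:

```python
def need_signals(body: dict) -> list[str]:
    """Kind (1): low-level distributed monitors of internal bodily state."""
    signals = []
    if body["energy"] < 0.2:
        signals.append("need: food")
    if body["temperature"] > 39.0:
        signals.append("need: cooling")
    return signals

def emotion_signals(event: str) -> list[str]:
    """Kind (2): cognitive interpretation of an external, social event."""
    interpretations = {"praise from a peer": "pride",
                       "public criticism": "shame"}
    return ([f"emotion: {interpretations[event]}"]
            if event in interpretations else [])

print(need_signals({"energy": 0.1, "temperature": 37.0}))  # ['need: food']
print(emotion_signals("public criticism"))                 # ['emotion: shame']
```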


55. (Disputing 28) Aaron Sloman and Monica Croucher, 1981 Emotions are the product of motivational representations. Emotions result from interactions between motives and other cognitive states. Motives are representations of states of the world to be achieved, prevented, and so forth. A robot with the proper motivational processes will have emotions.


56. (Supporting 55) Aaron Sloman, 1987 Hierarchical theory of affects. Emotional states arise from hierarchically structured dispositional states, that is, tendencies to behave in certain ways given certain circumstances. Higher-level dispositions influence lower-level dispositions, which in turn influence external behavior.
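A toy sketch of the hierarchy described in box 56: a high-level disposition (a broad attitude set by circumstances) re-weights lower-level behavioral tendencies, which in turn select external behavior. The two-level structure and the particular dispositions are illustrative assumptions:

```python
def high_level_disposition(circumstance: str) -> str:
    """A broad attitude induced by the situation."""
    return "cautious" if circumstance == "unfamiliar territory" else "confident"

def low_level_dispositions(attitude: str) -> dict:
    """The high-level attitude re-weights concrete behavioral tendencies."""
    if attitude == "cautious":
        return {"explore": 0.2, "retreat": 0.8}
    return {"explore": 0.8, "retreat": 0.2}

def behave(circumstance: str) -> str:
    weights = low_level_dispositions(high_level_disposition(circumstance))
    return max(weights, key=weights.get)  # act on the strongest tendency

print(behave("unfamiliar territory"))  # retreat
print(behave("home ground"))           # explore
```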


57. (Disputing 28) Geoff Simons, 1985 Emotion is a type of information processing. Once we understand the biochemical and cybernetic aspects of human emotions (easier said than done -- translator), we will be able to build computers with emotions. (Closely connected to argument 54 -- translator)


58. (Disputing 28) Geoff Simons, 1985 The Turing test provides evidence for emotions as well as for intelligence. Because behavior is an important part of determining whether a system has emotions, the Turing test is useful as a test for emotional capacities as well as for general intelligence. If a robot can pass the Turing test and if it has a cognitively plausible internal structure, then it can have emotions.
Note: Also, see Map 2.


"I really do feel bad." -- some machine
"Yes, that's a person." -- a human judge
