流利说-懂你英语-个人笔记 Level8-Unit1-Part2:On Controlling AI

英语流利说 Level8 Unit1 Part2 : On Controlling AI
Sam Harris: Can we build AI without losing control over it?
TEDSummit • 14:27 • Posted September 2016

Can we build AI without losing control over it?
我们能造个不失控的人工智能吗?

1
I'm going to talk about a failure of intuition that many of us suffer from.
我要谈的是一种我们许多人都会犯的直觉失误。
开头的I'm going to我完全是猜的,听起来像much of about。

2
It's really a failure to detect a certain kind of danger.
它实际上是一种对某类危险的察觉失灵。
a certain kind of 某种形式的……;某一种的……
eg:This is a certain kind of difficulty.

3
I'm going to describe a scenario that I think is both terrifying and likely to occur,
我要描绘一个我认为骇人且可能发生的场景,

4
and that's not a good combination, as it turns out.
事实证明,这不是个好的组合。
这里的combination指的是both terrifying and likely to occur。
as it turns out听起来像as transout。

5
And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool.
并非感到害怕,你们中的多数人反而觉得我要说的有点酷。

6
I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us.
我将讲述我们在人工智能上的成果如何最终毁灭我们。

7
And in fact, I think it's very difficult to say how they won't destroy us or inspire us to destroy ourselves.
事实上,我认为很难说清它们怎样才会不毁灭我们,或者不诱使我们自我毁灭。

8
And yet if you're anything like me, you'll find that it's fun to think about these things.
然而如果你有那么一点像我,你就会发现思考这些事情很有趣。

9
And that response is part of the problem. OK? That response should worry you.
这种反应是问题的一部分。对吗?这种反应应该会让你担心。
that response指it's fun to think about these things。关于人工智能会毁灭人类这个问题既有趣,也很让人担忧。
这个OK只出现了一瞬间。

10
If I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe,
如果我想要在这个演讲中说服你,我们很可能迎来一个全球饥荒,要么是因为气候变化,要么是其它灾难,
famine n. 饥荒;饥饿,奇缺
eg:Many old people have experienced a national famine, so they are very thrifty.

11
and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk."
你的孙辈,或他们的孙辈,很可能过着这样的生活,你不会想说:“有趣,我喜欢这个TED演讲。”

12
Famine isn't fun. Death by science fiction, on the other hand, is fun,
饥荒不有趣。然而,科幻小说中的死亡是有趣的,

13
and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal
此时,关于AI的发展,最令我担忧的一点是,我们似乎无法组织起
marshal n. 元帅;司仪;vt. 整理;引领;编列
eg:These things are in a mess, so how can I marshal them?

14
an appropriate emotional response to the dangers that lie ahead.
对即将到来的危险做出适当的情绪反应。
lie ahead 即将来临;在前面
eg:Face the challenge directly, and then success lies ahead.

15
I am unable to marshal this response, and I'm giving this talk.
我自己也无法做出这种恰当的反应,而我还在做这个演讲。


16
It's as though we stand before two doors.
这就如同我们站在两扇门前。

17
Behind door number 1, we stop making progress in building intelligent machines.
在1号门后,我们停止在制造智能机器方面取得进展。

18
Our computer hardware and software just stops getting better for some reason.
我们电脑硬件和软件由于某种原因停止改进。

19
Now take a moment to consider why this might happen.
现在花点时间思考下为什么这可能发生。

20
I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to.
我的意思是,考虑到智能和自动化的价值,只要我们还有一点能力,我们就会继续改进我们的技术。
这里的given是“考虑到”的意思
eg:Given that he's a green hand, the fact that he has made such a stupid mistake is pardonable.

21
What could stop us from doing this?
什么能阻止我们这么做呢?

22
A full-scale nuclear war?
一个全面的核战争?

23
A global pandemic?
一场全球性的大流行病?

24
An asteroid impact?
一次小行星撞击?

25
Justin Bieber becoming president of the United States?
Justin Bieber成为美国总统?

26
The point is, something would have to destroy civilization as we know it.
关键在于,那必须是某种足以摧毁我们所熟知的文明的东西。
as we know it 正如我们所知道的那样;正如我们所知
eg:The point is, some bad behavior would leave a bad impression on other people, as we know.

27
You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.
你必须想象一下,那得有多糟糕,才能一代又一代地、永久性地阻止我们改进技术。

28
Almost by definition, this is the worst thing that's ever happened in human history.
几乎从定义上来说,这就是人类历史上发生过的最糟糕的事情。

29
So the only alternative, and this is what lies behind door number two,
所以唯一的选择,就是二号门背后的东西,

30
is that we continue to improve our intelligent machines, year after year after year.
就是我们持续改进我们的智能机器,年复一年。

31
At a certain point, we will build machines that are smarter than we are,
到了某个时刻,我们将会造出比我们更聪明的机器,
at a certain point 到某个时刻;在某个节点

32
and once we have machines that are smarter than we are, they will begin to improve themselves.
而一旦我们造出比我们更聪明的机器,他们将开始自我改进。

33
And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.
然后我们就要冒数学家IJ Good所说的"智能爆炸"的风险,即这个进程会摆脱我们的控制。
当智能机器达到与人类智力相当的临界点后,人类将没有还手之力了,因为机器自我学习的速度远超人类。

34
Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us.
这种担忧常常被漫画化,就像我这里展示的,被画成害怕一支支怀有恶意的机器人军队会攻击我们。
caricature v. 把……画成漫画;滑稽地描述,使滑稽化
eg:The scenarios of many famous novels have been caricatured.

35
But that isn't the most likely scenario.
但这不是最可能的场景。

36
It's not that our machines will become spontaneously malevolent.
问题并不在于我们的机器会自发地变得有恶意。
spontaneously adv. 自发地;自然地;不由自主地
malevolent = malicious adj. 恶毒的;有恶意的;坏心肠的
eg:If a person often does unethical things, he or she may spontaneously become malevolent.

37
The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.
真正令人担忧的是,我们将造出比我们能干得多的机器,以至于它们的目标和我们的目标之间哪怕只有最细微的分歧,都可能毁灭我们。
divergence n. 差异;分歧;分散,发散;(气流或海洋的)分开处
eg:Divergence will always exist, but the attitude that you want to express determines whether or not you can decrease it.

38
Just think about how we relate to ants.
想一下我们和蚂蚁的关系。

39
We don't hate them. We don't go out of our way to harm them.
我们并不恨它们,也不会特意去伤害它们。
go out of one's way 特意;不怕麻烦
eg:We don't go out of our way to harm anyone, but that doesn't mean we are afraid of trouble.

40
In fact, sometimes we take pains not to harm them. We step over them on the sidewalk.
事实上,有时我们还会特意避免伤害它们:在人行道上,我们会从它们身上跨过去。
take pains 尽力,费苦心;耐心
step over 跨过;单步执行;不进入函式;逐过程
eg:Take pains as soon as possible, and then you will step over all the difficulties.

41
But whenever their presence seriously conflicts with one of our goals, let's say, when constructing a building like this one,
但是,每当它们的存在与我们的某个目标发生严重冲突时,比如说要建造像这样的一栋大楼时,

42
we annihilate them without a qualm.
我们毫不犹豫毁灭之。
annihilate vt. 歼灭;战胜;废止
qualm n. 疑虑;不安
eg:To annihilate and destroy is the real history of human beings, and going forward with a qualm is the right way to make progress.
《三体》里面有一句话:毁灭你,与你何干?

43
The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.
令人担忧的是,我们终有一天,会制造一个无论它是否有意识,都会用同样的无视来对待我们的机器。
disregard v. 忽视,无视,不尊重;n. 忽视,无视,不尊重
eg:Treating people with disregard will earn you disregard in return.

44
Now, I suspect this seems far-fetched to many of you.
我猜这对你们中的许多人来说看上去有点牵强。
far-fetched adj. 牵强附会的
eg:This news is a little far-fetched as an explanation of whether aliens exist.

45
I bet there are those of you who doubt that super intelligent AI is possible, much less inevitable.
我敢说你们中有人怀疑超级人工智能是否有可能,更不用说不可避免了。
much less 更不用说;不及
eg:We don't even understand his meaning, much less help him.

46
But then you must find something wrong with one of the following assumptions.
但是你必须找到以下假设中的错误。

47
And there are only three of them.
只有3个。

48
Intelligence is a matter of information processing in physical systems.
智能,在物理体系中,是一种信息处理问题。

49
Actually, this is a little bit more than an assumption.
事实上,这不止是个假设。

50
We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already.
我们已经在我们的机器中制造了弱智能,其中许多机器已经在超人类智能的水平上运行了。

51
And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains.
我们知道,仅仅是物质就可以产生所谓的“一般智力”,这是一种横跨多个领域灵活思考的能力。
give rise to 引起;导致;使产生
eg:Bad sleep could give rise to bad work.

52
because our brains have managed it.
因为我们的大脑已经做到了。

53
There's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior,
这里面不过是一些原子而已。只要我们继续构建能表现出越来越多智能行为的原子系统,

54
we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.
我们将最终,除非我们被中断,我们将最终在机器中建造通用智能。

55
It's crucial to realize that the rate of progress doesn't matter.
重要的是要意识到进步的速度并不重要。

56
because any progress is enough to get us into the end zone.
因为任何进步都足以把我们带到终点(end zone 指橄榄球中的达阵区)。

57
We don't need Moore's law to continue.
我们不需要摩尔定律来维持。

58
We don't need exponential progress.
我们不需要指数增长。

59
We just need to keep going.
我们只需继续下去。

60
The second assumption is that we will keep going.
第二个假设是我们将继续下去。

61
We will continue to improve our intelligent machines.
我们将继续改进我们的智能机器。

62
And given the value of intelligence.
考虑到智能的价值。

63
intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource.
智能要么是我们所珍惜的万物之源,要么就是我们需要它保护我们珍视的东西。它本身就是我们最珍贵的资源。

64
So we want to do this. We have problems that we desperately need to solve.
所以我们需要做这件事。我们有急需要被解决的问题。

65
We want to cure diseases like Alzheimer's and cancer.
我们希望治愈老年痴呆症和癌症等疾病。

66
We want to understand economic systems. We want to improve our climate science.
我们想要理解经济系统。我们想要提高我们的气候科学。

67
So we will do this, if we can.
如果可以的话,我们会这么做。

68
The train is already out of the station, and there's no brake to pull.
开弓没有回头箭。
brake n. 刹车;阻碍(物)

69
Finally, we don't stand on a peak of intelligence, or anywhere near it, likely.
最后,我们很可能并没有站在智力的顶峰上,甚至远未接近它。

70
And this really is the crucial insight. This is what makes our situation so precarious,
这是最重要的洞察。这使得我们的处境很危险,
precarious adj. 危险的;不确定的
eg:Walking through this forest at night is precarious, because there are wolves.

71
and this is what makes our intuitions about risk so unreliable.
这使得我们关于风险的直觉非常不靠谱。

72
Now, just consider the smartest person who has ever lived.
现在,想想有史以来最聪明的人。

73
On almost everyone's shortlist here is John von Neumann.
几乎每个人的候选名单上都有约翰·冯·诺伊曼。

74
I mean, the impression that von Neumann made on the people around him,
冯·诺依曼给他周围的人留下的印象,

75
and this included the greatest mathematicians and physicists of his time, is fairly well-documented.
包括了他那个时代最伟大的数学家和物理学家,这些都有详细的记录。
well-documented adj. 证据充分的
eg:It was well documented that he had done it.

76
If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived.
哪怕关于他的故事只有一半是半真的,毫无疑问,他也是有史以来最聪明的人之一。

77
So consider the spectrum of intelligence.
思考一下智力的范围。

78
Here we have John von Neumann.
这里是约翰·冯·诺伊曼。

79
Then we have you and me.
我和你在这。

80
And then we have a chicken.
然后这里是一只鸡。

81
Sorry, a chicken.
抱歉,是一只鸡。

82
There's no reason for me to make this talk more depressing than it needs to be.
我没有理由把这个演讲弄得比实际情况更令人沮丧。

83
It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive,
然而,似乎非常有可能的是,智力的范围比我们目前想象的要延伸得更远,
overwhelmingly 压倒性地;overwhelmingly likely 在这里意为"极有可能"

84
and if we build machines that are more intelligent than we are,
如果我们制造出比我们更聪明的机器

85
They will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.
它们很可能用我们无法想象的方式扩展这个范围,以及用我们无法想象的方式超过我们。


86
And it's important to recognize that this is true by virtue of speed alone.
单就速度而言,这是真实的,知道这一点很重要。
by virtue of 由于,凭借
eg:He got the offer from that company by virtue of his friends' connections.

87
Right? So imagine if we just built a super intelligent AI that was no smarter than your average team of researchers at Stanford or MIT.
对吧?想象一下,如果我们只是建立了一个超级智能的人工智能,它并不比你在斯坦福大学或麻省理工学院的普通研究团队更聪明。

88
Well, electronic circuits function about a million times faster than biochemical ones,
要知道,电子电路的运行速度大约比生物化学电路快100万倍,

89
so this machine should think about a million times faster than the minds that built it.
所以这个机器应该比制造它的大脑思考速度快100万倍。

90
So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week.
所以你让它运行一周,它将完成20000年的人类水平的智力工作,周复一周。
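
顺手用一小段 Python 粗略验算一下这里的数字(只是示意性的估算:"一百万倍"这个倍数取自演讲本身;"领先六个月相当于至少领先 50 万年"对应后面第 109 句):

```python
# 粗略验算演讲中的两个数字(假设电子电路比生物化学电路快约一百万倍,倍数取自演讲本身)
SPEEDUP = 1_000_000                                  # 假设的速度倍数(约一百万倍)

weeks_per_year = 52
years_per_week = SPEEDUP / weeks_per_year            # 运行一周 ≈ 多少年的人类水平智力工作
print(f"运行一周 ≈ {years_per_week:,.0f} 年")        # ≈ 19,231 年,即演讲中说的"约两万年"

lead_in_years = SPEEDUP * (6 / 12)                   # 领先六个月 ≈ 领先多少年(对应第 109 句)
print(f"领先六个月 ≈ {lead_in_years:,.0f} 年")       # = 500,000 年
```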

91
How could we even understand, much less constrain, a mind making this sort of progress?
我们甚至该如何理解,更不用说约束,一个以这种速度进步的心智呢?

92
The other thing that's worrying, frankly, is that, imagine the best case scenario.
另一个令人担忧的事是,想象一下最好的情况。

93
So imagine we hit upon a design of super intelligent AI that has no safety concerns. We have the perfect design the first time around.
想象下,我们碰巧设计出了个超级人工智能,没有安全顾虑,我们在第一次就拥有了完美的设计。
hit upon 偶然发现,偶然碰到
eg:I hit upon a better way to sleep well.

94
It's as though we've been handed an oracle that behaves exactly as intended.
就好像我们得到了一个完全按我们意图行事的神谕(oracle)。

95
Well, this machine would be the perfect labor-saving device.
这个机器会是完美的节省人力的设备。

96
It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials.
它能设计出一种机器,这种机器又能制造出可以完成任何体力工作的机器,由阳光驱动,成本差不多只相当于原材料的成本。
more or less 或多或少
eg:More or less, you have some merits; just think about it carefully.

97
So we're talking about the end of human drudgery.
所以我们在谈论人类苦工的终结。

98
We're also talking about the end of most intellectual work.
我们也在谈论大多数智力工作的终结。

99
So what would apes like ourselves do in this circumstance?
那么,像我们这样的猿类在这种情况下会做什么呢?
ape n. [脊椎] 猿;傻瓜;模仿者

100
Well, we'd be free to play Frisbee and give each other massages.
我们可以自由地玩飞盘,互相按摩。
Frisbee n. (投掷游戏用的)飞盘

101
Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man.
嗑一些迷幻药,穿着奇装异服,全世界就像火人节。
LSD 麦角酰二乙胺,被认为是当代最惊奇、最强烈的迷幻剂,有强烈的致幻作用
wardrobe n. 衣柜;行头;全部戏装
Burning Man 火人节,是一个宣扬充分彰显自我、创新和社区的节日

102
Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order?
这听上去不错,但是问下你自己,在现有经济和政治秩序下会发生什么?

103
It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before.
我们似乎很可能会目睹前所未见程度的财富不平等和失业。

104
Absent a willingness to immediately put this new wealth to the service of all humanity,
没有立即将这些新财富用于全人类服务的意愿,

105
a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.
少数几个万亿富翁可以登上商业杂志的封面,而世界上的其他人则可以"自由地"挨饿。
be free to 自由去做…,任意
在这里be free to有讽刺意味,富人们能光鲜亮丽,穷人却在过着食不果腹的生活。

106
And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a super intelligent AI?
如果俄罗斯人或中国人听说硅谷的某家公司即将部署一个超级人工智能,他们会怎么做?

107
This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power.
这台机器将能够以前所未有的力量发动战争,无论是地面战争还是网络战争。
wage n. 工资;报酬;代价;报应;v. 进行,发动(运动、战争等);开展
cyber adj. (与)计算机或网络(有关)的
unprecedented adj. 空前的;无前例的
eg:The company has waged an unprecedented cyber campaign around the application of blockchain.

108
This is a winner-take-all scenario.
这是个赢者通吃的局面。

109
To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum.
在这场竞争中领先6个月,就意味着至少领先500,000年。

110
So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.
因此,似乎仅仅是关于这类突破的谣言就能让我们人类抓狂。
berserk adj. 狂怒的,失控的:(激动得)控制不住的
eg:Don't cheat, or your teacher will go berserk.


111
Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring.
在我看来,此刻最可怕的事情之一,就是人工智能研究者想要安抚大家时所说的那些话。
reassuring adj. 安心的;可靠的;鼓气的
eg:Don't complain; I think you'd better say some reassuring words.

112
And the most common reason we're told not to worry is time.
告诉我们不要担心的最常见的理由就是时间。

113
This is all a long way off, don't you know. This is probably 50 or 100 years away.
还有很长时间,你不知道嘛。可能50年或100年开外。
a long way off 还有好长一段距离
eg:Due to the impact of the COVID-19 outbreak, economic recovery is still a long way off.

114
One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars."
一个研究人员说过:“担忧人工智能安全就如同担忧火星上人口过多。”

115
This is the Silicon Valley version of "don't worry your pretty little head about it."
这是硅谷版的"别让你那漂亮的小脑袋为这种事操心"。

116
No one seems to notice that referencing the time horizon is a total non sequitur.
似乎没有人注意到,拿时间跨度来说事完全是答非所问(non sequitur)。
non sequitur 不合逻辑的推论;与前提不相干的结论
eg:Many people believe that bad luck is owing to fate; I think that's a total non sequitur.

117
If intelligence is just a matter of information processing and we continue to improve our machines, we will produce some form of super intelligence.
如果智能就是信息处理,并且我们持续改进我们的机器,我们将制造出一种超级智能。

118
And we have no idea how long it will take us to create the conditions to do that safely.
我们不知道需要多长时间才能创造安全的条件。

119
Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.
再强调一下。我们不知道需要多长时间才能创造安全的条件。

120
And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months.
如果你未曾注意到,50年已不是以前的50年了。而是以月来算的50年。

121
This is how long we've had the iPhone.
这是我们拥有iPhone的时间。

122
This is how long "The Simpsons" has been on television.
这是“辛普森一家”出现在电视的时间。

123
50 years is not that much time to meet one of the greatest challenges our species will ever face.
去面对我们有史以来遇到的最大的挑战之一,50年不算很长的时间。

124
Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.
我们似乎又一次未能对我们有充分理由相信即将到来的事情,做出恰当的情绪反应。
前面,我们认为超级人工智能不会来,这一次我们认为50年很短。
eg:Once again, I was unable to respond properly to what I had every reason to believe was right.

125
The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization,
电脑科学家Stuart Russell有一个很好的比喻。他说:“想象一下我们收到了一个外星文明的信息,
analogy n. 类比;类推;类似
eg:There is a very nice analogy that describes the feeling of love: it is as though I gained a suit of armour and a weakness at the same moment.
Stuart Russell正好也有一个TED,叫做:3 principles for creating safer AI,可以看作本TED的延续。

3 principles for creating safer AI | Stuart Russell

126
which read: "People of Earth, we will arrive on your planet in 50 years. Get ready."
上面写着:"地球人,我们将在50年后登陆你们的星球。做好准备吧。"

127
And now we're just counting down the months until the mothership lands?
难道我们会只是数着月份,等着母舰降落吗?

128
We would feel a little more urgency than we do.
我们会比现在感到更多一些紧迫感。

129
Another reason we're told not to worry is that
另一个告诉我们不要担忧的理由是

130
these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains,
这些机器必然会与我们共享价值观,因为它们简直就是我们自身的延伸。它们将被嫁接到我们的大脑上,
graft v. 嫁接;移植;辛苦地工作;贪污
eg:Many TV dramas now have advertisements grafted onto them.

131
and we'll essentially become their limbic systems.
而我们实质上将成为它们的边缘系统。

132
Now take a moment to consider that the safest and only prudent path forward, recommended,
现在花点时间想一想:人们所推荐的这条最安全、也是唯一稳妥的前进道路,就是

133
is to implant this technology directly into our brains.
直接把这个科技植入到我们大脑。

134
Now, this may in fact be the safest and only prudent path forward,
现在,这可能事实上是最安全和唯一谨慎的前进道路,

135
but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head.
但通常来说,对一项技术的安全顾虑,必须在你把它塞进脑袋之前就基本解决掉。

136
The deeper problem is that building super intelligent AI on its own seems likely to be easier
更深层的问题在于,单单制造出超级人工智能本身,似乎要更容易一些,

137
than building super intelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it.
而不是既要制造超级人工智能,又要拥有足够完善的神经科学,使我们的心智能够与之无缝结合。

138
And given that the companies and governments doing this work are likely to perceive themselves to be in a race against all others,
考虑到从事这项工作的公司和政府很可能认为自己正在与所有其他人竞争,

139
given that to win this race is to win the world, provided you don't destroy it in the next moment,
考虑到赢得这个竞争就是赢得这个世界,只要你不在下一刻毁掉它,

140
then it seems likely that whatever is easier to do will get done first.
那么看起来,更容易做到的事情就会先被做出来。

141
Now, unfortunately, I don't have a solution to this problem,apart from recommending that more of us think about it.
不幸的是,我对此也没办法,除了建议我们更多的人去思考这个问题。

142
I think we need something like a Manhattan Project on the topic of artificial intelligence.
我认为我们需要一个关于人工智能的曼哈顿计划。

143
Not to build it, because I think we'll inevitably do that,
不是去建造它,因为我认为我们终将这么做,

144
but to understand how to avoid an arms race and to build it in a way that is aligned with our interests.
而是要弄明白如何避免军备竞赛,并以符合我们利益的方式来制造它。

145
When you're talking about super intelligent AI that can make changes to itself,
当你在谈论能自我变更的超级人工智能,

146
it seems that we only have one chance to get the initial conditions right,
似乎我们只有一次机会把初始条件设定正确,

147
and even then we will need to absorb the economic and political consequences of getting them right.
即使到那时,我们也需要消化正确处理这些问题所带来的经济和政治后果。

148
But the moment we admit that information processing is the source of intelligence,
但是,一旦我们承认信息处理就是智能的源头,

149
that some appropriate computational system is what the basis of intelligence is,
承认某种适当的计算系统就是智能的基础,

150
and we admit that we will improve these systems continuously,
我们承认,我们将不断改进这些系统

151
and we admit that the horizon of cognition very likely far exceeds what we currently know,
我们也承认认知的边界很可能远超过我们目前所了解的,

152
then we have to admit that we are in the process of building some sort of god.
我们必须承认我们处于造神的过程。

153
Now it would be a good time to make sure it's a god we can live with.
现在是个确定这是个我们能与之共存的神的好时机。

154
Thank you very much.
非常感谢。
