【Study Notes】懂你英语 Core Course, Level 7 Unit 2 Part 3 (IV): On Machine Intelligence 4

TED Talk: Machine intelligence makes human morals more important    Speaker: Zeynep Tufekci    Lesson 4

Audits are great and important, but they don't solve all our problems.  【Repeat】

Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow.

Should you be shown another baby picture?

A sullen note from an acquaintance?

An important but difficult news item?

There's no right answer.

Facebook optimizes for engagement on the site: likes, shares, comments.  【Repeat】
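To make "optimizes for engagement" concrete, here is a minimal sketch of engagement-based ranking. Everything in it -- the `Post` fields, the weights, the example numbers -- is an illustrative assumption, not Facebook's actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Hypothetical weights: treat shares and comments as stronger
    # engagement signals than likes. Real feed ranking is far richer.
    return 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first -- that is all this feed optimizes for.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("friend_a", "Another baby picture!", likes=120, shares=5, comments=40),
    Post("news_page", "Protests in Ferguson", likes=8, shares=2, comments=1),
])
print([p.text for p in feed])  # the baby picture outranks the hard news item
```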


In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances.

【Word meaning】 murky adj. dark; gloomy; obscure    If sth. is murky, it ... is not clearly understood.

The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook.

Was it my Facebook friends?

I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it.  【Fill in the blank】

【Repeat】Facebook keeps wanting to make you come under the algorithm's control.

It's just that the algorithm wasn't showing it to me.

I researched this and found this was a widespread problem.

【Multiple choice】-What is a possible danger of using an algorithm to filter news?  -Important social issues could be ignored.

【Multiple choice】-Why did Facebook's algorithm filter the news of Ferguson's protests?  -It wasn't likely to have high user engagement.


The story of Ferguson wasn't algorithm-friendly.

It's not "likable." Who's going to click on "like?" It's not even easy to comment on.

Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this.  【Fill in the blank】
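Why does low engagement compound? A toy sketch of the feedback loop, assuming (purely for illustration) that each round's impressions are proportional to the previous round's interactions:

```python
def simulate_reach(initial_impressions: int, engagement_rate: float,
                   rounds: int = 5) -> list[int]:
    """Toy feedback loop: each round the algorithm shows a post to an
    audience proportional to the interactions it earned last round."""
    impressions = initial_impressions
    history = []
    for _ in range(rounds):
        interactions = impressions * engagement_rate
        impressions = int(interactions * 10)  # assumed amplification factor
        history.append(impressions)
    return history

print(simulate_reach(1000, 0.02))  # hard-to-like news: [200, 40, 8, 1, 0]
print(simulate_reach(1000, 0.15))  # ice-bucket video: reach keeps growing
```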

Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge.

Worthy cause; dump ice water, donate to charity, fine.

But it was super algorithm-friendly.

The machine made this decision for us.

A very important but difficult conversation might have been smothered, had Facebook been the only channel.




Now, finally, these systems can also be wrong in ways that don't resemble human systems.

Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy?

Background (adapted from Baidu Baike): IBM's supercomputer Watson, the successor to Deep Blue, made headlines by taking on humans in the quiz show Jeopardy!, a contest that demands the complex ability to understand natural language. In 2011, Watson defeated the human champions Ken Jennings and Brad Rutter.

It was a great player.

But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle."

Chicago. The two humans got it right.

Watson, on the other hand, answered "Toronto" -- for a US city category!

The impressive system also made an error that a human would never make, a second-grader wouldn't make.  【Repeat】

Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for.  【Fill in the blank】

It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.
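For readers who haven't met the term: a stack overflow happens when nested function calls exhaust the call stack, for example through runaway recursion. A contrived sketch (the hiring scenario and function name are hypothetical; Python surfaces this failure as a RecursionError rather than crashing outright):

```python
def score_candidate(resume: dict) -> float:
    # Buggy hypothetical subroutine: it recurses with no base case,
    # so each call adds a stack frame until the stack is exhausted.
    return score_candidate(resume)

try:
    score_candidate({"name": "qualified applicant"})
except RecursionError as err:  # Python's version of a stack overflow
    # The "hiring decision" fails for a reason that has nothing to do
    # with the candidate -- an error no human evaluator would make.
    print("scoring failed:", err)
```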


In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's "sell" algorithm wiped a trillion dollars of value in 36 minutes.
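A toy simulation of that kind of feedback loop, with entirely made-up numbers and none of the real market's complexity: an algorithm sells whenever the price drops past a threshold, and each sale itself pushes the price down, triggering the next sale:

```python
def flash_crash(price: float, trigger_drop: float = 0.01,
                impact_per_sale: float = 0.03, steps: int = 10) -> list[float]:
    """Toy 'sell' feedback loop: a drop beyond the trigger causes an
    automated sale, and each sale itself depresses the price, which
    triggers the next sale -- with no human in the loop to stop it."""
    prices = [price]
    price *= 1 - 2 * trigger_drop  # assume an external dip starts the cascade
    for _ in range(steps):
        prices.append(round(price, 2))
        drop = (prices[-2] - prices[-1]) / prices[-2]
        if drop > trigger_drop:
            price *= 1 - impact_per_sale  # the algorithm's sale deepens the drop
        else:
            break
    return prices

print(flash_crash(100.0))  # each automated sale triggers the next one
```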

I don't even want to think what "error" means in the context of lethal autonomous weapons.

【Multiple choice】-Why is Wall Street's algorithm a cause for concern?  -Algorithm errors can have serious consequences.


So yes, humans have always made biases.

Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point.

We cannot escape these difficult questions.

We cannot outsource our responsibilities to machines.  【Repeat】

【Vocabulary】

outsource   vt. & vi. to contract work out    To outsource sth. means to make a third party do the work for it.


Artificial intelligence does not give us a "Get out of ethics free" card.  【Repeat】

【Multiple choice】-What does Tufekci mean by "artificial intelligence does not give us a 'Get out of ethics free' card"?  -Decisions made by AI don't free people from moral responsibilities.




Data scientist Fred Benenson calls this math-washing.

We need the opposite.

We need to cultivate algorithm suspicion, scrutiny and investigation.

We need to make sure we have algorithmic accountability, auditing and meaningful transparency.  【Repeat】
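As one deliberately simplified example of what such an audit could check: whether an algorithm's positive-decision rate differs sharply across groups, a basic disparate-impact test (the data, group labels, and hiring scenario here are hypothetical):

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-decision rate per group. Auditors often compare the lowest
    rate to the highest (the 'four-fifths rule' heuristic) and flag big gaps."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {group: pos / n for group, (pos, n) in totals.items()}

# Hypothetical outputs of a hiring model: (applicant group, approved?)
rates = approval_rates([("A", True), ("A", True), ("A", False),
                        ("B", False), ("B", False), ("B", True)])
print(rates)  # A ~ 0.67 vs B ~ 0.33: a gap worth flagging for human review
```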

We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity;  【Fill in the blank】

rather, the complexity of human affairs invades the algorithms.  【Fill in the blank】

Yes, we can and we should use computation to help us make better decisions.

But we have to own up to our moral responsibility to judgment, and use algorithms within that framework,

not as a means to abdicate and outsource our responsibilities to one another as human to human.

【Word meaning】To abdicate responsibility means to ... fail or refuse to perform a duty.

Machine intelligence is here.

That means we must hold on ever tighter to human values and human ethics.

Thank you.

【Multiple choice】-How does Tufekci end her presentation?  -By emphasizing the importance of human values and ethics.
