
Technology

Have We Been Discriminated Against by Artificial Intelligence?

Mao Xi · 2018-06-09

Mao Xi

Master's in Second Language Education, University of Cambridge

English-language drama director

Today's Introduction

"Artificial intelligence" and "algorithm" have been hot tech terms in recent years, and "AI algorithmic bias" is an even newer coinage. It means that as you use internet products every day, an algorithm may already have discriminated against you without your noticing. The essence of an algorithm is to make decisions automatically, so can decisions made by AI ever be absolutely fair? Microsoft recently developed a new tool to identify bias in AI algorithms; let's read today's news to learn more.

Listen with These Questions in Mind

Q1: How does bias in AI algorithms arise?

Q2: How do you say "lurking in a group chat" in English?

Q3: What does the professor quoted in the article suggest about tools for detecting algorithmic bias?

News Text

Microsoft is creating an oracle for catching biased AI algorithms


Microsoft is building a tool to automatically identify bias in a range of different AI algorithms. It is the boldest effort yet to automate the detection of unfairness that may creep into machine learning—and it could help businesses make use of AI without inadvertently discriminating against certain people.

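The article does not describe how Microsoft's tool works internally. As a rough illustration only, here is a minimal Python sketch of one statistical check a bias-detection tool might automate: the "demographic parity" gap, i.e. how differently a model selects members of different groups. The data, function names, and numbers below are all hypothetical.

# Minimal sketch of a demographic parity check: does a binary model
# give positive decisions (e.g. "approve") to one group at a
# noticeably different rate than another? Data is invented.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical outputs of a loan-approval model for two groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = parity_gap(preds, groups)
print(rates)               # {'A': 0.6, 'B': 0.2}
print(f"gap = {gap:.2f}")  # gap = 0.40 -> worth investigating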

Algorithmic bias is a growing concern for many researchers and technology experts. As algorithms are used to automate important decisions, there is a risk that bias could become automated, deployed at scale, and more difficult for the victims to spot.


“Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models,” says Rich Caruana, a senior researcher at Microsoft who is working on the bias-detection dashboard.


Facebook announced its own tool for detecting bias at its annual developer conference on May 2. Its tool, called Fairness Flow, automatically warns if an algorithm is making an unfair judgement about someone based on his or her race, gender, or age.

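Facebook has not published Fairness Flow's internals either, so the following is only a hypothetical sketch of the kind of automatic warning described above: compare a model's false positive rate across groups such as age bands, and warn when any group's rate strays from the overall rate by more than some threshold. The 0.1 threshold and the data are invented for illustration.

# Hypothetical sketch of an automatic fairness warning based on
# per-group false positive rates (FPR).

def false_positive_rate(y_true, y_pred):
    """FPR = false positives / all actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fairness_warnings(y_true, y_pred, groups, threshold=0.1):
    """Return a warning string for every group whose FPR deviates
    from the overall FPR by more than the threshold."""
    overall = false_positive_rate(y_true, y_pred)
    warnings = []
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        g_fpr = false_positive_rate([y_true[i] for i in idx],
                                    [y_pred[i] for i in idx])
        if abs(g_fpr - overall) > threshold:
            warnings.append(f"group {g!r}: FPR {g_fpr:.2f} vs overall {overall:.2f}")
    return warnings

# Hypothetical labels and predictions for two age groups.
y_true = [0, 0, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["under30", "under30", "under30", "under30",
          "over50", "over50", "over50", "over50"]
for w in fairness_warnings(y_true, y_pred, groups):
    print("WARNING:", w)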

Bin Yu, a professor at UC Berkeley, says the tools from Facebook and Microsoft seem like a step in the right direction, but may not be enough. She suggests that big companies should have outside experts audit their algorithms in order to prove they are not biased. “Someone else has to investigate Facebook's algorithms—they can't be a secret to everyone,” Yu says.


—————  Source / MIT Technology Review

Key Vocabulary

oracle /ˈɔːrəkl/

n. oracle; a person regarded as an authority

e.g.

He is regarded as the oracle on health.

bias /ˈbaɪəs/

n. bias; prejudice (biased adj.)

e.g.

racially biased attitudes

algorithm /ˈælɡərɪðəm/

n. (computing) algorithm

automate /ˈɔːtəmeɪt/

v. to automate (automatic adj.; automatically adv.)

e.g.

ATM machines automate deposits and withdrawals.

(ATM: Automatic Teller Machine)

inadvertently /ˌɪnədˈvɜːrtəntli/

adv. inadvertently; unintentionally (inadvertent adj.)

deploy /dɪˈplɔɪ/

v. to deploy; to bring into effective use

e.g.

A company should deploy its resources and let its employees deploy their skills.

spot /spɑːt/

v. to see; to notice

e.g.

spot mistakes

well spotted

transparency /trænsˈpærənsi/

n. transparency (transparent adj.)

intelligibility /ɪnˌtelɪdʒəˈbɪləti/

n. intelligibility (intelligible adj.)

e.g.

I was so upset when I spoke that I was hardly intelligible.

lurk /lɜːrk/

v. to lurk; (in a chat group) to read without posting

e.g.

lurk in chatrooms

flow /floʊ/

v. to flow (smoothly, without obstruction)

e.g.

Traffic is flowing more smoothly than usual.

dashboard /ˈdæʃbɔːrd/

n. dashboard; control panel

audit /ˈɔːdɪt/

v. to audit; to officially examine

creep into/in

to sneak in; to exert influence imperceptibly; to be present without being noticed

e.g.

creep into a party

Don't let doubt creep into your mind.

at scale

at scale; on a large scale

developer conference

a conference held for software developers

a step in the right direction

a move in the correct direction; progress toward a goal

e.g.

That India's cabinet approved the death penalty for rapists is a step in the right direction.

Extended Content

Big-Data Price Discrimination Against Regular Customers ("大数据杀熟")

In today's internet society, big-data price discrimination against regular customers is around us all the time. It refers to internet companies using the user data they hold to practice price discrimination against users with certain traits: whether they are sensitive to price, how strongly they need the product, whether they depend on it, what their purchasing power is, and so on. Once a user fits the profile, they are shown a higher price, and because they have to buy anyway, the company extracts the maximum profit.

In the big-data era, convenience often comes with transparency: everyone's behavior on the internet leaves a traceable trail. Internet companies have long built user profiles from personal information, browsing trails, purchasing habits, and other behavioral data, and used those profiles to recommend products. But big-data price discrimination against regulars goes further and constitutes genuine price discrimination, a social problem that urgently needs both government regulation and corporate self-discipline.
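
There is no single public method for proving this kind of price discrimination, but the basic audit idea can be sketched simply: have comparable users request quotes for the same product and compare what each segment is shown. The following minimal Python sketch, with invented data and an arbitrary 5% tolerance, flags products where one user segment is quoted noticeably more.

# Hypothetical audit sketch: average the prices shown to different
# user segments for the same product, then flag products whose
# segment averages differ by more than a tolerance. Data is invented.

from statistics import mean

# (product, user_segment, quoted_price): "loyal" users who open the
# app daily vs. "new" users, requesting the same hotel room.
quotes = [
    ("hotel_A", "loyal", 320), ("hotel_A", "loyal", 318),
    ("hotel_A", "new",   289), ("hotel_A", "new",   292),
    ("hotel_B", "loyal", 150), ("hotel_B", "new",   149),
]

def average_quotes(quotes):
    """Mean quoted price per (product, segment) pair."""
    by_key = {}
    for product, segment, price in quotes:
        by_key.setdefault((product, segment), []).append(price)
    return {k: mean(v) for k, v in by_key.items()}

def flag_gaps(quotes, tolerance=0.05):
    """Flag products where segment averages differ by > tolerance."""
    avg = average_quotes(quotes)
    flagged = {}
    for p in {product for product, _ in avg}:
        prices = [v for (product, _), v in avg.items() if product == p]
        if min(prices) and (max(prices) - min(prices)) / min(prices) > tolerance:
            flagged[p] = prices
    return flagged

print(flag_gaps(quotes))  # only hotel_A shows a suspicious gap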

A clip from TED Talk: How I'm fighting bias in algorithms

 
