Removing bias in AI, part 2: Tackling gender and racial bias

Chatbots that become racist in less than a day, facial recognition technology that fails to recognize users with darker skin colors, ad-serving algorithms that discriminate by gender and race, an AI hate speech detector that’s racially biased itself. Flawed artificial intelligence systems perpetuate biases, which can be largely attributed to the lack of diversity within the field itself, according to a report published by the AI Now Institute.

In part 1 of this article, we covered first steps to removing bias in AI, including recognizing our own biases, building diverse teams, implementing harm reduction in the design and development process, and using tools to measure and mitigate risks. In part 2 we’ll explore gender and racial bias in particular, which AI often replicates, gain insights and practical tips to start reducing bias in AI experiences based on hands-on research, and explore a real-world project that has been created to end gender bias in AI assistants.

Critical Discussion of AI and gender bias

More than 100 million devices with Amazon’s Alexa assistant built in had been sold by January of 2019. Given Alexa’s scale, UX designer and creative strategist Evie Cheung was curious about the embedded gender and racial biases within the product. So she examined them by facilitating a co-creation workshop.

“The participants listened to Alexa’s voice telling a story and were instructed to draw what Alexa would look like as a human being,” she explains. “They were then asked questions about Alexa’s race, political beliefs, and hobbies.”

The view that emerged was one of Alexa as a subservient white woman who couldn’t think for herself, apologized for everything, and was pushing a libertarian agenda.

“In a vacuum, this is hilarious,” Cheung points out. “But children are now growing up with a device they’re able to order around, due to Alexa’s submissive personality and conversation design. Alexa’s ubiquity means that it has become a socializing force, influencing a child’s mental model on how they perceive female-sounding voices, and establishing a ‘norm’ for how technology is supposed to sound — in this case, female and inferior.”

As designers, Cheung advises, we must be hyper-aware of perpetuating existing societal gender biases and predict how products may have detrimental impacts for future generations. To combat these biases, it’s imperative to diversify teams of designers and technologists (for more on diverse teams, see Part 1), as well as the groups of users that products are tested on.

For more on Cheung’s research, check out her graduate thesis book Alexa, Help Me Be a Better Human: Redesigning Artificial Intelligence for Emotional Connection, based on a year-long investigation of AI as a tool to explore human psychology.

In Evie Cheung’s workshop, thirteen professionals from across seven industries gathered to discuss the future of artificial intelligence.

The first genderless voice for voice AI

Digital voice assistants often offer two options for the gender of the voice the user interacts with: male or female. Sometimes the default is set differently to suit the user’s locale; in the U.S., for example, Siri defaults to a female voice, while in the UK it defaults to a male one.

“If you ask folks at Microsoft, Amazon, or Google, why so many of our voice assistants are female,” explains David Dylan Thomas, “they’ll tell you that according to their research, people are more comfortable hearing certain kinds of assistance or information from women than from men. On the one hand that seems like a good answer because we all live in the world of user experience, and we always say follow the research, but we also have to ask ourselves if we are okay with what the research is telling us. Is it a good thing that people are preferring to hear certain types of information from women, limiting how people view women? Are we okay with that and do we want to perpetuate it?”

A lot of the experts David talked to said you should leave it up to the user to decide whether they want to hear a male or a female voice. However, Emil Asmussen, creative director of VICE Media’s agency Virtue, cautions that a binary choice isn’t an accurate representation of the complexities of gender.

“Some people don’t identify as either male or female, and they may want their voice assistant to mirror that identity,” he explains. “As third gender options are being recognized across the globe, it feels stagnant that technology is still stuck in the past only providing two binary options.

“That’s why we created Q, the world’s first genderless voice for voice AI. Created for a future where we are no longer defined by gender.”

“The project is confronting a new digital universe fraught with problems. It’s no accident that Siri, Cortana, and Alexa all have female voices — research shows that users react more positively to them than they would to a male voice. But as designers make that choice, they run the risk of reinforcing gender stereotypes, that female AI assistants should be helpful and caring, while machines like security robots should have a male voice to telegraph authority. With Q, the thinking goes, we can not only make technology more inclusive but also use that technology to spark conversation on social issues.”

Counteract ingrained racial bias and lie to AI

Informed by her conversations with over 30 machine learning engineers, creative technologists, and diversity and inclusion thought leaders, Evie Cheung has found that one of the most salient and urgent AI issues is biased algorithms — particularly around the topic of race.

“We are still living through the consequences of colonialism, in which the western hegemony violently established power over the rest of the world,” Cheung explains. “These racial biases are thoroughly ingrained in society, and have the potential to be exacerbated by algorithms, such as in the criminal justice system. Significant problems include the lack of unbiased historical data, an unbalanced workforce, and limited user testing. These factors result in products like Facebook’s racist soap dispenser and Google’s image recognition algorithm that classified black folks as gorillas.”

Cheung says that we need to acknowledge the glaring truth: history is racist because humans are racist. And thus, algorithms powered by that historical data will also be racist.

“In the creation of AI algorithms, products, and services, designing equally for all groups is not good enough,” Cheung points out. “We need to include diverse voices who aren’t traditionally included in conversations about rising technologies. We also need to make sure that the data sets used are representative of the population that the respective algorithm will be used for.”

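One lightweight way to act on that advice is to audit how each demographic group is represented in the training data before any model is built. The sketch below is a minimal illustration, assuming a pandas DataFrame with a hypothetical "group" column and a dictionary of reference population shares (for example, from census figures); the column name, group labels, and numbers are all invented for the example.

```python
# A minimal sketch of a representativeness audit. The "group" column and
# the reference proportions are placeholders, not part of any real project.
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str,
                       reference: dict[str, float]) -> pd.DataFrame:
    """Compare each group's share of the data set with its share of the
    population the model will actually serve."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "dataset_share": round(share, 3),
            "population_share": expected,
            "gap": round(share - expected, 3),  # negative = underrepresented
        })
    return pd.DataFrame(rows)

# Example with made-up numbers: the data set skews heavily toward one group.
data = pd.DataFrame({"group": ["a"] * 800 + ["b"] * 150 + ["c"] * 50})
census = {"a": 0.60, "b": 0.25, "c": 0.15}
print(representation_gap(data, "group", census))
```

A negative gap flags an underrepresented group, which is exactly the kind of signal a team would want to surface before training rather than after shipping.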

David Dylan Thomas agrees that any bias in AI comes from its creators. “Often these creators will try to de-bias their AI by pointing it at ‘the real world’,” he explains. “They’ll use data sets to train the AI that are based on real-world statistics. This may seem like a logical approach, but what if those data sets represent a racist world? If you were to ask an AI who is most likely to own a home based on current statistics it will tell you ‘a white family’. If you were to ask an AI who is most likely to go to jail based on current statistics it will tell you ‘a black man’. It’s very easy to turn that into recommendations for who should own a home or go to jail — it’s happened before.”

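To make that point concrete, here is a toy, fully synthetic sketch of how a model trained on skewed historical outcomes simply learns to reproduce the disparity. The group labels and approval rates are invented for the illustration and stand in for any protected attribute correlated with a biased historical outcome.

```python
# Synthetic data only: a classifier trained on a skewed historical outcome
# reproduces that skew. The 0.8 / 0.3 rates below are invented for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)  # 0 or 1: a stand-in protected attribute
# Historical data in which the outcome (say, "loan approved") was granted
# far more often to group 0 than to group 1, regardless of merit.
approved = rng.random(n) < np.where(group == 0, 0.8, 0.3)

model = LogisticRegression()
model.fit(group.reshape(-1, 1), approved)

# The model has learned nothing except the historical disparity.
for g in (0, 1):
    p = model.predict_proba([[g]])[0, 1]
    print(f"group {g}: predicted approval probability = {p:.2f}")
```

Nothing about merit is in the data, yet the model confidently predicts the historical gap, which is the scaling-of-bias risk David describes.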

David suggests we need to start looking at the world we want and not the world we have when creating these data sets.

“We have to lie to AI. Give it data sets that favor equity. That overrepresent for the underrepresented. If we don’t, we risk scaling the bias that already exists.”

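One hedged way to translate "overrepresent for the underrepresented" into practice is to reweight (or oversample) training examples so each group contributes equally to the model, rather than in proportion to its skewed share of the data. The sketch below uses invented group labels and synthetic features; it is one illustrative technique, not the only way to build an equity-favoring data set.

```python
# A sketch of equalizing sample weights: rows from a group that makes up
# 10% of the data count as much in training as rows from a group that makes
# up 90%. Group names and features are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def equalizing_weights(groups: np.ndarray) -> np.ndarray:
    """Weight each row by the inverse of its group's share of the data."""
    values, counts = np.unique(groups, return_counts=True)
    share = dict(zip(values, counts / counts.sum()))
    return np.array([1.0 / share[g] for g in groups])

# Synthetic example: group "b" is badly underrepresented.
rng = np.random.default_rng(1)
groups = rng.choice(["a", "b"], size=1000, p=[0.9, 0.1])
X = rng.normal(size=(1000, 3))
y = rng.integers(0, 2, size=1000)

weights = equalizing_weights(groups)
model = LogisticRegression().fit(X, y, sample_weight=weights)
```

Oversampling the smaller group before training, or deliberately collecting more examples from it, are alternative ways to achieve a similar effect.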

Four steps to reducing bias in AI experiences

Content strategist and co-founder of Rasa Advising, Julie Polk, currently a content lead for AI applications at IBM, has come up with four essential tips you should keep in mind to combat bias in AI (a short lint sketch after the list shows one way to put the second tip into practice):

  • It’s not enough to edit your final results. No matter how many images or phrases or search results you eliminate in one instance, they’ll show up again unless you address the underlying bias that produced them. It’s like whack-a-mole without the weird furry carnival prizes.

  • Require gender-neutral language in your style guide. Institutionalize words and phrases like “Hi everyone,” instead of “Hi guys,” “Chair” instead of “Chairman,” or “first-year” instead of “freshman.” I’ve been doing this work for ten years, and I’m still amazed at how pervasive and deeply embedded the assumption of male-as-neutral is. These seem like small changes, but taken together, they shift the entire context of our cultural conversation.

  • Vet your data. Garbage in, garbage out, always and forever. So dig around into how your data was generated before you build on it. If it’s research, who conducted it? Why? Who funded it? Who were the subjects? How were they chosen? What was the sample size? If it’s historical data, who does it include? More importantly, who does it exclude?

  • Don’t get sucked into solutions at the expense of inclusion. The speed and power of AI are seductive; anyone with a laptop, a skill set, and a creative mind can change how we live almost overnight. But nothing — so far, at least — can replace the human ability to understand the nuances of…well, of being human. And the biggest, shiniest solution, no matter how well-intentioned, isn’t a solution at all if it leaves damage in its wake.

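The lint sketch below, referenced in the introduction to these tips, shows one way the gender-neutral language rule might be institutionalized in a content workflow: a small script that flags gendered defaults in product copy and suggests a neutral alternative. The word list is a tiny illustrative sample, not a complete or authoritative style guide.

```python
# A minimal copy-lint sketch for the style-guide tip above. The term list
# is illustrative only; a real style guide would be far more thorough.
import re

NEUTRAL_ALTERNATIVES = {
    r"\bguys\b": "everyone",
    r"\bchairman\b": "chair",
    r"\bfreshman\b": "first-year",
    r"\bmankind\b": "humanity",
}

def lint_copy(text: str) -> list[str]:
    """Return a warning for each gendered term found in the copy."""
    warnings = []
    for pattern, suggestion in NEUTRAL_ALTERNATIVES.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            warnings.append(
                f"'{match.group(0)}' at position {match.start()}: "
                f"consider '{suggestion}'"
            )
    return warnings

print(lint_copy("Hi guys, the Chairman will meet the freshman class today."))
```

A check like this can run as part of a content review or build step, so the neutral wording becomes the path of least resistance rather than an afterthought.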

Self-regulate to reduce consumer harm

Removing bias in AI and preventing it from widening the gender and race gap is a monumental challenge, but it’s not impossible. From the Algorithmic Justice League to the first genderless voice for virtual assistants, there are many excellent projects that share the common goal of making AI fairer and less biased. But we need to work together, and if we include AI in a digital product, it’s every stakeholder’s responsibility to ensure it doesn’t discriminate or harm people. As Evie Cheung says, “We must stay vigilant about the unintended consequences of the design decisions we make in AI-powered products.” Only then will we be able to maximize AI’s true potential to transform our lives.

For more unique insights and authentic points of view on the practice, business and impact of design, visit Adobe XD Ideas.

To learn about Adobe XD, our all-in-one design and prototyping tool:

  • Download Adobe XD

  • Adobe XD Twitter account — also use #adobexd to talk to the team!

  • Adobe XD UserVoice ideas database

  • Adobe XD forum

Originally published at https://xd.adobe.com.

Translated from: https://medium.com/thinking-design/removing-bias-in-ai-part-2-tackling-gender-and-racial-bias-1763457fbea5
