Removing Bias in AI, Part 1: Diverse Teams and a Redefined Design Process

There have been some incredible advances in artificial intelligence and machine learning in the last few years, and AI is increasingly making its way into mainstream product design. However, from an over-reliance on female voices in virtual assistants, such as Amazon’s Alexa and Apple’s Siri, to policing software that predicts crime in poor neighborhoods, and facial recognition software that only really works for white people, there have also been some very concerning issues around bias in AI. The training data crawled by learning algorithms, it turns out, is flawed because it’s full of human biases.

“It’s not the intelligence itself that’s biased, the AI is really just doing what it’s told,” explains content strategist David Dylan Thomas. “The problem is usually that it’s biased human beings who are providing the data the AI has to work with. The AI is just a prediction machine. It frankly makes better predictions than a human could based on the data it’s being given.”

Market and UX research consultant Lauren Isaacson agrees and says that we need to take greater care of what we feed to the robots: “AI is no smarter than the data sets it learns from. If a system is biased against certain people, the data resulting from the system will be no less biased.”

So whether we use machine learning algorithms that are based on training data or hard-code the language of digital assistants ourselves, designers bear a great responsibility in the creation of AI-powered products and services.

In this two-part article, we explore the challenge and hear from UX designers, user researchers, data scientists, content strategists, and creative directors to find out what we can do to reduce bias in AI.

Educate and check yourself

The first step to removing bias is to proactively look out for it and keep checking your own behavior, as a lot of bias is unconscious.

“We cannot build equitable products — and products that are free of bias — if we do not acknowledge, confront, and adjust the systemic biases that are baked into our everyday existence,” explains Alana Washington, a strategy director on the Data Experience Design team at Capital One, who’s also leading a ‘Fairness in AI’ initiative for the design practice at the company. “The ‘problems’ that we look to solve with technology are the problems of the world as we know it. A world that, at the moment, is stacked to benefit some, and prey upon others.”

To change this, Washington recommends expanding our understanding of systemic injustice, considering how marketing narratives have been sublimated into our collective belief system, and actively listening to as many diverse perspectives as possible.

“We must shift from an engineering disposition, building solutions to ‘obvious’ problems, to a design disposition — one that relentlessly considers if we’ve correctly articulated the problem we’re solving for. Joy Buolamwini puts it best.”

“Why we code matters, who codes matters, and how we code matters.” — Joy Buolamwini

Build a diverse team

As the AI field is overwhelmingly white and male, another way to reduce the risk of bias and to create more inclusive experiences is to ensure the team building the AI system is diverse (for example, with regard to gender, race, education, thinking process, disability status, skill set, and problem-framing approach). This should include the engineering teams, as well as project and middle management, and design teams.

“Racial and gender diversity in your team isn’t just for show — the more perspectives on your team, the more likely you are to catch unintentional biases along the way,” advises Cheryl Platz, author of the upcoming book Design Beyond Devices and owner of design consultancy Ideaplatz. “And beyond biases, diversity on your team will also lend you a better eye towards potential harm. No one understands the crushing impact of racial bias as well as those who have lived it every day.”

Carol Smith, senior research scientist in Human-Machine Interaction at Carnegie Mellon University’s Software Engineering Institute, agrees that diverse teams are necessary because their different personal experiences will have informed different perceptions of trust, safety, privacy, freedom, and other important issues that need to be considered with AI systems.

“A person of color’s experience with racism is likely very different from my experience as a white woman, for example, and they are likely to envision negative scenarios with regard to racism in the AI system that I would miss,” she points out.

Redefine your process to reduce harm

Having diverse teams also helps when you start implementing harm reduction in the design process, explains machine learning designer, user researcher and artist Caroline Sinders.

“We should have diverse teams designing AI in consumer products,” she says, “so when we start to think about harm, or how a product can harm and go wrong, we aren’t designing from a white, male perspective.”

While team diversity is crucial, you’ll never be able to hire a group of people that completely represent the lived experiences out there in the world. Bias is inevitable, and Cheryl Platz therefore advises that you must also redefine your process to minimize the potential harm caused by your AI-powered system, and develop proactive plans that let you respond to issues and learn from input as fast as possible.

She calls this new mindset “opti-pessimism”: be optimistic about the potential success of your system, but fully explore the negative consequences of that success.

Carol Smith advises that the team needs to be given time and agency to identify the full range of potential harmful and malicious uses of the AI system.

“This can be time consuming,” she admits, “but is extremely important work to identify and reduce inherent bias and unintended consequences. By speculating about harmful and malicious use, racist and sexist scenarios are likely to be identified, and then preventative measures and mitigation plans can be made.”

Caroline Sinders agrees and suggests always asking ‘how can this harm?’ and creating use cases from the small to the extreme. “Use cases are not edge cases,” she warns. “If something can go wrong, it will.”

Sinders also recommends asking ourselves deeper questions: “Should we use facial recognition systems, and where does responsibility fit into innovation? Having a more diverse data set and a more diverse team doesn’t make the use of facial recognition any more ethical or better. It just makes this technology work better. But when it’s implemented into society, how does it harm people? Does that harm outweigh the good?”

In this particular case, Sinders points out, the harm does outweigh the good, which is why cities like Oakland, Somerville, and San Francisco are outlawing the use of facial recognition in public spaces and its use by bureaucratic or governmental entities and offices, such as police departments.

Conduct user research and testing

One way to help data scientists and developers look beyond the available data sets and see the larger picture is to involve UX research in the development process, suggests market and UX research consultant Lauren Isaacson.

“UX researchers can use their skills to identify the societal, cultural, and business biases at play and facilitate potential solutions,” she explains. “AI bias isn’t about the data you have, it’s about the data you didn’t know you needed. This is the reason qualitative discovery work at the beginning is crucial.”

Isaacson says if we will be handing over life-affecting decisions to computer systems, we should be testing those systems for fairness, positive outcomes, and overall good judgment.

“These are very human traits and concerns not easily imparted to machines,” she warns. “A place to start is with how we define them. If we can agree on how they are defined, then we can find ways to test for them in computer programs.”

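If the team can agree on such a definition (for example, that favorable outcomes should be granted at comparable rates across demographic groups), that agreement can be turned into an automated check. The sketch below is only an illustration: the model decisions, group names, and the 0.2 threshold are hypothetical, and in practice a check like this would run in a test suite (for example with pytest) against predictions on a held-out evaluation set.

```python
# Hypothetical decisions produced by a model under test, grouped by a
# protected attribute. In a real project these would come from a held-out
# evaluation set rather than hard-coded values.
DECISIONS = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 1 = favorable outcome
    "group_b": [1, 1, 0, 1, 0, 0, 1, 0],
}

def selection_rate(outcomes):
    """Share of favorable outcomes for one group."""
    return sum(outcomes) / len(outcomes)

def test_selection_rates_are_comparable():
    """Fail if the gap in favorable-outcome rates between the two groups
    exceeds an agreed threshold (0.2 here, chosen only for illustration)."""
    gap = abs(selection_rate(DECISIONS["group_a"]) -
              selection_rate(DECISIONS["group_b"]))
    assert gap <= 0.2, f"Selection-rate gap {gap:.2f} exceeds the agreed threshold"

if __name__ == "__main__":
    test_selection_rates_are_comparable()
    print("Fairness check passed.")
```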

Define fairness

Defining fairness in machine learning is a difficult task for most organizations, as it’s a complex and multi-faceted concept that depends on context and culture.

“There are at least 21 mathematical definitions of fairness,” points out senior tech evangelist for machine learning and AI at IBM, Trisha Mahoney. “These are not just theoretical differences in how to measure fairness, but different definitions that produce entirely different outcomes. And many fairness researchers have shown that it’s impossible to satisfy all definitions of fairness at the same time.”

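To make Mahoney’s point concrete, here is a minimal sketch (plain Python, with made-up numbers) that applies two common definitions, demographic parity and equal opportunity, to the same set of predictions. By the first definition the model looks fair; by the second it does not.

```python
def selection_rate(y_pred):
    """How often a group receives the favorable prediction."""
    return sum(y_pred) / len(y_pred)

def true_positive_rate(y_true, y_pred):
    """How often genuinely qualified members of a group are predicted favorably."""
    preds_for_positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(preds_for_positives) / len(preds_for_positives)

# Toy ground truth and model predictions for two demographic groups.
group_a = {"y_true": [1, 1, 0, 0], "y_pred": [1, 1, 0, 0]}
group_b = {"y_true": [1, 1, 1, 0], "y_pred": [1, 0, 1, 0]}

parity_gap = selection_rate(group_a["y_pred"]) - selection_rate(group_b["y_pred"])
opportunity_gap = true_positive_rate(**group_a) - true_positive_rate(**group_b)

print(f"Demographic parity difference: {parity_gap:+.2f}")      # +0.00 -> looks fair
print(f"Equal opportunity difference:  {opportunity_gap:+.2f}")  # +0.33 -> does not
```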

So developing unbiased algorithms is a data science initiative that involves many stakeholders across a company, and there are several factors to be considered when defining fairness for your use case (for example, legal, ethics, trust).

“As there are many ways to define fairness, there are also many different ways to measure and remove unfair bias,” Mahoney explains. “Ultimately, there are many tradeoffs that must be made between model accuracy versus unfair model bias, and organizations must define acceptable thresholds for each.”

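One hedged illustration of what defining those acceptable thresholds can look like in practice: given a set of candidate models (the names, scores, and both thresholds below are invented for the example), keep only those that clear both an accuracy bar and a bias bar, then pick the most accurate of the survivors.

```python
# Hypothetical candidate models with their evaluation results.
CANDIDATES = [
    {"name": "model_a", "accuracy": 0.91, "parity_gap": 0.18},
    {"name": "model_b", "accuracy": 0.88, "parity_gap": 0.06},
    {"name": "model_c", "accuracy": 0.83, "parity_gap": 0.02},
]

MIN_ACCURACY = 0.85    # lowest accuracy the organization will accept
MAX_PARITY_GAP = 0.10  # largest selection-rate gap considered acceptable

acceptable = [
    m for m in CANDIDATES
    if m["accuracy"] >= MIN_ACCURACY and m["parity_gap"] <= MAX_PARITY_GAP
]
# Among models that clear both bars, prefer the most accurate one.
best = max(acceptable, key=lambda m: m["accuracy"]) if acceptable else None
print(best)  # model_b: slightly less accurate than model_a, but far less biased
```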

To help detect and remove unwanted bias in datasets and machine learning models throughout the AI application lifecycle, IBM researchers have developed an open source AI Fairness 360 toolkit, which includes various bias-mitigation algorithms as well as over 77 metrics to test for biases.

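For teams working in Python, the toolkit can be tried on a few lines of toy data. The sketch below assumes aif360 and pandas are installed and only shows the typical shape of the workflow: measure a metric, apply a mitigation algorithm such as Reweighing, and measure again. The DataFrame, column names, and group encodings are invented for the example; a real project would use a proper dataset and evaluation split.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the favorable (1) or unfavorable (0) outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [25, 40, 35, 50, 23, 31, 45, 29],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure bias in the raw data: difference in favorable-outcome rates.
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Statistical parity difference before:", before.statistical_parity_difference())

# Apply one of the toolkit's pre-processing mitigation algorithms (Reweighing),
# which adjusts instance weights so the groups are treated more evenly.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighted = rw.fit_transform(dataset)
after = BinaryLabelDatasetMetric(
    reweighted, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Statistical parity difference after:", after.statistical_parity_difference())
# The gap should move much closer to 0 after reweighing.
```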

Another useful resource is Google’s AI principles and responsible AI practices. The section on fairness includes a variety of approaches to iterate, improve, and ensure fairness (for example, design your model using concrete goals for fairness and inclusion), and there’s also a selection of recent publications, tools, techniques, and resources to learn more about how Google approaches fairness in AI and how you can incorporate fairness practices into your own machine learning projects.

How to tackle gender and racial bias in AI

So, educate yourself about bias (David Dylan Thomas’ Cognitive Bias podcast is a good starting point), try to spot your own unconscious biases, and confront them in your everyday life. Seek out diverse perspectives, build diverse and inclusive teams, and keep asking yourself if the product you’re building has the potential to harm people. Implement this mindset right in the design process, so you can reduce risks. Also conduct user research, test your systems, define and measure fairness, and learn which metric is most appropriate for a given use case.

In part 2 we will explore projects that tackle gender and racial bias in AI and discover techniques to reduce them.

For more unique insights and authentic points of view on the practice, business and impact of design, visit Adobe XD Ideas.

To learn about Adobe XD, our all-in-one design and prototyping tool:

  • Download Adobe XD

  • Adobe XD Twitter account — also use #adobexd to talk to the team!

  • Adobe XD UserVoice ideas database

  • Adobe XD forum

Originally published at https://xd.adobe.com.

Translated from: https://medium.com/thinking-design/removing-bias-in-ai-part-1-diverse-teams-and-a-redefined-design-process-5271867fe5fc
