Building Trustworthy AI Products


Aparna Chennapragada, Technical Assistant to Google's CEO, gave an interesting talk on building trustworthy AI-powered products at a Products That Count event.

From her experience working on products that involve AI and large amounts of data, she sees three dimensions to making such products trustworthy: AI (how the data is processed), UI (how users interact with it), and I (how it is personalized for each user).

I’ll compile my notes from the talk later, but below are points I found most interesting:

For “AI":

Solving problems that are hard for humans but easy for machines builds trust.

For example, Google corrects misspelled words, or surfaces highly relevant results from a vast body of information based on a few search keywords. A counterexample is making a robot use hand gestures or speak natural language: humans still do these far better than machines, and the underlying technology remains far from mature.
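As a rough illustration of the kind of problem that is easy for machines, here is a minimal spelling-correction sketch in the spirit of Peter Norvig's well-known toy corrector. It is not Google's actual system; the word-frequency table and the "one edit away" rule are assumptions made only for this example.

```python
# Toy spelling corrector: given a (hypothetical) word-frequency table,
# pick the known candidate within edit distance 1 that is most frequent.

WORD_FREQ = {"trust": 120, "products": 95, "search": 80, "machine": 60}  # assumed toy corpus

def edits1(word: str) -> set[str]:
    """All strings one edit (delete, transpose, replace, insert) away from `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word: str) -> str:
    """Return the most frequent known word within one edit, or the word itself."""
    candidates = [w for w in edits1(word) if w in WORD_FREQ] or [word]
    return max(candidates, key=lambda w: WORD_FREQ.get(w, 0))

print(correct("serch"))   # -> "search"
print(correct("trsut"))   # -> "trust"
```

A correction like this feels effortless to users precisely because it is tedious for humans but cheap for a machine, which is the kind of win that builds trust.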

For “UI”:

People may have different expectations of your AI product. Some just want the outcome and expect magic to happen, while others want the results displayed alongside explanations from the system, so they can tell how and why those results were produced. Find the right balance between these expectations, informed by sources such as user studies.

For “I"

Give users opportunities to teach the machine; do not always try to guess what they like. Build mechanisms that support this, so users can give you better feedback on how your data product does its job. For example, Facebook automatically detects people’s faces in photos, but it sometimes asks users “Who’s this?” so they can teach Facebook to do its job better.
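To make the “teach the machine” idea concrete, here is a minimal sketch of an explicit-feedback loop. The `recognizer`, `ask_user` callback, and `FeedbackStore` are hypothetical names invented for the illustration; the point is simply that a low-confidence prediction becomes a question to the user, and the user's answer becomes labeled training data for the next model update.

```python
# Sketch of a "let the user teach the machine" loop, assuming a hypothetical
# recognizer whose predict() returns (label, confidence). Low-confidence
# predictions trigger a "Who's this?" prompt; the answer is stored as
# labeled data for future retraining.

from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    labeled_examples: list[tuple[str, str]] = field(default_factory=list)  # (photo_id, label)

    def add(self, photo_id: str, label: str) -> None:
        self.labeled_examples.append((photo_id, label))

def tag_photo(photo_id: str, recognizer, store: FeedbackStore, ask_user,
              threshold: float = 0.9) -> str:
    """Use the model's guess when it is confident; otherwise ask the user and record the answer."""
    label, confidence = recognizer.predict(photo_id)
    if confidence >= threshold:
        return label
    label = ask_user(f"Who's this in photo {photo_id}?")  # explicit feedback from the user
    store.add(photo_id, label)  # becomes training data for the next retraining run
    return label
```

The design choice worth noting is that the product only interrupts the user when the model is unsure, so each question feels like a fair trade: a moment of the user's time in exchange for a product that visibly gets better at its job.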
