Will AI Ever Enter the Courtroom?

In 2017, U.S. state trial courts received an astronomical 83 million court cases.

China’s civil law system sees over 19 million cases per year, with only 120,000 judges to rule on them.

In the OECD area (consisting of most high-income economies), the average length of civil proceedings is 240 days in the first instance; the final disposition of cases often involves a long process of appeals, which in some countries can take up to 7 years.

It’s no secret that judicial processes in many countries are long, tedious, and slow, and can cause months of misery, pain, and anxiety to individuals, families, corporations, and litigators.

Moreover, when cases do see the light of day in court, the outcome is not always satisfactory, with high-profile cases in particular receiving criticism for being plagued by judges’ biases and personal preferences. Scholarly research suggests that in the United States, judges’ personal backgrounds, professional experiences, life experiences, and partisan ideologies might impact their decision-making.

One thing is clear: judiciary systems across the globe are in desperate need of reform.

AI & automation might just be the solution.

Bias & Fatigue in the Courtroom

Let me start by saying that judges are, after all, human beings. Juries that vote on verdicts, too, consist of human beings. What this means is that judges and juries alike experience the same pitfalls that you or I do: like us, their perceptions, expectations, and biases color the way that they see the world.

Implicit bias can creep up on even the best amongst us, including those who consider themselves fairly egalitarian and open-minded. As one paper argues, given that cognitive and social psychologists have demonstrated that human beings often act in ways that are not rational, implicit biases in the courtroom might be even more pervasive than explicit ones.

Being human also means that we’re susceptible to human-like weaknesses, such as fatigue, sleep-deprivation, and foggy thinking. A controversial study from 2012 found that when judges decided whether or not a prisoner should be granted parole, the percentage of rulings in favor of the prisoner dropped from 65% to nearly zero over the course of each decision session, depending on how soon after a break the decision was made.

In other words, following long sessions without breaks, a hungry judge may rule unfavorably regardless of the facts of the case.

What the Research Tells Us

Scholars have found that the mere presence of a black judge could change how an appellate panel deliberates: observing a black judge cast a vote might encourage white colleagues to vote differently.

In estimating the relationship between gender and judging, researchers Gill, Kagan & Marouf found that all-male appeals panels hearing immigration appeals are much harsher with male litigants than they are with female litigants.

Additionally, when studying implicit biases, researchers found that white judges show strong implicit attitudes favoring whites over blacks.

These studies, among many others, indicate that the “lived experience” of the judge may have some impact on the judge’s decision-making. In multiple speeches, U.S. Judge Sonia Sotomayor made a now-infamous comment about this that garnered a lot of controversy but captures the concept quite well:

“I would hope that a wise Latina woman with the richness of her experiences would, more often than not, reach a better conclusion.”

We can’t deny that all of us are inevitably subject to cognitive biases at some point or another. Either way, whether intentional or unintentional, explicit or implicit, subjectivity and bias in the courtroom are difficult things for us to come to terms with.

The fate of thousands of individuals, after all, lies in the hands of people who are just as susceptible to skewed perceptions and poor decision-making as you or I might be.

The Case for Artificial Intelligence in the Courtroom

This is exactly where automation and AI come into play.

The applications of AI & automation in the courtroom are two-pronged, intended to address two key issues in judicial systems.

First, when it comes to bias, robo-judges will be able to bypass human shortcomings.

As I wrote in a previous article, advances in deep learning might potentially give rise to what we perceive as “human-level general artificial intelligence”.

Given the multiple data points that a deep-learning AI is exposed to, it could have the ability to tap into neural networks that, much like the human brain, can make observations, recognize patterns, and form decisions and judgments.

However, unlike the human brain, deep-learning AI systems would be able to parse so many data points that they could minimize the probability of bias. A robo-judge would have the ability to sift through years and years of historical case data, as well as assess all of the facts of a case, which it could then feed to its decision-trees.

These decision trees (strictly speaking, a different family of models than neural networks, though the two can work in tandem) would ultimately help the AI achieve the goal that it is programmed to achieve: in this case, the goal could be to deliver a ruling, estimate the appropriate length of a sentence, decide on a pardon, rule on an appeal, and so on.

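To make this concrete, here is a minimal sketch of the ruling-prediction idea using scikit-learn. The case features, data, and outcome labels are all invented for illustration; a real system would need far richer and carefully vetted inputs.

```python
# Minimal sketch: train a decision tree on hypothetical historical case
# features to predict a ruling. All feature names and values are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [prior_convictions, severity_score, evidence_strength, age]
past_cases = [
    [0, 2, 0.90, 34],
    [3, 8, 0.70, 22],
    [1, 5, 0.40, 45],
    [5, 9, 0.95, 29],
]
rulings = ["acquit", "convict", "acquit", "convict"]  # historical outcomes

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(past_cases, rulings)

new_case = [[2, 7, 0.80, 31]]
print(model.predict(new_case))        # predicted ruling
print(model.predict_proba(new_case))  # confidence across outcomes
```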

We’re already using algorithms and technologies like IBM’s Watson to make an evidence-based analysis of risks in all sorts of industries: finance, healthcare, manufacturing.

We can similarly use AI to determine, for instance, the likelihood that a convicted felon will reoffend, based on historical data that the AI can access. The hope is that, unlike a human judge, a robo-judge would be able to make an objective decision based on all of the data points and facts of a case available to it.

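As a toy illustration of this kind of risk estimation, the sketch below fits a logistic regression on hypothetical release records to produce a reoffence probability. Every feature and value here is made up for the example; real risk tools are trained on far larger datasets and are themselves controversial.

```python
# Toy sketch: estimate reoffence probability from hypothetical records.
from sklearn.linear_model import LogisticRegression

# Features per person: [age_at_release, num_prior_offences, months_served]
history = [
    [22, 4, 18],
    [45, 1, 6],
    [31, 2, 12],
    [19, 6, 24],
    [52, 0, 3],
    [28, 3, 9],
]
reoffended = [1, 0, 0, 1, 0, 1]  # 1 = later committed another crime

risk_model = LogisticRegression().fit(history, reoffended)

felon = [[26, 3, 14]]
prob = risk_model.predict_proba(felon)[0, 1]
print(f"Estimated reoffence probability: {prob:.2f}")
```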

Because an AI is unlikely to have the so-called “lived experience” that a human judge would, the chances of biased decision-making may dramatically decrease.

Moreover, as we’ve seen with the present-day application of automated tools, machines bypass common human weaknesses such as fatigue.

Secondly, as a precursor to robo-judges, AI & automation tools can be leveraged in the short-run to aid human judges in making effective decisions.

The use of automated tools could drastically reduce the time that it takes to gather the facts of a case and historical data on similar cases. For a more advanced and nuanced application, AI systems could also help us distinguish lies from truth more effectively. The current tools that we use, such as the individual judge’s perception and the polygraph, are too inconclusive and unreliable for use in court, given the many factors that can affect their results.

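The fact-gathering piece is the easiest to prototype today. Here is a rough sketch that ranks past cases by textual similarity to the current one using TF-IDF; the case descriptions are invented placeholders.

```python
# Rough sketch: rank past case texts by similarity to the current case.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_opinions = [
    "breach of contract over late delivery of goods",
    "negligence claim after a workplace injury",
    "dispute over an unpaid small business loan",
]
current_case = ["small claims dispute over an unpaid loan"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(past_opinions + current_case)

# Compare the current case (last row) against every past case.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for text, score in sorted(zip(past_opinions, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```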

Today, AI is already being used to revolutionize mental health treatments by detecting things that human therapists can’t.

Consider Ellie, a virtual therapist launched by the Institute for Creative Technologies. Ellie, who was designed to treat veterans with PTSD, can not only detect verbal cues but can also pick up non-verbal cues (facial expressions, gestures, micro-expressions) that a human therapist may find difficult to catch. Based on these cues, Ellie makes recommendations to her patients. As one can imagine, Ellie has a lot more subtle data to base her recommendations on than a human therapist might.

Similarly, virtual avatar judges — conceptually designed to conduct face-to-face interactions via video conference tools — may be able to pick up on cues that human judges or litigators would otherwise miss.

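A highly simplified sketch of what fusing such cues into a single signal might look like is shown below. The cue names, scores, and weights are all invented; a real system like Ellie derives such signals from trained speech- and vision-perception models.

```python
# Invented example: combine verbal and non-verbal cue scores (each assumed
# to come from a separate perception model) into one weighted signal.
cues = {
    "verbal_hesitation": 0.7,          # e.g., from a speech-analysis model
    "gaze_aversion": 0.4,              # e.g., from an eye-tracking model
    "micro_expression_distress": 0.6,  # e.g., from a facial-analysis model
}
weights = {
    "verbal_hesitation": 0.5,
    "gaze_aversion": 0.2,
    "micro_expression_distress": 0.3,
}

stress_signal = sum(weights[name] * score for name, score in cues.items())
print(f"Aggregated stress signal: {stress_signal:.2f}")  # 0 (calm) to 1 (high)
```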

AI in the Courtroom: Today

Estonia and China are two countries that have already begun to pilot the use of AI in the courtroom.

The Estonian Ministry of Justice is working on a project to build “robo-judges” that can adjudicate small claims disputes. Conceptually, the two parties would upload their documents and data onto the system, and the AI would issue a decision that can, if needed, be appealed to a human judge.

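Conceptually, the flow might look something like the sketch below. The types and the decide() stand-in are hypothetical placeholders for illustration, not a description of Estonia’s actual system.

```python
# Hypothetical sketch of a small-claims flow: parties upload documents,
# an AI model issues a decision, and either party may appeal to a human.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    plaintiff_docs: list
    defendant_docs: list
    amount_eur: float
    decision: Optional[str] = None
    appealed_to_human: bool = False

def decide(claim: Claim) -> Claim:
    # Stand-in for the real adjudication model; any real system would
    # reason over document contents, not merely count them.
    if len(claim.plaintiff_docs) >= len(claim.defendant_docs):
        claim.decision = "upheld"
    else:
        claim.decision = "dismissed"
    return claim

def appeal(claim: Claim) -> Claim:
    # Escalate the AI's ruling to a human judge.
    claim.appealed_to_human = True
    return claim

claim = decide(Claim(["contract.pdf", "invoice.pdf"], ["reply.pdf"], 4200.0))
print(claim.decision)  # "upheld" under this placeholder rule
```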

Similarly, China has already introduced over 100 robots to its courts — these robots retrieve past verdicts and sift through large amounts of data.

China has also introduced Xiaofa, a robot that can offer legal advice to the public and help break down complex legal terminology for the layman.

The Challenges & Reasons for Push-back

The looming question over all of this, especially if the goal is to eliminate bias in the courtroom, is: can programs be neutral actors?

One argument for why programs may not be neutral is that the questions posed to an AI tend to come from the same demographic: the young, white, male programmers who typically write these algorithms and feed data to the AI.

But as the technology evolves, we might find ways to make it fool-proof. Programs may be able to test themselves against discrimination. The key advantage that machines have over humans is the ability to store, compute on, and account for hundreds of thousands of data points.

Let’s consider the example of ZestFinance, a credit-lending company that aims to avoid discriminatory lending. ZestFinance was founded on the idea that by looking at tens of thousands of data points, ML programs can expand the number of people deemed creditworthy. The machine learning model is run through ZAML Fair to ascertain whether there are any differences across protected classes and, if there are, which variables are causing those differences. To test against discrimination, the lender can increase and decrease the influence of those variables to lessen bias and increase accuracy.

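The internals of ZAML Fair are not public, so the snippet below only illustrates, in back-of-the-envelope form, the general kind of check such a tool performs: comparing model approval rates across a protected attribute.

```python
# Illustrative check: compare approval rates across two protected groups.
import numpy as np

approved = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 1])  # model decisions
group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"Approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
# A large gap flags variables whose influence the lender may need to retune.
print(f"Demographic parity gap: {abs(rate_a - rate_b):.2f}")
```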

ZestFinance takes into account that income is not the only predictor of creditworthiness; what matters is the combination of income, spending, and the cost of living in a given city.

It can be difficult to come to terms with the decisions made by a robo-judge, especially if they are adverse, negatively affect an individual’s life, and are still suspected by the public of containing some bias.

For this reason, in the interim, to ease into the use of AI in the court system, it might make sense to make room for a robo-judge’s decision to be appealed to a human judge, if needed.

But eventually, as is the case with the evolution of any controversial idea, we might realize that the bias and margin of error of a robo-judge are significantly lower than those of a human judge. Just as self-driving cars can help prevent accidents and save hundreds of thousands of lives, so too can robo-judges.

In the meantime, what we, as humans, need to be focusing on is the reasonable and responsible use of AI in society.

We need to build a body of law and ethics that can address the many challenges and changes that will come with a future in which we co-exist with machines.

This way, when it happens, we’re ready, and when we leverage technology, it’s for the benefit of society at large, rather than its detriment.

Translated from: https://medium.com/mapping-out-2050/will-ai-ever-enter-the-courtroom-52757ed9a527
