What Have Philosophers Ever Done?
“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” Isaac Asimov
Nowadays, in a sudden emergency on the road, you have to trust your instincts when choosing, in a split second, between steering left and hitting a cat or steering right and hitting a wandering child. With the introduction of self-driving cars on our streets, the programmers who write the code for these machines will have to tell the computer driving the vehicle which direction the car should turn, and also which target it ought to hit.
Some clear moral and ethical implications already come to mind when thinking about this, because when our instincts decide for us, it's natural not to blame someone for the consequences of a split-second reaction. But, for example, how do you define which unlucky pedestrian, animal, or object on the sidewalk is more or less "hittable" for the Tesla being programmed? Do you tell it to hit the victim least likely to be killed by the accident, like a motorcyclist wearing a helmet?
Well, we still don't have the answer to this question. And before you try to come up with an easy solution in your head, say, steering the wheel toward the target most likely to survive (e.g., a well-protected bus driver), consider the real consequences of what's being written into the program: you're effectively making safety-conscious people less safe, and creating a social pushback against personal protection, especially once self-driving cars become omnipresent on the roads.
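To make the dilemma concrete, here is a minimal, purely hypothetical sketch of what such a "hit whoever is most likely to survive" policy could look like. Every name and probability below is an invented assumption for illustration, not any real vehicle's logic:

```python
# Hypothetical sketch: a collision-target policy that ranks unavoidable
# impact options by the victim's estimated chance of surviving.
# All labels and numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Target:
    label: str
    survival_probability: float  # estimated chance the victim survives impact


def choose_target(targets: list[Target]) -> Target:
    """Pick the target most likely to survive the collision."""
    return max(targets, key=lambda t: t.survival_probability)


if __name__ == "__main__":
    options = [
        Target("pedestrian", 0.20),
        Target("helmeted motorcyclist", 0.65),  # safety gear raises the score...
        Target("well-protected bus driver", 0.90),
    ]
    # ...which is exactly the perverse incentive: the policy preferentially
    # steers toward whoever invested most in their own protection.
    print(choose_target(options).label)  # -> "well-protected bus driver"
```

Even this toy version makes the pushback obvious: the highest-scoring option is always the best-protected person, which is precisely the incentive problem described above.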
In this article, we're not going to talk about biased or otherwise harmful algorithms, because discriminatory outcomes in those cases are usually not moral dilemmas, just the involuntary materialization of prejudices that already exist in our society.
Needless to say, like most lawyers, doctors, and CEOs, programmers will need an ethical grounding for their work. Worse, no one has the grounding for those answers yet. As more disruptive technologies land in the hands of our tech overlords and presidents, the more clearly we'll see the lack of established moral principles in humanity's playbook.
These discussions become truly critical when we start talking about workforce automation and A.G.I. (artificial general intelligence), which in the future will control everything from your pacemaker to autonomous killer drones. That's why people like Max Tegmark created the Future of Life Institute (FLI), with the mission of "catalyzing and supporting research and initiatives for safeguarding life […]".
The institute's core team and volunteers are composed not only of natural science researchers and engineers, but also of philosophy and psychology brainiacs such as Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies. Max and his partners are actively trying to assemble an organization that will establish guidelines for future companies and solo adventurers, not just in the winding realm of A.I., but also in the climate, biotech, and nuclear areas.
Of course, these kinds of technological moral codes exist outside the programming world too, and they are far from a settled consensus worldwide. When Paul Berg organized the Asilomar Conference on Recombinant DNA in 1975, he went down in history for the ground-breaking consensus he and his colleagues managed to reach on such delicate and powerful "god-like" methods humanity had gotten its hands on. But, at the time, of the 60 foreign scientists who were invited, none were Korean, Indian, or even Chinese. To this day, the ethical discrepancy between East and West remains one of the great problems the scientific and technological world faces. Currently, this difference can be seen in the gene-editing debates between Chinese and Western scientists.
Jumping to the corporate world, "practical philosophy" has come in handy for some big companies trying to make up for their mistakes by hiring ethics professionals to rewrite their Code of Conduct. In 2010, the oil giant BP hired Roger Steare, a self-described corporate philosopher who works by giving ethics seminars and asking pertinent questions of a company's executive board. In this case, following BP's huge oil spill scandal, Roger incorporated an ethical decision-making framework into the corporation's code.
But if you're like most people, you'd normally trust government regulation, especially on topics the majority of the population simply isn't qualified to weigh in on. Well, it turns out you can't rely on that either (at least not on the current American Congress).
In the past years, we've seen names like Facebook's Mark Zuckerberg, Amazon's Jeff Bezos, Apple's Tim Cook, and other big tech heads called before congressional antitrust committees to testify about the suspicions people had about their companies, including one reunion that had all of them at once. And the only thing those meetings clarified was the complete lack of technological familiarity among most members of Congress, a legit boomer show. Well, you couldn't expect more from 60s-era political gargoyles, but in the context of the problems we face today, episodes like this show how tragic the situation can become, largely because these are the people running a country that sets much of the world's code of conduct on innovation.
As of today, we are in the midst of a technological race; private companies and governments are reaching the pinnacle of hyper-innovative and disruptive technologies, just as we witnessed with atomic bombs and rocket science in the past. Then as now, most humans couldn't comprehend the machinery's gearing, so how is it reasonable to expect us to grasp its moral consequences? Well, just as we hire beer-making consultants when our industrial chemical machinery doesn't work properly, we should hire philosophers for our eventual professional trolley problems, or face the consequences.
Translated from: https://medium.com/@thisbra/coding-and-ethics-why-the-tech-market-needs-philosophers-644a4732137