This summer, I learned that there is no such thing as a good set of communities.
There are, however, many ways to find good communities.
That’s what my math research was about: algorithms that find communities. There are many of them, and each has its own merit. Some are faster for a computer to process, some provide more information, some prioritize how close-knit a community is, and some prioritize how distinct a community is from the others.
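A toy illustration of the point above, in plain Python. The graph and both criteria here are invented for the sake of example, not the actual algorithms from the research; they show how two defensible notions of “community” can carve up the same graph differently.

```python
# A small hypothetical graph: a close-knit triangle (a, b, c)
# with a loose tail (d, e, f) hanging off of it.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a", "e"},
    "e": {"d", "f"},
    "f": {"e"},
}

def connected_components(g):
    """Loosest notion: a community is anything reachable from anything."""
    seen, parts = set(), []
    for start in g:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(g[node] - comp)
        seen |= comp
        parts.append(comp)
    return parts

def triangle_cores(g):
    """Stricter notion: only close-knit trios (triangles) count."""
    cores = set()
    for u in g:
        for v in g[u]:
            for w in g[v]:
                if w in g[u] and len({u, v, w}) == 3:
                    cores.add(frozenset({u, v, w}))
    return cores

print(connected_components(graph))  # one big community of all six nodes
print(triangle_cores(graph))        # only {"a", "b", "c"} is close-knit
```

Neither answer is *the* good set of communities; each simply reflects what its criterion was told to value.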
There are certainly worse and better algorithms, and there are certainly worse and better communities. But there is no single good set of communities, because the answer can change depending on what value one is optimizing for, and what kind of information one is looking for.
I learned this summer that an algorithm provides me with possibilities, but I was in charge of judging what was good.
I was learning how to make judgement calls.
It’s weird being a budding mathematician. You spend all these years learning how to do something an algorithm can do, or at least approximate, with ease, and then you’re looking at yourself like, “what was the point of it all?” And really, even with the summer research, that sentiment is there: the algorithms don’t particularly need you to do their job.
But they do.
I am, of course, the one who judges the results of my algorithms, who deems them good in ways that go beyond debugging code. I look at the results, I think of my experience, I think of heuristics, and I decide the very subjective matter of whether or not these results are good. This is high-level judgement, high-level subjectivity. And this is something computers can’t do: they can only measure and optimize toward a named, objective good.
On another level, I make judgement calls elsewhere: low-level judgement calls, in data cleaning. Deciding, for example, whether “horses” is a particularly useful way to describe a video game or whether it should be cleaned away. A judgement call, one made by intuition and experience.
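A hypothetical sketch of that kind of low-level call. The tag list and the rules are invented for illustration; the point is that rules can automate the easy cases while the borderline tag still lands in front of a human.

```python
# Invented free-text tags attached to a video game.
raw_tags = ["RPG", "open world", "horses", "hd", "GOTY!!", "multiplayer"]

# Rules encode the easy calls: too short, or marketing noise.
def looks_like_noise(tag):
    return len(tag) < 3 or any(ch in tag for ch in "!?")

# Borderline tags like "horses" can't be ruled on mechanically;
# they go to a person for a judgement call.
borderline = {"horses"}

auto_kept = [t for t in raw_tags
             if not looks_like_noise(t) and t not in borderline]
needs_review = [t for t in raw_tags if t in borderline]

print(auto_kept)     # ['RPG', 'open world', 'multiplayer']
print(needs_review)  # ['horses']
```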
I was thinking about this during the summer when I encountered the results of the invite-only API of GPT-3, a language machine learning AI built by OpenAI. Demos of what this AI can do include writing code from a written command, toning down harsh emails, even writing fiction from a mere prompt. It was a content-creating machine, and it wrote in a very human manner.
So perhaps, for a moment, my friends in the humanities feel that prickle of dread: the feeling of having spent so much time getting good at a skill, at writing, and now seeing the beginnings of a robot doing it with so much less time and effort. It’s simply a matter of time before AI is capable of generating stories on its own.
But the writer isn’t in danger. Very likely, a writer will be affected by language AI in much the way the artist is affected by design AI: they will be augmented, and find their work eased.
Consider Colormind, an AI-based color palette generator. Simply put, this AI uses a database of photography to determine trends in which colors “go together” and, upon request, provides them to the artist to accept or edit as needed. This drastically reduces the effort an artist needs to spend in order to exercise judgement. After all, there is no right, or objectively good, set of colors for an artwork, but there are certainly better ones. AI helps find those.
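A toy sketch of the palette idea, not Colormind’s actual method (which isn’t described here beyond “trends in a photo database”): tally the colors in hypothetical pixel data and suggest the most common ones as a starting palette for the artist to accept or edit.

```python
from collections import Counter

# Hypothetical pixel data: (R, G, B) tuples drawn from a photo database.
pixels = ([(200, 50, 50)] * 40      # a dominant warm red
          + [(40, 90, 160)] * 35    # a cool blue
          + [(240, 230, 210)] * 25) # an off-white

def suggest_palette(pixels, size=3):
    """Suggest the `size` most frequent colors as a starting palette."""
    return [color for color, _ in Counter(pixels).most_common(size)]

palette = suggest_palette(pixels)
print(palette)  # a suggestion; the artist judges whether it's beautiful
```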
An AI suggests possibilities. An artist decides what is beautiful.
It’s a judgement call, for an artist.
Consider the possibilities in a similar light for the writer. They can train the AI so it picks up on and mimics their individual writing pattern, perhaps tweaking the parameters to keep it from going off the rails. Then, whenever they need to in the process of creating, the writer can ask the AI to write a few sentences to continue forward: maybe to explore possibilities, maybe to break writer’s block. Either way, the AI produces a rough draft that the writer exercises judgment upon. They can keep it, edit it, or scrap it; the AI augments their process, serving as a sort of piecewise assistant.
Currently, the AI can only assist in pieces because it has a bit of a poor memory: it forgets what it mentioned a few sentences ago. But in the future, with that memory problem solved, if the article were meant for human purposes and consumption, you’d still need an editor to exercise judgment on the work before it could be published.
An AI is a glorified content producer. A human will have to determine what communication makes sense, what’s actually important and impactful and good.
It’s a judgement call for a writer.
Even the rollout of AI into other, less overtly creative fields will not take away the role of the human who exercises judgment. Consider AI-augmented radiology. A test like a 24-hour EEG takes a doctor a while to read, but AI could do the first sweep, find abnormalities, and let the doctor look at the “highlight reel” to see which abnormalities mattered. Of course, the doctor can always look at the raw data; the point of the highlight reel is to make the more ordinary cases less time- and energy-consuming.
An AI can do the first analysis. A doctor uses clinical instincts to put it all together and build a diagnosis for it as a whole.
It’s a judgement call for a physician.
Every job under threat of being “taken over by AI” really has the possibility of being augmented by AI. Because although AI will be able to produce code and writing and art tirelessly, even though AI can and will produce everything better than a human does, production is not everything. AI can do everything.
Everything, but deciding what to do, why to do it, and when the task is truly accomplished.
Everything, but judgement.
But not all judgement is created equal. I mentioned earlier the tedium of data cleaning and curating: a sort of judgement which doesn’t need a particularly specialized set of knowledge. It requires the sort of judgement we use every day, the kind we use to determine whether the living room is clean enough for the guests, whether we actually want to spend the time reading the article sent to us, whether we’ve got enough energy to get that last bit of work done.
Data cleaning, content curation, and the other efforts that create and maintain AI are the low-barrier, high-judgement work that is already prolific and will only become more so with the rise of AI.
But as the labor market shows time and time again, low-barrier-to-entry work, even with high judgment, gives leverage to businesses (because there are a lot of people skilled enough to figure out how to do the work) and makes for subpar working conditions for those kinds of workers. They are called ghost workers for a reason: they are oft unseen, oft unheard, and face the risk of being forgotten amongst the loud clamor of the day-to-day world.
Considering that production AI allows for the centralization of high-barrier, high-level judgement, it’s very likely we’ll face a concentration of judgement-based work at the poles, where the difference in work is whether one augments the AI or the AI augments the individual. Those who work for the AI will struggle, and it will be a moral imperative for our generation to preserve their humanity and equality of opportunity.
Those whom the AI works for will have power: the ability to exercise big-picture judgement, and to answer the question of what is good.
Some (influencers, pundits, specialists, experts) will be deemed by society to have good judgement, and thus win big, as more and more people trust them, their words, and their crafted, curated production. Their judgement on what is worth our attention, out of the sea of things they could possibly choose (to endorse a product, to support a fact, to help another person), will be echoed across everyone else who trusts their judgement, and imitated.
That is power, in the modern world.
That is the ability to impose their definition of good, of what is worth attention, of what is worth time, upon technology and all that supports and is influenced by it. The definition of good chosen by those in power is based on assumptions and biases, as is every definition of good. The fear is that so few people are actually allowed to set that definition of good that we come to believe this type of good is the only one that exists.
Because a risk with AI, as with any processing constantly handled by someone else, is that we forget these processes may perpetuate existing biases. We already know that this is possible, and it is very likely to only get worse in time, with the introduction of more algorithms.
Take, for instance, GPT-3’s predecessor, the GPT-2 language AI. When testing its ability to generate fiction, the author Rachael Perks noticed something odd: the prevalence of gender bias in the description of characters. The AI would focus on the physical beauty of female characters, show gender bias in assigning jobs (a character simply described as a computer programmer would always be referred to as “he”, for example), and reflect other stereotypes back onto characters and roles as they were described to it.
The writer, there, exercised judgement and corrected for what they believed was not good. That’s their role now, as an overseer and editor and visionary.
However, losing sight of that relationship introduces complications. Take, for instance, a GPT-3 model trained on a specific writer’s work in order to help them. It may simply echo back the writer’s old habits and traits, emphasizing past vocabulary, past structure, past habits. It may even echo the author’s biases about other people back at the writer. A writer wouldn’t be forced to evolve past their old habits; they would have to consciously choose to be better, something harder to do than simply writing, existing, and growing inch by inch.
So it becomes hard to blindly trust AI, algorithms, and technology. We shouldn’t: it’s a tool. It should augment, not replace, human production, preserving the importance of the critical eye.
Preserving the importance of asking ourselves, what is good?
What is a good way to exercise the answers to that question?
Translated from: https://medium.com/@azurite9925/a-reflection-on-man-and-ai-on-judgement-84bbdb01c56c