Machine learning-backed personalized services have become a permanent fixture in our increasingly digital lives. Personalization relies on vast quantities of behavioral big data (BBD): the personal data generated when humans interact with apps, devices, and social networks. BBD are the essential raw material of our digital representations.
As life steadily moves online, the digital representations of persons take on legal and moral importance. Influential European legal theorists and philosophers have even written an Onlife Manifesto, shaping the discourse around what it means to be human in the digital age. At the same time, the IEEE has articulated a vision of Ethically Aligned Design (EAD) that empowers “individuals to curate their identities and manage the ethical implications of their data.”
But what would such a design mean for recommender systems, for instance? And what’s the point of giving people this power? What underlying notions of the human person are driving this kind of thinking? That’s what we want to unpack.
Our Vision: Humanistic Personalization
We introduce our notion of humanistic personalization as a way of thinking about digital identity and personalization that draws on the fundamental ethical values embodied in the EU’s General Data Protection Regulation (GDPR). Humanistic personalization looks first at which capacities make the human person unique and then tries to imagine what recommender systems and personalization would look like if they were to support these capacities.
Our notion of humanistic personalization envisions a shift from an implicit, behavior-based representation paradigm, dominated by our “organismic” interests, to one centered on conscious, explicit and “reflective” feedback through the notion of dialogic narrative construction between data controllers and data subjects. Humanistic personalization is inspired by the philosophical ideas of Kant, Hegel, Habermas, Ricoeur, Derrida and others.
Beyond personalization, a focus on narrative could have wide-ranging consequences for the future of AI/ML. If we are to ever “crash the barrier of meaning in AI,” we will need to also crash through the barrier of narrative. Further, the inherent intelligibility of narrative could be useful in the emerging area of “user-centric” explainable AI, especially where regulations such as the GDPR give data subjects rights to clear, understandable explanations of algorithmic decisions. Lastly, due to its intuitive “explanatory force,” narrative explanation could serve as an interesting lens for new approaches in causal modeling.
Narrative Accuracy and Epistemic Injustice
We offer the concept of narrative accuracy as an orienting design goal for personalization. By maximizing the narrative accuracy of both the personal data used as input to the recommender system and the resulting recommendations themselves, we can reduce the epistemic injustice done to persons via personalization.
Epistemic injustice is multifaceted. It refers to the way we “distribute” credibility to a person’s truth claims in an unfair way that devalues them in their capacity as a knower. It can also mean that we lack the conceptual resources to understand the experience of others. Why focus on narrative accuracy and its complement, epistemic injustice? Because we believe the concept of narrative to be a crucial feature of human experience worth protecting. At the same time, we reject the Enlightenment ideal of one single, universal “method” for settling questions of truth.
In other words, achieving a completely objective “view from nowhere” is not possible. Instead, knowledge gains in robustness as we fuse diverse input from diverse perspectives. If you’d like to read more details, particularly about the philosophical anthropology of the GDPR, see our working paper Beyond Our Behavior: The GDPR and Humanistic Personalization.
So What is Personalization, Anyway?
Personalization is huge business. Netflix, for example, claims to save $1B per year due to its personalization efforts. Here’s Facebook describing how it uses your personal data to “personalize” your experience on the platform under its new TOS:
Your experience on Facebook is unlike anyone else’s: from the posts, stories, events, ads, and other content you see in News Feed or our video platform to the Pages you follow and other features you might use, such as Trending, Marketplace, and search. We use the data we have — for example, about the connections you make, the choices and settings you select, and what you share and do on and off our Products — to personalize your experience.
Why We Suggest Grounding AI Ethics in the GDPR
Simply put, we believe the GDPR serves double duty as both legal norm and ethical foundation for AI/ML. The main reason is we don’t think it’s productive to add to the current morass of competing principles, guidelines, and frameworks for Ethical AI/ML. One paper alone lists at least 84 examples of “AI Ethics” guidelines, most of which are less than 4 years old. No one can make sense of all this.
Further, tying ethical principles to legal norms via the GDPR is valuable because, in principle, the GDPR applies to any data controller — anywhere in the world — processing the personal data of data subjects residing in the EU. Law is institutionally-backed to achieve compliance via the state’s monopoly on the legitimate use of physical force. So ethics (potentially) backed by force is, in our view, much more likely to foster compliance by industry and researchers around the globe.
Informational Self-determination and the Right to Personality
The rights given to data subjects under the GDPR reflect a certain European understanding of the human person. These principles are valuable because they have withstood intense philosophical scrutiny over centuries. From the European perspective, data protection and privacy are tools aimed at preserving human dignity. If you’re really interested in the details, we explore the philosophical thought behind what makes the human life valuable in our paper linked to above.
In any case, two key notions underlie the ethical foundations of the GDPR: informational self-determination and its predecessor, the right to the free development of one’s personality. According to legal scholars Antoinette Rouvroy and Yves Poullet, informational self-determination is defined as “an individual’s control over the data and information produced about him,” and is a precondition for any kind of human self-determination. There is a political dimension to self-determination as well, as a cooperative, democratic society depends on its citizens having the capacity for self-determination. In our paper, we connect these ideas with Jürgen Habermas’ notion of communicative reason.
Our Digital Behavior is Fundamentally Misinterpreted
We assert not that personalization is so incredibly accurate, but that we, as linguistic, social, and physically embodied animals, have deceived ourselves about our potential for free movement and thought.
Digital environments limit and constrain what is humanly possible even further. Perhaps even worse, as degrees of freedom in digital environments are reduced, hermeneutic¹ problems of human action arise and present ethical problems. For one, the behaviorist assumptions behind the collection of implicit data fail to appreciate an important caveat: the meaning of digital behavior is fundamentally under-determined. Because humans are conscious, intentional beings, each action we initiate can be seen from an external/physicalist or an internal/phenomenological perspective.
When complex behaviors (e.g., completing a transaction) are broken down into overly-narrow “sub-symbolic” categories² (e.g., clicks, mouse trajectories, and other “microbehaviors”) by BBD platforms and data scientists, intentions become decoupled from results. What is more, a clear one-to-one mapping of intentions to actions becomes impossible. One cannot intend to do what one cannot first identify.
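This decoupling can be illustrated with a toy sketch (all event names and candidate intents here are hypothetical, invented for illustration): several distinct intentions can produce the very same stream of logged microbehaviors, so the inverse mapping from log back to intent is not a function.

```python
# Toy illustration: distinct intentions can emit identical microbehavior logs,
# so the logged stream under-determines the intent behind it.

MICROBEHAVIOR_LOG = ["click:item_42", "scroll:down", "click:item_42", "dwell:30s"]

# Hypothetical intents, each of which could plausibly generate the same log.
CANDIDATE_INTENTS = {
    "comparison_shopping": MICROBEHAVIOR_LOG,
    "buying_a_gift":       MICROBEHAVIOR_LOG,
    "idle_browsing":       MICROBEHAVIOR_LOG,
}

def intents_consistent_with(log):
    """Return every candidate intent whose expected trace matches the observed log."""
    return sorted(i for i, trace in CANDIDATE_INTENTS.items() if trace == log)

# Three distinct intents are consistent with one and the same log:
print(intents_consistent_with(MICROBEHAVIOR_LOG))
```

The point of the sketch is only that the observed trace is compatible with many inner states at once; no amount of additional clicks, by itself, picks out which one was operative.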
As psychologists Vallacher and Wegner put it,
As philosophers have long noted, any segment of behavior can be consciously identified in many different ways. Something as simple as ‘meeting someone’… could be identified by anyone with an even mildly active mental life as ‘being social,’ ‘exchanging pleasantries,’ ‘learning about someone new,’ ‘revealing one’s personality,’ or even ‘uttering words.’
So misinterpretation (or lack of complete interpretation) is baked into digital life and is worsened when automated systems can dynamically change digital environments in real-time, such as with reinforcement learning-based recommender systems used by Facebook. We thus face a crisis of interpretation. What to do?
If we follow the GDPR, we let the data subjects themselves decide.
Connecting Moral and Social Identity via Narrative Identity
Acclaimed psychologist and linguist Michael Tomasello contends that our membership in a linguistic community binds our social and moral identities. The reasons we give for our behaviors are related to our role and status within this community. From a young age, children must make decisions about what to do and which moral and social identities to form. Children make these decisions in ways justifiable both to others in their community and to themselves.
Social and moral identities are connected through the communicative, reason-giving process we engage in with others. Tomasello claims this process became internalized as a form of normative self-governance, making up our moral identities. Our psychical unity requires we do certain things in order to continue to be the persons we are, seen from both the inner perspective (self, private) and outer (other, public). This epistemic gap between inner and outer perspectives on the same event is what drives epistemic injustice, which we discuss below.
Moral and social identities are synchronic (cross-sectional) structures. They are how we represent ourselves to ourselves at particular points in time. But we have not yet explained how these identities evolve over time. For that, we need a diachronic (longitudinal) account of identity.
Narrative Identity: It’s What Gives Your Life Meaning Over Time
According to the psychologist Jerome Bruner, narratives are the instruments through which our minds construct reality. It’s worth pointing out some of their unique features that capture the human experience in all its messy and imperfect glory.
Diachronicity: narratives account for sequences of ordered events over human time, not absolute “clock time.”
Particularity: narratives are accounts of temporally-ordered events told from the particular embodiment of their narrator(s).
Intentional state entailment: within a narrative, reasons are intentional states (beliefs, desires, values, etc.) which act as causes and/or explanations.
Hermeneutic composability: gaps exist between the text and the meaning of the text. Meaning arises from understanding relations of parts to whole.
Referentiality: realism in narrative derives from consensus, not from correspondence to some “true” reality.
Context sensitivity and negotiation: readers of a text “assimilate it on their own terms” thereby changing themselves in the process. We negotiate meaning via dialogue.
上下文敏感性和协商 :文本的读者“按照自己的意愿同化”,从而在过程中改变自己。 我们通过对话来谈判含义。
Through the diachronicity of narrative, we unite our moral and social identities over time, giving rise to the uniqueness of persons.
Narrative Accuracy and Epistemic Injustice
Epistemology is the study of knowledge and its foundations. We adapt Miranda Fricker’s concept of epistemic injustice and use it to shine new light on the problem of narrative accuracy in personalization.
Fricker is interested in injustice as it relates to disrespecting someone in her “capacity as a knower.” Epistemic injustice essentially reduces one’s trust in one’s own judgment and ability to make sense of one’s lived experience. There are Kantian and Hegelian aspects to epistemic injustice. Notably, epistemic justice requires a mutual recognition of the perspective and experience of others, particularly those in positions of asymmetrical epistemic power (i.e., data subjects relative to data collectors).
Testimonial Injustice
There are two dimensions of epistemic injustice applicable to the case of data subjects receiving personalized recommendations. First, testimonial injustice might occur when prejudice or bias leads a data collector to give a “deflated level of credibility” to a data subject’s interpretation of a recorded action or event, including a recommendation.
For example, if a data collector uses only non-consciously generated BBD and gives no weight to explicit feedback, a kind of testimonial injustice has occurred. Another example might be that a BBD platform allows users to “downrate” bad recommendations, but these ratings are never actually factored into changing the recommendations.
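A minimal scoring sketch makes the pattern concrete (the blending function and all numbers are illustrative assumptions, not any platform’s actual formula): when the weight on explicit feedback is zero, the subject’s stated judgment is collected but carries no credibility in the final score.

```python
# Sketch: a recommendation score that blends implicit (behavioral) and
# explicit (stated) feedback, both signals in [0, 1]. Setting explicit_weight
# to 0 reproduces the testimonial-injustice pattern: the downrating is
# recorded but never moves the score.

def item_score(implicit_signal, explicit_rating, explicit_weight=0.5):
    """Convex combination of implicit and explicit feedback."""
    return (1 - explicit_weight) * implicit_signal + explicit_weight * explicit_rating

# The user repeatedly clicked an item (high implicit signal)
# but explicitly downrated it (zero explicit rating).
clicks, downrate = 0.9, 0.0

unjust = item_score(clicks, downrate, explicit_weight=0.0)  # testimony ignored
fairer = item_score(clicks, downrate, explicit_weight=0.5)  # testimony counted

print(unjust, fairer)  # the downrating changes the score only when weighted
```

The design choice being flagged is not the particular weights but whether the explicit channel has any weight at all.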
From the standpoint of Bayesian model averaging, we can also conceive of testimonial injustice as occurring when uncertainty in model selection is ignored: the subjective “model” of the data subject is discarded in favor of the pre-defined model of the data collector or processor.
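A stylized sketch of that contrast, with invented probabilities purely for illustration: model selection puts all posterior mass on the collector’s model, while model averaging lets the subject’s own self-interpretation shift the estimate.

```python
# Minimal Bayesian model averaging sketch (all priors and predictions are
# illustrative): average predictions over both the collector's model and the
# data subject's self-interpretation, weighted by posterior model probability,
# instead of committing to the collector's model alone.

def bayesian_model_average(predictions, posteriors):
    """Weighted average of per-model predictions by posterior model probability."""
    assert abs(sum(posteriors) - 1.0) < 1e-9, "posteriors must sum to 1"
    return sum(p * w for p, w in zip(predictions, posteriors))

# Probability the user "wants more of item X":
collector_model = 0.95   # inferred from clicks alone
subject_model = 0.10     # the subject's own stated interpretation

# Model selection: posterior 1.0 on the collector's model, subject ignored.
p_selected = bayesian_model_average([collector_model, subject_model], [1.0, 0.0])
# Model averaging: a nontrivial posterior on the subject's model hedges the estimate.
p_averaged = bayesian_model_average([collector_model, subject_model], [0.6, 0.4])

print(p_selected, p_averaged)
```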
Hermeneutical Injustice
Hermeneutical injustice may arise when a data collector or data collection platform lacks the “interpretive resources” to make sense of the data subject’s lived experience, thereby putting him at a disadvantage. The fundamental question is, what counts as what?
Under one interpretation of an event, we may generate statistical regularities, while under another we may get different statistical regularities which become encoded in the parameters of ML models. It follows that there is no one “best” representation or encoding of BBD.
There are simply different representations under different interpretations about what counts as what.
Currently, the categories of events recorded by BBD platforms are typically pre-defined by system designers without any input from platform users. If designers of recommender systems do not consider the diversity and richness of data subjects’ intended actions, values, and goals while using the system, hermeneutical injustice will be unavoidable.
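One hedged sketch of an alternative (the class, event names, and annotations are hypothetical): let the data subject register their own interpretive category for a logged event, and prefer that interpretation over the designer’s raw label when one exists.

```python
# Sketch: letting data subjects contribute their own interpretive categories
# alongside designer-defined event labels, so "what counts as what" is not
# settled by the platform alone.

class EventLog:
    def __init__(self):
        self.user_categories = {}  # raw event -> the subject's own description

    def annotate(self, raw_event, subject_category):
        """Record what the event counted as, from the subject's perspective."""
        self.user_categories[raw_event] = subject_category

    def interpret(self, raw_event):
        """Prefer the subject's interpretation; fall back to the raw label."""
        return self.user_categories.get(raw_event, raw_event)

log = EventLog()
log.annotate("click:item_42", "researching a gift for my sister")
print(log.interpret("click:item_42"))   # the subject's own meaning
print(log.interpret("scroll:down"))     # falls back to the designer's raw label
```

Real systems would need far richer negotiation than a lookup table, but even this shape changes who supplies the interpretive resources.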
The Future of Humanistic Personalization
Making and sustaining a coherent digital self-narrative is a uniquely human capacity which we cannot leave up to others or outsource to automated agents. This sentiment is shared by the GDPR and IEEE EAD principles. We are characters in the stories we tell about ourselves. We know which events define us, we know which values drive us, we know the causes (reasons) behind our actions. And if we do not, we have the capacity to try to find out.
The corporate owners of BBD collection platforms and data scientists may make claims to the contrary based on their statistical analyses of our observed behaviors, but we believe that rights to informational self-determination trump these assertions.
As postmodernists have pointed out, problems of ethics and interpretation are inseparable. What we believe to be true influences our decisions about what is right. But if meaning is socially constructed, data subjects alone cannot solve these problems. It will take both a community and good faith communication to work out the “rules” of our common language game. Data scientists will need to play a larger role in this dialectic of meaning negotiation and identity formation in the digital sphere. After all, if the original meaning of category is to “publicly accuse,” the data subject, as a member of the public, should play a part in that process.
Skeptics might counter that optimizing for narrative accuracy will require a trade-off in the ability of recommender systems to accurately recommend items and predict specific behaviors. Business profits may also be affected. Nevertheless, the GDPR forces us to ask the question:
Do we ultimately wish to represent ourselves according to the needs and interests of business, or humans?
[1] Hermeneutics was originally about the study of methods of interpretation of biblical texts, but was re-invented as an epistemological method by philosophers in the 20th century.
[2] Originally in Greek, the word “category” meant something like “to publicly accuse.” Notice the role played by social consensus.
Translated from: https://towardsdatascience.com/confronting-epistemic-injustice-with-humanistic-personalization-6a6cf40d22aa