Reflections on Care and Dissent in the New AI Lexicon

Lumen Database Team
Jul 28, 2021

In early 2021, the AI Now Institute launched “A New AI Lexicon,” a collection of essays examining the language used to discuss critical AI topics. The project resituates and retools common AI buzzwords and draws attention to underexplored, alternative narratives around AI technology.

So far, the essay contributions have been organized by keyword, like CARE, MAINTENANCE, DISSENT, and OPEN. The essays on “care” and “dissent,” in particular, advance similar critiques of technological determinism and solutionism.

We live today, arguably, in a care-poor and discourse-poor world. People are excluded from supportive communities and systems of care due to medical redlining, biased algorithms in hospitals and other medical establishments, and individualistic, neoliberal ideology; political discourse has tended more towards consensus and misinformation than dissent and rich debate.

Companies and governments have devised ways to foster, or suppress, care and dissent in the digital world. However, as the essays about care and dissent explain, many technological approaches to care and dissent misrepresent both the problem and the possible solutions.

One reason AI systems that purport to offer care and support dissent fall short of transformative effects is that they favor individualistic solutions over collective interventions. Some care-related interventions, like therapy apps, obscure their institutional priorities and exclude actors who could be beneficial in a system of caregiving. They build unidirectional, alienating relations between media and self that place the onus of seeking help and healing on the individual alone. This phenomenon manifests most clearly in the ideology of self-care. In her care essay, Hannah Zeavin explains:

“… the very person seeking care becomes ultimately responsible for coordinating their own care, whether that be a gratitude list, a step count, or a course of CBT. This can occur as part of corporate wellness plans offered in part to manage employees, or as insurance-related incentives; they become forms of self-help and self-improvement long attached to ideologies of individualism.”

Zeavin points to a technological determinism in which algorithms promise a “cure” that is obtainable through solitary engagement with apps and platforms, “turning the work of care into play and play into the work of care.”

The care championed by AI systems often offers a profit-maximizing, scalable panacea for health crises; care, however, should be understood as a complex system of uneven relations between human and non-human actors. Zeavin writes: “Radical care, as Hi’ilei Hobart and Tamara Kneese argue, invites new affective and relational ways of caring mutually, of being for one another that ‘push back against structural disadvantage.’” Being for one another acknowledges the vulnerability, empathy, and codependency in systems of care.

In her essay about dissent, Sareeta Amrute also criticizes algorithmic approaches to online speech that focus on the individual and entrench uneven power relations. Companies like Facebook usually concentrate on the individual’s right to speech when they create policies, shape platform design, or make moderation decisions.

However, Amrute directs our attention to the collective nature of dissent as political practice. Communities of dissenters on online platforms take on disproportionate risk because of the pervasive surveillance on those platforms. Amrute writes:

“Risk is not shared collectively; it is inequitably distributed across geographies and communities… Protecting dissent primarily through individual choice transfers responsibility for safeguarding dissent to the shoulders of activists….”

If technical solutions to caregiving and protecting expression mischaracterize the scope of the problem, they also misrepresent the nature of the solution as technical rather than social. Technological advancements and “transcendence” in digital health, like open access, obscure how these systems recodify the oppressive practices and discrimination of the medical industry, while also creating new sites for surveillance. Zeavin writes:

“AI/machine learning-based care interventions embed and recodify race and gender, whether in the feminization of robots that act as ‘surrogate humans’ in care scenarios, or in the deployment of algorithms that foster the conditions of medical redlining and the further flourishing of white supremacy in medicine in the US context.”

Paired with digital redlining, these kinds of algorithmic health solutions fail to rework the social conditions that exacerbate health crises.

Similarly, some mechanisms to protect speech and support dissent, like accountability audits, are scalable, profit-maximizing, and substantially non-transformative. While accountability audits for algorithms tracking online speech may be useful to an extent, Amrute explains how “accountability curtails dissent by limiting objections to the paradigms of acceptance, adjustment, and informed refusal from an already-operating algorithmic system (Benjamin 2019, Simpson 2014).”

By hiding behind a veil of algorithmic “fairness” or accountability, we fail to ask why we are holding this algorithm accountable; why we are using it and whether we should be; whom it is targeting; and how we have defined an acceptable outcome. This limited outlook obscures how dissenters online are exposed to more sites for surveillance and harassment.

Furthermore, a potential remediation for harassment, like content moderation, does not always serve the needs of those harmed. Reporting harassment and hate speech, which usually results in the post being taken down or, at most, the offender being banned, can re-traumatize the victim and does not remediate the harm done.

In response, some have called for restorative justice as a more effective remedy for online hate speech; the key tool of this approach is communication in order to reckon with and repair harm. Restorative justice asks what those who have been harmed need, such as counseling for a victim of online harassment, and whose obligation it is to meet those needs. This approach re-centers the conversation about online hate speech on social conditions and human relations, moving away from technocentric fixes.

A reimagination of the digital ecosystem to support dissent and free expression also involves rejecting a techno-utopian, techno-libertarian conception of information freedom and free speech. Scholar Rodrigo Ochigame makes this distinction in his essay about radical reimaginations of search engine architecture in Brazil and Cuba:

“The Silicon Valley firms that manage public discourse on the internet, such as Facebook, appeal insistently to ‘free speech’ as an excuse for their business decisions to profit from posts and ads that spread right-wing misinformation. The remarkable innovation of the Brazilian liberation theologians is that they moved beyond a narrow focus on free speech and toward a politics of audibility.”

Ochigame argues for equitable conditions for speaking and being heard. Technological systems should support diverse narratives and imaginations of dissent, and ensure that those voices are safely heard rather than silenced or surveilled for discriminatory purposes.

Ultimately, successfully reconciling dissent and care in digital ecosystems will be rooted in understanding and reworking social conditions and relations, rather than in scaling unidirectional technological fixes.

Gina Markov is a rising senior at Yale University pursuing a bachelor’s degree in Applied Mathematics. Gina is passionate about bridging technology, policy, and ethics, and she is spending the summer as an intern for the Lumen database, doing work related to privacy, copyright, and online speech. In her free time, she enjoys long distance running and hiking, and explores interests in science fiction, antitrust, and francophone literature.
