AI, whiteness, and capitalism: interview with Yarden Katz

Yarden Katz is a departmental fellow in Systems Biology at Harvard Medical School. He holds a PhD in cognitive sciences from MIT and investigates intersections between cognition and biology. He is also interested in the history, philosophy, and politics of science. In 2020, he published the book Artificial Whiteness: Politics and Ideology in Artificial Intelligence.

In this interview with DigiLabour, Yarden Katz talks about AI as a technology of whiteness, the university’s role in rebranding AI, the “carceral-positive” logics reproduced by corporate-sponsored think tanks, and the relationships between human labor, coloniality, and AI’s ideology of universality.

DIGILABOUR: The vision of artificial whiteness entangled with empire and capital seems obvious, but your book reveals something much more powerful. In what ways does your work differ from AI bias and AI diversity frameworks, for example, in relation to race?

YARDEN KATZ: To begin with, I refuse to take “AI” as a ready-made concept. The book challenges the assumption that “AI” is revolutionizing everything and must be analyzed or managed as if it’s a coherent, technological force. I also don’t view “AI” as a technology in the liberal sense, meaning some technical instrument (a computing technique, in this case) with multiple uses.

So, this book isn’t about the “ethics and bias of AI,” but rather about how industries like the one around the “ethics and bias of AI” came to be. Why did these initiatives and discourses become so popular when they did? Why is everyone from corporations to militaries to repressive governments to journalists and various nonprofit groups so interested? What agendas are served by this preoccupation with “AI”? And which conversations are silenced by this loud industry?

As I argue in the book, the framework of bias serves state and corporate power. It obscures historical and institutional violence and instead directs our attention to “machines” or “algorithms.” For example, rather than ask why there is a vast public-private industry around prisons and incarceration (the prison-industrial complex) in the first place, the AI experts ask: Are these systems “biased” against Black people? And is that better than the so-called “artisanal” judgments of court judges?

The turn to bias hides the violence, the institutions that enable it, and their long history – a history that predates computers, obviously.

The bias framework is good for the class of professional experts who want their share of the big AI pie. They can raise lots of money for research on the bias of AI and get media attention for the work. Since the concept is, and as I argue has always been, nebulous and shifting, you can stick “AI” onto anything. So, the experts can say they’re doing “AI and X” (where X could be policing, journalism, national security, environment, etc.). And this has led to many small industries around the “bias,” “fairness,” and “transparency” of AI in the academic and policy circles that corporations like Google and Microsoft have long dominated. Experts can debate whether this algorithm is more or less “fair” than that one, without asking whether computing should have a role at all, let alone questioning the institutional envelope in which computing is applied. These discourses dance around the obvious truth that algorithms are inert; they can’t jump out of the computer and throw someone in prison. And computers can always be unplugged.

The preoccupation with bias, then, distracts from foundational questions such as: why are we talking about “AI” so suddenly? And why is every nefarious foundation, corporation, and government so interested in funding all these AI centers and experts? And is bias a useful frame at all, or does it conceal more than it illuminates?

To your question about race: my broader argument in the book is that AI itself, as a concept and set of discourses, is flexible and changing, much like the ideology of whiteness. I build on Cedric Robinson’s idea that societies based on white supremacy (or more generally, what he calls “racial regimes”) are a “makeshift patchwork” – continually adapting to new conditions and challenges from social movements.

AI, in my view, should be seen through this lens. AI is nebulous and shifting, much like racial hierarchies and categories, continually adapted by experts to meet the aims of empire and capital in ways that uphold white supremacy. What counts as “AI” changes considerably if we look at it in terms of how computing systems are built, but the practice of presenting AI as something that can serve imperial and capitalist interests is constant. In addition, throughout its history, AI has produced racialized, gendered, and classed models of the human self, much like earlier racial sciences (whether anthropometry, phrenology, or eugenics). And AI inherits all the racist flaws of earlier conceptions of “intelligence” (brilliantly critiqued by writers such as Stephen Jay Gould). At the same time, through the language of computing, AI has also pioneered new ways of telling us what the “universal” intelligent human must be.

This is why I argue that AI is a technology of whiteness in two senses: it is a tool that serves a social order based on white supremacy, but it also mimics the form of whiteness as an ideology in being dynamic, shifting, and incoherent at its core.

DIGILABOUR: What is the role of the university in rebranding AI in a context of neoliberalism and “entrepreneurial science”? How can we struggle against this?

KATZ: Universities play an important role in advancing neoliberal visions, and the university itself has been undergoing neoliberal restructuring, much like other institutions. Under the neoliberal model, a scientist or an academic should be an “entrepreneur”: an individual portfolio-builder, seeking to maximize his impact by catering to the needs of the market, and constantly forging new partnerships across state, corporate, and academic lines.

The way the university has taken up “AI” is a prime example of this entrepreneurial spirit.

“AI” gives a fresh gloss to work that was already taking place under labels such as “big data.” For entrepreneurial academics, this rebranding was an opportunity to obtain funding and create new centers in partnership with corporations, the military, and the state. It was also a useful way to tame the emerging public understanding that what companies were calling “big data” (promising it’ll democratize the world or whatever) was actually a system of mass surveillance and control – as Edward Snowden’s leaks in 2013 helped to reveal. The turn to “AI” temporarily distracted from this critical perspective, and the media could instead recycle clichés about robots.

Universities profited from this rebranding because it allowed for new funding opportunities from corporations and billionaire donors, as well as new avenues for publications and influence, all under the banner of “AI.” (Though when I say universities profit, it’s really university elites that profit, not the students or the surrounding community.) So, universities have helped to create the sense that “AI” is a new force reshaping the world, while the initiatives around AI embody the entrepreneurial model of knowledge production. At the same time, in the work that comes out of these corporate-academic initiatives, “AI” is often used as a pretext for prescribing neoliberal policies elsewhere – for instance, concerning labor or the welfare state.

How to struggle against this? It’s an important question. The university as it stands is basically designed to prevent what Harney and Moten call “study” from happening. Among those who insist on study, there are some interesting discussions about abolishing the university. These moves are inspired by an abolitionist tradition. The idea, as I see it, is to eliminate the conditions out of which the university that we now have – deeply entwined with war, imperialism, racism, and exploitation – grew, and to dismantle the university’s own systems of policing, ranking, and individuation.

DIGILABOUR: What are “carceral-positive logics” and what is the role of institutes like AI Now and Data & Society in reproducing these?

KATZ: Carceral-positive logic is a logic that doesn’t fundamentally challenge the carceral state but instead gives it a progressive veneer. It’s different from what most people would think of as “pro-carceral” discourses, such as Donald Trump calling for “law and order” and for political protesters to be beaten up, or Jair Bolsonaro longing for the dictatorship days in Brazil. All that is overtly pro-police, pro-prisons.

Carceral-positive logic, on the other hand, would include things like making companies’ facial recognition software “less biased” against marginalized communities – while presenting this as a social justice project that tries to be “intersectional” with respect to race, gender, and class. Whatever the intentions of individuals doing this work, this argument has harmful consequences. It suggests that systems can be made more equitable or just by ensuring that computing products – built for incarceration and surveillance – are better at recognizing those who are most criminalized. Indeed, Google has preyed on unhoused Black people in Atlanta in order to collect less racially biased data, and a company providing facial recognition to the Israeli government for tracking Palestinians has made a similar argument for its products. So, corporations are embracing this bias argument, and academics working alongside corporations have framed bias reduction as a step towards social justice.

The think tanks that you mentioned contribute to this line of work (though they are by no means the only ones). It’s worth noting that such centers, which dominate the discussion on “AI” and “big data,” are basically corporate constructions. They are embedded within and sponsored by the corporate computing world, a world they’re ostensibly there to critique. Their culture, tactics, and methods are corporate. These centers offer us the same old neoliberal model of “partnerships” and “stakeholders,” according to which linking arms with corporations and the state is somehow always beneficial. So while they now claim to be fighting for social justice, their work, omissions, and analyses tell a different story.

When it comes to carceral state violence, these centers’ proposals don’t question the carceral system and in practice may even augment it – for example, by calling for more data to be collected and for the data to be “audited” by interdisciplinary experts. These proposals extend the experts’ sphere of influence: since AI is apparently used everywhere, the new AI experts must be called in to audit, supervise, curate, and manage these applications. And true to the neoliberal model, these centers foster more connections between academia, governments, and industry. All this pretty much guarantees that the prison-industrial complex won’t be challenged.

Another problem with these centers is how they co-opt and hijack activist initiatives, drowning out the voices of activists on topics such as incarceration. My book was written before the murders of George Floyd and Breonna Taylor and before last summer’s immense uprising against carceral state violence and for Black lives. In many ways, the problem of co-optation has gotten worse since.

This is a point where we can see how AI shifts like Robinson’s racial regime, adapting in response to social struggles. So initially, there was much talk about how “AI” will free us from labor and make everybody rich. That is still going on, but then the “ethics” layer also got added, alongside abstract debates about robots replacing people, or ethical conundrums arising from self-driving cars (and other problems that philosophers enjoy). We still hear a lot about “ethics,” but later “bias” took a more prominent place – especially “racial bias” and “gender bias.” Now, as global struggles against incarceration intensify, the talk of “bias” is losing traction because it’s clearly inadequate for describing this violent world – and so the AI experts are turning to “power.” So most recently, it’s not about being “ethical” or “unbiased,” but rather about the distribution of “power.” But simply replacing “bias” with “power” doesn’t change anything if you don’t raise the more fundamental questions about AI and the institutions that sustain it. I view all of these examples as adaptations of AI.

To give a related example of adaptation: recall that the experts spoke of the “racial bias” in facial recognition systems and initially called for those to be “fixed” – so that the system wouldn’t be biased against, say, Black women. But it was in fact activists in various cities who said: No, we don’t want facial recognition at all, we want to ban it. Then the experts had to change their tune and support the movement for the ban.

However, this change in the experts’ attitudes has limits. The Electronic Frontier Foundation (EFF), for example, still refuses to support a ban on “private” uses of facial recognition software. The EFF’s language is telling: they write that facial recognition software can amplify “carceral bias” (and insist on the opportunities offered by “emerging technologies”). What is being challenged here, again, is not the system of incarceration, but rather the methods by which the state decides which people are thrown in cages.

The point is that with enough resistance, the experts from the nonprofit world can be forced to adapt. Adaptation can also mean co-opting activist movements. Recently, experts in the US have been jumping on the “abolition” label, after decades of activism have made it more palatable in the mainstream.

There is brilliant work by groups such as INCITE on how this “nonprofit-industrial complex” – consisting of foundations (like the Ford Foundation), universities, and nonprofit organizations – has co-opted radical social movements. The think tanks devoted to “AI” or “data” in some ways represent the worst features of the nonprofit-industrial complex. These centers are quite removed from anything real. They’re accountable to no one except their funders and the elites serving on their boards. And that’s why they can only commit to neutral-sounding generalities, like studying the “social implications of technology.”

There’s a lot to be learned here from the groups that have grappled with the contradictions of operating within the nonprofit-industrial complex. As Ruth Wilson Gilmore has argued, “If contemporary grassroots activists are looking for a pure form of doing things, they should stop.” There is no “organizational structure” that is inherently tuned towards liberation. Essentially everyone working in these spaces is complicit (to varying degrees, of course). The questions posed by groups such as INCITE are: how do you stay true to an abolitionist tradition and process, and accountable to a community, while working in these compromised structures? How do you navigate the contradictions? These activists constantly raise these questions and constantly renegotiate the terms of their engagement. This is a level of thoughtfulness that one doesn’t find in these “AI” centers, precisely because these centers are outgrowths of the corporate-state-military sphere of computing.

DIGILABOUR: You describe three epistemic forgeries within AI’s histories. One is the representation of a universal “intelligence” unmarked by social contexts and politics. And, in line with other research, you also claim that the human labor behind platforms like Amazon Mechanical Turk is not universal. We also see this issue in research on Brazilian workers on these platforms. Can you tell us more about the relationships between human labor, coloniality, and the ideology of “universality” in AI?

KATZ: There are several layers to this.

First, as we know, these computing systems rely on human labor. There’s the exploited labor that goes into generating the data set, a point that has received a lot of attention. The reliance on human labor is also the place where the idea of “universality” collapses. That is, depending on who annotates the data, the computing system will produce different results; social context matters. In addition to that, there’s also the (sometimes forced) labor that goes into creating computing systems and gadgets in the first place.

Second, the major transnational corporations can extract this labor and these resources because of existing imperial-colonial relations. There is important work being done on this, showing how colonial relationships continue to manifest through computing systems and services. For example, corporations such as Google take over local industries and force their products and services upon the global South, extracting wealth and exerting influence in a way that wouldn’t be possible without the existing colonial-imperial relations between global North and South, or between the US and much of the rest of the world.

Finally, and ironically, the recent wave of “ethical AI” is promoted and sponsored by some of the worst profiteers from imperial and colonial projects, whether it’s Stephen Schwarzman of the Blackstone Group, or the consulting firm McKinsey, or even Henry Kissinger. They promise to “do good” for the world with “ethical AI,” and elite universities embrace these sponsors and their plans. This configuration helps to ensure that whatever goes on in newly funded “AI” centers won’t challenge the sponsors’ agenda. Meanwhile, the projects of dispossession and wealth extraction that elite universities and their patrons are invested in continue.

DIGILABOUR: How can we challenge AI as a technology of whiteness?

KATZ: This is an important question, but not one for an individual to answer. It is up to committed collectives to deliberate and plan, and I have intentionally left things implicit and open-ended in the book. But I’ll offer a few thoughts.


In the book, I suggest that refusal and withdrawal – taking inspiration from Audra Simpson’s notion of a “productive refusal” from her brilliant book Mohawk Interruptus – have rich, concrete implications for future action, if we agree that the goal is to abolish the conditions that produced “AI” in the first place. I also suggest that refusal of AI is part of a broader project to dismantle white supremacy and imperialism, and so this move should be grounded in those broader efforts. This is also a way to resist the idea that AI is a new and exceptional development.  

There is a tremendous amount of work and thinking to be done here – both on ways to refuse and resist, but also on questions such as: What is “computation,” anyway? What do we hope to get from it? Should it exist at all? What forms can it take, if any, that are compatible with visions of a non-violent world in which we care for the earth?

There are several barriers to pursuing these serious conversations. For one, the world is burning, and understandably there are more urgent issues (even if the harm created by these computing initiatives and the institutions around them is greater than many might think). Moreover, anything involving computing is suffused with propaganda – as evidenced by nearly every term we encounter in this space: “artificial intelligence,” “cloud,” “big data,” “information economy,” etc. There is much to be historicized and unlearned. Lastly, the corporate centers that I and others have critiqued dominate the conversation and attempt to hijack emerging countercurrents. Co-optation is a real issue, and spaces for study are always contested.
