Marxism and Work in the AI Industry: Interview with James Steinhoff

James Steinhoff is a postdoctoral fellow at the University of Toronto Mississauga. He is interested in applying (and updating) the Labour Process Theory framework to analyze the production of AI and its automation with technologies such as automated machine learning (AutoML).

He is co-author of Inhuman Power: Artificial Intelligence and the Future of Capitalism (2019, with Nick Dyer-Witheford and Atle Mikkola Kjøsen) and Automation and Autonomy: Labour, Capital and Machines in the Artificial Intelligence Industry (2021). Steinhoff has previously discussed his new book on the DigiLabour YouTube channel.

This new book provides a multifaceted Marxist analysis (history, political economy, labour process) of work in the AI industry, and argues that Marxist theory is essential for understanding the contemporary industrialization of the form of artificial intelligence (AI) known as machine learning.

Read the interview with James Steinhoff:

DIGILABOUR: In recent years, many critical books on AI have been published (for instance, by Yarden Katz, Kate Crawford, and Mary Gray). How does a Marxist understanding of AI differ from other perspectives?

JAMES STEINHOFF: The recent wave of critical literature on AI is welcome insofar as these books take a historical perspective on AI, are attentive to the material conditions of AI and implicate AI within various networks of power. However, what I think they lack, and what a Marxist perspective provides, is a systemic understanding of capital. Approached systemically, capital itself can be seen as a sort of AI, an algorithm with the narrow goal of valorizing value, which demands the subjection of technology, labour and capitalists themselves, to particular functions in service of that end. Manifested through the law of value, via capitalist competition, this is the shunned determinism which lies at the core of Marx’s theory. Kate Crawford’s Atlas of AI, for instance, lacks a systemic theory of capital and talks about how AI is implicated in environmental destruction, exploitation of labour and forms of racialized discrimination, to take a few issues, yet it cannot offer an explanation for why AI is implicated in all these things. Crawford correctly states that the “core issue is the deep entanglement of technology, capital, and power, of which AI is the latest manifestation” (p. 218). But what is this power? A Marxist account understands this power as the form of value with which almost all social relations in the world are structured in accord. My book aims to impart this perspective to the study of AI and its production.

DIGILABOUR: You state that “AI work presents us with yet another example of the fragmentation, deskilling and automation of labour”. What’s new in terms of fragmentation and deskilling?

STEINHOFF: In the context of the AI industry, there are at least two new things. One is that the labour of data scientists and machine learning engineers is being fragmented, deskilled and automated. This might come as a surprise if you listen to industry discourse describing data scientist as the ‘sexiest’ job of the 21st century and, more generally, software production as the future of work. It might also come as a surprise if you have read the labour process studies of software production which describe it as irreducibly communicative, ad hoc and not amenable to Taylorist management, or if you follow the immaterial labour theorists who discern in software-mediated work a nascent autonomy of labour from capital. In fact, the data science work that produces AI is already being dissected into its components, with the routine bits being given to less valuable data analysts and, in some cases, automated. Even this relatively new, highly skilled type of labour is not immune to the corrosive effects of the law of value.

The second thing, and a really interesting one to me, is that data science labour is being fragmented, deskilled and automated with machine learning tools produced by data science labour (automated machine learning or AutoML). We often hear of AI as an automation technology threatening this or that kind (or all kinds) of work. But in the context of data science work, the workers are automating their own work. To a certain extent, the automation of tasks is standard procedure in software work. However, automated software production, in a more complete sense of producing fully functioning programs, has historically had very little success. The pattern recognition capacities of machine learning might change this. Marx talked about how large-scale industry could not become the dominant form of production until it industrialized the manufacture of machine parts, achieving the production “of machines by means of machines” (Capital, Volume 1, p. 506). With AutoML we may be seeing a new instance of this phenomenon.

DIGILABOUR: Your critique is mainly directed at post-operaismo and immaterial labour. In recent years, there has been a return of operaismo in analyses of platform labour. What is your view on this “new (digital) operaismo”?

STEINHOFF: You’re referring to the work of people associated with Notes From Below? I think it’s really great and I’m very happy with the return to focusing on workers that they advocate. Their ongoing class composition project is extremely important and I would urge anyone who reads this to check it out. A lot of recent research in my field (media studies) has focused on users of apps and platforms, which is important, but I think attention also needs to be paid to the workers who create and maintain those venues. Operaismo and post-operaismo are really very different. My critique of post-operaismo centrally involves its replacement of the working class with the digitally networked “multitude” as the revolutionary subject, a theoretical move which I believe the new operaismo also rejects. My only concern about this approach is that it might, in focusing on the composition and capacities of labour, neglect the study of the technological capacities of capital.

DIGILABOUR: What are the main contributions of the New Reading of Marx (NRM) to understanding AI work?

STEINHOFF: NRM and value-form Marxism generally are important to understanding AI work insofar as they attribute no necessary revolutionary significance to deterritorialized forms of high tech work. Contrary to post-operaismo, which posits a historic rupture in labour-capital relations caused by the proliferation of information technologies, a value-form perspective asks about the ways in which the transformations of value can occur in deterritorialized contexts. Emphasizing the malleability of the value-form is important for me because it reveals the continuity of AI work with other types of work, rather than sensationalizing its differences.

DIGILABOUR: What is synthetic automation?

STEINHOFF: I use the term synthetic automation to describe the automation of labour processes without a preceding codification based on the study of workers at work. Capital does not know how to work, so it must capture this knowledge from labour. Consider Taylorism: the scientific manager studies the worker at work and makes a detailed account of the labour process, breaking it up into discrete steps. Once the dissected labour process is made visible to management it can be optimized for management’s objectives. The establishment of such a ‘one best way’ is codification. The components of a codified labour process can be redistributed amongst less expensive workers, and perhaps automated. You can’t build a machine or write a program to automate a labour process that has not been dissected like this. Or at least, you couldn’t. Machine learning presents, in a nascent form, the possibility of automating without a preceding codification. This is visible in how components of AI work, much of which has not been codified, are being automated. My interviewees described the production of machine learning as an art, drawing on intuition, experience and trial and error – one engineer described it to me as a “dark art”. Yet, the application of machine learning to the production of machine learning (AutoML as discussed above) is enabling the automation of these uncodified labour processes through brute force iterative experimentation. Since, for instance, there is no ‘one best’ neural network architecture, building one for an unfamiliar application is said to require intuition, experience and good old trial and error. However, given the proper data, AutoML can be applied to generate all kinds of candidate architectures and evaluate them at inhuman speeds, skipping over the codification of the labour process via sheer brute force. 
I describe this approach to automation as synthetic because it assembles an automation process from some data other than the observation of living labour. Here I am inspired by Alfred Sohn-Rethel’s discussion, in Intellectual and Manual Labour, of the synthetic timing of work, which refers to a conception of time not captured from labour, but produced by capital from abstract specifications constructed first of all to advance valorization.
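The brute-force experimentation Steinhoff describes can be sketched in a few lines of code. This is purely an illustration, not anything from the book or a real AutoML system: the search space, the candidate "architectures" (depth and width choices) and the toy `evaluate` function are all hypothetical stand-ins for what would, in practice, be the training and validation of real models on data. The point is only that no codified 'one best way' is needed; every candidate is simply tried at machine speed.

```python
import itertools

# Hypothetical search space of candidate "architectures": the depth and
# width choices an engineer would otherwise make by intuition, experience
# and trial and error.
SEARCH_SPACE = {"depth": [2, 4, 8], "width": [16, 64, 256]}

def evaluate(arch):
    # Toy stand-in for training and validating a model on data. A real
    # AutoML system would fit each candidate; here a made-up score rewards
    # capacity with a diminishing-returns penalty, purely for illustration.
    capacity = arch["depth"] * arch["width"]
    return capacity - 0.01 * capacity ** 1.5

def grid_search(space):
    # Enumerate and score every candidate configuration: no codified
    # labour process, just exhaustive experimentation at inhuman speed.
    keys = list(space)
    best_arch, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        arch = dict(zip(keys, values))
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

Real AutoML tools use cleverer strategies than exhaustive enumeration (random search, Bayesian optimization, evolutionary methods), but the logic is the same: the 'intuition' of the engineer is replaced by iterated evaluation over candidates.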

DIGILABOUR: How do you relate machine learning and fixed capital?

STEINHOFF: I think the most significant use of machine learning today is its employment as fixed capital, i.e. when it is employed in a production process. Even if “AI adoption”, as the business literature refers to it, is actually quite low so far, and even if today many AI business endeavours result in abject and sometimes hilarious failure, the widespread integration of data-driven analytic and predictive capacities into all kinds of technologies presents a scenario worth considering. The simple question of what can be automated now that couldn’t be before remains an interesting one. It’s also interesting to think about what the knock-on effects of widespread machine learning fixed capital might be. No doubt, ever more intensive surveillance and data collection, but as people and governments begin to resist such things, capital will seek data by other avenues, such as the generation of synthetic data, or the use of virtual environments like OpenAI’s Gym, which allows the training of reinforcement learning algorithms (which learn by ‘doing’) without the costs and difficulties of dealing with the real world. One might argue that since machine learning relies on data primarily generated by the actions of humans, its incorporation into capital as fixed capital presents a potential point of weakness. I’m more interested in discerning how capital is attempting to mitigate that weakness by harnessing the recursive capacities of machine learning to increase its autonomy from labour.
