Taina Bucher is an associate professor in screen cultures at the Department of Media and Communication, University of Oslo, Norway, and the author of the book If… Then: Algorithmic Power and Politics. She is one of the leading scholars in platform and algorithm studies.

In this interview, Bucher discusses the algorithmic imaginary and its relation to reception and audience studies, the kairologic of algorithmic media, the ontological politics of algorithms, and what it means to be ambivalent in scholarly discourses around digital technologies. She also offers a preview of her forthcoming book on Facebook.

DIGILABOUR: How can research on the algorithmic imaginary be taken further? Is it correct to say that researching how people produce meaning about algorithms is a form of reception and audience studies in the tradition of cultural studies?

TAINA BUCHER: Yes, you could absolutely say so. How people make sense of algorithms is essentially about interpretative work and processes of meaning-making. Cultural studies and media reception analysis, especially as developed by Stuart Hall, placed great emphasis on the relations between producers and consumers in the reception of media texts. According to Hall's semiotic model of encoding/decoding, audiences are not passive receivers but rely on various interpretative repertoires. Similarly, thinking about the algorithmic imaginary necessitates a critical analysis of production on par with consumption. A fundamental difference today is that the relations of production and consumption are much more cyclical and multidimensional than what traditional audience and reception studies were concerned with. It is not just that platforms encode meaning onto algorithms that users then decode or make sense of (see Decoding Algorithms by Stine Lomborg and Patrick Heiberg Kapsch). Because user action gets fed back into the encoding process, consumption is always already part of production, and vice versa. This is also my point in positing the productive power of what I call the algorithmic imaginary. Ways of thinking about what algorithms are, what they should be and how they function go beyond the interpretative realm. How people think about algorithms and sociotechnical systems affects how they use these systems and how they are oriented towards them. It does not matter as much whether these imaginaries are true or not, because when enacted they become part of the truth, if by this we mean the ways systems work, or how these imaginaries in turn affect the business models and the workings of the companies behind the algorithms.

DIGILABOUR: How has your research understood people’s perceptions of privacy on digital platforms?

BUCHER: My research has not looked at privacy specifically. That said, concerns over privacy keep popping up in many aspects of my research. For example, even if I did not specifically ask about privacy when interviewing people about their perceptions of algorithms, privacy was often addressed as one of the core concerns people had when talking about algorithmic processes. Concerns over privacy were particularly prevalent when people talked about “uncanny” or “creepy” algorithmic outcomes, as in “how could the algorithm have known?” I’ve also been part of a research project concerned with the relations and interdependence between personal information and privacy, arguing that the traditional conception of informational privacy needs to be reconceptualized in an age when the personal in “personal information” is intimately tied to networks and aggregated data. That is, networked mechanisms certainly challenge how personal information is understood. Take personalized music recommendations. At what point was it my listening behavior or personal preferences that kicked in and influenced the recommendation, as opposed to other people’s similar behaviors and aggregated data points? In a networked algorithmic world, the question remains as to where the boundaries between the I and the other are to be drawn, and to what extent such boundaries are meaningful, and to whom. Due to other commitments and moving countries, however, I am no longer an active part of this particular research project, so I am probably as eager as you to find out what my former colleagues at the University of Copenhagen come up with.

DIGILABOUR: What is the kairologic of algorithmic media?

BUCHER: It refers to the specific temporal regime of algorithmic media, characterized by the logic of the right time. I develop this notion in a recent article of mine in New Media & Society, in which I argue that the notion of right time stressed by digital platforms such as Facebook is reflective of a new temporal regime produced by an increasingly algorithmic media landscape. During the 1990s and early 2000s, the notion of real-time emerged as a particularly prolific term for talking about the acceleration of everyday life and the breaking down of traditional time-space boundaries. It arguably became the most recognized term for talking about the temporality of the web. However, media technologies that fundamentally rely on algorithms to sort, filter, rank, and curate content are not merely operating in or producing distinct forms of realtimeness; they hinge on a set of temporal relations that work to produce a particular temporal landscape characterized by a time that is right (e.g. news feeds presenting “the right content, to the right people, at the right time”).[1] To better understand this increasing focus on the right time, I have found it useful to turn to the classic Greek notion of kairos, understood as an opportune moment to say or do something. My interest in temporality and the notion of kairos began some 13 years ago, when I wrote my master’s thesis on the topic. It is only recently, however, that I have started to publish on these topics. Anyway, to answer your question, the ways in which algorithmic media hinge on kairos do not map seamlessly onto existing notions of kairos. In order to differentiate the specific rhetorical and biblical understandings of kairos from the ‘right time’ inscribed into the current media landscape, I refer to it as the ‘kairologic’ of algorithms. This is to say that algorithmic media systems carry with them the logic of kairos, where instantaneous mediation is no longer the end goal, but rather the personalized timing of mediation. The article works to develop what this logic of kairos means and how it can be conceptualized in the context of Facebook especially.

DIGILABOUR: What is the ontological politics of algorithms?

BUCHER: It is the world-making capacity of algorithms. I borrow the term “ontological politics” from the writings of the STS scholar Annemarie Mol, who uses it to convey how realities are never given but are shaped by, and emerge through, different interactions.[2] It’s a politics that has to do with orientations: how issues are problematized and gain currency or not, how bodies get shaped and oriented in certain ways rather than others, how some things become more or less probable. This is to say that the power and politics of algorithms may not necessarily be located in the algorithm itself (although they might be), but that the most powerful dimensions of algorithms have to do with the ways in which these systems govern the possible field of action of others, and how these possibilities are made more or less available to certain actors in specific settings.

DIGILABOUR: What are the challenges of being ambivalent in communication and media research, specifically in platform and algorithm studies?

BUCHER: I guess it’s the same as occupying an ambivalent position in general. By ambivalence I do not mean indecisiveness or indifference, but that in-between state of recognizing and addressing the both/and (as opposed to the either/or).[3] In a commentary article in Social Media + Society, I try to grapple with the challenges of this position, especially in the context of scholarly discourses around digital technology. Going back to my master’s degree again (it seems those were formative years), at that time, in 2006/2007, quite a few books had come out on the differential effects of the Internet. It seemed very either/or. On the one hand, there were all these seemingly optimistic accounts of the participatory potentials of the web and the democratizing effects of networked media, blogs and social media platforms (when we still believed in their positive potentials). On the other hand, there were a lot of books coming out about the damaging effects of the Internet. Children would be harmed, attention lost, and our ability to read books gone forever. As social media research developed, similar polemics were at play: good vs. bad, positive vs. negative, optimistic vs. pessimistic. This is nothing unique to media research, of course. Polemics existed long before I wrote my master’s thesis and will continue to exist. That is not the point. Having researched and written about algorithms for nearly a decade now, my fascination with ambivalence as a form of critique stems in part from what I take to be a certain demand for definitive accounts of these topics. The more journalists called me for definitive answers on whether algorithms are good or bad (in reality more often a question of “how bad, on a scale of bad, are these algorithms?”), the more I started questioning such critiques-on-demand. I have always been fascinated by the ease with which critique (disguised as debunking myths and taking a negative stance) can be voiced, and by the disproportionate difficulty of taking a seemingly nuanced or in-between position. What I argue in that piece is that grappling with an ambivalent position often takes more, not less, work. Far from being agreeable or a cop-out, the ambivalent position means having to negotiate an ongoing tension without necessarily finding resolution. The challenge, as I see it, is to ask what a critical interstitial position might entail. That is, if the usual scripts of “good” and “bad” aren’t readily available, what other stories could we tell? What stories might have been overlooked or seen as unimportant because they lacked a hero or a villain? In other words, what the ambivalent position entails is engaging in media critique and storytelling practices that do not automatically resort to protagonists, antagonists and other usual suspects. Instead of filling the gaps (not all gaps need filling!), the ambivalent position looks at what gets lost in the cracks.

DIGILABOUR: Can you tell us something about your forthcoming book on Facebook?

BUCHER: Sure. The book is entitled Facebook, as it is part of the Digital Media & Society book series at Polity Press. The series features books on specific platforms and topics, such as YouTube, Twitter and Instagram. Somewhat surprisingly, there wasn’t yet a book on Facebook, so the editors approached me in 2018 and asked whether I would be interested in writing it. At first I was reluctant, because I didn’t see myself writing about a platform that I personally felt so done with. Wasn’t Facebook a bit passé, something that had its heyday a decade ago? The more I thought about it, the more I realized that this was the most misunderstood aspect of Facebook: the idea that nobody cares about or uses the platform any more. While that might be true for a small fraction of early adopters, and perhaps for a younger generation to a certain degree, Facebook still matters a lot. Another common misconception, which was also part of the reason for taking on this project in the end, is to treat Facebook as just another social media platform. While it has become somewhat commonplace to suggest that Facebook is no longer a social media platform but an infrastructure, this is only part of the story. What I argue in the book is that we have become so accustomed to thinking of Facebook as social media that it keeps us from developing a language to talk about it on its own terms. Facebook is not just another social media platform, a mobile app, a social network site or an infrastructure. Facebook is Facebook. This becoming larger than what the categories and words normally used to describe it are capable of conveying does not happen very often, but when it does (perhaps comparable to the Internet), we need to investigate what this becoming conceptual means. The book argues that Facebook has sort of become a concept of its own. Just like other abstract concepts, such as love, the dictionary falls short of providing a definition of Facebook. So this is what I explore in the book: what this becoming conceptual on the part of Facebook means, and how to start a conversation about Facebook that doesn’t immediately revert to the language of social media or other common tropes.

[1] For more on the idea of realtimeness, see Weltevrede, E., Helmond, A., & Gerlitz, C. (2014). The politics of real-time: A device perspective on social media platforms and search engines. Theory, Culture & Society, 31(6), 125-150.

[2] See Mol, A. (2002). The body multiple: Ontology in medical practice. Duke University Press.

[3] This is similar to what Whitney Phillips and Ryan Milner argue in their excellent book The ambivalent internet: Mischief, oddity, and antagonism online (Cambridge, UK: Polity Press, 2018).