Toxic disinformation in information disorder

Camilla Quesada Tavares

People who use social media frequently have likely encountered hostile remarks or been the target of such content. Social media platforms provide an ideal environment for the circulation of these violent discourses, which adds to the information disorder that permeates these settings. This becomes an even bigger problem when we consider political agents, since this kind of conduct can: 1) shift the conversation away from a pertinent subject; and 2) deter agents in this field, particularly women, from staying in politics.

In this text, therefore, I briefly discuss the role of discursive violence directed at these public figures on digital platforms, adding a new element: the toxicity of this discourse.

Context

Disinformation is not a new phenomenon: lies and irrelevant information have always existed in society, particularly in political and electoral contexts. Studies generally hold that the novelty of these messages lies in their distribution and reach, producing narrative conflict and echo chambers that frequently help them spread far more quickly than messages that provide accurate information.

We understand disinformation as the deliberate spread of false or altered information with the intent of influencing public opinion. From there, we assume a second level of disinformation, situated within the context of “information disorder” and characterized by messages whose main objective is to directly attack someone on the basis of personal issues or private life, but which can also be used to divert the focus of attention. We refer to this level as toxic disinformation.

Most research on information disorder focuses on processes on Twitter, Facebook, or YouTube, to cite just a few cases. However, we have noticed an increase in attacks targeting political figures in these spaces. These attacks deflect attention from the public conversation and draw on elements of the personal/private sphere to legitimize discourse against certain agents, thereby also contributing to the disinformation process, as we argue below.

Research questions and the argument

Considering the preceding discussion, I have been reflecting on the discursive violence I have observed on social media platforms:

i) What kind of aggressive discourse is directed at political actors on Brazilian social media platforms?

ii) What are the characteristics of this discourse, and how likely is it to contaminate the political conversation?

iii) Are there differences in the distribution of content that uses slurs and content that sounds hostile without necessarily adopting explicitly derogatory terms?

iv) On social networks, are political figures targeted equally, or are there disparities based on factors such as gender and political affiliation?

Different theoretical approaches support the questions above, but we propose to investigate them through the lens of toxic speech. Unlike other studies that resort to the theoretical concepts of hate speech or incivility, we advocate adopting the concept of toxicity to define this discourse, because we believe that swearing, derogatory terms, and nicknames are not the only modes of domination in discursive practice.

We consider that hate speech leans towards the notion of “words that hurt”, while toxic speech brings together a special category of even more powerful slurs, which form part of the systems of oppression in which they are embedded. A toxic message need not contain swearing; it can instead present a set of words that, given the context, produce a form of symbolic violence, for example through humor, sarcasm, or stereotypes. Its toxicity creates and reinforces harm to both targets and communities and interferes with public opinion. For this reason, we consider this violence to be toxic disinformation.

From this angle, discourse is understood as a social practice that reveals power dynamics. Toxic discourse can exist even in the absence of outright insults or swearing, when particular words and terms combine with the political, social, and historical context. Prejudice, for example, can be manifested in many ways beyond name-calling. The tone and context of a phrase such as “Hey, here come those people from the Northeast who don’t know how to vote” show that it has an explicit target and is steeped in prejudice against people from that region, without necessarily using derogatory language or any explicit offense.

Toxicity would be a poison that contaminates the speech act, highlighting the mechanisms by which speech acts and discursive practices can cause harm. It is worth noting that this is not about dismissing hate speech, but about understanding it as a category within toxicity, a broader and more diffuse concept, although I recognize that a theoretical effort is still needed to conceptualize and differentiate one from the other. The turning point here is the relationship between discursive practice, which reveals an implicit meaning, and the social and political context. As a result, language becomes an important element of toxic disinformation, especially when related to the disinformation industry more generally. This is what we will focus on in future work, in addition to offering empirical data on toxic disinformation in the 2022 Brazilian electoral context.

Camilla Quesada Tavares is a professor in the Graduate Program in Communication at the Federal University of Maranhão (UFMA) and coordinator of the research group Communication, Politics and Society (COPS).

This op-ed is part of the project “Global Democracy Frontliners: Transnational Research Coalition for Tech Accountability and Democratic Innovations Centering Communities in the Margins”, coordinated by Jonathan Corpus Ong with support from Luminate.
