Algorithmic Moderation of Content: The Need, Implications and Challenges

Gayatri Raman
Dec 1, 2021

Introduction:

On 6 December 2020, the Twitter account of noted journalist Salil Tripathi was taken down, apparently for tweeting a video of a reading of a poem titled "My Mother's Fault". The poem, which was about the Babri Masjid demolition in 1992 and some other prominent instances of communal violence (Taskin, 2020), had existed in print and open circulation long before. The outrage and denouncement of this act were swift and came from all quarters, with noted writer Salman Rushdie calling it an "outrageous act of censorship against one of the most important advocates of free speech" (Taskin, 2020). Similar sentiments were shared by Shashi Tharoor when he tweeted, asking if Twitter's algorithms lacked a "human being applying common sense before taking such actions" (Taskin, 2020), echoing the "anthropomorphizing" of non-human interventions quite literally (Johnson, 1988, p-303).

This is merely one instance in recent times where the apparent arbitrariness of Twitter's actions against "hate speech" and problematic content has been called into question. As the world grows more connected, social media's relevance as a forum for dialogue, discussion and dissemination, or as "curators of public discourse" (Gillespie, as cited in Binns, Veale, Van Kleek & Shadbolt, 2017), cannot be overstated. In such a reality, the idea that non-human actors and forms of technology will be navigating and policing the sensitive terrain of content moderation is one that deserves closer scrutiny.

What are algorithms?

Humans and their interactions on and with the Internet are increasingly exercises in data generation, exchange and analysis (Willson, 2017). Our thoughts, opinions and choices about all things, banal and pertinent, from elections to climate change, are reflective of this reality.

Willson puts it rather succinctly when she draws upon the idea of the naturalisation and invisibilisation of the several kinds of algorithmic inputs "we engage with and that shape the form and flow of our individual and social lives in space and time" (Willson, 2017, p-139). Through Bourdieu's construct of the habitus, she is able to illustrate how algorithms aid in making certain routines and processes fit so seamlessly into our lives that we no longer note their presence. She characterises "The Internet as a series of systems within which many people navigate and, therefore, must devise 'ways of operating or doing' the every day" (Willson, 2017, p-139). These ways of "operating and doing" are increasingly functioning "semi-autonomously" — there is decreasing dependence on human actors to navigate "the online practices and systems — of searching, communicating, and other practices"; they are instead "delegated" to code, software and algorithms (Willson, 2017, p-139).

"An algorithm is delegated a task or process and the way it is instantiated and engaged with in turn impacts upon those things, people and processes that it interacts with — with varying consequences" (Willson, 2017, p-139). As the world becomes increasingly Internet-driven and Internet-dependent, the quantity and range of the tasks delegated to these algorithms have increased manifold (Willson, 2017). Algorithms today, therefore, 'know' a lot about us — they track our choices on e-commerce platforms to make recommendations to us (Willson, 2017, p-142), they know what music we like, and so on; that is, they know with great accuracy who we are when we are online. In addition, they keep growing in their power and efficiency at performing these tasks, as they are constantly 'learning' from our actions — they are designed to operate iteratively (Willson, 2017, p-142). This power can be leveraged to bring to fruition "possibilities to shape, direct and reflect outcomes and behaviour on the basis of algorithmic sorting of large data sets gleaned from everyday activities alongside the ability to test or experiment with these and to be able to track and identify resultant changes", which could be both "technical and social" (Willson, 2017, p-143).

Increasing everyday dependence on and engagement with and through the online, and extending out to our engagement with other objects (Internet of things, driverless cars or with robots, e.g.) render these relationships and the ‘algorithmization’ of everyday practices as commonplace and unremarkable and yet relatedly, worthy of closer critical attention (Willson, 2017, p-143).

It is therefore easy to construe algorithms as powerful beings operating in the Internet ether, invisible but all-knowing — and humans as mere 'puppets' whose choices and interactions in cyberspace are actually being driven by shadowy algorithmic wizardry. However, this is not entirely true — algorithms operate in the way they do largely because of the priorities of the human actors and organisations that designed them (Willson, 2017), or whom Johnson refers to as the "enunciators" (Johnson, 1988).

The B-word: Bias

There emerges, therefore, ample scope for biases that "could be beneficial or detrimental or both, similarly they could be intentional or unintentional" (Willson, 2017, p-145). The ways in which various algorithms function can demonstrate the existence of a unique politics or "particular practices of power" (Willson, 2017, p-145) that drive them, privileging a certain kind of behaviour or series of actions which then becomes self-propagating. Particularly with respect to social media, the scope for bias is immense and is visible in various dimensions and forms — Willson cites Bozdag's work on Facebook's and Twitter's methods of deciding what posts or forms of interaction rank highly on their traction scales as an example of this phenomenon (Bozdag, 2013, as cited in Willson, 2017, p-145). Willson also cites Friedman and Nissenbaum's delineation of the kinds of bias that can be present in the workings of such platforms, where besides the individual and societal biases that are "pre-existing: from social institutions, practices and attitudes", "decontextualised algorithms" also find mention (Friedman & Nissenbaum, 1996, as cited in Willson, 2017, p-145). The danger heightens when algorithms assume the capability not just to "tell us what we are seeing, but what we, the user, should be seeing" (Willson, 2017, p-145).
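
To make the 'self-propagating' dynamic concrete, the sketch below is a deliberately toy engagement-based ranker. It is not the ranking logic of Facebook, Twitter or any real platform; every name in it (Post, score, rank_feed, the weights) is hypothetical. The point it illustrates is simply that whatever behaviour the scoring function privileges earns more exposure, and that exposure feeds the next round of scoring.

```python
# Illustrative toy only: an engagement-weighted ranker with a feedback loop.
# It does not reflect any platform's real ranking system.
from dataclasses import dataclass

ENGAGEMENT_WEIGHT = 1.0  # assumed: likes and replies drive the score
EXPOSURE_BOOST = 0.1     # assumed: prior impressions nudge the score further up


@dataclass
class Post:
    text: str
    likes: int = 0
    replies: int = 0
    impressions: int = 0  # how often the ranker has already surfaced this post


def score(post: Post) -> float:
    """Posts that already attract engagement (and exposure) score higher."""
    return ENGAGEMENT_WEIGHT * (post.likes + post.replies) + EXPOSURE_BOOST * post.impressions


def rank_feed(posts: list[Post], top_k: int = 10) -> list[Post]:
    """Surface the top-scoring posts, and record that exposure for next time."""
    ranked = sorted(posts, key=score, reverse=True)
    for post in ranked[:top_k]:
        post.impressions += 1  # today's visibility raises tomorrow's score
    return ranked[:top_k]
```

Whatever 'traction' means to such a scoring function, the loop amplifies it; the politics lives in the choice of weights rather than in any single ranked result.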

The suspension of Salil Tripathi's account (Taskin, 2020) is only the latest such instance. As the prominence of Twitter as a site of dialogue and discussion continues to rise, there is also a proportionate rise in the frequency of outcry and indignation over its algorithmic arbitrariness in moderating content or what it deems problematic behaviour. What follows is usually a sense of frustration or "injustice" (Gillespie, 2020, p-3), as was visible in the outcry over Salil Tripathi's suspension, which stems from the realisation that the "affective experience" (West, 2018, as cited in Gillespie, 2020, p-3) of the individual user is being entirely and unfairly overlooked. The opacity of these non-human actors leads to them being viewed either as suspicious agents of "naked corporate self-interest" (Gillespie, 2020, p-3) or as "megamachines who have little concern for their users as individuals" (Hill, 2019, as cited in Gillespie, 2020, p-3).

No Carrots, Only Sticks: The move to algorithms

It would be useful to preface the discussion on algorithmic content moderation with an understanding of why and how non-human technological interventions came to be at all. In his 1988 essay on the mixing of humans and non-humans, Johnson contemplates what the implications are of delegating to either humans or non-humans, and how these choices are made (Johnson, 1988), with respect to the functions of a simple but extremely essential everyday object: the door and the door groom/hinge. There are some pertinent analogies between the case he makes and the central question of algorithmic content moderation, which are explored ahead.

The first major reason why there is a turn to technologies is therefore to "bring about maximum effect with minimum effort" — actors (or "enunciators") turn tasks that would require a steady supply of human effort or supervision into ones that can be simply and reliably performed by a non-human actor, thereby "delegating" or "transforming" them (Johnson, 1988, p-299). This "substitution" of humans with non-humans is at the heart of these processes — whether it is for a door groom or for content moderation on social media using algorithms. Contrast this with employing a human solely for the task required — be it ensuring that a door is properly shut, or that problematic content is not setting social media (and, in these times, the "real" world) ablaze. Several problems emerge which can be traced back solely to the human actor responsible — the door groom could fall sick, get bored and doze off, go on strike, and so on (Johnson, 1988), and the moderators may reveal or develop biases of the many kinds elaborated on above. Beyond this, human actors carry costs — the problem of "scalability" arrives, as Jack Dorsey's remarks, quoted later in this piece, make plain. Even if we hypothesise for a moment that we could simply set out rules and guidelines to enforce the desired behaviour, i.e., the "prescription" (Akrich, 1987, as cited in Johnson, 1988, p-301), we would still be left depending on humans to follow them. So non-human actors are proving to be reliable and economical. What else is tipping the scale, or their side of the table (Johnson, 1988, p-299), in their favour?

Another very illuminating idea that Johnson puts forth is that of "disciplining" humans — why should people need door grooms in the first place? With the passage of time, humans seem to have become increasingly unmoved by everyday requests for 'decent' behaviour — shut the door after you enter, don't attack strangers for having opinions that don't match your own, and so on.

This is when there emerge two distinct but actually intertwined circumstances — one, the "appeal to god" (Johnson, 1988, p-305), i.e., higher powers who would be more impressive in their command over human behaviours, and two, the need for "drugs" (Johnson, 1988, p-305) — ways and means of exerting ever more passive or active pressure on humans to behave in the 'prescribed' manner: from verbally informing and requesting each sociologist to keep the door closed (producing "local cultural conditions", Johnson, 1988, p-301), to notes asking that the door be kept closed, to hiring a door groom, to finally moving to a non-human actor, the door hinge.

There is, however, a need to note that the move is not always from "softer to harder devices" or mechanisms (Johnson, 1988, p-305). There is evidence of the "extra-somatic" becoming "intra-somatic", i.e., a large body of now well-imbibed and well-performed skills and knowledge that may once not have existed or been so apparent (Johnson, 1988, p-305) — for example, the author describes how traffic control is enforced by traffic signals without any very apparent ability to penalise or punish. So what is being constructed are "scenarios" or scripts in which "inscribed readers" (Johnson, 1988, p-307) will perform in a certain anticipated, 'prescribed' manner — this is the body of what becomes the algorithm, built by the "semioticians" or "enunciators", i.e., the programmers and technologists who "define actors, endow them with competencies, make them do things and evaluate the sanction of those actions" (Johnson, 1988, p-306).

From Johnson's deliberations, some themes recur and emerge very clearly. One, humans do not switch to non-humans in an instantaneous fashion; this is an evolution, which bears the imprint of the economic and logistical priorities of the enunciators who enable such shifts. Two, enunciators often operate within, and recreate, fixed structures; there is a fixed problem at hand to solve, a fixed list of priorities, scenarios, actors, et al., who are part of this process. Three, the problems or limitations of the sociotechnical solution arrived at may become apparent in ways that include the marginalisation of those who were not the primary actors in the scenarios devised — e.g., the hydraulic door closer is an efficient solution to the problem of time-bound door closing, but cannot be readily used by the author's nephew, or by those who lack the required muscle strength (Johnson, 1988).

The Consequences

In his 2020 paper on content moderation, AI and the question of scale, Gillespie acknowledges the genuine belief of technologists today in the "promise of AI" — that AI seems like the silver bullet for ensuring clean, controlled and non-interfering ('non-judgemental') moderation of content on social media forums. AI tools seem best poised to deal with the "quantity, velocity and varieties of content" (Gillespie, 2020, p-1) and the complexities of the violations incessantly occurring. These are messy, ever-evolving situations, and the level of nuance required in dealing with such violations cannot be overstated, for a number of reasons. Free speech and its surrounding legalities are only one facet; how do you successfully recreate respect for, and the efficiency of, the "administrative machinery", which was once a visible policeman or policewoman (Johnson, 1988, p-306) and is now an invisible non-human actor? In a reality where "the consequences of online harms now extend beyond the platform on which they occur" (Gillespie, 2020, p-1), how do you ensure that this "administrative machinery" functions without bias, without overreaching its powers, and without the constant need for "maintenance" (Johnson, 1988) or a further level of human supervision, i.e., with the efficiency of effort and input that was desired and achieved in the case of the door hinge/groom?

At this stage, it is also worth drawing out the considerable contrast between the thinking of technologists on non-human interventions today and that outlined by Johnson more than three decades ago:

"Specialists of robotics have very much abandoned the pipe dream of total automation; they learned the hard way that many skills are better delegated to humans than to non-humans, whereas others may be moved away from incompetent humans."

(Johnson, 1988, p-306)

"We want to be flexible on this (the number of people working on content moderation), because we want to make sure that we're, number one, building algorithms instead of just hiring massive amounts of people, because we need to make sure that this is scalable, and there are no amount of people that can actually scale this. So this is why we've done so much work around proactive detection of abuse that humans can then review. We want to have a situation where algorithms are constantly scouring every single tweet and bringing the most interesting ones to the top so that humans can bring their judgment to whether we should take action or not" (Jack Dorsey, as quoted in Gillespie, 2020)
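
To visualise the workflow Dorsey describes, the following is a minimal, assumed sketch of 'proactive detection that humans then review': an automated scorer looks at every post and only the highest-scoring ones are surfaced to a human queue. The keyword list, the scoring function and the review budget are stand-ins invented for illustration; nothing here reflects Twitter's actual systems.

```python
# A minimal sketch (not Twitter's pipeline): an automated scorer triages every
# post, and humans make the final call only on the highest-scoring ones.
import heapq

FLAG_TERMS = {"kill", "attack", "traitor"}  # hypothetical watch-list, stand-in for a trained model


def abuse_score(text: str) -> float:
    """Crude stand-in for a learned classifier: fraction of tokens on the watch-list."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(token in FLAG_TERMS for token in tokens) / len(tokens)


def triage(posts: list[str], review_budget: int = 100) -> list[tuple[float, str]]:
    """Score every post, but surface only the top `review_budget` for human review."""
    scored = [(abuse_score(post), post) for post in posts]
    return heapq.nlargest(review_budget, scored, key=lambda pair: pair[0])


# The algorithm ranks; a person decides what, if anything, happens next.
for model_score, post in triage(["attack the traitor", "lovely weather in Delhi today"], review_budget=1):
    print(f"{model_score:.2f}  needs a human decision: {post!r}")
```

Even in this toy form, the design question Gillespie raises is visible: the classifier decides what reaches human eyes at all, and that is itself a moderation decision.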

What has enabled such a sweeping shift in the way technologists think about the role they play and the mechanisms they must design to combat the many problems of content moderation? The answer lies to some extent in the "mindset prevalent in Silicon Valley, which sees these problems as technological ones requiring technological solutions" (Geiger, as cited in Gillespie, 2020, p-2). It also lies in the presence of challenges that seem insurmountable, at least at any efficient level, if left solely to humans — the idea of "scale", elaborated on below.

Gillespie outlines very plainly the self-fulfilling idea that "platforms have reached a scale where only AI solutions seem viable; AI solutions allow platforms to grow further" (Gillespie, 2020, p-2). He then discusses the distinction between 'Scale' and 'Size', words which now carry specific, symbolic meanings in these contexts. Rather than 'Scale', what is really meant in these discussions is 'Size' — the size and volume of the data, the number of users, their interactions, the money and human or technological power needed to govern them, and so on. 'Scale', on the other hand, is a kind of "articulation" — "different components attached so they are bound together, but operate as one" (Slack, as cited in Gillespie, 2020, p-2). Concerns of 'Scale' are about the abilities needed and the processes performed to achieve a certain goal — how exactly moderation or policy teams get constituted, what goes into the formulation of guidelines, "enormous populations of users attached to flagging mechanisms that produce tiny bits of data about many, many violations" (Gillespie, 2020, p-2). To put it very simply, 'Scale' is about processes, procedures and effects; 'Size' is about numbers, the visible, the quantifiable.

So, when 'Scale', or "scalability", is factored in, the reference is to the kinds of sociotechnical practices and arrangements that need to be put in place to bring about the granular level of moderation being talked about (Gillespie, 2020, p-2) — something that is to a degree intertwined with, and to a degree separate from, 'Size'. Within this context we can locate both the emergence of AI as the hallowed alternative and the dispensing with of other possible alternatives as being guided by the politics of the larger capitalist framework within which these social media forums operate, and of which they are a powerful part. As far as 'plain' economics goes, it is the question of 'Size' and not 'Scale' that drives the push for AI. Two reasons are stated: one, the toxicity of their platforms is hurting their investors and their brand; and two, it is uneconomical to employ millions of moderators (Gillespie, 2020).

Of the several reasons why AI is a problematic solution, perhaps the most glaring is that it is not really as efficient or accurate as it is made out to be — AI is unable to achieve much more than simple pattern matching or automation, and the machine learning techniques being relied on are unable to account for "context, subtlety, sarcasm, and subcultural meaning" (Duarte, as cited in Gillespie, 2020, p-3). "Even the tools designed to identify duplicates may be insensitive to the use of the same content in a different context, like terrorist propaganda reposted in a journalistic context" (Llanso, as cited in Gillespie, 2020, p-3).
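
A small worked example may make the context problem concrete. The sketch below assumes the kind of exact-duplicate check the quote alludes to (real platforms use far more sophisticated media-matching, which is not shown here); the content and names are invented. Because the matcher sees only the bytes, a journalistic repost of a banned clip is indistinguishable from the original propaganda.

```python
# Illustration only: an exact-duplicate check is blind to the context of a repost.
# Hashes, content and names are hypothetical; real platforms use more elaborate
# (but still largely context-free) media-matching techniques.
import hashlib

BANNED_HASHES = {hashlib.sha256(b"<propaganda video bytes>").hexdigest()}


def is_known_banned(content: bytes) -> bool:
    """Flags any byte-identical copy of listed material, whatever surrounds it."""
    return hashlib.sha256(content).hexdigest() in BANNED_HASHES


original_upload = b"<propaganda video bytes>"
journalistic_repost = b"<propaganda video bytes>"  # same clip, embedded in a critical news report

print(is_known_banned(original_upload))       # True
print(is_known_banned(journalistic_repost))   # True: the surrounding journalism is invisible here
```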

There are also larger, perhaps more abstract, but equally pertinent reasons why AI may not be the silver bullet it is made out to be. Firstly, it is not the place of a machine, an individual moderator, a team, an organisation, or even a group of organisations to decide what content is problematic and what is not. As Gillespie admits, labels like hate speech are not perfect fits (Gillespie, 2020) — in fact they are mostly contentious, and in our polarised, hyperbolic times, rabidly so. It is for a society to decide what it deems acceptable and unacceptable, in what forums, forms and contexts, and at what points in time. Transferring these processes to non-human actors en masse, or treating such interventions as single-step victories, is myopic.

When the primary motivation is to bring about a level playing field, or at least to seem to, the heavy emphasis on statistical techniques and accuracy can mean that the burden of error falls on "underserved, disenfranchised, and minority groups" (Gillespie, 2020, p-3). "Who these tools over-identify, or fail to protect, is rarely random" (Buolamwini and Gebru, as cited in Gillespie, 2020, p-3). This is not to say that algorithms or non-human interventions are to be avoided entirely; they can be immensely useful, and in the right measure and form they can bring about a much-desired "consistency" (Gillespie, 2020, p-3), with scope for malleability to account for shifting definitions of the "public good" (Calhoun, as cited in Gillespie, 2020, p-4).

There is therefore a need to be realistic and very discerning in the deployment and assessment of non-human interventions such as AI in regulating so vastly human an experience as communication on social media. There is scope for relying on tools to do predictable tasks, such as identifying duplicates or dealing with explicitly problematic content (child pornography, rape videos, etc.) without the need for a moderator, and thereby without putting a moderator's mental health at risk. However, this must be tempered with the understanding that "Machine Learning tools should be designed to support human teams rather than supplant them" (Gillespie, 2020, p-5).
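
As a closing illustration, here is a minimal, assumed sketch of the division of labour this conclusion points towards: automation acts on its own only for exact matches against vetted, unambiguously illegal material, while every contestable judgement is routed to a person. The function, threshold and Action categories are hypothetical rather than a description of any existing system.

```python
# A sketch of "support, not supplant": full automation is reserved for exact
# matches of known illegal material; contestable calls go to human reviewers.
from enum import Enum


class Action(Enum):
    AUTO_REMOVE = "auto_remove"    # spares moderators repeated exposure to known material
    HUMAN_REVIEW = "human_review"  # the contestable judgements stay with people
    ALLOW = "allow"


SUSPICION_THRESHOLD = 0.5  # assumed cut-off for escalating a model's score to a human


def moderate(is_known_illegal_duplicate: bool, model_score: float) -> Action:
    """Route a post: automate only the unambiguous case, escalate the rest."""
    if is_known_illegal_duplicate:
        return Action.AUTO_REMOVE
    if model_score >= SUSPICION_THRESHOLD:
        return Action.HUMAN_REVIEW
    return Action.ALLOW
```

Even in this toy arrangement, the machine absorbs the repetitive and the traumatic, while the judgements a society might reasonably contest remain with humans; that is the most Gillespie's closing recommendation asks of such tools.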

References:

Binns, R., Veale, M., Van Kleek, M., & Shadbolt, N. (2017). Like trainer, like bot? Inheritance of bias in algorithmic content moderation. arXiv.org. https://doi.org/10.31219/osf.io/97u3q

Calhoun, C. (1998). The public good as a social and cultural project. In W. Powell & E. Clemens (Eds.), Private action and the public good (pp. 20–35). New Haven: Yale University Press.

Geiger, S. R. (2016). Bot-based collective blocklists in Twitter: The counterpublic moderation of harassment in a networked public space. Information, Communication & Society, 19(6), 787–803.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, July–December, 1–5. https://doi.org/10.1177/2053951720943234

Hill, S. (2019). Empire and the megamachine: Comparing two controversies over social media content. Internet Policy Review, 8(1), 1–18.

Johnson, J. (1988). Mixing humans and nonhumans together: The sociology of a door-closer. Social Problems, 35(3), 298–310. https://doi.org/10.2307/800624

Llanso, E. (2019, April 18). Platforms want centralized censorship. That should scare you. Wired. Retrieved from https://www.wired.com/story/tumblr-porn-ai-adult-content/

Taskin, B. (2020, December 6). Outrageous, says Salman Rushdie as Twitter suspends journalist Salil Tripathi's account. The Print. Retrieved December 16, 2020, from https://theprint.in/india/outrageous-says-salman-rushdie-as-twitter-suspends-journalist-salil-tripathis-account/561811/

Willson, M. (2017). Algorithms (and the) everyday. Information, Communication & Society, 20(1), 137–150. https://doi.org/10.1080/1369118X.2016.1200645

West, S. M. (2018). Censored, suspended, shadow banned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383.

Gayatri Raman

I am a research-inclined student at IIIT-B, currently seeking opportunities in Accessibility and HCI research within the larger ICT4D discourse.