When I first enrolled in grad school to study Science and Technology Studies (STS), I learned how machine intelligence picks up data loaded and generated by humans, data filled with our stereotypes and with decisions internalised through our own ignorance (whether racist, xenophobic, homophobic, and so on): essentially our own human imprints, or what scholars might call ‘biases’. Over the years, that is how I have referred to this algorithmic lack of innocence in conversations and writing: ‘biases’ everywhere. But that will change, after reading this excellent article by Kinjal Dave of Data & Society on our inaccurate use of ‘bias’ as a term, one that starts from Walter Lippmann’s definition and thereby disregards the systemic harm caused by these algorithmic systems:
Because both “stereotype” and “bias” are theories of individual perception, our discussions do not adequately prioritise naming and locating the systemic harms of the technologies we build. When we stop overusing the word “bias,” we can begin to use language that has been designed to theorise at the level of structural oppression, both in terms of identifying the scope of the harm and who experiences it.
Given this history, when we say “an algorithm is biased,” we, in some ways, are treating an algorithm as if it were a flawed individual, rather than an institutional force. In the progression from “stereotype” to “bias,” we have conveniently lost the negative connotation of “stereotype” from Lippmann’s original formulation. We have retained the concept of an unescapable mentalising process for individual sensemaking, particularly in the face of uncertainty or fear—yet algorithms operate at the level of institutions. Algorithms are deployed through the technologies we use in our schools, businesses, and governments, impacting our social, political, and economic systems. By using the language of bias, we may end up overly focusing on the individual intents of technologists involved, rather than the structural power of the institutions they belong to.
What would happen if we started listening to and citing the scholars whose work more legibly addresses the systemic harms our systems can cause the marginalised?
Perhaps because we insist on using bias as the starting point for our critical technology conversations, we have been slow to take up Safiya Noble’s identification of “oppression” as the impact of technologies which stereotype. What would happen if we cited Kwame Ture and Charles V. Hamilton as faithfully as we do Walter Lippmann in the development of our theoretical frames?
Related (or perhaps not):
- “Whether Winner’s (1986) interest in the politics of technology, Gandy’s (1993) notion of the panoptic sort, Gilliom’s (2001) interest in computerised overseers of the poor, or Monahan’s (2008) concept of ‘marginalising surveillance’ (p. 220), technology extends power and can be designed to systematically disadvantage marginalised groups.” From “Decentering technology in discourse on discrimination”.
- Current earworm: Hal Ard Lamin? (Who Owns This Earth?) by Syrian musician Yousef Kekhia, who fled the Syrian war in 2013, on the manufactured concept of borders and the separation between countries.