Three categories of information disorder. Credit: Scientific American
I was thinking recently, as I delve deeper into the perils of technology and media in today's fraught political and social landscape, that we should be focusing on the deeper systemic causes rather than putting all the blame on the big tech companies. Claire Wardle's article on misinformation at Scientific American echoes this thought:
During the past three years the discussion around the causes of our polluted information ecosystem has focused almost entirely on actions taken (or not taken) by the technology companies. But this fixation is too simplistic. A complex web of societal shifts is making people more susceptible to misinformation and conspiracy. Trust in institutions is falling because of political and economic upheaval, most notably through ever-widening income inequality. The effects of climate change are becoming more pronounced. Global migration trends spark concern that communities will change irrevocably. The rise of automation makes people fear for their jobs and their privacy.
Although they are still partially to blame:
… what social scientists and propagandists have long known: that humans are wired to respond to emotional triggers and share misinformation if it reinforces existing beliefs and prejudices. Instead designers of the social platforms fervently believed that connection would drive tolerance and counteract hate. They failed to see how technology would not change who we are fundamentally—it could only map onto existing human characteristics.
She then goes through seven types of "information disorder": satire/parody, misleading content, imposter content, fabricated content, false connection, false context, and manipulated content. She also highlights that the media often makes things worse by sticking to its traditional reporting model, which ends up amplifying misleading and inflammatory content.
Research has found that traditional reporting on misleading content can potentially cause more harm. Our brains are wired to rely on heuristics, or mental shortcuts, to help us judge credibility. As a result, repetition and familiarity are two of the most effective mechanisms for ingraining misleading narratives, even when viewers have received contextual information explaining why they should know a narrative is not true.
On how, when a narrative is strategically and successfully reinforced, even a simple meme can mutate into a potent weapon:
When the Facebook archive of Russian-generated memes was released, some of the commentary at the time centered on the lack of sophistication of the memes and their impact. But research has shown that when people are fearful, oversimplified narratives, conspiratorial explanation, and messages that demonise others become far more effective. These memes did just enough to drive people to click the share button.
And so it turns out that, ahead of next year's US presidential election and amid the proliferation of AI-generated fake videos, or deepfakes, which could be used to influence voters, the three biggest tech companies, Facebook, Google, and Twitter, aren't exactly prepared to deal with them. There are, however, some emerging efforts to spot such misinformation: "irregular blinking is one telltale sign a video has been messed with, for example. But detection is something of an arms race, because an AI algorithm can usually be trained to address a given flaw." Several digital forensics experts are working to outline more reliable detection mechanisms, and it seems Google was ready to provide funding for this work.