Inoculating a society against disinformation

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews

Fighting rumour with humour

The battle for hearts and minds rages on, and I’m reminded of a passage from a previous issue on the golden age of propaganda:

Fears over fake news are misplaced, and this focus on falsehoods distracts us from the larger picture, which is the evolution of propaganda. Rather than lamenting the decline of obsolete news corporations, we should instead be talking about democracy, disinformation, and digital media as scapegoat.

Lies are not new to politics, and politicians are not themselves producing any novel propaganda. What is new is the ease and ability to enter the political arena. Political parties no longer exercise a monopoly on participation, and legacy media no longer act as gatekeepers, deciding who shall or shall not have a voice in the political process.

We’re living in a golden age precisely because anyone can be part of it. The ability to create quality propaganda has never been as widespread, and the ability to reach a large audience has never been greater.

The accessibility of our current media environment is encouraging, although I’m now wondering if that is changing.

On the one hand, this accessibility is essential to ensuring people have access to economic opportunity, learning, and knowledge. On the other hand, it inevitably leads to increased pollution in the public sphere:

The current trend that has me concerned is the increase in automatic content moderation and the attempt to fight disinformation with censorship and suppression.

Removing content is a slippery slope as it starts with obvious and easy choices but quickly slides into more contentious and subjective contexts like nudity, profanity, and political criticism.

Similarly, content is not created in isolation, but can be tied to human subjects. The danger of using disinformation or "fake news" as an accusation is that it also potentially dehumanizes the source of the information.

For example, "bots" are often blamed for a wide range of information crimes. While accurate in some cases, this also creates a template in which any source of disinformation is a bot, the same way any source of drama or criticism is a troll.

Arguing that people spreading propaganda are bots echoes historical attempts to dehumanize political opponents as the mindless followers of some cause we dislike.

That's not to say that automation can't be weaponized or used in ways that should not be permitted. It is to say that the response to this changing ecosystem has to be found in the context of diverse and dynamic environments.

Building public awareness and “inoculating” the 23-million-strong population to fake news from China has been one of the chief goals of Audrey Tang, Taiwan’s ground-breaking digital minister.

Sitting just 80 miles from China, Taiwan has been on the frontlines not only of the coronavirus pandemic but also of an "infodemic" of online disinformation. But the bombardment of Chinese state-sponsored influence has also made it a world leader in identifying and tackling disinformation.

“When the majority of the population have this exposure and this inoculation, this builds nerd immunity,” Ms Tang, one of the world’s top open source software developers, told the Sunday Telegraph in an interview last week.

“It’s the Taiwan model. Just like we fight the coronavirus with no lockdown, we fight the infodemic with no takedown,” said the former hacker who, in 2016, made history as Taiwan’s youngest ever, and first transgender, minister at 35.

The effectiveness of their approach stems from their acknowledgement that the digital public sphere is dynamic, and requires rapid response:

In February, the worlds of coronavirus and disinformation collided when false claims circulated that the material used to make face masks was the same found in toilet paper, sparking a round of panic buying.

The government acted swiftly to contain the claims by circulating a humorous meme of Taiwan's premier shaking his bottom, alongside an explanation of the different source materials for toilet paper and masks.

“Our premier wiggling his buttocks a little bit … says that the rumour that medical masks ramping up will hurt the tissue paper production is not true,” said a laughing Ms Tang.

“People, once they laugh about it, cannot get outraged when they see this conspiracy theory,” she said. “This serves as a memetic vaccine, while this will have an R value of under one, in which case it will stop the spread. Indeed, the panic-buying stopped within a couple of days.”
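Tang's "R value" framing borrows directly from epidemiology: if each person who sees a rumour passes it on to fewer than one other person on average, the rumour dies out on its own. A minimal sketch of that arithmetic (the numbers and function name here are illustrative, not from the article):

```python
# Toy illustration of the epidemiological "R value" analogy:
# each generation, every active share of a rumour spawns R new
# shares on average. If R < 1, each round is smaller than the
# last and the rumour burns out; if R > 1, it keeps growing.

def shares_per_generation(r, initial=1000, generations=5):
    """Return the expected number of new shares in each generation."""
    counts = []
    active = initial
    for _ in range(generations):
        active = active * r
        counts.append(round(active))
    return counts

# A debunking meme that cuts sharing below replacement (R = 0.5)
# versus an unchecked rumour (R = 2.0):
print(shares_per_generation(0.5))
print(shares_per_generation(2.0))
```

The point of the "memetic vaccine" is to push R below one: the joke spreads alongside the rumour and makes each recipient less likely to pass the original claim on.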

The incident was a classic example of a counter-disinformation strategy described by Ms Tang as fighting “rumour with humour.”

Until recently the North American equivalent to this was late night comedy shows. However the pandemic has taken a lot of the wind out of their sails as they struggle with remote production and limited access to guests. They were also only reaching a portion of the public, arguably making propaganda and polarizing media more effective.

It's actually quite alarming how easily people can be alarmed by the slightest bit of disinformation.

But on the local level, the source of the false information has usually been more subtle, and shows the complexity of stunting misinformation online. The bad information often first appears in a Twitter or Facebook post, or a YouTube video. It is then shared on online spaces like local Facebook groups, the neighborhood social networking app Nextdoor and community texting networks. These posts can fall under the radar of the tech companies and online fact checkers.

“The dynamic is tricky because many times these local groups don’t have much prior awareness of the body of conspiratorial content surrounding some of these topics,” said Renée DiResta, a disinformation researcher at the Stanford Internet Observatory. “The first thing they see is a trusted fellow community member giving them a warning.”

The four instances detailed in the article above are interesting, if only because each rumour originated from relatively credible sources. While those sources did not or could not reveal their own sources, it shows how easily this kind of information can spread.

The rumor led dozens of people to reach out to the local police that Sunday, according to Sam Clemens, the public information officer at the Sioux Falls Police Department.

“But on the day of the protests, we didn’t have any evidence of any buses coming from out of town carrying people,” Mr. Clemens said. The vast majority of protesters were local residents, he said.

The Greater Sioux Falls Chamber of Commerce said it had gotten the information from sources it knew and believed to be credible.

“We received information that led us to believe there was a cause for concern. As such, we wanted to encourage local business owners to take responsible, precautionary steps for their businesses,” said Jeff Griffin, the group’s president. “We removed the post when we realized it was contributing to a different message that we did not intend.”

Demonizing protesters by associating them with a feared and demonized spectre is a classic tactic to discourage people from exercising their rights to freedom of speech and assembly.

Perhaps the problem with late night comedy shows is they operate largely on the national level, whereas it is the local level that needs them most.

Is a local cadre of comedians the remedy we need in this moment of fear and uncertainty?

A group of organized and dedicated funny people who can not only counter spin and debunk all the nonsense and bullshit permeating our public sphere but also entertain and enlighten a stressed out population?

Laughs for liberty!?
