Perpetual propaganda

easyDNS is pleased to sponsor Jesse Hirsh’s “Future Fibre / Future Tools” segments of his new email list, Metaviews

What is combat in cyberspace?

What is the difference between conspiracy and diplomacy? Context and transparency? Or just authority vs innuendo?

Similarly, there’s a fine line between propaganda and news when power is involved, and news is increasingly driven by opinion, which provides an excellent opportunity for propaganda.

Let’s close out this week’s look at media, participation, and power by sharing a few examples that, if anything, illustrate the complexity and confusion often at play.

As part of the strategy to “tell China’s story well”, the People’s Republic of China (PRC) has significantly expanded its public diplomacy efforts. Our study “China’s Public Diplomacy Operations” shows how the PRC is targeting global social media platforms as part of its public diplomacy efforts to shape public opinion in foreign countries. The report is based on a seven-month investigation by the Programme on Democracy and Technology, and represents a global audit of social media activity by PRC diplomats and state-backed media outlets.

This isn’t particularly sophisticated, either the research or the methods being employed by the Chinese diplomatic services.

It does, however, illustrate the kind of brute force necessary when dealing with algorithms, in this case for the purposes of amplification and influence.

Astroturfing is the term used to describe the artificial manufacture of grassroots sentiment and activity. Social media is ripe for such efforts, and it’s not surprising that this is becoming a staple of international diplomacy.

It’s safe to assume that, just as they hack each other, states now engage in all sorts of social media subterfuge and automation.

The best networks are hybrids that combine both human intelligence and automated amplification.

Unfortunately most platforms lean far too heavily on the automatic at the expense of the human.

Following Red Dress Day on May 5, a day aimed to raise awareness for Missing and Murdered Indigenous Women and Girls (MMIWG), Indigenous activists and supporters of the campaign found posts about MMIWG had disappeared from their Instagram accounts. In response, Instagram released a tweet saying that this was “a widespread global technical issue not related to any particular topic,” followed by an apology explaining that the platform “experienced a technical bug, which impacted millions of people’s stories, highlights and archives around the world.”

Creators, however, said that not all stories were affected.

And this is not the first time social media platforms have been under scrutiny because of their erroneous censoring of grassroots activists and racial minorities.

Many Black Lives Matter (BLM) activists were similarly frustrated when Facebook flagged their accounts, but didn’t do enough to stop racism and hate speech against Black people on their platform.

So were these really about technical glitches? Or did they result from the platforms’ discriminatory and biased policies and practices? The answer lies somewhere in between.

Activists on both the right and the left experience shadow bans and content removal as a result of political opponents flagging their content. Since the platforms do not have the resources to respond quickly, these bans and penalties can have a significant effect.

Especially given the inflationary effect the pandemic has had on the propaganda ecosystem.

The article above is from last year, and it notes that the research cited does not distinguish between humans and bots, but rather measures bot-like activity.

Determining whether someone is a bot or not is going to be a contentious issue moving forward, if it isn’t already.
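To make that distinction concrete, here’s a minimal sketch of the kind of behavioural scoring such research tends to rely on. The features, thresholds, and weights are invented for illustration; this is not taken from any particular study or platform API.

```python
# Illustrative only: a crude "bot-likeness" score. All features and
# thresholds below are hypothetical, not from any real study or platform.

from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float             # average posting frequency
    account_age_days: int            # how long the account has existed
    duplicate_ratio: float           # share of posts near-identical to others (0-1)
    follower_following_ratio: float  # followers divided by accounts followed

def bot_likeness(acct: Account) -> float:
    """Return a score in [0, 1]; higher means more bot-like behaviour."""
    score = 0.0
    if acct.posts_per_day > 50:              # inhumanly high volume
        score += 0.35
    if acct.account_age_days < 30:           # freshly created account
        score += 0.2
    if acct.duplicate_ratio > 0.5:           # mostly copy-paste amplification
        score += 0.3
    if acct.follower_following_ratio < 0.1:  # follows many, followed by few
        score += 0.15
    return min(score, 1.0)

suspect = Account(posts_per_day=120, account_age_days=12,
                  duplicate_ratio=0.8, follower_following_ratio=0.05)
print(bot_likeness(suspect))  # 1.0 -- flagged as bot-like, not proven a bot
```

Note what such a score actually measures: behaviour consistent with automation, not proof of it, which is exactly why “bot or not” stays contentious.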

LIKE SO MANY this winter, Norine McMahon was searching for a Covid-19 vaccine appointment, hitting Refresh on her browser continuously. The Washington, DC, resident was elated to find an opening in late February, but delight turned to disappointment when she failed the captcha user-verification test, even though she swore she entered the letters and numbers correctly.

“Then I would do it really slowly to make sure I was getting it correct, because of course the pressure is on. It happened a dozen times. The captchas weren’t working,” says McMahon, 61, a facilities director who gave up that day but eventually secured an appointment.

The captcha chaos with DC Health’s portal was one of several technical problems widely reported at the time. But captchas have been frustrating users since long before the pandemic.

As you may or may not know, captchas also serve to train AIs and machine learning programs to recognize whatever it is they’re asking us to recognize.

This is part of the reason we’ve seen such rapid advances in machine learning capabilities: just about everyone on the web has been helping to train various models and applications.
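As a rough illustration of that dual use, here is a toy sketch of how solved captchas could double as labelled training data, using consensus among multiple human answers. The data and function names are invented; this is not any real captcha service’s pipeline.

```python
# Hypothetical sketch: each solved captcha yields a labelled example.
# Agreement between independent humans filters out misreads before the
# labels are fed to a recognition model.

from collections import defaultdict

# (image_id, human_transcription) pairs accumulated from captcha solves
solved_captchas = [
    ("img_001", "7XK4Q"),
    ("img_001", "7XK4Q"),
    ("img_001", "7XKAQ"),   # one user misread the distorted '4'
    ("img_002", "TR33Z"),
]

def consensus_labels(solves, min_agreement=2):
    """Keep only labels that multiple independent humans agree on."""
    votes = defaultdict(lambda: defaultdict(int))
    for image_id, text in solves:
        votes[image_id][text] += 1
    return {
        image_id: max(counts, key=counts.get)
        for image_id, counts in votes.items()
        if max(counts.values()) >= min_agreement
    }

training_set = consensus_labels(solved_captchas)
print(training_set)  # {'img_001': '7XK4Q'} -- ready to feed an OCR model
```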

Which brings us back full circle, where we might entertain the idea of automated diplomacy.

At DiploFoundation, as part of our AI humAInism project, we have experimented with what this different approach could look like in the field of diplomacy. Our own Speech Generator is meant as an illustration of what can be done and how it can be done. Diplomats working in the field of digital policy and cybersecurity will find it particularly interesting to experiment with. The Speech Generator allows for selecting an opinion on various key topics on the basis of which a speech is generated.

In contrast to applications like GPT-3, we tried to mimic the human process of writing a speech by using smaller algorithms trained for specific tasks, such as an algorithm for finding keywords and phrases (‘underlining’), an algorithm for recommending paragraphs on a specific topic, an algorithm for summarising paragraphs, etc. As our developer Jovan Njegic would say, ‘in this way, we try to form a system of interconnected algorithms, which imitate not the results of the writing process, but the human process of reasoning during speech-writing’. This also means that if a result is not appropriate, the user can go back and tweak the process. Our speech generator is an illustration, not a fully fledged application for diplomats, but it might just point us in the right future direction.

This seems like a potentially bad idea. Diplomats as robots, or rather robots as diplomats, does not convey a responsive foreign service. Rather, it evokes a new era of automated propaganda.
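Still, the design DiploFoundation describes is worth understanding on its own terms. Here is a minimal sketch, with an invented glossary and corpus, of small single-purpose steps chained together so a human can inspect and tweak each intermediate result; it is not DiploFoundation’s actual code.

```python
# A sketch in the spirit of the quoted description: small, single-purpose
# algorithms chained into a speech-writing pipeline. All data and names
# below are invented for illustration.

def find_keywords(topic: str) -> list[str]:
    """Stand-in for the 'underlining' step: extract key phrases."""
    glossary = {
        "cybersecurity": ["norms", "attribution", "capacity building"],
        "digital policy": ["data governance", "interoperability"],
    }
    return glossary.get(topic, [topic])

def recommend_paragraphs(keywords: list[str], corpus: dict[str, str]) -> list[str]:
    """Retrieve stored paragraphs that mention any of the keywords."""
    return [text for text in corpus.values()
            if any(kw in text.lower() for kw in keywords)]

def summarise(paragraphs: list[str], max_sentences: int = 2) -> str:
    """Crude extractive summary: keep the first sentence of each paragraph."""
    sentences = [p.split(". ")[0].rstrip(".") for p in paragraphs]
    return ". ".join(sentences[:max_sentences]) + "."

corpus = {
    "p1": "States should agree on norms of responsible behaviour in cyberspace.",
    "p2": "Attribution of cyber incidents remains technically and politically hard.",
}

keywords = find_keywords("cybersecurity")        # step 1: underline
drafts = recommend_paragraphs(keywords, corpus)  # step 2: recommend
speech = summarise(drafts)                       # step 3: condense
print(speech)  # a human can re-run any single step with tweaked inputs
```

The design choice is the point: unlike a single end-to-end model, each step’s output is visible and editable, which keeps a human in the loop of the reasoning rather than just the result.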

Today’s ask: enjoy your weekend. We’ll be back for Tuesday. #metaviews

One thought on “Perpetual propaganda”

  1. Thank you dear Jesse for linking to DiploFoundation’s tweet regarding my blog post. I am glad you found it useful for your article! I would like to clarify that I believe (as stated in my article) that ‘neither diplomats nor human speech-writers are likely to be replaced anytime soon’. We are also not advocating for that. AI text-generation might be a tool for diplomats to support the speech- and report-writing process. The dangers of misusing automated text-generation on social media, in a public diplomacy context, are real. I share the concerns about this, but this was not the focus of my article or our work in this area. Feel free to contact me for further resources and ideas about this topic.
    Yours, Katharina Höne
