How to confront disinformation during #LGE2021

South Africa’s municipal elections will be held on November 1, 2021. Elections are usually a time when misinformation and fake news abound. Code for Africa (CfA)’s senior investigations manager Allan Cheboi and investigative data analyst Leon Vambe have put together a useful guide to help people discern misinformation from disinformation.

What is misinformation?

Misinformation is false information that is spread unknowingly, without the desire to cause harm. Think of that innocent piece of false Covid-19 information shared by an aunt in a family WhatsApp group. The person who shares the message doesn’t intend to deceive the recipient.

What is disinformation?

Disinformation, on the other hand, is deliberately false information or hoaxes, spread with a clear intention to deceive or cause harm to the recipient. An example would be a piece of election propaganda against an opposing candidate, which ends up polarising citizens against that candidate. When such propaganda sparks protests or ethnic violence, it can lead to injury or even death.

A stark example can be seen in Uganda’s recent election, where the ruling party used disinformation to polarise citizens against the opposition, leading to protests in which around 45 people died.

Why is it important to understand the difference between the two?

It is important to understand the difference because disinformation is more dangerous and harmful. The individuals or entities sharing such messages expect impact, so they use deceptive tactics, such as social media bots, fake accounts and sockpuppet accounts, to amplify the message so that it reaches the masses.

How can people tell the difference between these?

Identifying misinformation usually comes down to the content of the text, image or video being shared. Identifying disinformation, however, requires looking at both the content and the behaviour used to spread it.

For example, a false video about an election candidate, X, can be shared by a citizen simply because they support another candidate, Y. However, Y’s election team can also use deceptive behaviours, such as coordinated amplification of the video on platforms such as Twitter. They can create hashtags to make the video trend, as a way to tarnish X’s claim to the political seat.

How can people identify this?

Misinformation can be identified by conducting content verification and fact-checking. Citizens need to develop a critical mind for every piece of content they see online and be able to question the accuracy of the information. Tools such as WeVerify’s InVID and Google reverse image search can be used to verify the authenticity of images and videos we find online.

The process of identifying disinformation is more complex. Code for Africa’s iLAB team, for example, uses social network analysis and botspotting to identify content that is being shared in a coordinated way. A social network diagram of a trending hashtag on Twitter enables us to see the influential accounts within that hashtag, that is, the accounts that have posted the highest number of tweets using it. We use tools such as Botometer and Truthnest to determine whether an account sharing disinformation is a bot.
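To illustrate the idea behind this kind of analysis, here is a minimal sketch in Python. It uses entirely hypothetical account names and tweet data (not iLAB’s actual tooling or datasets) and simply counts, for one hashtag, which accounts tweet the most and which accounts are retweeted the most; unusually high volumes from a cluster of accounts can be a first signal of coordinated amplification.

```python
from collections import Counter

# Hypothetical sample of tweets scraped from one trending hashtag.
# Each entry is (author, retweeted_from); retweeted_from is None
# for an original tweet.
tweets = [
    ("bot_01", "candidate_y"),
    ("bot_02", "candidate_y"),
    ("bot_03", "candidate_y"),
    ("citizen_a", None),
    ("bot_01", "candidate_y"),
    ("citizen_b", "citizen_a"),
]

# Tweet volume per account: a small cluster of accounts posting
# most of the hashtag's tweets can hint at coordination.
volume = Counter(author for author, _ in tweets)

# Amplification: which accounts are being retweeted the most
# within the hashtag.
amplified = Counter(src for _, src in tweets if src is not None)

print(volume.most_common(3))
print(amplified.most_common(1))
```

Real investigations go much further, building full interaction graphs and checking account metadata with tools like Botometer, but the underlying question is the same: who posts, who amplifies, and does the pattern look organic?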

Where can people report disinformation? Is it different if it’s an ordinary person versus a political party?

Social media users can report suspicious accounts spreading disinformation on the platform they are using. Facebook, for example, provides a mechanism for reporting false content, and Twitter has a reporting mechanism for posts that violate its rules, policies and guidelines.

Code for Africa’s iLAB has partnerships with social media platforms to help them investigate and report any coordinated disinformation campaigns. This structure can be used by organisations with a particular interest in long-term research into disinformation.

How can people monitor fake news on platforms like WhatsApp, where it’s less public compared to Twitter or Facebook?

At the moment, we do not have tools that can monitor disinformation on platforms such as WhatsApp, Telegram and Signal, mainly because of privacy concerns. During one of iLAB’s recent investigations, we noticed a shift in the spread of disinformation from Facebook and Twitter to WhatsApp and Telegram.

We currently employ two strategies in researching disinformation on such platforms:

  1. Crowdsourcing information: Citizens are encouraged to question content they see in WhatsApp groups and Telegram channels. Organisations such as Code for Africa’s PesaCheck provide a WhatsApp tipline where users can share suspicious information they find on WhatsApp for further verification.
  2. Using sentinels to collect information: We have also trained local in-country sentinels in several African countries to identify publicly available links to WhatsApp groups and Telegram channels. They then join these groups and share any suspicious content they find for further analysis and verification.

The interview has been edited for clarity and brevity. 

Featured image via Pixabay