Op-Ed: Disinformation, Big Tech, and the 2024 Elections – Are Platforms Doing Enough?
By Jean-Andre Deenik
The year 2024 will be remembered not just for its monumental electoral significance, with over 70 countries holding elections, but also for the enormous challenges presented by digital disinformation and misinformation. In an age where billions rely on online platforms for news and political discourse, companies like Google, Meta, and TikTok have found themselves on democracy’s front line. With a rising tide of false information threatening to distort electoral outcomes, these platforms’ responses are under increasing scrutiny. But are their measures truly adequate to protect the democratic process, or do they fall short?
Election Plans: A Promising but Incomplete Step
Acknowledging their potential role in undermining electoral integrity, tech giants such as Google, TikTok, and Meta have made their election plans public. However, a critical evaluation of these plans against international best practices, such as the European Commission’s Digital Services Act (DSA) and the International Foundation for Electoral Systems’ (IFES) guidelines, reveals significant gaps and oversights. Google, Meta, and TikTok’s plans might appear comprehensive in theory, but they fall short in key areas, particularly when it comes to post-election reviews, transparency, and proactive disinformation countermeasures.
Each of these platforms seems to lack a solid strategy for assessing the effectiveness of its methods. None of Google, Meta, or TikTok includes provisions for post-election reviews or mechanisms that allow for public feedback on the risk mitigation measures it implemented. This absence of introspection raises concerns about whether these companies are truly committed to evolving their approaches based on past elections, or whether their efforts are performative, aimed more at appeasing critics than at solving the actual problem.
On the positive side, Google, Meta, and TikTok have shown commendable initiative ahead of South Africa’s 2024 elections by collaborating with the Independent Electoral Commission (IEC). Additionally, Meta and TikTok have engaged with independent organizations such as Media Monitoring Africa and Code for Africa. Such cooperation between technology companies and civil society organizations aims to uphold free and fair elections.
The Threat of Disinformation
Disinformation is not just an abstract threat; it’s a concrete force capable of undermining citizens’ constitutional right to vote. The South African context is no different. Misinformation can warp perceptions, manipulate voter turnout, and distort the playing field, leaving certain political groups disadvantaged. This is especially alarming when we consider the role these platforms play in shaping public discourse. Their algorithms, moderation policies, and responses to electoral disinformation can significantly influence what millions of voters see and believe.
At the heart of this issue is the challenge posed by disinformation’s sheer scale and sophistication. Whether it’s AI-generated fake news, manipulated political ads, or outright hoaxes, the capacity of online platforms to swiftly detect, evaluate, and neutralize such threats is being put to the test. The election plans rolled out by Google, Meta, and TikTok represent their effort to address this digital maelstrom. However, it is crucial to assess the effectiveness of these plans in practice.
Cases of platforms like Meta (Facebook) failing to remove disinformation under the guise of free speech are rampant. This is especially dangerous when such content targets journalists, undermining the very essence of press freedom, which is crucial in the fight against disinformation. By neglecting to robustly address these issues, tech companies may be inadvertently contributing to the erosion of democratic norms.
Moreover, the over-reliance on top-down regulation to combat disinformation, without investing in bottom-up empowerment, creates a lopsided response. Governments and electoral commissions could collaborate with these platforms to promote reliable information, but the tech companies seem more focused on avoiding governmental backlash than empowering the electorate with trustworthy sources.
Evaluating Big Tech’s Election Plans
The platforms’ election plans, while a necessary step, fall short in key areas, particularly when benchmarked against international best practices like those outlined by the European Commission’s Digital Services Act (DSA) and the International Foundation for Electoral Systems (IFES).
Google
Google’s election plan suffers from a lack of transparency, particularly when it comes to assessing the effectiveness of its mitigation efforts. The absence of post-election reviews and public feedback mechanisms raises concerns about accountability. How can we trust that these measures work if there is no evaluation?
Meta
Meta has a similar blind spot, failing to outline plans for post-election reviews and feedback. Moreover, despite its efforts to combat disinformation, its vague references to AI leave much to be desired. AI now plays a central role in both the creation and the detection of false content. If Meta truly intends to address the issue, it needs to make its AI strategy far clearer and more prominent.
TikTok
While TikTok has made waves as a new player in political discourse, its election plan also overlooks post-election reviews. Given the platform’s popularity among younger users—a key demographic in many elections—this lack of transparency is worrying. If TikTok wants to be taken seriously as a responsible platform, it must implement better systems for evaluating its efforts.
Lack of Understanding of the South African Context and the Global South at Large
One glaring issue with the election plans of Google, Meta, and TikTok is their vagueness regarding moderation practices, especially in Global South countries like South Africa. While these platforms may dedicate significant resources to moderating content in Western countries, the same level of investment is not evident in regions with diverse languages and cultures. In South Africa, uncertainty about the number of moderators fluent in local languages like Zulu, Xhosa, and Afrikaans is a major blind spot in these plans. Without local-language moderators, harmful content may slip through the cracks, spreading unchecked and disproportionately affecting communities that have been historically marginalised.
Furthermore, there is an evident lack of clarity regarding how much of these platforms’ resources are allocated to election-specific content moderation in these regions. While the election plans tout general initiatives for content oversight, they rarely disclose the specific measures for ensuring that local contexts are properly accounted for, underscoring a disconnect between their global narratives and local realities.
The Larger Picture: Disinformation and Democracy
The battle against disinformation cannot be fought by platforms alone. Governments, civil society, and international bodies all have a role to play in safeguarding electoral integrity. Research shows that top-down regulatory approaches sometimes fail to empower the public to resist disinformation. Internet companies like Meta and TikTok, which often hide behind the banner of “free speech,” have allowed disinformation to persist in the name of protecting individual rights.
As more elections take place in 2024, the role of tech giants in safeguarding electoral integrity cannot be overstated. While Google, Meta, and TikTok’s election plans are a step in the right direction, they fall short in critical areas, especially in relation to local contexts in countries like South Africa. To truly mitigate the harm caused by disinformation, these platforms must prioritize transparency, post-election reflection, and a global commitment to content moderation that respects the linguistic and cultural diversity of their user bases. The integrity of democracy depends on it.