Mihir Sharma, Tribune News Service
In 2024, democracy will face a test for which it is unready. For the first time since the internet age began, the world’s four largest electoral blocs — India, the European Union, the US, and Indonesia — will hold general elections in the same year. Almost a billion people may go to the polls in the next 12 months.
The stakes for the future of democracy itself are extraordinarily high. In the US, the electoral favorite appears to revel in the possibility of becoming a dictator. In the EU, the far right is poised to surge continent-wide. Indonesia’s front-runner is a former general once accused of human-rights violations. And in India, a beleaguered opposition faces its last chance to stave off what may otherwise turn into decades of one-party rule.
We have known since 2016 at least that elections in the digital age are unusually vulnerable to manipulation. But while officials responsible for election integrity have been working diligently since then, they are fighting the last war. Former President Donald Trump’s 2016 victory and other votes around that period were influenced by carefully seeded narratives, bot farms, and the like. In response, a small army of fact-checkers emerged around the world, and mechanisms to keep “fake news” out of the formal press multiplied.

However scrupulous fact-checkers are, though, they can easily be overwhelmed by a flood of fake news. They are also, unfortunately, human, and therefore all too easy to discredit, however unfairly.

Some new ideas have begun to emerge. Even Elon Musk’s critics appear fond of the “community notes” he has added to X, formerly known as Twitter, which tag viral tweets with crowd-sourced fact-checks. Because the notes are crowd-sourced, they scale organically with the amount of fake news in circulation; and because they are not associated with any particular group of fact-checkers, they are harder to dismiss as biased.
Yet technology has moved even faster. AI-based disinformation has already begun to proliferate, and it gets harder to spot as fake with every passing month. Oddly, stopping such messages from going viral is harder when they do not immediately come across as offensive or particularly pointed. In Indonesia, for example, a TikTok video that appeared to show defence minister and presidential candidate Prabowo Subianto speaking Arabic was viewed millions of times. It was an AI-generated deepfake, meant to bolster his diplomatic (and possibly his Islamic) credentials.
The threat to democracy is transnational. The platforms being used are global; so is the messaging being deployed. Its defence, therefore, cannot be national. For one thing, it is not a task any government can accomplish alone. For another, it is not a task any one government can be trusted to pursue on its own.
But every country has a different approach to securing its elections, and both would-be manipulators and the platforms they exploit have taken advantage of this disunity. The flood of disinformation over the coming year will wash away our individual defences unless we adopt a more strategic and unified approach.

We do not yet know which mechanisms, whether crowd-sourcing, transnational regulation of platforms, or shared norms on speech and de-platforming, will work best. What we will need, however, is to share information swiftly on which measures do seem to work, and to put unified pressure on platforms to adopt them. We can learn from each other: India’s TikTok ban seems to have been more effective than expected, for example. But we must also share a commitment to transparency. Regulators in India and Indonesia must be convinced that US-based platforms’ online norms are designed as much to protect those countries’ national cohesion and political integrity as to defend northern Californian speech shibboleths.