Elections are coming in countries around the world in 2024, and a new source of worry has emerged: disinformation generated by AI. How worried should we be about AI disrupting elections?
Until now, disinformation has always been created by humans. Advances in generative artificial intelligence, or gen AI, mean there are now models that can spit out sophisticated essays and create realistic images from text prompts. This makes synthetic propaganda possible, says The Economist in an analysis published last week.
Fear looms large that disinformation campaigns may be supercharged in 2024, just as countries with a collective population of some 4 billion — including the US, the UK, India, Indonesia, Mexico and Taiwan — prepare to vote.
How worried should citizens be? Consider these points that The Economist makes:
- Generative AI models like ChatGPT can create sophisticated text and realistic images, raising concerns about their potential to spread disinformation and sway elections.
- However, voters are hard to persuade on major political issues; even existing campaigns produce only small shifts in voter behaviour.
- Tools to make fake images and videos have existed for years. Generative AI makes it easier, but effort was not previously the limiting factor.
- AI-generated disinformation could undermine trust between citizens, but it’s unclear if it benefits one party systematically.
- Social media platforms and AI firms say they are trying to address risks, but voluntary regulation has limits.
In sum, panic is unwarranted, says The Economist – people manage to spread terrible ideas without any new technology. The 2024 US election will see disinformation on a major scale, The Economist adds, but its source will be Trump, not AI.
Do we agree with that conclusion? Listen to episode 352 to hear our discussion on this issue.
(Image at top by Bing Image Creator to this simple prompt: “AI disinformation creator”)