


Producing fake information is getting easier
But that’s not the whole story when it comes to AI
When it comes to disinformation, “social media took the cost of distribution to zero, and generative AI takes the cost of generation to zero,” says Renée DiResta of the Stanford Internet Observatory. Large language models such as GPT-4 make it easy to produce misleading news articles or social-media posts in huge quantities.
And AI can produce more than text. Cloning a voice used to require minutes, or even hours, of sample audio. Last year, however, researchers at Microsoft unveiled VALL-E, an AI model that can clone a person’s voice from just a three-second clip of them speaking, and make it say any given text.