How Generative AI Could Spawn a New Generation of Disinformation

There is reason to believe that generative AI could really be a new variant of disinformation, one that makes lies about future elections, protests, or mass shootings both more contagious and immune-resistant. Consider, for example, the raging bird-flu outbreak, which has not yet begun spreading from human to human. A political operative—or a simple conspiracist—could use programs similar to ChatGPT and DALL-E 2 to easily generate and publish a huge number of stories about Chinese, World Health Organization, or Pentagon labs tinkering with the virus, backdated to various points in the past and complete with fake “leaked” documents, audio and video recordings, and expert commentary. That synthetic history, in which a government had weaponized bird flu, would be ready to go if avian flu ever began circulating among humans. A propagandist could simply connect the news to this entirely fabricated—but fully formed and seemingly well-documented—backstory seeded across the internet, spreading a fiction that could consume the nation’s politics and public-health response. The power of AI-generated histories, the Microsoft chief scientific officer Eric Horvitz told me, lies in “deepfakes on a timeline intermixed with real events to build a story.”

Matteo Wong, writing in *The Atlantic*