While Twitter sleuths have pointed to the warped hands and dodgy faces of AI-generated pics, plenty of mainstream users remain vulnerable to this kind of fakery. Last October, WhatsApp users in Brazil were flooded with misinformation about the integrity of their presidential election, leading many to riot in support of defeated ex-president Jair Bolsonaro.
It’s much harder to spot blemishes and fakery when someone you trust has just shared an image, at the height of the news cycle, on a tiny screen. And because WhatsApp is an end-to-end encrypted messaging app, there’s little it can do to police fake images that go viral through constant sharing between friends, families and groups.
Higgins and “No Context French” were just trying to create a stunt, but their success in getting multiple people to believe their posts were real illustrates the scale of a looming challenge for social media and society more widely.
SOCIAL MEDIA GUIDELINES ON AI-GENERATED MEDIA
TikTok on Tuesday updated its guidelines to bar AI-generated media that misleads. Twitter’s policy on synthetic media, last updated in 2020, says that users shouldn’t share fake images that may deceive people, and that it “may label tweets containing misleading media”.
When I asked Twitter why it hadn’t labelled the fake Trump and Macron images as they went viral, the company helmed by Elon Musk replied with a poop emoji, its new automated response to press enquiries.
Some Twitter users who framed the Trump images as real with attention-grabbing hashtags like “BREAKING” have been flagged by the site’s Community Notes feature, which lets users add context to certain tweets. But Twitter’s increasingly laissez-faire stance towards content under Musk suggests fake images could thrive on its platform more than on others.