LITTLE PUSHBACK AGAINST AI FAKES
For the Musk faithful, all this juvenile irreverence is what makes him so compelling. But we’re likely to see a closely fought US election this November, and the stakes are too high for the reckless posting of half-truths.
Experts in online misinformation tell me that, anecdotally, Harris has already become a greater target of deepfakes than Trump. With close to 200 million followers and the ability to tweak X’s recommendations or boot people off the platform, Musk can do more than just boost shares of Tesla or cause humiliation: He can influence thousands of voters in swing states.
And if Musk can break the rules on posting AI-generated voices, there’s a good chance that others will do the same. He has shown not only how much traction well-designed AI fakery can get on his site, but also how little pushback it attracts.
Audio deepfakes can be insidious. They are increasingly difficult to distinguish from real voices, which is why they have quickly become a favoured tool for scammers.
One in 10 people has reported being targeted by an AI voice-cloning scam, and 77 per cent of those targeted lost money to the fraud, according to a 2023 survey by cybersecurity firm McAfee. Another study found that listeners in a lab setting could detect AI-generated speech only about 70 per cent of the time, suggesting that in the wild, where people are not on guard for fakes, voice clones are even harder to discern as they grow more sophisticated.