Scammers have reportedly begun using AI deepfakes of YouTube and TikTok influencers to promote scams and misinformation on social media, according to the Financial Times.
Several online influencers have reported finding AI deepfakes of themselves advertising non-FDA-approved drugs or spreading political misinformation about China and Russia.
Notably, all of the AI clones spoke Chinese, a language many of the influencers do not know.
Many only learned of the deepfakes after followers and friends notified them, by which point the videos had already spread across Chinese- and Russian-speaking social media platforms such as Bilibili and Xiaohongshu, China's equivalent to TikTok.
Far fewer similar posts have appeared on platforms like X (formerly Twitter), YouTube, TikTok, and Instagram, where the influencers usually operate.
It remains unclear who is behind the deepfake campaigns.
AI Deepfakes Becoming More Rampant in Scams, Misinformation Campaigns
The use of these influencers in deepfake scams presents a worrying trend: AI technology is now being turned against the data privacy of even relatively private figures.
Previously, AI deepfakes relied on the likenesses of celebrities and politicians, faces that are widely accessible to the public.
The rise of deepfakes based on likenesses of internet influencers means that scammers are now scraping data from regular people's social media.
The more pictures available, the easier it is to digitally replicate the likeness.
As AI deepfakes are still in their "infancy," Reuters warned that deceptive posts will only grow in number and become more misleading as the technology develops unregulated.
How to Protect Your Likeness from AI Deepfakes
The best way to limit AI's access to your likeness is to reduce the number of posts you share on social media, particularly posts showing your face.
If posting cannot be avoided, as is the case for most online influencers, users can turn to tools that hinder AI from replicating their images.
Among the more popular "digital cloaking" tools are Nightshade and Glaze, which subtly alter images to "poison" the data AI models collect when scraping them online.