AI deepfakes are becoming increasingly rampant online, with not only celebrities and other notable personalities falling victim to the technology but regular online users as well.
With the technology posing emerging threats to personal safety and privacy, there are several steps people can take to protect their likenesses from being used in AI deepfakes.
Minimize Use of AI-Powered Filters
Over the past few years, social platforms like Facebook, Instagram, and TikTok have introduced numerous filters for users to try out.
These filters, often powered by AI, collect huge amounts of data from users, including biometric information, to generate images automatically.
A 2021 lawsuit against TikTok highlighted these concerns, as the video-sharing platform was accused of collecting the private and personally identifiable data of millions of users for targeted ads.
While TikTok denied selling illegally collected information to ad companies, the case still sheds light on what the filters millions of people use every day may really be for.
With many tech companies weaving more AI into their operations, it would not be surprising if the new, flashy filters of today also serve as data collectors.
To prevent personal biometric data from being illegally collected, users should reduce or stop using these filters on social media.
Add Data Poisoning Tools Before Posting Pictures
While reducing the number of personal pictures available online can minimize exposure to deepfakes, this strategy is not practical for everyone.
An alternative is to use AI tools to fight AI. This is where "data poisoning" tools like Nightshade come in: filters that add invisible perturbations to pictures, preventing machine learning models from using the images as part of their training data.
Even adding a simple warbled filter at low opacity can keep AI from properly replicating a person's likeness.
This method can also be applied on a phone via art editing apps that let users stack image layers.
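To illustrate the idea, here is a minimal sketch of a low-opacity noise overlay in Python. It assumes the Pillow and NumPy libraries are installed, the file names are placeholders, and it is a simplified stand-in for dedicated tools like Nightshade, which compute far more sophisticated, targeted perturbations.

    import numpy as np
    from PIL import Image

    def add_noise_overlay(in_path, out_path, opacity=0.05):
        # Blend faint random noise into the photo: nearly invisible to
        # people, but it degrades the image's value as clean training data.
        img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)
        noise = np.random.uniform(0, 255, img.shape).astype(np.float32)
        blended = (1.0 - opacity) * img + opacity * noise
        Image.fromarray(blended.clip(0, 255).astype(np.uint8)).save(out_path)

    add_noise_overlay("portrait.jpg", "portrait_protected.jpg")

Keeping the opacity low preserves how the picture looks to human viewers while still introducing noise into any model that scrapes it for training.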
Add Timbre Watermarks to Audio Clips
Users can also add invisible watermarks to audio clips before uploading them online to detect whether their voice is being replicated digitally.
One example is timbre watermarking, a process that embeds digital identifiers into a voice recording so that illegally impersonated audio can be traced and flagged.
Unlike adding watermarks to images, inserting timbre watermarks into audio clips requires a bit of coding knowledge, as only a few free tools available online can add such digital markers.
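As a rough sketch of the concept, the Python example below mixes a low-amplitude, key-derived noise pattern into a recording and later checks for it by correlation. It assumes the NumPy and soundfile libraries are installed, the file names are placeholders, and it is a simplified spread-spectrum stand-in rather than the actual timbre watermarking method, which embeds identifiers in the frequency-domain features of the voice.

    import numpy as np
    import soundfile as sf

    def embed_watermark(in_path, out_path, key=42, strength=0.002):
        # Mix a faint pseudo-random pattern, derived from a secret key,
        # into the recording; at this strength it is effectively inaudible.
        audio, rate = sf.read(in_path)
        pattern = np.random.default_rng(key).standard_normal(audio.shape)
        sf.write(out_path, audio + strength * pattern, rate)

    def detect_watermark(path, key=42, strength=0.002):
        # Correlate the clip against the key's pattern: scores near 1.0
        # suggest the watermark is present; unmarked audio scores near 0.0.
        audio, _ = sf.read(path)
        pattern = np.random.default_rng(key).standard_normal(audio.shape)
        return float(np.mean(audio * pattern) / strength)

    embed_watermark("voice.wav", "voice_marked.wav")
    print(detect_watermark("voice_marked.wav"))

Because only someone holding the key can regenerate the pattern, the watermark can later serve as evidence that a circulating clip was derived from the original recording.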