It will be a while before AI-generated content stops causing problems, and that day may never arrive. In the meantime, companies and developers continue to build tools that detect such content and separate original photos from AI-generated ones.
'Verify' Website
We now have access to watermark technology that is invisible to the naked eye and survives photo alterations no matter what an editor does. So far, however, this tech has mostly been applied to images produced by generative artificial intelligence.
With a web tool called "Verify," you can now check the authenticity of an image free of charge. Several parties, including news organizations, camera manufacturers, and tech companies, have banded together to back the tool, as reported by PC Mag.
The site is already supported by camera giants like Sony, Canon, and Nikon. For the tool to work, the scanned image needs to carry a digital signature; when it does, authenticating information such as the date and location of capture becomes available.
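The idea behind such signatures is that any change to the pixels or the metadata invalidates the signature, so a verifier can prove the file is untouched. The real in-camera systems backed by Sony, Canon, and Nikon use asymmetric cryptography with embedded certificates; the sketch below is a stdlib-only simplification using an HMAC (a shared secret standing in for the camera's signing key, which is a hypothetical name here) just to illustrate tamper detection.

```python
import hashlib
import hmac
import json

# Hypothetical key held by the camera; real systems use an asymmetric
# private key plus a certificate chain, not a shared secret.
SECRET_KEY = b"camera-private-key"

def sign_image(image_bytes: bytes, metadata: dict) -> str:
    """Sign the image bytes together with the capture metadata."""
    payload = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """Return True only if neither pixels nor metadata were altered."""
    expected = sign_image(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

photo = b"\x89fake-image-bytes"
meta = {"date": "2023-12-05", "location": "Tokyo"}
sig = sign_image(photo, meta)

print(verify_image(photo, meta, sig))            # untouched file: True
print(verify_image(photo + b"edit", meta, sig))  # altered pixels: False
```

Because the date and location are folded into the signed payload, editing the metadata fails verification just as surely as editing the pixels.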
These camera companies already have, or are developing, cameras that can produce such digital signatures. For instance, Nikon already sells mirrorless cameras with built-in authentication technology.
Sony and Canon are also set to release camera models that can add a digital signature to photos for the same purpose. Canon already tested the technology last October, and a release may follow soon.
This could be a useful tool, especially for photojournalists or anyone who makes a living capturing original images. Other tech giants, particularly those also developing generative AI tools, have released tools that serve the same goal.
Google DeepMind SynthID
Just as camera and news companies are developing watermark tech for original photos, companies like Google have also found a way to put a tag on AI-generated images so people cannot pass them off as real photos.
Google DeepMind has developed a watermark that cannot be seen in the photo. Through SynthID, AI tools automatically mark generated images, and the same tool can later detect that mark, as reported by Interesting Engineering.
The markers are embedded directly into the pixels, making them imperceptible. Filters, changes to color composition, and lossy compression will not remove them. Google acknowledged that while AI can "unlock huge creative potential," it "also presents new risks, like enabling creators to spread false information."
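SynthID's actual embedding technique is proprietary and, unlike the toy below, is designed to survive filters and lossy compression. Purely to illustrate the general idea of hiding bits inside pixel values without visibly changing the image, here is a fragile least-significant-bit sketch; the `MARK` tag and both helper functions are hypothetical names for this illustration.

```python
# Toy illustration only: hide a short tag in the lowest bit of each
# pixel value. This is NOT SynthID's method; it would not survive
# compression or filtering, whereas SynthID's markers are built to.

MARK = "AI"  # hypothetical 2-byte tag to embed

def embed(pixels: list[int], tag: str) -> list[int]:
    """Write the tag's bits into the least-significant bits of pixels."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    out = pixels.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: list[int], length: int) -> str:
    """Read `length` bytes back out of the pixels' lowest bits."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

image = list(range(100, 150))      # fake grayscale pixel values
marked = embed(image, MARK)
print(extract(marked, len(MARK)))  # the tag comes back out: AI
```

Each pixel value changes by at most 1, which is why such marks are invisible to the naked eye; the engineering challenge Google solved is making them also survive aggressive edits.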
This has proven true over the last year. Many deepfakes are emerging on social media, and some people are fooled by them, fueling misinformation. Bad actors can exploit this at a larger scale, which can be dangerous.
Several famous individuals have already been pulled into generative AI controversies. Tom Hanks, for instance, appeared in a video endorsing a dental plan that was entirely created with generative AI. The actor had to clarify on Instagram that he had no part in the ad.
One of the most widely shared deepfakes was the image of the Pope in a white puffer jacket, dubbed "Balenciaga Pope." Many admitted they did not realize it wasn't real until others pointed it out.