OpenAI Still Unsure About Releasing Its DALL-E 3 AI Image Detector

A cloud has hung over AI companies for many reasons, one of which is the misuse of their generative AI tools. OpenAI created an AI image detector to address some of the current and future issues, but the company is still debating whether or not it should be released.

(Photo: Didem Mente/Anadolu Agency via Getty Images)

OpenAI's Image Detector

The AI company has been discussing the detector tool, specifically when it should be released, as it can determine whether an image was created by its own AI image generator, DALL-E 3. However, it seems that OpenAI has yet to reach a decision.

OpenAI researcher Sandhini Agarwal said that there's a question of "putting out a tool that's somewhat unreliable, given that decisions it could make could significantly affect photos, like whether a work is viewed as painted by an artist or inauthentic and misleading."

Agarwal said that while the tool's accuracy is "really good," it still has not reached OpenAI's standard for quality. However, OpenAI CTO Mira Murati claims that the AI image detector is 99% reliable at identifying unmodified images generated by DALL-E 3.

According to an unpublished blog post shared with TechCrunch, the AI image classifier for DALL-E 3 "remains over 95% accurate when an image has been subject to common types of modifications, such as cropping, resizing, JPEG compression," and more.
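As a rough illustration of what those "common types of modifications" look like in practice, here is a minimal Python sketch that applies cropping, resizing, and JPEG re-compression to an image before handing it to a detector. The `detect` callable is a hypothetical stand-in for OpenAI's unreleased classifier, which has no public API; the image edits themselves use the Pillow library.

```python
from io import BytesIO

from PIL import Image  # Pillow: pip install Pillow


def common_modifications(img: Image.Image):
    """Yield (name, image) pairs for the edits the blog post mentions:
    cropping, resizing, and JPEG compression."""
    w, h = img.size
    # Crop away a 10% border on every side.
    yield "crop", img.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10))
    # Downscale to half resolution.
    yield "resize", img.resize((w // 2, h // 2))
    # Round-trip through lossy JPEG compression in memory.
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=60)
    buf.seek(0)
    yield "jpeg", Image.open(buf)


def robustness_check(img: Image.Image, detect) -> dict:
    """Run `detect` (a hypothetical classifier returning a label or score)
    on each modified copy of the image."""
    return {name: detect(modified) for name, modified in common_modifications(img)}
```

An evaluation along these lines, run over a large set of DALL-E 3 outputs and reporting the fraction still classified as AI-generated, is presumably how a figure like "over 95% accurate" would be measured.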

The company might be going above and beyond to perfect the detector because it has had bad experiences with such tools before. For instance, the AI-generated text detector it previously released was not as accurate as the company had hoped.

It could detect AI-generated text not only from OpenAI's own products but from others as well. Eventually, the company had to take the tool down due to its low accuracy. As with AI image detectors, a tool that mistakenly flags original work as AI-generated can do real damage.

Agarwal implied that it is also unclear how to determine whether an image counts as AI-generated once it has been edited outside the AI tool. "Right now, we're trying to navigate this question, and we really want to hear from artists and people who'd be significantly impacted by such tools," she said.

The tool that OpenAI is developing only works on images generated by DALL-E 3 because detecting its own outputs is a "much more tractable problem." However, the company has not ruled out a general detector entirely and might look into one depending on how the current tool fares.

Read Also: OpenAI is Offering Grants to Anyone Who Can Help Make AI Ethical

Why a Detector Is Important

It's not just about artists claiming they created an artwork on their own despite using generative AI tools. A detector could also be useful in the ongoing fight against misinformation, since generative tools let people create media that appears to support false claims.

There have already been several cases where deepfakes have been used on social media, especially in advertising. Famous personalities like Tom Hanks and MrBeast have been victims of this kind of fraudulent activity.

With an AI image classifier tool, people would have an easier time telling whether an image is genuine, especially since not everyone has the skill to spot a fake on their own.

Related: OpenAI Plans to Produce Own AI Chips
