OpenAI, Meta, Google, and other AI companies are reportedly planning to launch new protective measures to fight online child exploitation as concerns rise amid the growing popularity of AI.
According to The Wall Street Journal, major AI firms are teaming up to impose further restrictions on users who abuse their AI tools to create sexualized images of real-life children.
The effort will be led by Thorn, a nonprofit advocacy group that combats child sexual abuse online, and will focus on blocking services that "nudify" images of women and children.
The announcement comes after Stanford University research highlighted the need to strengthen law enforcement's handling of online child safety reports before AI makes the problem worse.
According to the study, the CyberTipline, the US's dedicated reporting line for child exploitation, "will just be flooded with highly realistic-looking AI content," making it harder for officials to rescue actual victims.
The companies have formed several alliances since last year, pledging to emphasize "safety, security and trust" in AI development amid concerns about the technology's risks to online safety and data privacy.
Child Abuse, Exploitation Increase Amid Innovations in Deepfake Tech
Concerns about AI perpetuating child sexual abuse and exploitation have grown as the technology produces increasingly realistic images while protective measures lag behind.
Earlier reports from the Federal Bureau of Investigation even hinted at an online "black market" in which pedophiles and sexual predators feed images of real-life children into AI image generators.
The technology was also reportedly used as a form of cyberattack, with some cases leading victims to take their own lives.
However, the problem only gained wide recognition after explicit deepfake images of American singer Taylor Swift went viral earlier this year, raising concerns among both parents and lawmakers about the real risks the technology poses in a largely unregulated online space.
Tech Firms Criticized for Collecting Children's Personal Data
Despite their numerous pledges, several companies involved in the alliance have also been accused of putting children on their platforms at risk.
Meta and Google in particular have landed in hot water numerous times after federal investigations found the tech giants illegally collecting minors' data to sell to advertisers.
In 2019, YouTube, a subsidiary of Google, was fined $170 million for collecting children's personal data without their parents' or guardians' consent.
More recently, court filings unsealed last November alleged that Meta had knowingly collected personal data from users under 13 without parental consent.
The Federal Trade Commission has already warned AI firms against using deceptive methods to collect people's data for their training datasets while proper regulations on the technology remain pending.