ChatGPT Advanced Voice Mode Cloned User's Voice During Testing, Tester Calls It 'Black Mirror'

Technology gone wrong is the core premise of Netflix's Black Mirror, yet real-world scenarios keep drawing comparisons to the fictional show, most recently OpenAI's ChatGPT Advanced Voice Mode. During its limited testing, Advanced Voice Mode reportedly cloned a user's voice without being prompted to do so.

A participant in the testing said the incident could be the plot of the next Black Mirror episode.

ChatGPT Advanced Voice Mode Cloned User's Voice in Testing

(Photo: Jaap Arriens/NurPhoto via Getty Images)
A new release from OpenAI details its testing of ChatGPT's Advanced Voice Mode, a feature that will let users converse with the AI voice-to-voice. In the "system card" it published, the company described what it found while testing the feature, including an incident in which the technology accidentally cloned a user's voice.

The system card documents ChatGPT's preparedness for the risks that come with the new Advanced Voice Mode, drawing in particular on the company's red-teaming procedures.

In the incident described, the model was given noisy audio input, which led Advanced Voice Mode to copy the user's voice unprompted and continue generating output in it.


Limited Testers Claim It Is Like Netflix's Black Mirror

In a post, BuzzFeed data scientist Max Woolf joked that OpenAI had "leaked" the plot of a Black Mirror episode by detailing what happened when ChatGPT copied the user's voice.

Other users who said they were part of the alpha testing also reported accidental voice cloning, saying the incident threw them off.

ChatGPT's AI Gone Wrong Moments

OpenAI may be one of the most renowned AI companies in the world, and ChatGPT is among the most popular and recognizable AI systems, but the company has faced a number of controversies and issues in recent months. One of the most common complaints about ChatGPT is its hallucinations, a problem shared by most companies offering AI. "Hallucinations" are instances in which an AI fabricates information that ends up being misleading or false.

Another long-standing issue with OpenAI's large language model (LLM) training is its use of user data, though the company has since offered settings that let users limit how their data is used.

The company has also previously faced lawsuits over unauthorized use of online content, particularly from authors of books and novels, who claim it scraped their work off the web.

Given that AI is relatively new to the tech world, many are calling for regulation of AI companies, especially when it comes to how models are trained and developed. Governments and other organizations are already working on such regulations.


