Alexa's latest feature can mimic the voice of almost anyone, living or dead, by learning from just a minute's worth of audio.

Alexa Can Impersonate Almost Anyone's Voice
Rohit Prasad, Amazon's senior vice president and head scientist for Alexa, announced a slew of new and soon-to-be-implemented features for the company's well-known smart assistant during the annual re:MARS conference in Las Vegas, according to Mashable and Android Authority.
The most intriguing of them all, though, was Alexa's new ability to synthesize realistic spoken passages from brief audio samples. In other words, it can take a short voice recording and use it to produce entirely new spoken phrases in the same tone and timbre.
During the event, the Alexa team demonstrated the capability with a scenario in which Alexa reads a bedtime story to a young boy in the voice of his deceased grandmother.
Prasad explained that, unlike previous similar technologies that would have required hours of studio work, Alexa can mimic just about anyone's voice after listening to less than a minute of recorded audio.
While AI cannot completely take away the sorrow of losing a loved one, Prasad asserted on stage that it "can definitely make their memories last."
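Amazon has not published technical details of how Alexa achieves this. For readers curious about the general technique, the open-source Coqui TTS library offers a comparable capability: its YourTTS model conditions a multi-speaker text-to-speech system on a short reference clip of the target voice. The Python sketch below is purely illustrative, and the file names in it are hypothetical; it does not reflect Amazon's actual implementation.

# Illustrative few-shot voice cloning with the open-source Coqui TTS
# library (YourTTS model). This is a sketch of the general technique,
# not Amazon's method. "voice_sample.wav" is a hypothetical clip of
# roughly one minute of the target speaker.
from TTS.api import TTS

# Load a pretrained multilingual, multi-speaker voice-cloning model.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# Generate new speech in the voice heard in the short reference recording.
tts.tts_to_file(
    text="Once upon a time, in a land far, far away...",
    speaker_wav="voice_sample.wav",  # short reference clip of the target voice
    language="en",
    file_path="bedtime_story.wav",
)

In systems of this kind, the short clip is distilled into a speaker embedding that steers the synthesizer's output toward the target's tone and timbre, which is why no lengthy studio session is needed.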
Is Alexa's New Feature Good or Bad?
The scope of what intelligent voice assistants can do is expanding at an astounding rate.
Indeed, thanks to developments in machine learning, AI has reached new heights of human-like behavior. However, not everyone may applaud a feature like this.
In fact, Amazon's decision to teach Alexa to imitate human voices was met with mixed reactions. Many people are particularly worried about the potential for criminals to abuse the technology. For starters, it could be used to create malicious deepfakes of living people's voices.
Rachel Tobac, a hacker and the CEO of SocialProof Security, warned that the tool could enable impersonation in phone-based attacks.
The phone attacking implications of this tool are not good at all — this will likely be used for impersonation.

At Amazon’s re:MARS conference they announced they’re working to use short audio clips of a person’s voice & reprogram it for longer speech https://t.co/5TkEIHoeXG

— Rachel Tobac (@RachelTobac) June 22, 2022
Tobac also tweeted: "But the easier it is to use, the more it will be abused. And this sounds like it may be pretty user friendly."
The new Alexa feature's security has also drawn criticism from Twitter user @bitty_in_pink, who warned that criminals might use it to call someone's family members and beg them for cash, bank information, or Social Security numbers.
Umm, so how soon will criminals be able to use it to call your family members begging them to Venmo cash? Or ask them for social security numbers? Or bank information?
— 🌈bitty_in_pink 💫 (@bitty_in_pink) June 22, 2022
Twitter user @themaltesemama, who lives 3,000 miles away from her parents, is in favor of the technology, since she can use it to help her parents, who have dementia, age in their own house.
i hated it too until i installed amazon alexa devices throughout my parents home to help them age in their house with their dementia when i live 3000 miles away. we have caregivers going daily but being able to peak in or even better drop in with a video call is amazing.
— maltese mama 🇺🇸💙🇺🇸 (@themaltesemama) June 22, 2022
Although there is no word yet on when this new feature will be released, it wouldn't be at all surprising to see it implemented soon, given how swiftly the technology is evolving.
What are your thoughts? Should AI personal assistants be permitted to imitate human voices, especially those of the deceased?
Related Article: Microsoft Has Decided to Restrict Access to Its Facial Recognition Tools