Microsoft's AI-powered "Recall" feature might be more dangerous than initially thought, after a security researcher demonstrated how easily its stored data can be extracted.
In a blog post on Monday, Google Project Zero researcher James Forshaw detailed the security and privacy risks the new Copilot feature poses to users and to the data it quietly stores on their machines.
According to Forshaw, hackers can access the "Recall" feature's database without needing administrator privileges, simply by rewriting the permissions on a few of its files.
Once inside, attackers can steal and copy "all those juicy details" the Copilot AI has recorded on the device.
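Forshaw's point boils down to this: because the captured data lives in an ordinary local database under the user's profile, loosened file permissions are all it takes for any code running as that user to read it. The sketch below illustrates that end state using a hypothetical path and table layout (not Microsoft's actual schema), with nothing beyond Python's standard library.

```python
# Illustrative sketch only: once the permissions on a Recall-style local
# database have been loosened, any code running under the same user account
# can read it with nothing but the standard library. The path, table, and
# column names below are hypothetical placeholders, not Microsoft's schema.
import sqlite3
from pathlib import Path

# Hypothetical location of a per-user activity database (assumption).
DB_PATH = Path.home() / "AppData" / "Local" / "ExampleRecallStore" / "activity.db"


def dump_captured_activity(db_path: Path) -> None:
    """Print the timestamp, window title, and extracted text of each capture."""
    conn = sqlite3.connect(str(db_path))
    try:
        rows = conn.execute(
            "SELECT captured_at, window_title, extracted_text FROM captures"
        )
        for captured_at, window_title, extracted_text in rows:
            print(captured_at, window_title, extracted_text[:80])
    finally:
        conn.close()


if __name__ == "__main__":
    if DB_PATH.exists():
        dump_captured_activity(DB_PATH)
    else:
        print("No database at the hypothetical path; this is a sketch only.")
```

In other words, once the file permissions fall, no exploit code is needed at all; a plain database query is enough to walk away with the recorded activity.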
As of this writing, Forshaw and other security researchers have yet to determine whether threat actors have already exploited the weakness.
Microsoft has yet to respond to Forshaw's latest revelations.
What Is the Copilot 'Recall' Feature?
The "Recall" feature is part of the recently released Copilot+ PCs system that allows Microsoft's chatbot to remember everything the users have "seen or done on your PC" as a glorified "spyware."
The feature basically works by silently taking a screenshot on the user's desktop at least every five seconds and storing it within its systems for the user to re-access exact chatlogs, tabs, and apps anytime.
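To make the mechanism concrete, the sketch below reproduces the capture-and-store loop the feature is described as using: take a full-screen screenshot on a fixed interval and save it locally. It is an illustration of the pattern only, not Microsoft's implementation, and it relies on the third-party Pillow library for the screenshot call; the directory and interval are assumptions.

```python
# Minimal sketch of the capture loop described above: grab the screen every
# few seconds and store the image locally. This illustrates the pattern, not
# Microsoft's implementation; it uses the third-party Pillow library
# (pip install pillow) and writes plain PNG files rather than an indexed,
# searchable store like Recall's.
import time
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab  # Pillow's cross-platform screenshot helper

CAPTURE_DIR = Path.home() / "recall_demo_captures"  # illustrative location
INTERVAL_SECONDS = 5  # roughly the cadence described for Recall


def capture_loop() -> None:
    """Take a full-screen screenshot on a fixed interval and save it to disk."""
    CAPTURE_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        shot = ImageGrab.grab()  # full-screen screenshot
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        shot.save(CAPTURE_DIR / f"capture_{stamp}.png")
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    capture_loop()
```

The sensitivity of such a store is obvious from the sketch alone: it accumulates a near-continuous visual log of everything displayed on the screen.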
While Microsoft has assured users that all of this data is stored locally on the device and kept offline, that has not stopped many from calling out the feature's dangers, especially given Microsoft's recent track record on cybersecurity.
Microsoft Scrutinized for 'Inadequate' Security Measures Amid AI Ramp-Up
Just this past April, the Cyber Safety Review Board took Microsoft to task over inadequate security measures following a corporate-wide cyberattack the company disclosed last year.
According to the board, the breach was "preventable" and traced back to the company having "deprioritized enterprise security investments and rigorous risk management."
That incident is separate from a massive data breach disclosed in January, in which hackers accessed emails belonging to Microsoft's senior leadership as well as portions of its corporate source code.
Microsoft claimed the cyberattacks did not affect users' data, although security experts have recommended an "overhaul" of the company's security systems.
It does not help that the tech giant has already reported instances of its own AI chatbots and OpenAI's models being used by state-sponsored hackers to help automate cyberattacks in the US.