Congress Prohibits Microsoft Copilot from Its Devices Due to Potential Leaks

The government needs to be extra cautious regarding the apps used on its devices, given that their access could lead to the leak of sensitive information. This is why ChatGPT and TikTok have been banned from the mentioned devices, and now Microsoft Copilot joins that list.


(Photo : Jonathan Raa/NurPhoto via Getty Images)

Copilot Blocked on Congress-Owned Devices

US Congressional staff can still use the chatbot on their own devices, but it is now prohibited from devices owned by the governing body. This stemmed from the concern that the chatbot be the cause of the leak of sensitive data to "non-House approved cloud service."

The new policy was announced through a memo from House Chief Administrative Officer Catherine Szpindor, officially prohibiting Copilot just like ChatGPT was a year ago. It was deemed too risky by the Office of Cybersecurity, as reported by Engadget.

The US government is becoming more strict when it comes to anything relating to AI, especially with all the concerns and issues that have come to light since its emergence, from the unconsented use of training data to the chatbot revealing private information.

OpenAI chatbot ChatGPT does differ in one aspect. It has not been completely restricted to government-issued devices. The paid version, ChatGPT Plus, has better privacy controls. With the level of security, it is being used for research and evaluation.

Microsoft is already working on a Copilot version that would be better suited to handle sensitive government data. The new version of Microsoft 365's Copilot assistant with have higher levels of security, although it will have to be evaluated first before approval for use.

Read Also: Certain ChatGPT Prompts Can Generate Sensitive, Copyrighted Data

How Can Chatbots Be Security Risks?

Chatbots such as Copilot, ChatGPT, and Gemini are capable of answering complex questions due to the data it was trained with. The data in question are taken from the internet and other dataset sources. What others might not know is that the chatbot learns from conversations with it as well.

What you type can be saved, which is why you need to be careful about the kind of information you reveal to the AI chatbots. Companies like OpenAI offer options for users to disable chat histories so they cannot be used for training, but that still might not be enough.

With the right prompts, threat actors might be able to pull information from the chatbot. This has already happened before as researchers tried to look for vulnerabilities. While the issue has since been fixed, ChatGPT provided private data when it was asked to repeat random words.

In a blog post, the researchers detailed that after the words were repeated, the chatbot eventually dropped a real email and phone number from an unsuspecting entity. "It's wild to us that our attack works and should've, would've, could've been found earlier," the researchers wrote.

Hackers and other bad actors might be able to find another exploit that would expose vulnerabilities in AI models, prompting them to reveal more sensitive data. Since it learns from conversations, all it could take to draw out House data is the right prompt.

Related: ASCII Art Can Be Used to Circumvent Chatbot Restrictions Preventing Harmful Responses

© 2024 iTech Post All rights reserved. Do not reproduce without permission.

More from iTechPost