AI Chatbots Could Pose a Security Threat, UK Spy Agency Warns

ChatGPT might not be as secure as people like to think.

A UK spy agency recently warned that AI chatbots, including ChatGPT, could pose a security threat to the individuals and companies using them because the queries users submit are visible to, and stored by, the organizations that provide the services.

AI chatbots have become increasingly popular in recent months due to the success of ChatGPT, which OpenAI introduced in late 2022, per HITC.

Chatbot Security Concerns: The Details

The UK's National Cyber Security Centre, a unit of the intelligence agency GCHQ, recently published a blog post outlining risks to individuals and companies from using "a new breed of powerful AI-based chatbots."

According to the British spy agency, while a chatbot or large language model (LLM) does not add information from users' queries to its model for others to query, the queries themselves are visible to the organization providing the LLM or chatbot. That organization stores them and will almost certainly use them to develop the LLM service or model at some point.

This process means that OpenAI, the provider of ChatGPT, can see, read, and store the queries people submit to ChatGPT, and may even incorporate them in some way into future versions. This creates a risk for sensitive queries, such as when a CEO asks a chatbot "how best to lay off an employee" or when someone asks it health questions.

Rasmus Rothe, cofounder of AI investment platform Merantix, said that the main security issue chatbots pose is that they learn from user interactions, which could include learning and repeating confidential information, per Business Insider.

Additionally, the British spy agency recommends that people thoroughly review and understand a chatbot's terms of use and privacy policy before asking sensitive questions.

The agency also identified another risk chatbots pose to individuals and companies: the queries LLM providers store in their systems, which may include user-identifiable information, could be hacked, leaked, or accidentally made publicly accessible.

Additionally, an organization with a different approach to privacy could acquire a current LLM provider, potentially exposing the queries and information that provider has stored in the process.

Other risks include criminals using chatbots to write "convincing phishing emails" and to execute cyberattacks beyond their own skill level.

Solutions for Chatbot Security Concerns

Aside from reading and understanding privacy policies and terms of use, the British spy agency also recommends that chatbot users not include sensitive information in queries to public LLMs or chatbots.

It also advises people not to submit queries to public LLMs like ChatGPT that would cause problems if they were made public.
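As a rough illustration of how that advice could be enforced before a prompt ever leaves a user's machine, here is a minimal Python sketch of a pre-submission check that redacts common personally identifiable patterns from a query. The patterns and the `redact_sensitive` helper are assumptions for illustration only, not NCSC guidance or any vendor's product:

```python
import re

# Illustrative patterns only; a real deployment would need far broader
# coverage (names, addresses, internal project codenames, and so on).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_sensitive(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder
    before the prompt is sent to a public chatbot."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    query = "Draft a termination letter for jane.doe@example.com, SSN 123-45-6789."
    print(redact_sensitive(query))
    # -> Draft a termination letter for [REDACTED EMAIL], SSN [REDACTED SSN].
```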

Immanuel Chavoya, a senior security manager at cybersecurity company SonicWall, said that businesses need to ensure they have strict, tech-backed policies in place to control and monitor the use of LLMs and minimize the risk of data exposure, per The Telegraph.
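What such a "tech-backed policy" might look like in code: below is a minimal, hypothetical sketch of a gateway check a company could run on every outbound chatbot query, blocking anything that matches a deny-list and logging every decision for audit. The deny-list terms and logging setup are assumptions for illustration, not SonicWall's or any other vendor's actual tooling:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

# Hypothetical deny-list a company might maintain; real policies would
# be far more sophisticated (classifiers, data-loss-prevention tools).
BLOCKED_TERMS = {"confidential", "internal only", "customer list", "source code"}

def policy_check(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to a public LLM.
    Every decision is logged so chatbot usage can be monitored."""
    lowered = prompt.lower()
    hits = [term for term in BLOCKED_TERMS if term in lowered]
    if hits:
        log.warning("Blocked query from %s (matched: %s)", user, ", ".join(hits))
        return False
    log.info("Allowed query from %s", user)
    return True

if __name__ == "__main__":
    assert policy_check("alice", "Summarize this public press release.")
    assert not policy_check("bob", "Rewrite our confidential customer list.")
```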

Alternatively, companies can forbid their employees from using chatbots altogether, as Amazon and JPMorgan have done.

Meanwhile, OpenAI states that it reviews conversations with ChatGPT to improve its systems and ensure the content complies with its policies and safety requirements.
