As AI continues to advance, we can expect the technology to be integrated into more systems across industries. Even so, it's hard to ignore that most, if not all, AI models still need work to address certain ethical issues.
AI Risks in Healthcare
The healthcare sector deals with several aspects of patient care, including diagnosing conditions and storing patient data. While the former could benefit from the processing capabilities of AI models, the latter raises security concerns around how personal data is stored.
Researchers from the University of Oxford found that some healthcare providers use AI chatbots like ChatGPT to create care plans for patients, and Dr. Caroline Green, a research fellow at Oxford's Institute for Ethics in AI, flagged problems with the practice.
There's a reason companies that handle user data build layer upon layer of cybersecurity: that information in the wrong hands could be disastrous, to say nothing of the legal ramifications for the company that dropped the ball.
If personal data is entered into a generative AI chatbot, "that data is used to train the language model," Dr. Green said, per The Guardian. As a result, that data might resurface in the chatbot's responses; all it would take is the right prompt.
On top of that, chatbots are not exactly reliable sources of credible information, given their tendency to hallucinate and make up answers. Healthcare providers could end up acting on biased or false information that harms the patient.
Despite the risks, the benefits should not be ignored, especially since they could be realized once the ethical problems are addressed and the chatbots are safe to use. For one, AI could ease the workload of medical practitioners, a well-documented struggle in the industry.
"It could help with this administrative heavy work and allow people to revisit care plans more often. At the moment, I wouldn't encourage anyone to do that, but there are organizations working on creating apps and websites to do exactly that."
AI Chatbots for Personal Diagnosis
It's not just medical establishments that could benefit from the development of AI models. While it's not advisable to rely on AI chatbots for a diagnosis, their extensive knowledge of symptoms and possible treatments might point patients in the right direction.
For example, 32-year-old Katie Sarvela was experiencing several symptoms but had no clue what was causing them. She listed them in ChatGPT, and after noting that it couldn't diagnose her because it wasn't a doctor, the chatbot mentioned multiple sclerosis.
The symptoms Sarvela mentioned included night blindness, half of her face feeling like it was on fire or going numb, and her skin feeling wet, among others. Both Sarvela and her neurologist were amazed that ChatGPT jumped to the right diagnosis, as per CNET.
Of course, users will still need a doctor's consultation and tests to confirm their suspicions, but chatbots like ChatGPT could help narrow down the potential conditions, saving time and resources.