Artificial intelligence is making its way into a growing number of industries, so it's no surprise that countries are seizing the opportunity to develop the technology for their own uses. China aims to contribute to the healthcare sector by developing a chatbot to aid surgeons.
AI Chatbot for Surgeons Using Llama 2
The chatbot, called CARES Copilot 1.0, can answer a wide range of medical queries and even provides citations so that its responses can be verified. It pulls its answers from millions of academic records.
The Chinese government is backing the Hong Kong-based agency currently developing the AI bot, which operates under the Chinese Academy of Sciences, the country's foremost state-backed scientific institute, as reported by Interesting Engineering.
The bot goes beyond answering questions: it can ease medical staff's workload by processing images, audio, and text, as well as MRI, ultrasound, and CT scans. This makes the AI tool an effective assistant to healthcare practitioners.
While Meta's Llama 2 serves as the basis, the Chinese Academy of Sciences calls its AI model TaiChu, and it already had many applications before CARES Copilot 1.0. It has since been improved for better decision-making and cognition.
The healthcare AI assistant is already being tested in seven hospitals across Beijing, and the government plans to test the tool in other cities in the coming months.
Feng Ming, chief neurosurgery physician at Peking Union Medical College Hospital, said that the technology still faces obstacles, including restricted computing power caused by the ban on access to Nvidia's advanced chips.
"However, we can develop a vertical model with our own characteristics with more high-quality data from top hospitals in the mainland, which is not available for OpenAI and many domestic private companies," Dr. Ming continued.
AI in Healthcare Still Comes with Risks
Even though AI applications in the healthcare sector look promising, the technology has not yet matured to the point where the potential risks are insignificant. Researchers at the University of Oxford believe security issues may still arise.
As the AI lessens medical staff's workload, private patient data will also be fed into the underlying model. Dr. Caroline Green, a research fellow at Oxford's Institute for Ethics in AI, points out that this can also be a weakness.
If any type of personal data is put into a generative AI chatbot, it could be used to train the language model, said Dr. Green. As The Guardian notes, that data could then be regenerated and revealed to other users; all it takes is the right prompt.
Social care worker Mark Topps said that until regulators release guidance, "a lot of organizations won't do anything because of the backlash if they get it wrong." In a meeting, 30 social care organizations discussed the responsible use of AI and intend to create a good-practice guide within six months.