Could AI impact your next visit to the GP?
Imagine this: you go to your GP, you sit down and get ready to talk about a medical issue that’s been bugging you for ages. But then they tell you that they’re using AI to record the conversation for their medical records. Wait, what?
Artificial intelligence is making its way into all facets of our society. ChatGPT, e-commerce and the development of self-driving cars are all things you’ve probably heard of recently – but should we be worried about the use of AI in medicine?
We spoke to MEJ Associate, Samuel Wolfhagen, who has been looking into the impact of AI technology on the medical field and the rights of patients injured by medical AI. We asked him how AI is already being used by our doctors, and how it might impact your next visit to the GP.
What is medical AI?
Artificial Intelligence (AI) tools are varied, and some have been around for a while.
AI can make decisions, learn, reason or adapt. Essentially, these are technologies with capabilities more commonly associated with intelligence or autonomy. Until now, we’ve been used to seeing these characteristics only in humans.
In healthcare, the AI tools that are currently emerging – and the most natural fit for the industry – appear to be machine learning algorithms and large language models. Machine learning algorithms take a large dataset, learn from it, and then use pattern recognition to answer questions. In the medical field, these algorithms might be used as diagnostic tools, with the ability to interpret scan results.
Large language models, like ChatGPT, are being trialled for their ability to answer patients’ medical questions, or to evaluate lists of symptoms and suggest the most likely causes. This takes ‘Dr Google’ to the next level.
Am I likely to encounter AI on my next visit to the GP’s office?
You may already be interacting with AI-powered healthcare without realising it, even through something as simple as automated appointment reminders. In terms of more advanced, modern AI, research indicates that it’s likely to be just around the corner.
Some Australian medical practitioners have started using advanced AI-powered transcription software to create records of appointments. The tool ‘listens’ to the appointment, generates a transcript of what the patient and the doctor said, and then produces a summary clinical note based on what the doctor asks it to summarise. This is known as using a ‘digital scribe’.
A medical practitioner should tell you if they plan to use a digital scribe and seek your consent to do so. If you have concerns about how its data is to be used, stored and disposed of, you should ask the practitioner for clarification about how the tool they are planning to use manages your information and records.
Do we need to be worried? Will AI lead to mistakes?
I personally advocate for healthy suspicion and concern rather than worry or fear. After all, it is a matter of when, not if, the medical field starts using AI tools on a large scale. The more we can understand it, the more opportunity there will be to protect the rights of individuals as the technology is implemented.
The healthcare industry, among others, is excited about what medical AI tools can do for the affordability, efficiency and accuracy of healthcare. This excitement is the driving force behind the development and adoption of these products. Those opportunities are real and worthy of consideration, but they must be weighed against the risks.
Whilst the early data is promising, it is likely that we will see AI cause harm to patients, especially in the early stages. However, human doctors also make mistakes and cause injury, and they have done so since humans first started providing medical care to each other.
Patients ought to be given all the relevant information, and there should be clear guidelines for the use of medical AI. What I don’t want to see are groups of vulnerable patients, especially those in rural or remote areas, being presented with a flawed or risky AI system as their only means of accessing medical screening and advice.
Are there rules in Australia to monitor the use of AI in medicine?
The Australian Government regulates the use of AI in the medical field through its primary medical regulator, the Therapeutic Goods Administration (TGA). The TGA will regulate AI medical technology when it is intended to be used as a ‘medical device’.
While I don’t think these measures are being put in place fast enough, it is encouraging that so many people in the industry are turning their minds to the implications of AI technology and are striving to obtain its benefits for medical consumers whilst managing the risks.
Should I be concerned about my privacy?
It is understandable to be concerned about your data and privacy in this era of technological innovation. Private companies and governments have more access to individual data, and hold more of it, than ever before. If you are concerned about how your medical information is being used, I would encourage you to ask your healthcare provider whether they currently use, or plan to use, medical AI, and to make your preferences and concerns clear.
Patients in Australia, as consumers of health services, have a right to privacy. This includes the way that records about your healthcare are stored and used.
What do I do if I’m worried about treatment I’ve received that used AI?
It’s important that you ask your GP about the ways they are using AI. All discussions about medical treatment should involve the process of seeking your informed consent. This generally means having the benefits, risks and alternative options fully explained to you by your doctor. If you are an adult with legal capacity, you have the right to refuse consent to any medical treatment.
If you think AI may have led to a mistake or its use has resulted in a worse health outcome, then it’s important you seek legal advice as soon as possible.