Healthcare AI: Australian Medical Association Calls for Stronger AI Regulations After Doctors Found Using ChatGPT

The Australian Medical Association (AMA) is calling for stronger regulations and transparency around the use of artificial intelligence (AI) in the healthcare industry after doctors in Perth were ordered to cease using ChatGPT to write patient medical notes.

In a submission to the federal government on July 25, the AMA said that Australia lags behind jurisdictions such as Canada, the United Kingdom, and the European Union in AI regulation, and that rules need to be put in place to protect the privacy and safety of patients and healthcare professionals.

“We need to address the AI regulation gap in Australia, but especially in healthcare where there is the potential for patient injury from system errors, systemic bias embedded in algorithms and increased risk to patient privacy,” AMA President Professor Steve Robson said in a statement on July 27.

“AI is a rapidly evolving field with varying degrees of understanding among clinicians, other health care professionals, administrators, consumers and the wider community.”

In May, some staff at Perth’s South Metropolitan Health Service (SMHS), which spans five hospitals, were reportedly using the AI chatbot ChatGPT to write medical notes that were then uploaded to the patient record system, according to the Australian Broadcasting Corporation (ABC).

"Crucially, at this stage, there is no assurance of patient confidentiality when using AI bot technology, such as ChatGPT, nor do we fully understand the security risks," according to an email obtained by the ABC that was sent to staff by SMHS’s chief executive, Paul Forden.

"For this reason, the use of AI technology, including ChatGPT, for work-related activity that includes any patient or potentially sensitive health service information must cease immediately."

The Epoch Times has confirmed the authenticity of the email with the SMHS.

In a statement, Mr. Forden said the email, sent to all staff in May, was a precautionary measure to remind staff of the importance of data integrity and patient confidentiality.

“This was in response to one doctor being found to have used artificial intelligence (AI) bot technology to generate a patient discharge summary. There are no grounds to believe there has been any breach in anyone’s individual identifiable patient confidential information. The information put into the AI program did not include patient identifiable information,” Mr. Forden said.

“While we recognise and value the use of AI technology in health, this must be done in a coordinated, considered, and approved manner, to ensure the safety and security of staff, patients, and our services.

“The South Metropolitan Health Service prides itself on being a health service that champions new technologies and innovation, but it is essential this is done in a safe and considered way.”

Biases in AI Algorithms

The AMA also warned that biases in AI algorithms can result in worse patient outcomes.

For instance, a study of pulse oximeters, devices that measure how much oxygen is present in a person’s bloodstream, found that they overestimated oxygen levels in patients with darker skin, a problem traced to the exclusion of diverse patient groups from the original clinical trials. The AMA said this puts people with darker skin at a higher risk of undetected hypoxia (low oxygen) and warned that AI applications in healthcare would face similar challenges.

“Therefore, the AMA argues that to avoid similar challenges with AI applications in healthcare, adequate regulation and regulatory protections must be inclusive and representative. We contend that the application of AI in healthcare must be relevant to the target population,” the AMA said in its submission responding to the Department of Industry, Science and Resources’ discussion paper.

Concerns over AI bias also extend to other industries, including the employment and financial sectors.

According to Reuters, biased AI-driven hiring algorithms may “inadvertently skew” towards certain profiles and demographics, while biased financial algorithms could potentially deny essential financial services to certain demographics based on baseless assumptions.

“It will take a concerted effort, collaboration across sectors, and constant vigilance. If left unchecked, AI bias has the potential to trigger significant societal ramifications, such as engendering less diverse workforces, increasing incarceration rates, or exacerbating income and wealth disparities. However, by implementing the right measures, we can successfully navigate this unknown terrain,” according to Reuters.

Final Decision Must Be Made by a ‘Human,’ AMA Says

The AMA said Australia should consider the proposed EU Artificial Intelligence Act, which defines levels of risk for AI applications (robot surgery, for example, would be classed as high risk), as well as Canada’s legislative requirement for human intervention during the decision-making process.

“Future regulation should ensure that clinical decisions that are influenced by AI are made with specified human intervention points during the decision-making process,” the AMA said.

“The final decision must always be made by a human, and this decision must be a meaningful decision, not merely a tick box exercise.

“The regulation should make clear that the ultimate decision on patient care should always be made by a human, usually a medical practitioner.”

The AMA said that such regulations would establish responsibility and accountability for any errors in medical diagnosis and treatment.

“In the absence of regulation, compensation for patients who have been misdiagnosed or mistreated by application of AI technologies will be impossible to achieve,” according to the AMA.

The AMA said that principles embedded in legislation around the use of AI should ensure the following: safety and quality of patient care; patient data privacy; medical ethics; equity of access and equity of outcomes through the elimination of bias; transparency in the algorithms used by AI; and that the final decision on treatment is made by the medical professional.

Reposted from: https://www.theepochtimes.com/world/peak-body-calls-for-stronger-ai-regulations-after-doctors-found-using-chatgpt-5428733
