Viewpoint: Potential Misuses of AI in Healthcare: Implications for Physicians (Part 2)
“I think that if you work as a radiologist you are like Wile E. Coyote … You’re already over the edge of the cliff, but you haven’t looked down yet.” — Geoffrey Hinton, father of neural networks, on how radiology will be affected by Artificial Intelligence
Nuclear power can be used to fuel our cities. It can also be used to destroy them. AI promises to change the world, but it could also wipe out humanity. Both technologies are value neutral. They can do great good or unimaginable harm. It all depends on who uses them, and how.
Artificial Intelligence (AI) applications in healthcare range from diagnostic tools and predictive analytics to decision support systems. AI in healthcare is advancing so quickly that the New England Journal of Medicine is planning a new journal, NEJM AI. While AI promises benefits in the provision of healthcare, we should be very skeptical about who is making the rules and how these tools will be used. Healthcare stakeholders, including hospitals, insurance companies and government, have a history of misdirecting and misusing technology. If “what is past is prologue,” these potential misuses could detrimentally affect physicians and patients.
Electronic Health Records (EHRs)
Like AI, Electronic Health Records (EHRs) were welcomed with great anticipation for their potential to revolutionize healthcare. EHRs were promised to do a lot: make medicine safer, bring higher-quality care, empower patients and even save money. However, the reality has been starkly different. Physicians have become enslaved to a system that prioritizes data collection over patient care, leading to widespread dissatisfaction. Physicians spend two hours on EHRs and desk work for every hour of direct patient care. Due to pressure from angry physician users, hospitals came up with a medieval solution: hiring tens of thousands of scribes to follow physicians around and enter their notes and orders into the EHR. Only in healthcare does “automate” mean adding staff and costs! AI now promises to eliminate the jobs of all these scribes.
An investigation by Kaiser Health News and Fortune magazine has found “A decade ago, the U.S. government claimed that eliminating paper medical charts for electronic records would make health care better, safer and cheaper. Ten years and $36 billion later, the digital revolution has gone awry.” EHRs were supposed to be a tool for physicians to improve care and make their work lives easier; that didn’t happen.
Will the rise of AI in healthcare replace much of the exclusive role of the physician? The answer depends on whether a doctor’s tasks can be redefined as standardized questions and responses. If they can, hospitals might decide that a lower-paid employee, such as a nurse practitioner or physician assistant, will play that role, with their decisions guided by AI.
AI Misuse in Hospitals:
“It is our duty to ensure that we’re using AI as another tool at our disposal — not the other way around.” — Dhruv Khullar, MD, a physician at New York-Presbyterian Hospital
In a recent survey of senior healthcare executives, hospitals report enthusiastically adopting AI to reduce costs, replace data entry, customer support and scribe jobs, and ostensibly enhance patient outcomes. However, these technologies may be misused in ways that adversely impact physicians and patients. One such misuse is overreliance on AI for diagnosis and treatment decisions, leading to a devaluation of physicians’ skills and expertise. Physicians may find themselves pressured by hospitals and patients to accept AI diagnoses while remaining legally liable. The collection and use of vast amounts of patient data by AI systems also raises significant privacy concerns. Physicians might be held responsible for privacy breaches as well, further tarnishing their reputation and reducing trust among patients.
Who gets the profits? According to research by David Dranove, Kellogg School of Management and Walter J. McNerney Professor of Health Industry Management, “although doctors may become more productive using AI, they won’t necessarily reap financial benefits. Instead, the healthcare system is more likely to capture the additional value through higher profits.”
AI Misuse by Insurance Companies:
Insurance companies have a vested interest in using AI to analyze patient data, forecast health outcomes and deny health insurance claims in bulk. AI systems are “black boxes,” making it difficult to understand how insurance companies are making decisions. This lack of transparency is a problem when AI is used to determine important factors like policy rates or claim approvals.
Insurance companies could also leverage AI to influence treatment decisions, encouraging physicians to choose cost-effective treatments over potentially more effective but expensive alternatives. This interference could compromise the autonomy of physicians and their ability to provide the best possible care for their patients. With the goal of using AI to collect and aggregate patient data, insurance companies will have access to exponentially more private patient information. Access to this data could create privacy issues, especially if the data is used without proper consent or security measures. This could also increase the potential for cyber threats, which can lead to data breaches and violations of patient privacy.
AI Misuse by Government:
While AI can assist government in managing public health effectively, it also presents potential avenues for misuse. Governments could use AI to conduct extensive surveillance under the guise of public health, infringing on individual privacy (Nature Medicine, 2022). This scenario could place physicians in a difficult position, as they may be required to share sensitive patient information. Government-driven AI systems might also standardize healthcare delivery in a way that undermines physicians’ clinical judgment. David Dranove, PhD, stated in Will AI Eventually Replace Doctors?, “If AI algorithms become the primary determinant of treatment plans, physicians could be reduced to mere implementers of decisions made by AI, thus devaluing their clinical expertise and experience.”
The greatest risk of AI in healthcare is not in its use, but in its misuse. It should be a tool, not a decision-maker.
While AI has the potential to transform healthcare, it is imperative to consider and address potential misuses. Physicians could face severe challenges, from ethical dilemmas to threats to their professional autonomy and devaluation of their expertise. Considering our recent experience with EHRs, there’s a valid concern that AI will become yet another tool co-opted by hospitals, insurance companies and the government to enforce standardization, measure performance, reduce costs and ensure compliance. An excessive dependence on AI could devalue physicians’ expertise and intuition, relegating them to merely recording data and following AI-driven decisions.
This could potentially further disrupt the physician-patient relationship and lead to a dehumanization of care.
In part 3 of 3 in this series, I will discuss choices physicians and patients can make that may insulate them from becoming dehumanized by AI and preserve the doctor-patient relationship.