Why health care AI can’t replace medicine’s human component

The AMA deliberately uses the term augmented intelligence (AI), rather than the more common “artificial intelligence,” when referring to machine-learning algorithms that hold the potential to produce dramatic breakthroughs in health care research, population-health risk stratification and diagnostic support.

And there’s a good reason for that.

“In health care, machines are not acting alone but rather in concert and in careful guidance with humans, i.e., us—physicians,” said AMA Board of Trustees Chair Jesse M. Ehrenfeld, MD, MPH. “There is and will continue to be a human component to medicine, which cannot be replaced. AI is best optimized when it is designed to leverage human intelligence.”

Dr. Ehrenfeld, an anesthesiologist and biomedical informaticist, hosted a program at the AMA State Advocacy Summit that covered what a recent National Academy of Medicine (NAM) report has dubbed “the hope, hype, promise and peril of AI.”

Dr. Ehrenfeld told the hundreds of physicians and state medical society staffers attending the Bonita Springs, Florida, event that recent history holds lessons on approaching the future challenges related to health care AI.

“Over the past decade, we have all learned about how we can incorporate new innovations into clinical practice,” Dr. Ehrenfeld said in his presentation. “Genetics, genomics, the electronic health record and digital medicine have all raised similar policy issues around innovation, incentive payments, regulation, liability, sufficiency of infrastructure, training and professional development.”

The chief lesson from these experiences is to settle such policy issues before expecting physicians to fit a technology into their workflow. Otherwise, he said, the disruption to practices will be chaotic and adoption will prove challenging.

Citing the current focus of research and investment dollars, Dr. Ehrenfeld said the key areas of AI growth will be in diagnostic tools and health care administration.

Humanistic care and machine learning

AMA policy supports the development of AI systems that advance health equity and the quadruple aim: enhancing the patient experience and outcomes, improving population health, reducing overall costs while increasing value, and supporting the professional satisfaction of physicians and the health care team.

Sonoo Thadaney Israni, the first panelist to speak, agreed. Israni is executive director of Presence, the Stanford University School of Medicine’s interdisciplinary center for promoting the art and science of human connection in medicine, and co-wrote a recent JAMA Viewpoint essay summarizing the NAM report. She also served on the NAM panel on health care AI.

Rather than a review of the “rote medical history” clinicians use to prepare for a patient visit, Israni envisions AI-powered EHRs providing medical teams with graphics and animation that make it “possible for physicians to picture precisely where this patient is in his or her life.”

Health care needs to avoid the typical “garbage-in, garbage-out” scenario, she said in her presentation, noting that errors go up when AI relies on data that is unrepresentative of the population being cared for.
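To make the “garbage-in, garbage-out” point concrete, here is a minimal sketch in Python, using entirely synthetic data: a classifier fit on a training set dominated by one subgroup can post strong aggregate accuracy while erring far more often on the underrepresented subgroup. The group names, data and single-threshold “model” are illustrative assumptions, not anything presented at the summit.

```python
# Hypothetical illustration of training on unrepresentative data.
# All data is synthetic; "group A" and "group B" are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic patients: one feature, with a group-specific true cutoff."""
    x = rng.normal(loc=0.0, scale=1.0, size=n)
    y = (x > shift).astype(int)  # the true decision boundary differs by group
    return x, y

# Training data: 95% group A, 5% group B -- unrepresentative of the
# population actually being cared for.
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=0.8)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Model": the single threshold that minimizes overall training error.
candidates = np.linspace(-2, 2, 401)
errors = [np.mean((x_train > t).astype(int) != y_train) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

# Evaluating each subgroup separately reveals the gap an aggregate
# accuracy number hides: errors go up sharply for the group the
# training data underrepresented.
for name, shift in [("group A", 0.0), ("group B", 0.8)]:
    x_test, y_test = make_group(2000, shift)
    acc = np.mean((x_test > threshold).astype(int) == y_test)
    print(f"{name}: accuracy {acc:.2%}")
```

Running the sketch shows near-perfect accuracy for the majority group and a markedly worse result for the minority group, even though the model “looked good” on its own training data.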

The NAM report cites anti-hunger efforts that sought to optimize production of total calories per acre rather than optimize nutrition—leading to diets of empty calories. Israni warned against AI that leads to “empty health care.”

Similarly, Michael Abramoff, MD, PhD, founder and executive chair of IDx Technologies, warned of “glamour AI”: technology that is exciting but does not improve patient outcomes.

“We don’t want that,” Dr. Abramoff said during his presentation. “We shouldn’t pay for it.”

Learn more about Dr. Abramoff’s pathbreaking health care AI work at IDx.

Dr. Abramoff, a professor of ophthalmology at the University of Iowa Carver College of Medicine, developed IDx-DR, a device for primary care physicians that uses an artificial intelligence algorithm to detect diabetic retinopathy in at-risk patients.

It is the first AI application cleared by the Food and Drug Administration to make an autonomous, real-time, point-of-care diagnosis of diabetic retinopathy without the need for specialist review.
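The shape of such an autonomous point-of-care workflow (capture images, check their adequacy, return an actionable result without specialist review) can be sketched in a few lines. The sketch below is entirely hypothetical: the class names, scores and thresholds are invented for illustration and do not reflect IDx-DR’s actual software; only the three output categories echo the kind of point-of-care result described above.

```python
# Hypothetical sketch of an autonomous point-of-care diagnostic workflow.
# Names, fields and thresholds are invented; this is not IDx-DR's software.
from dataclasses import dataclass
from enum import Enum

class Result(Enum):
    NEGATIVE = "no more-than-mild diabetic retinopathy detected; rescreen in 12 months"
    POSITIVE = "more-than-mild diabetic retinopathy detected; refer to eye care"
    UNGRADABLE = "image quality insufficient; retake images or refer"

@dataclass
class Exam:
    quality_score: float  # 0..1, from a hypothetical image-adequacy check
    disease_score: float  # 0..1, from a hypothetical disease-detection model

def autonomous_read(exam: Exam, quality_floor: float = 0.6,
                    disease_cutoff: float = 0.5) -> Result:
    """Return a point-of-care output without specialist review.

    The ungradable branch is the important design point: an autonomous
    system must refuse to diagnose when its own input checks fail,
    rather than guess on inadequate images.
    """
    if exam.quality_score < quality_floor:
        return Result.UNGRADABLE
    if exam.disease_score >= disease_cutoff:
        return Result.POSITIVE
    return Result.NEGATIVE

if __name__ == "__main__":
    for exam in (Exam(0.9, 0.8), Exam(0.9, 0.1), Exam(0.3, 0.7)):
        print(autonomous_read(exam).value)
```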

Dr. Abramoff commended “the enormous effort going on in all parts of the AMA” related to AI, including policy on liability and incentives plus development of a Current Procedural Terminology (CPT®) code that takes effect next year for automated point-of-care retinal imaging.

AI creators hold legal liabilities

IDx is aligned with AMA policy that developers of autonomous AI systems with clinical applications must accept liability for issues arising directly from system failure or misdiagnosis.

“We see liability as essential,” Dr. Abramoff said. “As a doctor, you’re liable for your diagnosis, your medical decision. As a creator of AI, if you say, ‘It does what a specialist does,’ you have to assume liability.”

The Federation of State Medical Boards (FSMB) is in the process of developing general AI guidelines, said the final presenter, Sarvam P. TerKonda, MD, an assistant professor of plastic surgery at the Mayo Clinic in Jacksonville, Florida, and a member of the FSMB Board of Directors.

Work is being done to clarify issues regarding liability and licensing, data security and privacy, and meeting public expectations for safe and reliable care, Dr. TerKonda said in his presentation.

“If we’re going to use an AI device, our patients expect that you know what that device is doing,” he said.

The perils of AI cited in the program included recent stories of aviation tragedies in which computers overruled pilots and of motorists who ignored their own eyes and drove into the ocean because they unwaveringly followed their computer navigation system’s instructions.

“What concerns me is—it’s pretty obvious when a poorly designed or inappropriately validated airplane crashes,” said Dr. Ehrenfeld. “But the potential for insidious bias or problems to be covertly distributed through the health care system” by dysfunctional, inappropriately validated AI algorithms “has the potential to harm millions of lives before anybody notices.”

Learn more about the AMA vision for AI in medicine. The AMA is committed to helping physicians harness AI in ways that safely and effectively improve patient care.

