Scalpels And Artificial Intelligence: Health Care Providers Should Learn Both

Panelists examine difficult moral and technological terrain of using algorithms in health care

Data Portability, Human Oversight May Be Key
Dr. Erich S. Huang speaks at a panel discussion on using artificial intelligence in health care.

Patterns that A.I. algorithms find in health data do not confirm objective truth, Duke experts agreed during a recent briefing on artificial intelligence and health care.

Convened to navigate the difficult moral and technological terrain of using algorithms in health care, the half-day briefing last month in Washington, D.C., offered congressional staff and federal employees a chance to ask hard questions in a private setting.

Leading the conversation were Dr. Erich S. Huang, co-director of Duke Forge, the university’s center for health data science; Nita A. Farahany, professor of law and philosophy; and Arti K. Rai, Elvin R. Latty Professor of Law.

Huang said that without data portability, the ability to move health care data uniformly across platforms, firms and government departments, A.I. innovations might become siloed, making them difficult to test and their benefits difficult to share.

Huang argued that this portability challenge is a ripe area for federal oversight, noting that portability standards that encourage verifiability could help counter the tendency toward siloed innovation.
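In practice, data portability usually rests on a shared, machine-readable standard. As a minimal illustration (an assumption for this article, not something presented at the briefing), the Python sketch below serializes a patient record in the shape of an HL7 FHIR "Patient" resource, a widely used health data interchange format, so any conforming system can read it:

```python
import json

# Minimal sketch: a patient record shaped like an HL7 FHIR "Patient"
# resource. Any system that reads FHIR can parse this same JSON,
# which is what makes the data portable across platforms.
patient = {
    "resourceType": "Patient",
    "id": "example-001",  # hypothetical identifier
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1974-12-25",
}

# Serialize to the interchange format one system exports...
payload = json.dumps(patient, indent=2)

# ...and deserialize on another platform without custom translation.
restored = json.loads(payload)
print(restored["name"][0]["family"])  # -> "Doe"
```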

Huang added that even the Google Brain project's algorithm, which he said recognizes brain cancer with 92% accuracy, is not a perfect program. An A.I. can have a robust diagnostic accuracy rating and still produce false positives in the absence of human reasoning.

For example, if an A.I. reports a 95% chance that a brain tumor is present in an MRI scan, a human radiologist could easily see that the “tumor” is actually a blemish on the screen or a bug that landed inside the MRI machine.
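To see why a high accuracy rating can coexist with many false positives, consider base rates. The short Python sketch below works through Bayes' rule with hypothetical numbers (the 92% figures and the 1% prevalence are illustrative assumptions, not figures from the panel):

```python
# Hypothetical numbers, for illustration only: a screening model with
# 92% sensitivity and 92% specificity applied where true tumors are rare.
sensitivity = 0.92   # P(scan flagged | tumor present)
specificity = 0.92   # P(scan not flagged | no tumor)
prevalence = 0.01    # assume 1 in 100 scans actually shows a tumor

# Probability that any given scan gets flagged, over both populations
p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' rule: probability a flagged scan truly contains a tumor
ppv = sensitivity * prevalence / p_flagged

print(f"P(tumor | flagged) = {ppv:.1%}")  # about 10.4%
```

Under these assumptions, roughly nine out of ten flagged scans are false positives, even though the model is "92% accurate" in both directions, which is why human review of each flag still matters.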

Panelists compared the code in a deep learning system to a scalpel, arguing that learning to work with one is just as important as learning to work with the other, and that it is incumbent on medical schools to teach the use of both.

The conversation then shifted to the potential for malice and mistake. Panelists argued that health care providers have a responsibility to learn the basics of any A.I. system they use and, in the same vein, that medical device companies must clearly articulate the information and training required to use their devices.

Not all doctors must be experts in computer science, but they do need to understand its basics. By doing so, they can better explain to their patients the limits and potential of any diagnostic technology in use.