Duke AI Health: Making AI Work Better for All

Using artificial intelligence in health care is a bit like mining for gold in a minefield, according to Michael Pencina, PhD, a professor in the Department of Biostatistics and Bioinformatics at Duke's School of Medicine.

“The opportunity is there, but there is a lot of potential to do things the wrong way,” he says.

As director of Duke AI Health, Pencina and his team are working to clear the mines so people can get to the gold.

Computer programs can use artificial intelligence (AI) to analyze enormous amounts of data and quickly identify items requiring action, whether it's an MRI showing a possible tumor or a patient at risk of developing a complication. The techniques behind these AI algorithms, including machine learning and deep learning, are exciting and powerful. Yet what Pencina calls “fancy math” can obscure how the tool is producing its results and whether those results are trustworthy.

To address that problem, Duke AI Health is creating and disseminating techniques to evaluate AI tools to make sure they work as intended.

“There are tons of people developing AI algorithms,” says Pencina, an applied mathematician and biostatistician. “My mission is clear: I want to focus on evaluation.”

It's urgent work: as the use of AI in health care grows, unintended consequences are popping up.

In one case, a commercially developed AI tool designed to help physicians identify hospital patients with sepsis performed poorly when deployed in some hospitals across the country. A possible explanation stems from the fact that AI programs “learn” a task by training on a provided dataset. If the training dataset consists of, say, health data from young people in San Francisco, the algorithm might not work well for older people in a small town in South Dakota.
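This failure mode, often called dataset shift, is easy to demonstrate. The sketch below is a hypothetical illustration with synthetic data, not the sepsis tool itself: a simple model is trained on one simulated cohort, then scored on a cohort with a different age distribution and a weaker link between a lab value and the outcome, and its accuracy drops.

```python
# Hypothetical sketch of dataset shift, using entirely synthetic data.
# The cohorts, features, and effect sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, mean_age, lab_weight):
    """Simulate a cohort where age and a lab value drive the outcome;
    how predictive the lab value is differs between populations."""
    age = rng.normal(mean_age, 8, n)
    lab = rng.normal(0, 1, n)
    logit = -5 + 0.05 * age + lab_weight * lab
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
    return np.column_stack([age, lab]), y

# Train on a younger cohort where the lab value is strongly predictive.
X_train, y_train = make_cohort(5000, mean_age=35, lab_weight=2.0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on a similar cohort, then on an older, different one.
X_same, y_same = make_cohort(2000, mean_age=35, lab_weight=2.0)
X_shift, y_shift = make_cohort(2000, mean_age=70, lab_weight=0.5)

print("AUC, population like training data:",
      round(roc_auc_score(y_same, model.predict_proba(X_same)[:, 1]), 2))
print("AUC, shifted population:",
      round(roc_auc_score(y_shift, model.predict_proba(X_shift)[:, 1]), 2))
```

Run as written, the model discriminates well on the population it was trained on and markedly worse on the shifted one, even though nothing about the code changed between the two evaluations.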

In another case, an AI tool designed to identify patients who could benefit from more proactive, preventative care failed to flag a large proportion of Black patients who fit that description. That's because the algorithm used health care costs per patient as a stand-in for severity of illness. But racial disparities in U.S. health care, which occur in a broader context of inequality, mean that Black patients often have less access to quality care, whether because of inadequate health insurance coverage or systemic barriers such as racism, bias, and discrimination. On average, a Black patient is therefore likely to have fewer interactions with the health system, and thus lower recorded costs, than a White patient with the same level of illness, so the algorithm read lower spending as lower need.
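The mechanism is worth making concrete. In the hypothetical sketch below, two synthetic groups have identical true illness, but one group's illness translates into lower recorded costs; ranking patients by the cost proxy then flags that group far less often than ranking by illness itself. The groups, the access factor, and the 10% cutoff are illustrative assumptions, not details of the actual tool.

```python
# Hypothetical sketch of proxy bias: ranking patients by past costs
# rather than actual illness under-selects a group with less access
# to care. All values here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.choice(["A", "B"], size=n)     # two equal-sized groups
illness = rng.gamma(2.0, 1.0, size=n)      # same true need in both groups

# Group B has had less access to care, so the same illness level
# generates lower recorded health care costs (assumed factor of 0.6).
access = np.where(group == "A", 1.0, 0.6)
cost = illness * access * rng.lognormal(0.0, 0.3, size=n)

def flag_rates(scores):
    """Share of each group flagged when the top 10% by score is selected."""
    flagged = scores >= np.quantile(scores, 0.90)
    return {g: round(flagged[group == g].mean(), 3) for g in ("A", "B")}

print("Flagged when ranked by true illness:", flag_rates(illness))
print("Flagged when ranked by cost proxy:  ", flag_rates(cost))
```

Ranking by true illness flags both groups at roughly the same rate; ranking by the cost proxy flags group B far less often, even though its members are just as sick.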

While the Duke AI Health team looks to identify and prevent all the ways that AI algorithms can go awry, bias is the particular focus of the inaugural Duke AI Health Equity Scholar, Michael Cary, PhD, RN, FAAN, the Elizabeth C. Clipp Term Chair of Nursing at the Duke School of Nursing.

“I've been tasked specifically with developing criteria and standards for reducing bias and unfair outcomes that result from algorithms,” he says. He's approaching that in several ways, including heading up a research team that is developing methods to root out bias in clinical algorithms. He hopes to apply the results at Duke and share them with other health care systems. “I worry that some smaller health care systems, perhaps those with fewer resources and less infrastructure, such as those in rural areas, may be using AI-based tools with no idea that they are discriminatory and no idea how to fix that problem,” he says.

Cary is uniquely suited for the role, with clinical experience in nursing, research experience in health services and applied data science, and lived experience as a Black man. “I know what it feels like to be marginalized and discriminated against,” he says. “I know what it's like to grow up in a low-resource area and have to drive an hour to get to a hospital.”

His sense of purpose was strengthened in August 2022, when the federal government proposed a rule under which health systems could be held liable if they develop and use clinical algorithms that produce discriminatory results.

“That made it all the more real to me that this is a problem of the utmost importance,” he says. “But I feel a sense of assurance that as a health system and as leaders, we're already marching forward. This is an opportunity for us to work with the government in refining that rule and providing guidance for other health systems.”

Duke AI Health is already doing just that as a founding member of the Coalition for Health AI, a national organization that develops and disseminates guidelines and best practices for AI.

Duke is also a role model for governing, tracking, and evaluating clinical algorithms in-house through its Algorithm-Based Clinical Decision Support (ABCDS) Oversight program. “Duke is one of the first academic health care systems in the nation that has put this kind of oversight framework in place,” says Nicoleta J. Economou, PhD, who directs ABCDS Oversight at Duke.

Through this algorithmic oversight process, all clinical algorithms in use or slated for use at Duke Health, including those that use AI and those that don't, are evaluated to ensure they are effective and safe, easy to use, fair and equitable, and compliant with relevant regulations. Ongoing monitoring catches problems that can crop up if, for example, the patient population changes.
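One piece of such monitoring can look like the sketch below: a periodic check that compares a feature's distribution at validation time against its current distribution and flags drift. This is a generic illustration with synthetic numbers, not the actual ABCDS Oversight tooling, whose internals are not public.

```python
# Generic sketch of a population-drift check, not the ABCDS framework
# itself. The feature, cohorts, and threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# A feature's values (e.g., patient age) in the validation cohort
# versus the patients scored by the model this month.
validation_age = rng.normal(55, 12, 5000)
current_age = rng.normal(62, 12, 1200)   # the served population has aged

stat, p_value = ks_2samp(validation_age, current_age)
if p_value < 0.01:
    print(f"Population drift detected (KS statistic {stat:.2f}): "
          "re-evaluate the model before continued clinical use.")
else:
    print("No significant drift detected this month.")
```

In practice a framework like this would track many features plus the model's accuracy and fairness metrics over time, but the principle is the same: detect when the world the algorithm sees no longer matches the world it was validated in.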

“By implementing the ABCDS Oversight framework over the past year and a half, we're impacting how we deliver patient care, and we've observed fairness and equity benefits,” Economou says. “Ethical and equitable AI is something we're not only talking about, but also acting upon, here at Duke.”