How Allina doctors are deploying AI to double-check cancer screenings

Minneapolis health system contracts with California company to ensure accuracy, overcome bias in first-gen AI tools.

The Minnesota Star Tribune
April 10, 2025 at 11:30AM
Dr. Mathew So, an Allina Health radiologist, concurred with an AI tool's assessment that his patient had a high probability of prostate cancer based on imaging results. Allina is an early adopter of AI for health care decision-making, using it to verify doctors' interpretations of imaging scans and cancer diagnoses. (Richard Tsong-Taatarii/The Minnesota Star Tribune)

Allina Health is using artificial intelligence to improve decision-making in cancer cases, double-checking doctors’ reads of lung scans and guiding them through murky treatment decisions for prostate cancer.

The Minneapolis-based health care system reported success over the past year through a contract with Ferrum Health, which helped it select and validate an AI tool that found abnormal growths, or nodules, that doctors had missed when reviewing the lung scans of five patients.

“Our radiologists are reading them,” said Dr. Badrinath Konety, president of the Allina Health Cancer Institute. “The good news is, they are right about 99.7% of the time. But that 0.3% to 0.4% in absolute numbers becomes a lot when you talk about screening 100,000 patients.” At that rate, 300 to 400 of every 100,000 scans would be misread.

AI has exploded in health care. The Food and Drug Administration has approved nearly 1,000 AI-enabled devices, which analyze patterns in medical records and data to help doctors make triage and treatment decisions and to relieve hospital crowding by predicting which patients can go home. Doctors also are using large language models such as Google’s Gemini to review complex cases and confirm diagnoses.

“We are going to very quickly get to a world where ... an AI model is going to have a better differential diagnosis than we do,” said Dr. Adam Rodman, a Harvard clinician and leading AI researcher, in a presentation Wednesday to the Minnesota Alliance for Patient Safety.

Tools that examine imaging scans have shown some of the most promising results. Several Allina hospitals are using an AI tool to provide real-time verification of colonoscopies and to compare polyps in patients’ digestive tracts with images of those that proved cancerous.

AI is similarly helping doctors evaluate MRI scans for prostate cancer and determine which cancers are aggressive and demand treatment and which are slow-growing and can be left alone, Konety said. “A tool like this allows for a better reading of the MRI and helps standardize that reading, because a lot of it is subjective with the radiologist and based on experience.”

Radiologist Mathew So reads over an AI-generated assessment of a patient's MRI scan. The AI-generated report assesses the patient's probability of prostate cancer, indicated by the red area in the prostate gland. (Richard Tsong-Taatarii)

AI’s growth in health care has come with growing pains. The tools are only as good as the data on which they are trained. Those trained mostly on data from white patients might not give as effective advice for non-white patients, and vice versa, said Pelu Tran, Ferrum’s chief executive.

“Most AI models are trained on populations in Massachusetts, New York and California. And that unfortunately does not reflect the reality of populations in the Midwest or the South,” he said. “Those AI models end up being really, almost by definition, biased to the populations they’re trained on.”

States such as Colorado have tried to get ahead of the problem, requiring that developers take steps to avoid “algorithmic discrimination” in their AI-enabled healthcare tools. (Minnesota lawmakers have proposed banning the use of AI tools in certain insurance coverage decisions, or AI chatbots in the pursuit of overdue medical debts, but haven’t taken on clinical uses.)

Allina hired Ferrum as one solution. The California company selects AI tools and then studies them over time to make sure they are providing relevant guidance for Allina’s patients.

Most predictive AI tools don’t work as well in real-life patient populations as they did in clinical research, Tran said. Some perform significantly worse and need to be used cautiously, said Tran, who founded Ferrum after his uncle died from a lung cancer that hadn’t been detected early.

The University of Minnesota’s Dr. Andrew Olson is similarly putting AI healthcare tools to the test in partnership with Harvard’s Rodman and colleagues at Stanford and Virginia universities.

One of their first studies found a large language model outperformed doctors at taking a written set of facts about a patient and reaching the correct diagnosis. Olson said the simulation was set up in a way that probably favored the AI tool, but it ultimately showed how the tool could support doctors in real-life medical scenarios by suggesting alternative diagnoses.

“Why do we get it wrong? So often, we just don’t think of it. It’s not that I didn’t know it. It’s just that it didn’t come up,” said Olson, who has developed training for U medical students on avoiding medical mistakes. “These tools are an important supplement for us to get out of our own brains.”

The exterior of Mayo Clinic in Rochester. (Ayrton Breckenridge/The Minnesota Star Tribune)

Mayo Clinic is hoping to improve the reliability of AI tools by creating the world’s largest distributed network of medical data. Over the next year or two, its platform will link Mayo’s de-identified patient data with millions of records from other participating hospitals worldwide.

“The models will get better when we are using better data to train them on,” said Dr. Tufia Haddad, co-leader of a platform and digital innovation group at Mayo’s cancer center in Rochester.

Burnout is a rising problem among doctors, who are spending two hours on recordkeeping and administration for every one hour with patients, Haddad said. AI tools could help by synthesizing medical records and guiding doctors to the most relevant information about patients.

“We’re going to be flipping that ratio with these AI models,” she predicted.

Nobody has proposed replacing doctors with AI tools. Large language models have glitches, including occasionally “hallucinating” inaccurate information, Rodman said, but doctors should be comfortable using them in an era in which patients consult AI as well. Some FDA-approved AI tools are designed specifically for patients, such as the Apple Watch feature that detects signs of sleep apnea.

“I have a patient who checks me on ChatGPT,” Rodman said.

The Minnesota Medical Association formed a task force this year to prepare doctors by addressing the potential bias of AI tools and clarifying the liability risks of using them.

Konety said radiologists at Allina were skeptical about AI double-checking their evaluations of lung screenings but have gained comfort with the tool and the way it supports their decision-making.

“It actually covers your backside,” he said, “more than, you know, trying to second guess you.”

About the writer

Jeremy Olson

Reporter

Jeremy Olson is a Pulitzer Prize-winning reporter covering health care for the Star Tribune. Trained in investigative and computer-assisted reporting, Olson has covered politics, social services, and family issues.
