In the world of medical imaging, the promise of artificial intelligence (AI) has drawn reactions ranging from the apprehensive (“AI is going to replace the radiologist!”) to the incredulous (“There’s no way we’ll see AI in our practice any time soon!”) so frequently that it can be difficult to know what’s real and what’s imagined. Consider the “hype cycle,” a graphic representation of the maturity of emergent technologies created by the market research firm Gartner: is AI pushing toward the Peak of Inflated Expectations? The Trough of Disillusionment? The Plateau of Productivity?
Peter Shen, vice president of innovation and digital business at Siemens Healthineers North America, acknowledges that “there is certainly a bit of expectation or promise around artificial intelligence” in medical imaging, “if for no other reason than it’s been a buzzword for the past several years.” Amid growing expectations that AI-driven technologies will deliver on promises to increase workflow efficiencies or enable better clinical diagnoses, Shen said what medical imaging is struggling to realize “industry-wide, and clinically,” is the practical reality of how to incorporate AI into daily clinical routines.
For a start, there’s the challenge of financial motivation: Shen describes hesitance among providers and clinicians to invest in (or ask for) AI-driven devices and systems without knowing what the return on that investment might look like, or without knowing whether any financial incentive to implement it may exist among health care providers or the federal government.
“The clinician isn’t necessarily paying out of his or her pocket for the AI solution to be implemented,” Shen said. “It’s up to the imaging department to have to procure these kinds of solutions, and a lot of departments really struggle with making the investment into AI because the quality metrics or throughput expectations around AI aren’t really there yet. A lot of the whitepapers around AI are focused on the clinical competence around AI, and the operational competence has yet to be addressed: any sort of reimbursement or payment done around these studies is few and far between. Institutions won’t make an investment in AI unless they can determine how they’ll recoup it.”
Another key consideration for Shen is how providers will integrate AI technologies into their overall daily clinical workflow. Some initial forays into AI have focused on enhancing image detection and the identification of malignancies within a study. Determining what to do with this information reflects the next promise of AI, which is powering patient screenings to further help radiologists make accurate diagnoses.
“What clinicians desire now is the ability to have those algorithms run in the background before those studies are presented to them,” Shen said, “and then expanding that AI to prioritize which cases need the most immediate attention. In the ideal world, the AI has gone through the exams, identified the patients who have something urgent to look at, and those are floated to the top of those cases to review. We’ve heard from several different providers that AI is being used right now for a lot of diagnostic work, but that in the screening environment, it would be great to be able to identify or triage certain patients who might then need some sort of follow-up exam because something’s been detected. There’s a strong desire to move AI beyond this diagnostic tool into screening or triaging a patient and rule out some things.”
AI is also the hook upon which many have hung their expectations that the technology will help close a growing gap between the number of cases that must be interpreted and the availability of radiologists to interpret them. The wider that gap grows, the less time each clinician can dedicate to investigating any single case, no matter how severe. As Shen puts it, “If radiologists have 100 exams today, but 200 tomorrow, and the same amount of time to review them, they might only have half as much time to interpret each exam.” If AI can help cut down on some of the time they spend preparing an exam for interpretation, and make the time they do spend more meaningful to the outcome of that effort, the technology will be a welcome addition to their practices.
Some of the ways in which that prep time could be shortened involve the advanced processing capability of an AI-powered workflow system, Shen said; others involve automating “the handshake from image acquisition from CT or MR to the way that the individual radiologist wants to be able to visualize the relevant AI findings of that exam” for the best interpretation of the results.
“Algorithms can help automatically do the identification, characterization and measurements associated with the abnormality found within an image,” Shen said.
One of the reasons Shen believes the medical imaging industry hasn’t fully embraced AI is that in order to integrate its automated processes into radiology workflows, radiologists must change the way they’ve done their work for years already. The expectation that AI will save time and effort must be tempered by the learning curve involved in adapting to those changes, especially when it comes to the individual approaches each clinician takes to his or her diagnostic processes.
“Now that we have these solutions that can help us with these tasks, the additional information being generated is something the radiologist has to consider,” Shen said. “That’s a little bit challenging. Radiologists know that their role in terms of the patient’s care and clinical journey has to evolve as well. Part of that means that they have to become a more informed clinician to be able to make a more informed diagnosis for the patient.”
The ability to leverage AI to present a larger, clearer picture of patients and their medical conditions represents a function that could unlock some of the biggest promises of the technology: namely, its ability to retrieve personal medical information beyond what’s presented in the imaging study itself, inclusive of electronic health records, lab results, and even genomic and genetic data. Shen argues that if radiologists had access to all that information alongside patient images, they could return more comprehensive and accurate diagnoses. There’s even consideration being given to leveraging AI-informed patient profiles to create “digital twins” of patients, which could be used to simulate different data points and test diagnostic or therapeutic decisions virtually to predict the patient’s response.
But in order for any of those abstractions to be realized, clinicians must build confidence that the technology will reach the right clinical conclusions, Shen said. That involves careful consideration of the data sets upon which the algorithmic knowledge is built.
“It’s one thing to feed those AI algorithms with a million different images that show a big abnormality in the chest, and then that algorithm can easily identify that abnormality because we’ve trained it a million times,” Shen said. “But the challenge is to also consider training the algorithm with the curated results or reports or clinical background behind every one of these images.”
“What was the outcome of that nodule?” he said. “Was it malignant? Was it benign? How did it change over time? There’s additional data around the complete history of the image, so that as the algorithm starts to learn and draw its own conclusions, there are more data points from which to make a more informed decision.”
“As you feed and train these different algorithms, they can’t come from the same patient cohort; they have to come from a diverse cohort of patients with all sorts of clinical backgrounds, whether they be genetic or whatever the case,” Shen said. “It’s got to have a diverse data set to create this clinical algorithm that you’ll have confidence in. As we’re starting to embrace AI, in order for everybody to feel comfortable, we need to make sure the algorithms are trained with as much complete information about the patient as possible.”
On that front, “We have a lot of work to do,” said Kris Kandarpa, director of research sciences and strategic directions and acting director of the division of applied science and technology at the National Institute of Biomedical Imaging and Bioengineering (NIBIB) at the National Institutes of Health.
Although the U.S. Food and Drug Administration has begun to certify AI-powered medical imaging algorithms for the detection of certain diseases, only 21 such algorithms have been approved in the radiology space so far.
According to a September 2020 study by Stan Benjamens, Pranavsingh Dhunnoo and Bertalan Mesko published in the Nature partner journal Digital Medicine, six of those algorithms are rooted in oncology, two are focused on brain imaging, and six work to improve image processing by reducing radiation dosage and “noise” on images. Four are focused on acute care, and two others on cardiovascular assessments.
In addition to the limited number of conditions and circumstances in which AI-powered technologies are being applied to clinical diagnoses, Kandarpa also points out that “the data sets that these algorithms train on are very small, and not very diverse;” hence the work yet to be done to enhance them.
“These algorithms that are coming out have to be generalizable,” Kandarpa said. “They have to be useful regardless of where they are employed. Newer algorithms should help solve problems in ways that a human might not do; not only interpreting individual images and predicting population health trends but also supporting workflow and work efficiency.”
During the novel coronavirus (COVID-19) pandemic, researchers also worked to apply AI-powered techniques to the diagnosis and management of COVID-19 patients. At the University of Chicago, NIBIB was joined by the American College of Radiology (ACR), the Radiological Society of North America (RSNA) and the American Association of Physicists in Medicine (AAPM) in establishing the Medical Imaging and Data Resource Center (MIDRC). MIDRC hoped to leverage AI processes to deliver faster, more reliable diagnoses of the impact of COVID-19 on patients by drawing upon what physicians knew about the variety of ways in which it affected patients.
“It’s more than a chest problem,” Kandarpa said. “We know now that many organs are being affected, and we will have, in the future, a way of monitoring COVID and chronic COVID in the long-haulers. Since COVID unfortunately affects many organs, MIDRC is also developing a knowledge base for all organs that should be applicable to other diseases that may also affect those very organs.”
Maryellen Giger, a principal investigator at MIDRC (midrc.org) and the A.N. Pritzker Distinguished Service Professor of Radiology, Committee on Medical Physics, and the College at the University of Chicago, said it is important to note that identifying COVID-19 data from imaging studies is just the beginning of what the MIDRC collaboration could deliver. It’s about properly collecting and curating medical imaging data sets to drive robust, unbiased AI development, harmonizing diverse image presentations provided by different institutions and standardizing their output for clinical end users.
Furthermore, MIDRC is also compiling sequestered data sets that can be used to independently test machine learning algorithms in the future by showing them images they’ve never seen before. Extremely important, Giger notes, is that MIDRC is co-led by three major medical imaging entities (ACR, RSNA and AAPM), benefitting from their combined expertise in both the clinical and technical aspects of the medical imaging field.
“What MIDRC is achieving actually could affect all of medical imaging,” Giger said. “We have radiologists and imaging scientists from universities, community practices, government labs and FDA on various working groups. We have technology development projects that create the infrastructure, but we also have collaborative research projects.”
“Having everyone work together in MIDRC – RSNA, ACR, AAPM – it’s never been done before,” she said. “With these societies working together, we span medical imaging expertise, and we span the country.”
To make the most of any gains in medical imaging, device manufacturers not only need to collaborate on the development of AI-based technologies, but must also work to certify that AI output can be obtained across a variety of competing, proprietary systems to yield usable, unbiased decision-making tools.
“That is, if the software works, does it work correctly in a Siemens system, in a GE system?” Giger said. “Usually, the hold-ups aren’t the technical expertise, but rather the policies, the data-use agreements, and the willingness to share data. Having institutions across populations contribute their imaging data to MIDRC is crucial in order to develop and demonstrate the robustness of algorithms across different platforms and populations, and that’s our aim with MIDRC.”
Charting a course for that same broad integration of machine-learning technologies, clinical analysis strategies, and the various medical imaging leadership groups is what will really advance the adoption of AI in medical imaging, and Giger believes that’s a job radiologists are distinctly positioned to do.
“Radiologists are going to be integrators of knowledge,” she said. “They have to integrate the clinical history, the image, and other factors, and then give a recommendation for patient management. Part of that will include integration of AI and non-AI sources.”
Or, more succinctly, as Giger said, the common wisdom that “AI will replace radiologists” has evolved: “AI won’t replace the radiologist, but the radiologist using AI will replace the radiologist not using it.”