By Matt Skoufalos
Like many technology-based processes, much of the value of a medical imaging study is inevitably tied to the computing power behind it. That processing power aids image analysis, fusion, annotation, and retrieval, among other applications. The better the algorithm that handles that task, the more precise the results, and the more valuable the study to the health care professionals capturing and interpreting it. As those computational processes continue to evolve in complexity and variability, more sophisticated applications powered by deeper machine learning continue to emerge. But augmented or artificial intelligence systems, popularly known by the shorthand “AI,” are finding broader applications beyond diagnostics. As the technology continues to develop, clinicians, technologists and device manufacturers are working to resolve challenges around workflow, decision support and the broader questions of how to do so securely, efficiently and ethically.
Anant Madabhushi directs the Center on Computational Imaging and Personalized Diagnostics at Case Western Reserve University. There, he works closely with radiologists, pathologists, surgeons, clinicians and cardiologists to explore medical imaging applications of machine learning beyond diagnostics. Chief among them is using the technology to identify disease prognoses and predict responses to treatment.
Madabhushi describes AI as delivering on “the promise of precision medicine.” Whereas earlier applications of machine learning to medical imaging had been focused on fine-tuning the quality of a study, this approach, called radiomics, turns on pattern recognition within massive data sets in order to identify previously hidden details about patients’ underlying disease biology.
“By the use of AI, we can start to tell about the pathways and mutations of a tumor from the analysis of a scan,” Madabhushi said. “You can start to identify patterns that are too subtle for a radiologist to discern. It’s a way to use algorithms to find biomarkers and features to tell us more about the appearance, biology and outcome of the disease.”
For example, AI-powered bulk analysis of prostate imaging can help the team at Case Western Reserve to identify patterns that help distinguish which cancers are likelier to require interventions versus those that can be managed by active surveillance. That intelligence helps physicians develop more effective treatment plans for their patients.
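The pattern-recognition idea behind this work can be illustrated with a minimal sketch: quantitative features are extracted from an image region and fed to a classifier that separates cases likely to need intervention from those suited to surveillance. Everything below is synthetic and hypothetical; a real radiomics pipeline would use segmented clinical scans and validated feature sets, not toy statistics.

```python
# Toy radiomics sketch: simple first-order features from an image patch,
# then a nearest-centroid classifier. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def texture_features(region: np.ndarray) -> np.ndarray:
    """Toy first-order features for a 2-D image patch."""
    return np.array([
        region.mean(),                            # average intensity
        region.std(),                             # heterogeneity
        np.abs(np.diff(region, axis=0)).mean(),   # vertical edge energy
    ])

# Synthetic cohort: "aggressive" lesions are made more heterogeneous.
indolent = [rng.normal(100, 5, (32, 32)) for _ in range(50)]
aggressive = [rng.normal(100, 25, (32, 32)) for _ in range(50)]

X = np.array([texture_features(r) for r in indolent + aggressive])
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid classification: the simplest possible pattern recognizer.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

The point of the sketch is only the shape of the workflow — image region in, numeric features out, decision from learned patterns — which is the same shape, at vastly greater scale, as the prostate work described above.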
“There’s work looking at which patients really need aggressive therapy and which can avoid it,” Madabhushi said. “We can find patterns from MR and CT scans to tell us how these patients will do in the future.”
AI tools can help physicians learn not only when a lung nodule ought to be biopsied, but whether it’s likely to respond to radiation treatment, Madabhushi said. Benign nodules tend to be connected to a blood supply, whereas malignant nodules often display a twisted vasculature, which responds worse to chemotherapy and immunotherapy. Such subtle insights were gleaned through machine learning and analysis of the data captured from medical imaging studies.
“By capturing quantitative measurements, we can predict whether a tumor is malignant or benign,” Madabhushi said. “This is metaphorically and literally outside-the-box thinking” that provides a chance to dial in the appropriate treatment for some of the most vulnerable patients and a way to stave off unnecessary spending. It spares the bank accounts of those for whom expensive immunotherapy medicines won’t work, and the bodies of those for whom surgical interventions aren’t required.
“Seventy to eighty percent of early-stage breast cancer patients don’t need chemotherapy,” Madabhushi said. “They can avoid that, and at the same time, avoid all the deleterious effects. Just by analyzing images of the biopsies, you can find who will benefit from the chemo just as you can find those who should avoid it. By doing that, we’re also addressing this issue of financial toxicity.”
Pattern recognition software is contingent upon crunching immense sets of data, and curating that information securely and with respect for patient privacy is a specific part of the work. Researchers like Madabhushi are trying to connect the maximum number of clinicians and physicians on a global scale, not only to provide the greatest benefit of this information, but also to broaden the data sets they feed the machines, thereby sharpening their tools.
“One of the biggest impediments to AI is the issue of generalizability,” Madabhushi said. “Too often, we’ve heard about how an AI trained at Hospital A doesn’t work at Hospital B because it’s learned patterns specific to one institution rather than general measurements. It’s important to validate these tools in a multi-institutional, multi-site kind of way; if not, they’re not going to work in the real world.”
Developing “the gold standard data set” will require getting some of the biggest institutions with some of the most competitive agendas to collaborate. Even when spurred on by agencies like the U.S. Food and Drug Administration, creating such a multi-institutional dataset is a significant challenge, albeit one that’s ultimately necessary for the advancement of the entire field of study. The way those big hitters – private corporations, public entities, teaching institutions – have begun inviting collaboration is by helping to build large, openly accessible datasets, and focusing their competitive instincts on challenge competitions that “pool together large cohorts of cases and let the best algorithm win,” Madabhushi said.
“Those efforts are difficult, but the community recognizes that this data set has to grow,” he said. “It’s creating a great deal of interest and incentivizing people to share their datasets; it’s bringing people together.”
Ge Wang, who directs the Biomedical Imaging Cluster in the Department of Biomedical Engineering at Rensselaer Polytechnic Institute in Troy, New York, said he’s optimistic about the future of radiomics. However, the underlying issues of interoperability and “interfaceability” (i.e., standardizing image formats) are the biggest and most immediate barriers to its penetration and expansion. Corporate attitudes toward proprietary information tend to treat raw data and formatting techniques as protected information; however, Wang argues that manufacturers should unlock these data in the deep learning era so that researchers can pursue what he calls “raw-diomics.” That’s the process of going from raw data to radiomic features while bypassing the intermediate images.
“Current radiomic researchers treat images as the input, but images are reconstructed from raw data with unavoidable information loss,” Wang said. “Hence, it is desirable to work directly on CT and MR raw data for radiomics. We believe if you go from raw data, you can get more information.”
“There’s a business way to intake these critical elements and synergize the data for hidden information,” Wang said. “You can use federated learning to utilize distributed datasets while keeping the company/hospital/patient secrets. I believe these things can be done eventually, and hopefully quickly.”
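Federated learning, which Wang refers to, trains a shared model without ever moving patient records between institutions: each site updates the model on its own data and only the model parameters are exchanged and averaged. The sketch below shows federated averaging on a toy linear model with three simulated "hospitals" — the sites, data, and model are all invented for illustration.

```python
# Sketch of federated averaging (FedAvg): each site fits the shared model
# on its private data shard and sends back only updated weights.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=20):
    """One site's training round: gradient descent on local data only."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three "hospitals", each holding a private shard of the same linear task.
true_w = np.array([3.0, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(0, 0.1, 100)
    sites.append((X, y))

# Federated rounds: broadcast global weights, average the local results.
w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)

print("recovered weights:", np.round(w_global, 2))
```

The key property is visible in the loop: raw data never leaves a site's `local_update` call, yet the averaged global model converges on the pattern shared across all three cohorts — which is exactly the privacy-preserving pooling Wang describes.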
Dr. Steve Kearney, medical director for health care, life sciences and government at the Cary, North Carolina-based SAS Institute, views AI technology as an augmented way “to help get the providers what they need,” and cloud-based computing as providing the vehicle to deliver it in a globally distributed system.
“The challenge, as we look at these types of protocols, is finding the appropriate subjects and appropriate images,” Kearney said. “We can deliver thousands or hundreds of thousands of images in the cloud environment. To then understand and leverage those kinds of protocols across multiple entities, we see this as just another augmentation of that opportunity for efficiencies.”
“We want to then leverage these great tools to allow clinicians and researchers to define what the problem is and what the true best protocol is with the least amount of challenges to the system,” he said.
Among the challenges of doing so, Kearney lists understanding the diversity of algorithm models, being able to replicate them and offering transparency into their mechanics. He sees SAS as a partner in that process to help standardize the content involved in their validation, interpretation and interoperability.
“We’ve been asked to consult with all the regulatory groups, whether it’s at study sites or the universities and manufacturers themselves, to see about getting their systems in place to have multiple protocols across multiple devices,” he said. “We get a huge number of requests for assistance for AI and machine learning, and it’s one of our largest areas of investment to make sure people understand what these algorithms mean; how it impacts their patients is what we’re working toward.”
Greg Horne, global principal for health care at the SAS Institute, said that his work focuses mostly on how these technologies can be leveraged, not only to cut back on unnecessary interventions and therapies, but “how to help the radiologist do a better job.” Beyond image interpretation, he emphasized that AI can also help ease practitioner workloads “to get relevant exams back to the doctor” through more sophisticated digital records requisitions.
“We want to use AI to help pre-fetch exams based on the clinical conditions we have today,” Horne said. “If I broke my finger 10 years ago, and I’m going to have a CT of my chest, I only want to bring back medical history that might indicate why I’m having CT for my lung nodules. AI can bring in data that’s relevant to a clinical condition, and spread that to a faster diagnosis.”
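The pre-fetching Horne describes amounts to a relevance filter over a patient's imaging history. The toy sketch below illustrates the idea with invented study records and a simple keyword rule; a production system would match coded diagnoses (e.g., SNOMED or ICD codes) and learned relevance models rather than free text.

```python
# Toy illustration of clinically driven pre-fetching: given the reason for
# today's exam, surface only the prior studies relevant to it. The records
# and the keyword rule are invented for illustration.
priors = [
    {"date": "2015-03-02", "modality": "XR", "body_part": "hand",
     "indication": "finger fracture"},
    {"date": "2021-11-19", "modality": "CT", "body_part": "chest",
     "indication": "pulmonary nodule follow-up"},
    {"date": "2023-06-08", "modality": "CT", "body_part": "chest",
     "indication": "lung nodule surveillance"},
]

def prefetch(studies, body_part, keywords):
    """Return prior studies matching today's anatomy or indication."""
    return [
        s for s in studies
        if s["body_part"] == body_part
        or any(k in s["indication"] for k in keywords)
    ]

relevant = prefetch(priors, body_part="chest", keywords=["nodule", "lung"])
for study in relevant:
    print(study["date"], study["modality"], study["indication"])
# The 2015 hand X-ray is filtered out; both chest CTs are fetched.
```

This mirrors Horne's example exactly: the decade-old finger fracture never reaches the radiologist reading today's chest CT, while the nodule-related priors arrive before they are asked for.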
Horne also believes AI-powered technologies can help improve revenues while easing radiology workloads. Cutting down on the number of unnecessary procedures does both, particularly as the Centers for Medicare and Medicaid Services (CMS) more closely ties reimbursements to diagnostic assessments and the viability of treatment protocols. The same approach is being taken worldwide.
“We’re already seeing it in other countries as well,” Horne said, “why a referral is made, whether the imaging requested is the appropriate imaging, and whether that will lead to a treatment outcome in connection with that imaging.”
“I firmly believe that AI has a big future in imaging and diagnostics, but at the moment, that is centered on the work that allows people to work more efficiently,” he said. “It’s important that we work to understand the change management process, and ensure that the ability to regulate within the algorithm world and the FDA and European agencies keeps pace while these things are changing.”
Horne said he’d like physicians to understand more about the value of AI as a component of clinical and diagnostic support rather than embracing the narrative that “AI is coming for your job.” Instead, he champions its value in supporting change management and as a counterforce against “physician burnout and clinical overload.”
Madabhushi agrees with Horne that bringing physicians around to broader adoption of AI-based tools involves helping them understand that “the algorithm is not going to be the decision-maker; it’s providing one piece of the puzzle.”
“I think the worst thing we can do is definitely think of AI as a great monster and eschew AI,” he said. “It’s part of the rubric that the physician is going to use to make the decision. AI using an MR scan is providing one part of the puzzle; the final piece is the radiologist.”
As AI is used to process and learn from broader medical imaging datasets, Madabhushi said complementarily evolving, distributed technologies like blockchain will also support decision-making, particularly around the security of health and medical information. Amid the questions of interoperability and interconnectedness that fuel machine learning, he wants decision-makers and technology developers to hold rational discussions about the ethics of AI at the intersection of health care, law and medicine.
“We’re starting to have deep conversations about these things,” Madabhushi said. “We’re using AI to tease out some of these differences and do the opposite of engendering a bias. We’re trying to use this as a vehicle to create better, consistent, more accurate and unbiased models.”