By Matt Skoufalos
Since the earliest notions of advancing medical imaging with technologies powered by artificial intelligence (AI) and deep learning, vendors, practitioners and hospital decision-makers have all been working to drill down on what exactly the promise of AI is, at least in the near term. From the jump, some radiologists were apprehensive that the technology had the potential to supplant their roles; when that proved to be an overstatement of the case, their focus turned to how the technology might be leveraged in the short term.
Now, instead of seeing high-end computing software automate the diagnostic imaging space wholesale, practitioners are waiting for the technology to deliver on some of the generational leaps it promised. Today, AI-enhanced technologies are available mostly in vendor demonstration sites and at university-level academic centers of excellence. Interpreting the present-day landscape for AI-based imaging technologies means marrying those visions of the future to an assessment of present-day needs, resources and opportunities for growth.
Physician Ryan King, who comes from a family of radiation oncologists, is currently completing a diagnostic radiology residency at Louisiana State University-New Orleans. King also holds a master’s degree in computational science and engineering from Harvard University, and is a board advisor to AI developer Vysioneer of Boston, Massachusetts, which just received FDA approval for its proprietary tumor auto-contouring software. He has also worked as a clinical innovation fellow in the Center for Clinical Data Science (CCDS), a joint venture of Massachusetts General and Brigham and Women’s Hospitals to develop and validate AI-powered medical technologies.
From King’s perspective, most of the ends to which he’s seen AI applied in the radiation oncology space have involved finding ways to accelerate or automate quantitative work around tumor target volumes and radiation dosimetry. AI-powered processes can help avoid damaging healthy tissue in a patient undergoing radiation therapy by auto-contouring target areas into formats that can be uploaded into various vendor software platforms for treatment planning.
“The ultimate decision is still left up to the provider, but it’s a second set of eyes, a second set of hands to speed up treatment planning,” King said. “Is the lesion there? Is it not? Can you measure it accurately? These are the things that are very time-consuming from a radiology standpoint.”
Yet for as effective as the best algorithmic processes are at accelerating the quantitative work that goes into treatment planning, King also cautioned that bringing them to market is a lengthy process.
“It is real, it will make efficiency better; I don’t think it’s quite there yet,” he said. “There was this hope that it would be easy and you could put out 10 models a year, but that’s not really realistic. There’s a lot of ambiguity in imaging. It’s not just the picture, it’s the picture coupled to the clinical history of what’s going on. There’s gross pathology that nobody has incorporated into their clinical methodology.”
Despite initial hopes that AI technologies would easily support radiology applications, King said the process has been slowed by necessary work in clinical relevance and validation, to say nothing of the hurdles presented by the data siloing that keeps algorithms from being as broadly applicable as possible. Part of the reason he pursued a radiology specialty is to be able to help identify areas of medical imaging in which AI might be most applicable.
“The current paradigm is, you get an expert to annotate some images for you, plug them into an algorithm, and learn the features that contribute to that pathology, and that’s obviously very time-consuming,” King said. “Someone gets paid a lot of money to contour those things, and it’s extremely limited. That’s not what we do as radiologists.”
Although King believes diagnostic AI may be a scientific eventuality, he’s not optimistic that it will happen within the near term, if only because the work behind it will require the incorporation of genomics and individually tailored medicine “as opposed to a simple, image-based revolution of deep learning in radiology.”
“These things eventually will arrive, it’s just not going to be as straightforward as creating a tech app that can immediately change how we interact,” he said. “You’re going to need years of validation.”
Ron Muscosky, worldwide product line manager at Rochester, New York-based medical imaging vendor Carestream, said that the efficacy of AI applications in radiology will be measured by whether they provide “actionable results” and “solve real problems.” Carestream is focused on products that deliver what Muscosky termed “imaging intelligence” and “workflow intelligence,” processes centered on improved image quality and workflow efficiency.
“We were looking to figure out what problems we should be solving,” he said. “We talked to a lot of our customers to find out, ‘What are your pain points?’ It happens that AI works well for some of these, and for others, it doesn’t.”
“I think AI is over-used from a terminology perspective,” Muscosky said. “AI is useful when you’re trying to solve a repetitive problem or a repetitive operation. You can train your models to be very accurate; when you have a lot of variation, it doesn’t work very well.”
Among the Carestream imaging intelligence technologies in use today is a bone suppression algorithm that suppresses the appearance of bone and enhances the visualization of soft tissue. Another is “Smart Noise Cancellation,” which uses deep learning technology to separate the noise from the signal and subtract that noise from the image.
“If a facility wants to lower their dose, it allows the facility to do so without a loss in image quality,” Muscosky said. “Smart Noise Cancellation is used every day, on every image that people are acquiring, and fits right in with current workflow.”
On the workflow intelligence side, Muscosky described the AI enhancements that comprise the Carestream “Smart Room,” a suite designed to support radiographer workflow needs by eliminating repetitive tasks. Assistive imaging technologies can be used to automate collimation to deliver a targeted dose of radiation that limits radiation exposure risks to healthy tissue. They can also be used to automate fine-tuning of the equipment position relative to patient height, while providing corrective feedback to the technologist on the patient pose and position. This can save technologists time, help provide consistent imaging, reduce retakes, and improve infection control in the event of imaging a contagious patient, such as during the novel coronavirus (COVID-19) pandemic.
“If the tech wanted to distance themselves from a contagious patient, Smart Room would allow them to do so,” Muscosky said. “Or, for other technologists who prefer to spend more time with the patient, Smart Room would allow them to spend less time adjusting the equipment and more time with the patient.”
Part of the challenge in increasing market availability for AI-powered technologies is that many of the advantages they promise are confined to middle-of-the-system or back-end integrations. Most of the automated processes that AI technologies support may never show through to the patient experience, even if they offer pass-through benefits that enhance treatment outcomes or ease the overall process of image acquisition.
“At this time, a facility is going to experience more of an operational savings,” Muscosky said: “a better diagnosis, a more confident diagnosis. The benefits of AI are more focused on the facility as opposed to the patient. When you look forward, AI could be used for predictive diagnosis; this could lead to earlier treatment of the patient before a small issue becomes a large one. But I think we’re still years away from that.”
Muscosky also believes that AI technologies can help enhance decision-support tools in critical care areas. By speeding image acquisition time, or offering a more readily studied image within the viewer of a given modality, they might deliver actionable results to an ICU or trauma center physician without waiting for a radiologist to read them. However, when a radiologist is sitting in a reading room with those same tools, he believes that physician may be more reluctant to use them than to trust his or her own assessment, provided time is less of a factor.
“I think it depends upon what area you’re looking at,” Muscosky said. “Image processing and noise cancellation are built into the workflow, and they’re going to be more prominent; some people may not even know they’re built into their workflow, and it’s assisting them. I think as you get to decision support and automatic positioning of equipment, it’s going to take a little more time for adoption.”
Muscosky also believes that AI manufacturers are learning at the same time that their algorithms are, working to decipher where throughout the imaging process automation can offer the greatest value for their physician customers. Bottlenecks – from development costs to validation studies to availability of raw materials to market acceptance – notwithstanding, he predicts that the most significant remaining challenge is illustrating the value that customers will realize from implementing AI products.
“The areas we’re working on now are more helpers,” he said: “reducing time to acquire an exam, reducing the number of retakes. As every facility has to make do with less people and have more patient throughput to be able to keep their business running, it’s important to be able to optimize the use of their radiographers’ time, and physicians’ time as well.”
Jef Williams, managing partner of Paragon Consulting Partners of Sacramento, California, said that AI-powered technologies remain in a limited adoption cycle for reasons beyond the capabilities of the technology itself. Whether in academic medical centers that have the ability to leverage university-level computer and data science programs to help them build proprietary algorithms, or in a startup environment replete with private-equity cash, Williams sees the technology maturing. The challenge that remains is developing a revenue model that allows other organizations to adopt AI functionality within their workflow, which he described as “still a big hurdle in a lot of organizations.”
“Who’s going to pay for it?” Williams said. “Does it have an ROI? Whether it’s supporting alleviation of radiologist burnout, or it’s helping to prioritize studies to read in a worklist, there’s definitely some use cases out there that people are starting to take and run with, but the revenue model is still the biggest hurdle to overcome.”
Beyond simply finding ways to pay for the technology, Williams believes another significant barrier to overcome before AI becomes more ubiquitous in the imaging space is resolving its workflow impact. Even if the programs or modules that hit the market deliver outcomes that prove advantageous, they don’t necessarily fit into the radiologist’s workflow, he said.
“Where the AI sits and what it does to support the workflow is a big component, and has to be proven before it’s implemented,” Williams said.
Early adopters of AI primarily have found use cases for the technology that Williams described as “pixel-centric, or study/image-centric,” but he believes breakthrough opportunities remain to be unlocked in patient throughput, image generation and scheduling – namely, the processes that deliver a patient through all the stakeholders in the process of performing an imaging study.
“Right now, with so many people leaving the workforce for one reason or another, we’re all dealing with this staffing constraint,” Williams said. “We rely on staff to support a lot of manual workflows, and we’ll probably see a lot of movement in that space to support greater automation.”
“In the U.S., we certainly have a burnout issue and an overwork issue, but we don’t have a true physician shortage yet,” he said. “That could become a factor if we have to account for some component of AI to support the number of exams in a radiologist’s workflow versus the radiologists available to support some of these exams.”
More than likely, Williams believes, AI-powered technologies won’t become widely adopted until their automated functionalities become more seamlessly integrated within other components of medical imaging equipment – and he doesn’t see that happening to such a broad degree unless, or until, the federal government decides to make it a priority by authorizing additional funding or reimbursement dollars to such ends.
“In our current models, I think the lever that would have the most power in moving this would probably be the federal government,” Williams said. “It’s really hard to bake any more functionality into the technology we’re currently consuming without increasing the price of it. It’s not like the PACS vendors or the diagnostic imaging companies have a lot of margin where they can play with it; if we can’t get it that way, then we have to get it from reimbursement models.”
“What are the things that everyone wants to see done better and which can be handled by technology?” he asked. “That’s where we’re going to see the greatest adoption of AI in our industry.”