By Mark Watts
Which topic will truly be the most talked about at RSNA 2025? I posted that poll question on LinkedIn. The results: AI, 54%; photon-counting CT, 25%; theranostics, 15%; hybrid-guided therapies, 4%; other, 1%.
My observations from this year's RSNA are informed by 20 years of attendance.
Big-iron MRI/CT had a smaller footprint, while AI-enhanced products dominated the conference.
One hot topic was faster, lower-noise MRI. Anyone who does not use a noise-reduction AI algorithm on MRI images is less productive. SwiftMR by AIRS Medical was one of the first such applications; it is now starting to appear in new MRI scanners, and comparable features are available from other manufacturers. If you have an older MRI and don't want to upgrade to one with embedded noise-reduction AI, you can add the SwiftMR software. The presentation "Real-World Implementation of AI in MRI: Five Years of Productivity Gains" shared interesting insights. One needs to create new protocols to realize the AI's impact. Depending on the body part, scan times are reduced by about 45%, which lowers the risk of motion artifacts, eases patient anxiety and, last but not least, allows one to perform twice as many patient studies. This can increase capacity and potential revenue. Another positive side effect of applying AI is the ability to create a normalized view across scanners, which is essential if an institution has machines from multiple vendors with field strengths ranging from 0.2T to 3T.
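The throughput claim is worth a back-of-envelope check, because a 45% cut in acquisition time does not cut the whole appointment slot by 45%: patient positioning and coil setup are unchanged. The sketch below uses hypothetical slot and overhead durations (not vendor figures) to show how the gain depends on that fixed overhead.

```python
# Back-of-envelope MRI throughput model. All durations are illustrative
# assumptions, not measured vendor numbers.

def slots_per_shift(scan_min, overhead_min, shift_min=600):
    """How many patient slots fit in a shift (default 10 hours),
    where each slot is acquisition time plus fixed setup overhead."""
    return shift_min // (scan_min + overhead_min)

baseline = slots_per_shift(scan_min=30, overhead_min=10)        # 40-min slots
with_ai  = slots_per_shift(scan_min=30 * 0.55, overhead_min=10) # ~26.5-min slots
```

With these assumed numbers the gain is roughly 1.5x rather than 2x; reaching "twice as many studies" also requires trimming the non-scan overhead, which is consistent with the presenter's point that new protocols must be created to realize the AI's impact.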
Additional takeaways include:
- Holographic 3-D screens: Virtual reality (VR) goggles have been demonstrated for several years; however, they have never really taken off. The latest breakthrough is the availability of new monitors with an extra layer fused to the surface, creating two images, one for the left eye and one for the right. Two tiny cameras built into the top bezel track the viewer's eye positions and project these images to create a sense of depth. It only works for a single person, and you have to stand or sit directly in front of the screen, but the effect is fantastic. This might finally push 3-D visualization into the medical field. One radiologist told me he thought the established monitor companies could lose market share quickly.
- Stand-alone AI applications are on the way out. AI applications that run specialized, small, optimized ML models for a single body part will end up in the modality, whether on a GPU, a CPU or dedicated hardware. As of today, these algorithms can provide a finding within 10 seconds on a typical modality CPU, which suffices. I forecast that the handful of AI applications representing about 68% of the workload will run in the modality, embedded in the PACS, or at the viewing station. The remaining 32% of the AI workload, which could include tens, if not hundreds, of different AI applications, would be best hosted on an AI platform that can run on-premises or in the cloud, depending on the specific AI vendor or provider's architecture.
How will these AI results be shown to the physician? Instead of sending them directly to the PACS as a Secondary Capture image, as an annotation in the form of a Grayscale Softcopy Presentation State (GSPS), or embedded in a DICOM Structured Report (SR), it makes sense for a radiologist to review them first and accept, reject, or even modify the AI results (e.g., adjusting an annotated outline of a finding). Initially, some AI vendors made an image viewer available to review and change the AI results, but it became apparent that physicians don't want to deal with another viewer. Hence, a small "widget" embedded in the PACS viewer, typically in a corner, signals the radiologist of any AI findings. If desired, they can click the widget to expand it and perform any AI-related operations.
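The review step described above is essentially a small state machine: a finding arrives in a pending state and is only released downstream (e.g., as DICOM SR content) once a radiologist accepts it, possibly after editing the outline. The sketch below is a minimal illustration of that workflow; the class and field names are my own, not any vendor's API.

```python
# Minimal sketch of the accept/reject/modify review workflow for AI findings.
# Names are illustrative, not taken from any PACS or AI vendor product.

from dataclasses import dataclass, field

@dataclass
class AIFinding:
    description: str
    outline: list              # e.g. polygon points for a GSPS-style annotation
    status: str = "pending"    # pending -> accepted | rejected

    def accept(self, edited_outline=None):
        if edited_outline is not None:   # radiologist adjusted the contour
            self.outline = edited_outline
        self.status = "accepted"

    def reject(self):
        self.status = "rejected"

def findings_for_report(findings):
    """Only accepted findings are forwarded to the PACS/report."""
    return [f for f in findings if f.status == "accepted"]
```

A PACS widget would drive `accept`/`reject` from the viewer; the key design point is that nothing reaches the report until the status leaves "pending".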
- Open-architecture PACS is here: Just when we think PACS is mature and there is no place for new entrants or new approaches, along comes AdvaPACS, a company based in Asia that has just introduced its product in the U.S. What is refreshing about AdvaPACS is its open architecture, which allows you to use its own viewer or other zero-footprint viewers, such as MedDream or eUnity. You have a choice of AI platforms, as it integrates with CARPL, TestDynamics or Harrison's open AI. You can use its integrated RADPAIR AI-powered reporting or another one. There are no upfront costs, pricing is pay-as-you-go, and the company claims you can go live in as little as 15 minutes. This business and deployment model will undoubtedly shake up the established PACS vendor community.
- A second set of eyes and ears: AI deployment in reporting. AI can fill information into predefined templates, extract relevant data from spoken text, and create a report that matches the style and preferred reporting method of a particular radiologist, in less time than it takes today. The claim is that this changes the radiologist's role from word processor and editor back to what they are trained to do: interpreting the images. Significant savings in reporting time, up to 25%, can be achieved, which, given the growth in imaging studies and the shortage of radiologists, is a big bonus. I am still a little concerned about how the AI results are integrated. Many AI algorithms still create Secondary Capture images, some Presentation States, and some Structured Reports, in some cases translated into FHIR messages or made available as FHIR resources.
- New 32 MP monitors have dethroned the 12 MP monitors that have been the upper limit for medical-grade mammography displays. These 12 MP displays are actually two "fused" 6 MP panels, enabling them to show four images: two views each for the left and right breast. The cost depends on the manufacturer but is typically around $20,000, with some manufacturers as low as $12,000. The 6 MP panels have a lower resolution than the actual image size, which means that if radiologists want to see the image pixel-for-pixel on screen, they have to either zoom in or use a digital loupe, which requires an extra mouse click. With the introduction of Barco's 32 MP display this year, images are shown at their original resolution, so we are finally able to match the digital detector resolution with the display's representation. Combined with a light output of 1,200 cd/m², these are top of the line.
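The pixel-for-pixel argument above is simple arithmetic: an image fits 1:1 only if both of its dimensions fit within the panel's native resolution. The sketch below uses illustrative dimensions (a roughly 13.6 MP detector, a 6 MP half-panel, and an assumed 32 MP layout; none are exact vendor specifications) to show why a 6 MP half-panel forces zooming while a 32 MP display does not.

```python
# Pixel-budget check: can an image be displayed pixel-for-pixel,
# or must the reader zoom / use a digital loupe?
# All dimensions below are illustrative assumptions, not vendor specs.

def fits_one_to_one(image_wh, display_wh):
    """True if the image fits within the display at 1:1 pixel mapping."""
    iw, ih = image_wh
    dw, dh = display_wh
    return iw <= dw and ih <= dh

DETECTOR   = (4096, 3328)   # assumed ~13.6 MP mammography detector
SIX_MP     = (3280, 2048)   # assumed 6 MP half of a fused 12 MP display
THIRTY_TWO = (8192, 4096)   # hypothetical 32 MP panel layout
```

With these assumed numbers, the detector image overflows the 6 MP half-panel in both dimensions but fits comfortably on the 32 MP panel, which is the practical meaning of "matching detector resolution with the display."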
Mark Watts is an experienced imaging professional who founded an AI company called Zenlike.ai.

