AI Results Drift: 10×10=105?

By Mark Watts

If your calculator told you that 10×10=105, you would know immediately that it was an error. The output is simply not correct.

In the world of artificial intelligence (AI), an algorithm's results can “drift” over time. This means the true relationship between inputs and outputs changes after the model is deployed.
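
One common way to quantify this kind of drift is the Population Stability Index (PSI), which compares the distribution of inputs a model was validated on against the inputs it sees in production. The sketch below is illustrative only (the data, bin count, and 0.2 threshold are assumptions, not any regulator's or vendor's standard):

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index (PSI): a common heuristic for
    quantifying drift between the input distribution a model was
    validated on and the inputs it sees in production."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def frac(values, b):
        # Fraction of values falling in bin b, floored to avoid log(0).
        count = sum(1 for v in values
                    if lo + b * width <= v < lo + (b + 1) * width)
        return max(count / len(values), 1e-4)

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]  # validation-era inputs
live = [random.gauss(0.8, 1.0) for _ in range(5000)]   # shifted production inputs

# Rule of thumb: PSI above ~0.2 signals drift worth investigating.
print(round(psi(train, train[2500:]), 3))  # similar population -> small PSI
print(round(psi(train, live), 3))          # shifted population -> large PSI
```

A check like this says nothing about whether the model's answers are right; it only flags that the patient population no longer looks like the one the model was validated on, which is exactly when drifted results become a clinical risk.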

This is a serious consideration when treatment of patients can be influenced by the results.

Today, some AI is legally classified as software as a medical device (SaMD). The International Medical Device Regulators Forum, a voluntary group of medical device regulators from around the world that aims to standardize medical device regulation, defines SaMD as “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware device.” The Food and Drug Administration (FDA) has embraced this term. Under the Federal Food, Drug, and Cosmetic Act, the FDA defines “medical purposes” as “those purposes that are intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions.” One system that autonomously analyzes retinal images for signs of diabetic retinopathy has already received marketing authorization in the United States.

To date, the FDA has cleared only “locked” AI-based SaMD algorithms: an algorithm that returns the same result every time the same input is applied and does not change with use. AI systems can satisfy this definition if their parameters are fixed in advance. However, this perspective has disadvantages. AI built on machine learning can improve over time, becoming more accurate as it is exposed to new datasets.
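
The distinction can be sketched as a toy illustration (not any vendor's implementation): a locked model's parameters are frozen at clearance, while an adaptive model keeps updating them from the cases it sees.

```python
class LockedModel:
    """Cleared with fixed parameters: the same input always
    produces the same output, no matter how often it is used."""
    def __init__(self, weight):
        self.weight = weight

    def predict(self, x):
        return self.weight * x


class AdaptiveModel:
    """Keeps learning from each labeled example it sees, so its
    input-output mapping changes over time."""
    def __init__(self, weight, lr=0.1):
        self.weight = weight
        self.lr = lr

    def predict(self, x):
        return self.weight * x

    def update(self, x, y):
        # One step of gradient descent on squared error.
        self.weight += self.lr * (y - self.predict(x)) * x


locked = LockedModel(2.0)
adaptive = AdaptiveModel(2.0)
print(locked.predict(3.0))    # 6.0
adaptive.update(3.0, 9.0)     # expose it to one new labeled case
print(locked.predict(3.0))    # still 6.0: locked behavior never changes
print(adaptive.predict(3.0))  # has shifted toward the new data
```

The adaptive version is the one current regulation struggles with: after a single new case, the same input no longer yields the same output.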

An early diagnostic chest AI algorithm was trained only on male chest X-rays, and the algorithm was locked. When it was deployed for research purposes in a hospital, the AI identified every female chest X-ray as having bilateral soft tissue masses.

In my previous column, “Seeing Color and Diversity with Imaging AI,” I pointed out the need for more diverse training data sets. If the FDA permits marketing of a locked AI algorithm that predicts breast cancer, but the AI was trained mainly on Caucasian women, the system will likely make false recommendations for African-American women, who tend to have different breast densities than Caucasian women. If, however, the algorithm learns continuously (is “adaptive”) and is used in clinics serving more African-American patients, it should make better recommendations over time. Under current law, though, a change to the algorithm will likely require the AI maker to undergo a new FDA premarket review. The maker may want to avoid the cost of another review, or may worry that an additional review would send the wrong message about the quality of the current product.

The FDA has recognized this problem and published a discussion paper on modifications to AI-based SaMD. The paper proposes a “total product lifecycle regulatory approach” for adaptive AI-based SaMD that would enable a fast cycle of product improvement, permitting such devices to learn continuously while ensuring their safety and effectiveness. A maker would have the option to update its SaMD to some extent after marketing authorization by submitting a so-called “predetermined change control plan” during the initial premarket review. This plan would describe the types of changes expected and the methodology used to implement them.

The FDA is ahead of other regulators in thinking about these issues, and its discussion paper, which will ultimately lead to draft guidance, will likely shape the regulatory architecture of other countries. The update problem could be better addressed by focusing on continuous monitoring and risk assessment rather than on predetermined change control plans alone, since AI makers often will not know in advance what sorts of changes might be needed. COVID-19, for example, may increase the prevalence of pulmonary emboli or change their characteristic appearance so that an AI under- or over-diagnoses this life-threatening condition.
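
Continuous monitoring of that kind can be as simple as a control-chart check on a summary statistic. The sketch below is a hypothetical example (the weekly rates and the three-sigma threshold are assumptions for illustration, not clinical values):

```python
import statistics

def needs_review(baseline_rates, new_rate, k=3.0):
    """Simple control-chart check: flag the model for human review when
    its positive-finding rate moves more than k standard deviations from
    the rates observed during validation. Thresholds are illustrative."""
    mu = statistics.mean(baseline_rates)
    sd = statistics.stdev(baseline_rates)
    return abs(new_rate - mu) > k * sd

# Hypothetical weekly rates of AI-flagged pulmonary emboli.
baseline = [0.041, 0.039, 0.044, 0.040, 0.042, 0.038]

print(needs_review(baseline, 0.043))  # within the expected band -> False
print(needs_review(baseline, 0.090))  # sudden jump, possible drift -> True
```

A sudden jump does not tell you whether the disease itself became more common (as it might during a pandemic) or the model started over-calling it; it tells you a human needs to look.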

AI will be helpful to health care, but how quickly and effectively that value can be delivered is limited by our collective understanding of its limits and opportunities.

AI and machine learning algorithms will need to be monitored and the risk of “drift” must be acknowledged to promote trust and confidence that 10×10 will equal 100!

Mark A. Watts is the director of informatics, technology and artificial intelligence and sales at Medical Technology Management Institute.
