By Mark Watts
Editor’s Note: This is part one of a four-part series on FDA post-deployment requirements for Software as a Medical Device.
Artificial intelligence (AI) is now embedded in healthcare systems, and the FDA has recently changed its post-deployment requirements for these tools. My goal is to raise awareness of those requirements and to help define who is responsible for meeting them.
The FDA’s evolving regulatory framework for Software as a Medical Device (SaMD), particularly AI/ML-enabled medical devices, represents a fundamental shift from traditional medical device regulation. Unlike static hardware devices, AI-based SaMD continuously learns, adapts and updates – creating unique challenges for safety, effectiveness and cybersecurity assurance.
The FDA’s approach, outlined in multiple guidance documents including the 2023 “Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions,” emphasizes a total product life cycle (TPLC) regulatory approach requiring ongoing post-market surveillance, performance monitoring and cybersecurity maintenance.
Healthcare organizations deploying AI-based SaMD must understand that FDA compliance is not a one-time premarket approval event but an ongoing obligation extending throughout the device’s operational lifetime.
The convergence of FDA medical device regulations, HIPAA security requirements, and emerging AI-specific guidance creates a complex compliance landscape that demands systematic planning, documentation and continuous monitoring.
TPLC for AI/ML SaMD – The Paradigm Shift
Traditional medical devices were regulated primarily at the premarket stage – manufacturers demonstrated safety and effectiveness before commercialization, and post-market requirements focused on adverse event reporting and recalls when problems emerged. This model assumed the device would function identically throughout its life cycle unless explicitly modified through new regulatory submissions.
AI/ML medical devices fundamentally break this model. Machine learning algorithms are designed to learn from new data, adapt to changing patient populations and continuously improve performance. An AI diagnostic algorithm deployed in January may function quite differently by December after processing millions of additional patient cases. This “locked algorithm” versus “adaptive algorithm” distinction drives the FDA’s TPLC approach.
The FDA recognizes three types of modifications to AI/ML-enabled devices:
- Performance modifications: Changes to the algorithm’s clinical performance characteristics, such as sensitivity, specificity or predictive accuracy
- Input modifications: Changes to the types or sources of data the algorithm accepts
- Intended use modifications: Expansions or changes to the clinical conditions, patient populations or use contexts
The TPLC approach requires manufacturers to establish predetermined change control plans (PCCPs) describing the types of modifications they anticipate making, the methodology for implementing changes safely and the monitoring mechanisms ensuring modifications don’t compromise safety or effectiveness. Crucially, healthcare organizations deploying these devices share responsibility for monitoring performance and reporting issues.
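To make the PCCP concept more concrete, the Python sketch below shows one hypothetical way a deploying organization might log a proposed algorithm change against the three FDA modification categories and check it against predetermined performance bounds. The class names, thresholds and coverage rules are illustrative assumptions, not values prescribed by the FDA or by any particular manufacturer’s plan.

```python
from dataclasses import dataclass
from enum import Enum

# The three modification categories the FDA recognizes for AI/ML-enabled devices
class ModificationType(Enum):
    PERFORMANCE = "performance"      # changes to sensitivity, specificity, accuracy
    INPUT = "input"                  # changes to accepted data types or sources
    INTENDED_USE = "intended_use"    # changes to conditions, populations or contexts

@dataclass
class ProposedChange:
    description: str
    mod_type: ModificationType
    post_change_sensitivity: float
    post_change_specificity: float

# Hypothetical scope and bounds a PCCP might predefine (illustrative values only)
PCCP_COVERED_TYPES = {ModificationType.PERFORMANCE, ModificationType.INPUT}
PCCP_MIN_SENSITIVITY = 0.92
PCCP_MIN_SPECIFICITY = 0.88

def review_change(change: ProposedChange) -> str:
    """Decide, in simplified terms, whether a proposed modification stays inside
    the predetermined change control plan or needs escalation."""
    if change.mod_type not in PCCP_COVERED_TYPES:
        return "Outside PCCP scope: a new regulatory submission is likely required"
    if (change.post_change_sensitivity < PCCP_MIN_SENSITIVITY
            or change.post_change_specificity < PCCP_MIN_SPECIFICITY):
        return "Below predetermined performance bounds: escalate before deployment"
    return "Within PCCP scope: document, implement and monitor per the plan"

print(review_change(ProposedChange(
    "Retrain on Q3 imaging data", ModificationType.PERFORMANCE, 0.94, 0.90)))
```

The point of the exercise is that the decision logic exists before the change is proposed: the plan, not the moment, determines what can be deployed without a new submission.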
Cybersecurity as a Continuous Obligation
The FDA’s 2023 final guidance “Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions” establishes cybersecurity as an ongoing quality system requirement, not merely a premarket checkbox. This guidance, which supersedes the 2014 “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices,” works together with the 2016 “Postmarket Management of Cybersecurity in Medical Devices” guidance to create a comprehensive framework emphasizing:
- Security by Design: Cybersecurity must be integrated throughout the device life cycle from initial design through decommissioning, with threat modeling, risk assessment and security testing embedded in the quality management system.
- Transparency and Software Bill of Materials (SBOM): Manufacturers must maintain comprehensive documentation of all software components, including third-party libraries, open-source components and dependencies – enabling rapid vulnerability identification and patching (see the sketch after this list).
- Coordinated Vulnerability Disclosure: Manufacturers must establish processes for receiving, assessing and responding to vulnerability reports from security researchers, healthcare organizations and other stakeholders.
- Continuous Monitoring and Updates: Post-market cybersecurity requires ongoing threat intelligence monitoring, vulnerability scanning, penetration testing and timely security updates – all while maintaining device functionality and clinical performance.
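As a rough illustration of the SBOM concept, the Python sketch below enumerates the packages installed in a device’s Python environment and writes them out in a CycloneDX-style JSON layout. It is a minimal starting point under assumed tooling: a real SBOM would also cover operating-system packages, compiled libraries, firmware and model artifacts, and would typically be produced with dedicated SBOM tooling rather than a hand-rolled script.

```python
import json
from importlib import metadata

# Enumerate installed Python packages as a first pass at an SBOM component list.
components = [
    {"type": "library", "name": dist.metadata["Name"], "version": dist.version}
    for dist in metadata.distributions()
    if dist.metadata["Name"]  # skip malformed distributions
]

# Field names follow the CycloneDX JSON layout (illustrative, not a complete document).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": sorted(components, key=lambda c: c["name"].lower()),
}

with open("sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)

print(f"Recorded {len(components)} software components")
```

Even a simple inventory like this is what makes the difference between answering “are we exposed to this vulnerability?” in minutes versus weeks.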
For AI/ML devices, cybersecurity complexity increases exponentially. Training data poisoning, adversarial attacks designed to fool algorithms, model inversion attacks extracting sensitive training data and model theft represent novel threat vectors beyond traditional software vulnerabilities. The FDA expects manufacturers and healthcare organizations to address these AI-specific risks systematically.
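One basic control that deploying organizations can put in place themselves is input-drift monitoring: comparing the distribution of recent production inputs against a training-time baseline and escalating sudden shifts for human review. The Python sketch below uses the population stability index (PSI) for a single feature; the threshold and the simulated data are illustrative assumptions, and drift monitoring is only one piece of a broader defense against data poisoning and adversarial inputs, not an FDA-prescribed method.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI for one feature; values above roughly 0.25 are commonly read as a major shift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Clip to avoid division by zero or log(0) in sparsely populated bins
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Simulated example: training-time feature values vs. this week's production inputs
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
recent = rng.normal(0.6, 1.0, 2_000)   # deliberately shifted to trigger the flag

psi = population_stability_index(baseline, recent)
status = "significant input drift, escalate for review" if psi > 0.25 else "within expected variation"
print(f"PSI = {psi:.2f} ({status})")
```

A drift flag does not by itself distinguish a poisoned data feed from a genuine change in the patient population, which is exactly why the FDA expects a documented human review and response process behind the alert.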
In Part 2, we will explain the process steps to meet these requirements.
Mark Watts is an experienced imaging professional who founded an AI company called Zenlike.ai.

