Regulatory differences between locked and adaptive AI models in SaMD

Published on 07/12/2025

Understanding Regulatory Differences Between Locked and Adaptive AI Models in SaMD

The integration of artificial intelligence (AI) and machine learning (ML) into software as a medical device (SaMD) presents unique regulatory challenges. Regulatory bodies, including the U.S. Food and Drug Administration (FDA), have established frameworks that digital health, regulatory, clinical, and quality leaders need to understand. This article serves as a comprehensive guide to the regulatory differences between locked and adaptive AI models in SaMD, focusing on algorithm change control and predetermined change plans. By outlining the required steps for compliance, it aims to ensure that stakeholders are well informed and can navigate FDA expectations effectively.

1. Overview of SaMD and the Role of AI/ML

Software as a medical device (SaMD) refers to software intended for medical purposes without being part of a hardware medical device. It can include applications used on smartphones, tablets, or other electronic devices. The incorporation of advanced technologies such as AI and ML into SaMD has revolutionized the delivery of healthcare by enabling personalized medicine, improved diagnostics, and enhanced patient management.

However, the complexity of AI and ML technologies necessitates a robust regulatory approach. AI/ML-based SaMD can be classified into two main categories based on their adaptability: locked models and adaptive models.

A locked model is a SaMD whose algorithm provides the same result each time the same input is applied and does not change after clearance or approval; any modification requires a deliberate, validated update. In contrast, adaptive models are designed to learn from new data and improve their performance over time, which raises distinct regulatory considerations. Understanding the nuances between these models is therefore essential for effective regulation and compliance.

2. Regulatory Framework for Locked Models

Locked models are typically viewed as having a static algorithm that is validated and cleared or approved by regulatory authorities. The FDA has established clear expectations for change control management pertinent to such models, drawing on its Software as a Medical Device (SaMD): Clinical Evaluation guidance and its guidance Deciding When to Submit a 510(k) for a Software Change to an Existing Device. These documents support the following steps for compliance:

  • Pre-market Review: Depending on the device’s risk classification, a locked model reaches market through the 510(k), De Novo, or pre-market approval (PMA) pathway; each requires evidence of the model’s safety and effectiveness, often supported by clinical data. This leads to a well-defined validation protocol demonstrating that the model performs as intended across its specified use scenarios.
  • Defining Intended Use: It is crucial to clearly articulate the intended use of the SaMD. Any changes to the intended use would generally require a new pre-market submission.
  • Quality Management System (QMS): Implementing a robust QMS compliant with 21 CFR Part 820 is mandatory. This system ensures that the locked model adheres to thorough documentation and traceability.

Additionally, post-market monitoring is required to ensure the model continues to perform effectively under real-world conditions. Even though locked models do not undergo changes after approval, companies must remain vigilant in their post-market surveillance efforts.

3. Regulatory Framework for Adaptive Models

Adaptive AI/ML models can continuously learn and evolve based on new data inputs, which presents unique regulatory challenges. The FDA’s acceptance of adaptive models stipulates a clear predetermined change control plan (PCCP) that outlines how changes will be managed over time. This approach is set out in the FDA’s 2019 discussion paper proposing a regulatory framework for modifications to AI/ML-based SaMD and the subsequent Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan, both of which emphasize transparency and robustness in adaptive algorithms.

Key elements of a regulatory strategy for adaptive models include:

  • Predetermined Change Control Plan: Before bringing an adaptive model to market, organizations must develop a predetermined change control plan that explains how post-market changes will be made and validated. The plan should specify the types of modifications anticipated, the criteria for when updates will be applied, and how risk management will be handled throughout the model’s lifecycle.
  • Continuous Monitoring: Companies are expected to continuously monitor the performance of the adaptive model. This real-time data analysis involves both clinical and operational metrics, ensuring any performance issues can be addressed promptly.
  • Engagement with Regulators: Maintain ongoing dialogue with the FDA or other relevant bodies throughout the model’s life cycle. This engagement allows adaptive models to stay aligned with evolving regulatory expectations and supports assessment of risks stemming from model drift.
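The elements above can be captured in a machine-readable form so that every post-market update is evaluated against pre-specified acceptance criteria. The following Python sketch is purely illustrative: the class and field names, metrics, and thresholds are hypothetical assumptions for this article, not an FDA-prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class PlannedModification:
    """One pre-specified modification type from the change plan (illustrative)."""
    description: str
    validation_method: str        # e.g. retrospective test on a locked hold-out set
    acceptance_threshold: float   # minimum metric value required to deploy the update
    metric: str = "AUROC"

@dataclass
class PredeterminedChangePlan:
    """Simplified, hypothetical sketch of a predetermined change control plan."""
    device_name: str
    intended_use: str
    planned_modifications: list[PlannedModification] = field(default_factory=list)
    rollback_trigger: float = 0.0  # metric floor that forces reverting an update

    def update_is_deployable(self, mod: PlannedModification,
                             observed_metric: float) -> bool:
        """An update may be released only if validation meets its pre-specified threshold."""
        return observed_metric >= mod.acceptance_threshold
```

Encoding the plan this way makes the deployment decision auditable: the threshold the update was judged against is the one written down before the model went to market.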

4. Change Management in SaMD: Locked vs. Adaptive Models

Change management procedures differ significantly between locked and adaptive models. For locked models, any change typically triggers the requirement for a new pre-market submission or may need to follow a specific process for major amendments. Conversely, adaptive models require proactive change management protocols that include mechanisms for documenting and validating ongoing changes.

Key considerations for change management include:

  • For Locked Models:
    • Changes must be pre-defined and documented in the validation strategy.
    • Established quality assurance practices govern the decision process for product change.
    • Regulatory compliance is established through approval submissions for any modifications.
  • For Adaptive Models:
    • Changes are dynamically implemented in response to new data.
    • Risk assessments must evaluate the impact of any updates and adapt the predetermined change plan accordingly.
    • Continuous scrutiny of data integrity and model reliability after each change is essential to ensure patient safety and product efficacy.
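As a rough illustration of how these decision rules differ between the two model types, consider the deliberately simplified helper below. The change categories and returned actions are illustrative assumptions for this article; real determinations follow the applicable FDA guidance and the device’s documented risk analysis.

```python
def required_action(model_type: str, change: str, in_change_plan: bool = False) -> str:
    """Grossly simplified, hypothetical decision helper for illustration only."""
    if change == "intended_use":
        # A change to intended use generally requires a new submission for any model type.
        return "new premarket submission"
    if model_type == "locked":
        # Locked models: modifications are assessed case by case under the QMS.
        return "new premarket submission or documented letter-to-file per QMS"
    if model_type == "adaptive" and in_change_plan:
        # Adaptive models: pre-specified changes proceed under the change plan.
        return "implement under predetermined change plan with documented validation"
    return "consult regulators before implementing"
```

The point of the sketch is the asymmetry: for locked models the default answer is a new regulatory action, while for adaptive models the predetermined change plan pre-authorizes a defined class of updates.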

5. Addressing Model Drift in AI/ML SaMD

Model drift is the degradation of a model’s performance over time as the data it encounters in production diverges from the data it was developed on, driven by factors such as changes in clinical practice, data collection methods, or shifts in the underlying patient population. Addressing model drift is imperative for adaptive AI/ML SaMD to assure continued performance efficacy and compliance with regulatory expectations.

The FDA recommends that companies implementing adaptive models establish robust metrics to monitor for signs of model drift. These include:

  • Performance Metrics: Companies should define baseline performance metrics before deployment and compare these against ongoing performance data collected during the model’s operational phase.
  • Feedback Loops: Developing mechanisms for clinician feedback can offer insight into unexpected drifts in decision-making or efficacy.
  • Regular Updates: Scheduled updates to the adaptive model may be necessary to account for recognized drift, with documented change management practices in place to evaluate and implement these changes.
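A minimal sketch of such monitoring follows, assuming binned input-feature distributions and a single baseline performance metric. The population stability index (PSI) is one common way to quantify input drift; the thresholds shown are conventional rules of thumb, not regulatory requirements.

```python
import math

def population_stability_index(expected: list[float], observed: list[float]) -> float:
    """Compare two proportion distributions over the same bins.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    psi = 0.0
    for e, o in zip(expected, observed):
        e = max(e, 1e-6)  # guard against log(0) in empty bins
        o = max(o, 1e-6)
        psi += (o - e) * math.log(o / e)
    return psi

def drift_alert(baseline_metric: float, current_metric: float,
                tolerance: float = 0.05) -> bool:
    """Flag drift when performance degrades beyond a pre-specified tolerance."""
    return (baseline_metric - current_metric) > tolerance
```

In practice the baseline distributions and tolerance would be fixed before deployment, as part of the documentation described above, so that an alert is a comparison against a pre-specified criterion rather than an after-the-fact judgment.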

6. Post-Market Monitoring and Compliance Strategies

Post-market monitoring strategies for both locked and adaptive SaMD are essential for ongoing compliance and safety assurance, though they differ notably in their execution due to the adaptability of the models. The post-market surveillance requirements as per 21 CFR Part 803 involve reporting any adverse events associated with the use of the device.

For locked models, manufacturers must maintain updated performance data and report any significant adverse event or degradation in model performance that could affect patient safety.

In the case of adaptive models, post-market strategies must be more dynamic, incorporating real-time data analytics and continuous monitoring frameworks to address issues of model drift and performance degradation rapidly:

  • Real-time Data Solutions: Incorporating real-time analytics ensures that manufacturers can immediately recognize shifts in model performance that suggest drift.
  • Iterative Auditing: The ability to conduct regular and iterative audits of model performance against predetermined metrics ensures compliance and efficacy.
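One way to implement such an iterative audit in code is sketched below, assuming confirmed outcome labels are available for a sample of recent cases; the accuracy metric and its threshold are illustrative assumptions, not regulatory requirements.

```python
import datetime

def audit_batch(labels: list[int], predictions: list[int],
                min_accuracy: float = 0.85) -> dict:
    """Audit one batch of post-market predictions against confirmed outcomes
    and return a timestamped record suitable for the device's audit trail."""
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    accuracy = correct / len(labels)
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "n_cases": len(labels),
        "accuracy": round(accuracy, 4),
        "within_spec": accuracy >= min_accuracy,  # compared against a predetermined metric
    }
```

Running this on a schedule and retaining each returned record yields the kind of documented, repeatable audit trail that post-market surveillance expects.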

7. Strategic Recommendations for Compliance

To successfully navigate the regulatory requirements associated with both locked and adaptive AI models in SaMD, organizations should consider the following strategic recommendations:

  • Early Engagement with Regulators: Engage with the FDA early in the development process to gain insights that can help inform the regulatory strategy for both model types.
  • Invest in Quality Management: Implement robust quality management systems (QMS) that meet 21 CFR Part 820 standards to ensure traceability, documentation, and compliance.
  • Develop Comprehensive Change Plans: For adaptive models, create predetermined change plans to ensure all changes are documented, validated, and compliant with regulatory expectations.
  • Focus on Robust Post-Market Surveillance: Establish clear post-market monitoring processes for both locked and adaptive models to ensure ongoing compliance, efficacy, and patient safety.

8. Conclusion

Understanding the regulatory differences between locked and adaptive AI models in Software as a Medical Device is crucial for ensuring that developers remain compliant with FDA expectations. By implementing a well-defined change control strategy, adhering to predetermined change plans, and actively monitoring the performance of these models, organizations can effectively manage compliance risks associated with SaMD. Adhering to best practices in regulatory submissions and post-marketing surveillance will not only enhance patient safety but also facilitate smoother interactions with regulatory authorities, positioning organizations favorably in the evolving landscape of digital health solutions.
