Governance Models for AI Deployment in FDA-Regulated Quality Systems

Published on 04/12/2025

This article serves as an extensive guide on the governance models for deploying Artificial Intelligence (AI) and Machine Learning (ML) in Good Practice (GxP) quality systems within the highly regulated environments of the FDA, EMA, and MHRA. With the increasing integration of advanced technologies in the pharmaceutical and biotech sectors, regulatory affairs (RA) professionals must navigate a complex landscape of expectations, regulations, and guidelines to ensure compliance and uphold product quality.

Context

The pharmaceutical and biotech industries are undergoing a transformation fueled by the integration of AI and ML technologies. These advancements possess the potential to significantly enhance processes within GxP quality systems, impacting areas such as manufacturing, quality assurance (QA), quality control (QC), and regulatory compliance. However, the deployment of AI/ML systems must align with rigorous regulatory frameworks established by agencies such as the U.S. Food and Drug Administration (FDA), the European Medicines Agency (EMA), and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA).

Legal/Regulatory Basis

Understanding the regulatory basis for AI/ML integration into GxP quality systems is crucial for compliance. In the U.S., the regulatory landscape is primarily shaped by 21 CFR Part 11, which sets out the requirements for electronic records and electronic signatures, and 21 CFR Part 820, the Quality System Regulation (QSR) for medical devices. In Europe, Regulation (EU) 2017/745 on medical devices (MDR) and Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR) address digital technologies, laying the groundwork for AI/ML use in quality systems. The MHRA follows broadly similar principles while applying the UK's own post-Brexit regulatory framework.

FDA's guidance on the use of AI offers additional insight into the agency's perspective, emphasizing a comprehensive risk management approach and a strong governance model to ensure the safety and efficacy of AI/ML implementations. These expectations aim to foster trust in AI systems and underscore accountability throughout their lifecycle.


Documentation

Proper documentation is foundational when developing governance models for AI/ML systems in GxP environments. A well-structured documentation strategy supports regulatory compliance and facilitates transparency. Key documentation elements include:

  • Validation Plans: Document the approach to validate AI and ML models, including data sources, annotation procedures, and validation techniques.
  • Quality Management System (QMS) Integration: Incorporate AI/ML processes into existing QMS documentation, ensuring alignment with regulatory expectations and quality policies.
  • Risk Management Files: Maintain comprehensive risk assessments and management strategies pertaining to AI/ML applications.
  • Standard Operating Procedures (SOPs): Develop SOPs for the development, implementation, and maintenance of AI/ML systems to ensure consistent application of regulatory requirements.
  • Change Control Documentation: Establish processes to manage changes to AI systems while considering the impact on quality systems.
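As an illustration, the documentation elements above could be tracked in a simple structured index with traceability links between records. This is a minimal sketch only; the record types, field names, and helper function below are hypothetical and not drawn from any regulation.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class DocType(Enum):
    """Illustrative GxP documentation categories for an AI/ML system."""
    VALIDATION_PLAN = "validation_plan"
    RISK_FILE = "risk_management_file"
    SOP = "standard_operating_procedure"
    CHANGE_CONTROL = "change_control_record"


@dataclass
class GxPDocument:
    doc_id: str
    doc_type: DocType
    title: str
    version: str
    effective_date: date
    linked_docs: list = field(default_factory=list)  # traceability to related records


def traceability_gaps(docs):
    """Return documents that reference record IDs missing from the index."""
    known = {d.doc_id for d in docs}
    return {d.doc_id: [ref for ref in d.linked_docs if ref not in known]
            for d in docs if any(ref not in known for ref in d.linked_docs)}
```

A check like `traceability_gaps` mirrors what an auditor does manually: every change-control record should trace back to an existing validation plan, risk file, or SOP.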

Review/Approval Flow

Understanding the appropriate review and approval flow is essential when submitting AI/ML systems for regulatory assessment. The review process typically encompasses the following stages:

  1. Pre-Submission Interaction: Engage with regulatory agencies early to discuss the proposed AI/ML system and obtain feedback on documentation requirements and expectations.
  2. Submission: Prepare and submit an application, including details on the AI/ML model, validation data, and risk assessments as per regulatory standards.
  3. Agency Review: Regulatory bodies conduct a thorough review of submitted documentation, scrutinizing the methods, validation, and application of the AI system.
  4. Approval or Request for Additional Information: Following the review, agencies may grant approval or request further clarification on aspects such as model performance, data integrity, or system governance.
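The four stages above can be sketched as a simple state machine. The stage names and transition rules here are illustrative (note that an information request loops back into agency review once the sponsor responds); actual procedural steps vary by agency and submission type.

```python
from enum import Enum, auto


class ReviewStage(Enum):
    PRE_SUBMISSION = auto()
    SUBMISSION = auto()
    AGENCY_REVIEW = auto()
    APPROVED = auto()
    INFO_REQUESTED = auto()


# Allowed transitions between stages (illustrative only).
TRANSITIONS = {
    ReviewStage.PRE_SUBMISSION: {ReviewStage.SUBMISSION},
    ReviewStage.SUBMISSION: {ReviewStage.AGENCY_REVIEW},
    ReviewStage.AGENCY_REVIEW: {ReviewStage.APPROVED, ReviewStage.INFO_REQUESTED},
    ReviewStage.INFO_REQUESTED: {ReviewStage.AGENCY_REVIEW},
    ReviewStage.APPROVED: set(),
}


def advance(current: ReviewStage, target: ReviewStage) -> ReviewStage:
    """Move to `target` if the transition is allowed, else raise."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target
```

Modeling the flow this way makes the loop between agency review and information requests explicit, which is where most submission timelines slip.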

Common Deficiencies

Encountering deficiencies in regulatory submissions can lead to significant delays and increased costs. Some common areas of concern identified during agency reviews for AI/ML applications include:

  • Insufficient Validation Data: Lack of comprehensive validation supporting the reliability and accuracy of AI/ML models can prompt regulatory pushback.
  • Inadequate Documentation: Failure to maintain thorough records of AI/ML development, validation, and implementation can lead to compliance findings.
  • Poor Risk Management Procedures: Inadequate risk assessments that fail to address potential ethical and safety implications associated with AI decision-making may raise red flags with regulators.
  • Lack of Change Control: Insufficient processes for managing changes to AI systems can compromise quality controls and compliance adherence.
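The four deficiency areas above lend themselves to a pre-submission self-check. The sketch below is purely illustrative: the evidence keys are hypothetical field names, not regulatory terms, and a real checklist would be far more granular.

```python
# Map of hypothetical evidence fields to the common-deficiency area
# each one guards against.
REQUIRED_EVIDENCE = {
    "validation_data": "Insufficient Validation Data",
    "documentation_complete": "Inadequate Documentation",
    "risk_assessment": "Poor Risk Management Procedures",
    "change_control_process": "Lack of Change Control",
}


def deficiency_check(submission: dict) -> list:
    """Return the common-deficiency labels a draft submission may trigger."""
    return [label for key, label in REQUIRED_EVIDENCE.items()
            if not submission.get(key)]
```

Running such a check before submission is cheap insurance against the review delays described above.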

RA-Specific Decision Points

Regulatory affairs professionals should assess specific decision points throughout the governance model development and submission processes. Key considerations include:

AI/ML as a Variation vs. New Application

Determining whether to file a change as a variation or a new application hinges on the degree of modification to the AI system. If the modifications significantly alter the intended purpose or potential risks associated with the AI application, a new application may be warranted. Conversely, minor updates, such as algorithm adjustments or performance enhancements, could fall under the variation category. It is essential to closely evaluate the implications of each approach not only for compliance but also for potential market access concerns.
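The decision logic described above can be expressed roughly as follows. The inputs and outcome labels are illustrative placeholders; the actual determination rests on agency-specific variation guidance and case-by-case assessment.

```python
def filing_route(changes_intended_purpose: bool,
                 changes_risk_profile: bool) -> str:
    """Rough sketch of the variation-vs-new-application decision.

    A change that alters the intended purpose or the risk profile of the
    AI system points toward a new application; otherwise it is likely a
    candidate for a variation. Illustrative only -- the real determination
    follows the relevant agency's variation guidance.
    """
    if changes_intended_purpose or changes_risk_profile:
        return "new_application"
    return "variation"
```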

Justifying Bridging Data

When addressing gaps in data for an AI/ML system that relies on historical non-AI data, it is vital to provide strong justifications for utilizing bridging data. Clear articulation of how bridging data supports the model’s efficacy and safety—combined with robust documentation and validation practices—reinforces the integrity of the submission. Regulatory agencies tend to scrutinize such justifications extensively, necessitating a well-prepared rationale rooted in scientific and regulatory standards.

Establishing Responsible AI Policies

To effectively implement AI within GxP systems, regulatory professionals must develop responsible AI policies. Such policies should encompass:

  • Ethical Considerations: Ensure that AI applications prioritize patient safety and ethical standards in decision-making processes.
  • Continuous Monitoring: Implement mechanisms to continuously evaluate AI performance and calibrate models over time.
  • Stakeholder Engagement: Foster open channels of communication among stakeholders, ensuring their insights are integrated into the AI governance framework.
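The continuous monitoring point above can translate into something as simple as a rolling performance check against a validated threshold. This is a minimal sketch: the window size, threshold, and class name are placeholders, and in practice both parameters would need to be justified in the validation plan.

```python
from collections import deque


class PerformanceMonitor:
    """Rolling check of model accuracy against a validated threshold.

    Window size and threshold are illustrative defaults; real values
    would be justified in the validation plan.
    """

    def __init__(self, threshold: float = 0.95, window: int = 100):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def needs_review(self) -> bool:
        """Flag for human review when rolling accuracy drops below threshold."""
        if not self.outcomes:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold
```

A flag from such a monitor would feed back into the change-control and risk-management processes described earlier, rather than triggering automatic model updates.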

In conclusion, building a comprehensive governance model for AI/ML deployment in FDA-regulated quality systems is critical for ensuring that these technologies enhance rather than compromise product quality and patient safety. Regulatory affairs professionals must adeptly navigate the landscape of legal requirements, documentation standards, review processes, and common pitfalls while leveraging their expertise to achieve compliance and operational excellence.
