Global convergence of FDA, EMA and MHRA positions on AI in GxP systems



Published on 06/12/2025


Artificial Intelligence (AI) and Machine Learning (ML) are changing the landscape of GxP-regulated quality systems in the pharmaceutical and biotech industries. Regulatory expectations set by the FDA, EMA, and MHRA provide essential guardrails to ensure that AI and ML implementations preserve the integrity, safety, and efficacy of drug products. This regulatory explainer examines the three agencies' expectations for AI in GxP quality systems, outlining the relevant regulations and guidelines, key decision points, and pitfalls to avoid in order to remain compliant across jurisdictions.

Context

The integration of AI and ML technologies into Good Practice (GxP) systems aims to enhance operational efficiency, improve data integrity, and facilitate better decision-making processes. However, the application of these technologies necessitates a clear understanding of regulatory expectations to ensure that the systems comply with safety and quality standards across the US, EU, and UK.

As pharmaceutical companies adopt AI-driven methodologies, they must consider how these systems affect their compliance with regulations governing Good Manufacturing Practice (GMP), Good Clinical Practice (GCP), and Good Laboratory Practice (GLP). Navigating the requirements set forth by regulatory authorities is therefore critical to maintaining product quality and regulatory compliance.

Legal/Regulatory Basis

The regulatory oversight of AI and ML in GxP systems is defined primarily by various legislation and guidelines in the respective jurisdictions.

United States: FDA Regulations

The FDA’s regulatory framework primarily revolves around the Federal Food, Drug, and Cosmetic Act (FDCA) and its implementing regulations found in Title 21 of the Code of Federal Regulations (CFR). Key sections relevant to AI include:

  • 21 CFR Part 11: Covers electronic records and electronic signatures, emphasizing the importance of data integrity, audit trails, and system validation.
  • 21 CFR Part 820: Outlines the Quality System Regulation for medical devices, including software validation requirements (being superseded by the Quality Management System Regulation (QMSR) from February 2026).
  • FDA Guidance on Software as a Medical Device (SaMD): This guidance outlines how the FDA regulates software intended for medical purposes and addresses the use of AI and ML in developing SaMD.
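
Part 11's emphasis on tamper-evident audit trails can be made concrete with a hash-chained log, where each entry commits to the one before it so any retroactive edit is detectable. The sketch below is purely illustrative (the field names and `append_entry`/`verify_trail` helpers are hypothetical), not a validated Part 11 implementation:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail, user, action, record_id):
    """Append a tamper-evident entry: each entry hashes the previous one."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "record_id": record_id,
        "prev_hash": prev_hash,
    }
    # Hash the entry body (prev_hash included) to chain it to its predecessor.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)
    return entry

def verify_trail(trail):
    """Recompute the chain; any edited entry breaks verification."""
    prev_hash = "0" * 64
    for entry in trail:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

trail = []
append_entry(trail, "analyst1", "model_retrained", "MDL-001")
append_entry(trail, "qa_lead", "result_approved", "MDL-001")
print(verify_trail(trail))  # True for an untampered trail
```

A production system would add secure identity management, time-stamping controls, and retention per the predicate rules; the point here is only that audit-trail integrity can be verified computationally rather than by inspection alone.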

Additionally, the FDA has published guidance documents specific to AI and ML, emphasizing the importance of data quality, validation processes, transparency, and traceability in AI systems to ensure compliance and patient safety.

European Union: EMA Regulations

In the EU, the European Medicines Agency (EMA) oversees AI and ML applications within GxP systems through various regulations and guidelines, including:

  • EU Regulation 2017/745: The Medical Device Regulation (MDR); governs medical devices but can extend to software tools used within GxP frameworks.
  • EU Regulation 2017/746: The In Vitro Diagnostic Medical Devices Regulation (IVDR); the counterpart of the MDR for in-vitro diagnostic devices.
  • EMA Guidelines on Quality Risk Management: These guidelines underscore the necessity for a robust risk management plan, especially when AI technologies are utilized.

The EMA has additionally issued position papers detailing considerations for AI implementation, focusing on the importance of ensuring that GxP remains applicable even as technologies evolve.

United Kingdom: MHRA Regulations

The Medicines and Healthcare products Regulatory Agency (MHRA) in the UK parallels the approaches of the FDA and EMA through its own regulations, set out primarily in UK Statutory Instruments (SIs). Key documents include:

  • UK Medical Devices Regulations 2002: Similar to EU regulations, this governs the use of devices and software in GxP environments.
  • MHRA Guidance on Software as a Medical Device: This provides specific regulatory insights relevant to AI and ML applications in clinical and manufacturing settings.

As the UK's regulatory framework diverges from EU regulations following Brexit, companies must stay proactive about changes in the regulatory landscape concerning AI and ML in GxP systems.

Documentation

Comprehensive documentation is crucial for compliance with regulatory requirements when incorporating AI and ML technologies into GxP systems. This documentation should include:

  • System Descriptions: Detailed descriptions of the AI or ML application, including its purpose, functionalities, and operational contexts.
  • Validation Documentation: Records demonstrating how the system meets intended use, including validation/verification testing results and qualification reports.
  • Risk Management Plans: Documentation that identifies, assesses, and mitigates risks associated with AI implementations, aligned with regulatory risk management guidelines.
  • Data Management Protocols: Detailed policies outlining data governance, data integrity measures, and data usage, especially in the training of AI models.
  • Maintenance and Audit Plans: Plans to assure ongoing compliance, regular assessments, and system performance monitoring.
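
Much of this documentation lends itself to a structured, machine-readable form so that acceptance criteria can be checked automatically against observed results. The record below is a hypothetical sketch (the class and field names are illustrative assumptions, not any agency-mandated schema); real templates would map to your own SOPs:

```python
from dataclasses import dataclass

@dataclass
class AIValidationRecord:
    """Illustrative validation record for an AI/ML system in a GxP context."""
    system_name: str
    intended_use: str
    model_version: str
    training_data_ref: str        # pointer to a governed, versioned dataset
    acceptance_criteria: dict     # metric name -> minimum required value
    observed_performance: dict    # metric name -> measured value
    risk_assessment_ref: str = ""
    approved_by: str = ""

    def meets_acceptance_criteria(self):
        """Pass only if every predefined criterion is met or exceeded."""
        return all(
            self.observed_performance.get(metric, float("-inf")) >= threshold
            for metric, threshold in self.acceptance_criteria.items()
        )

rec = AIValidationRecord(
    system_name="VisualInspectAI",        # hypothetical system
    intended_use="defect classification on fill-finish line",
    model_version="1.2.0",
    training_data_ref="DS-2024-007",
    acceptance_criteria={"accuracy": 0.95, "recall": 0.90},
    observed_performance={"accuracy": 0.97, "recall": 0.93},
)
print(rec.meets_acceptance_criteria())  # True
```

Predefining acceptance criteria in the record (rather than judging results after the fact) mirrors the regulatory expectation that validation protocols state pass/fail criteria before testing begins.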

Review/Approval Flow

Follow a structured approach to the review and approval of AI and ML technologies in GxP systems to ensure compliance with regulations and to mitigate risks:

1. Pre-Submission Activities

  • Conduct a thorough analysis of the intended use and required regulatory pathway.
  • Engage with regulatory authorities through pre-submission meetings to discuss the intended application of AI and ML.
  • Collect and review all applicable documentation, ensuring compliance with software validation guidelines.

2. Submission to Regulatory Authorities

  • Submit the application in accordance with the identified regulatory pathway (e.g., New Drug Application (NDA), Marketing Authorization Application (MAA)).
  • Incorporate a clear explanation regarding the AI technology utilized, addressing training data sources, algorithms, and decision-making processes.

3. Agency Interaction and Review Process

  • Be prepared for a thorough review process, including addressing any agency inquiries swiftly and efficiently.
  • Submit responses that bridge any identified data gaps while ensuring clarity in justifying AI decisions and their implications.
  • Provide supplementary data or analyses as requested by regulatory authorities during the review phase.

4. Post-Approval Monitoring

  • Engage in continuous monitoring to assess the AI system’s performance in real-world settings.
  • Document lessons learned and insights drawn from post-market surveillance to inform future applications and submissions.
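
One common technique for the continuous-monitoring step is tracking input-data drift, for example with the population stability index (PSI) between the validation-time reference data and production data. This is one possible approach, not a regulatory requirement; the function and thresholds below are illustrative:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample and a production sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # Small floor avoids log(0) for empty bins.
        return [max(c / n, 1e-6) for c in counts]

    p = proportions(expected)
    q = proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

In practice such a check would run on a schedule against each monitored input feature, with excursions above the investigation threshold feeding the documented change-control and CAPA processes.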

Common Deficiencies

Understanding common deficiencies that arise during agency reviews can help in avoiding pitfalls associated with AI and ML implementations:

  • Lack of Transparency: AI systems must be explainable; failure to demonstrate how decisions are made can lead to significant regulatory pushback.
  • Inadequate Validation: Insufficient validation testing and failure to establish data integrity can hinder approvals and result in major compliance deficiencies.
  • Poor Risk Management: Inability to effectively identify and mitigate risks associated with AI can lead to non-compliance with regulatory expectations.
  • Inconsistent Documentation: Lack of thorough documentation can create confusion regarding AI system functionalities, expectations, and intended use, complicating regulatory and audit processes.

RA-Specific Decision Points

Regulatory Affairs professionals face critical decision points when filing applications related to AI and ML systems. Understanding when to file as a variation versus a new application is paramount.

When to File as Variation vs. New Application

  • Variation: If AI technology contributes to an existing product’s functionalities or supports ongoing activities (e.g., improved data analytics), a variation may be suitable.
  • New Application: If the AI application leads to fundamentally new product indications or alters the safety profile significantly, a new application should be filed.

Justifying Bridging Data

Bridging data may be necessary when transferring AI methods across different applications or indications. To successfully justify bridging data:

  • Provide comprehensive comparative analyses showing equivalence or superiority to previously validated methods.
  • Demonstrate consistent performance metrics across applications to instill confidence in the AI system’s capability.
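
A comparative analysis of this kind is often framed as a non-inferiority check: the new application's performance must not fall below the previously validated method's by more than a predefined margin. The sketch below uses a simple bootstrap confidence interval on the accuracy difference; the functions, margin, and per-case encoding are illustrative assumptions, not a prescribed statistical method:

```python
import random

def bootstrap_ci_diff(baseline_correct, new_correct,
                      n_boot=1000, alpha=0.05, seed=1):
    """Bootstrap CI for (new accuracy - baseline accuracy).
    Inputs are lists of per-case 0/1 outcomes (1 = correct)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        b = [rng.choice(baseline_correct) for _ in baseline_correct]
        n = [rng.choice(new_correct) for _ in new_correct]
        diffs.append(sum(n) / len(n) - sum(b) / len(b))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def non_inferior(baseline_correct, new_correct, margin=0.05):
    """Accept bridging if the CI lower bound stays above -margin."""
    lo, _ = bootstrap_ci_diff(baseline_correct, new_correct)
    return lo > -margin
```

The non-inferiority margin itself would need a documented clinical or quality justification; the statistical check only demonstrates that observed performance is consistent with it.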

Conclusion

As companies integrate AI and ML into their GxP systems, it is essential to recognize and adapt to the evolving regulatory landscape governed by the FDA, EMA, and MHRA. By understanding relevant guidelines, maintaining robust documentation, and addressing common deficiencies, companies can successfully navigate regulatory processes. Clear decision points ensure appropriate filing strategies and justifications for data bridging, promoting compliance and enhancing patient safety. Aligning with international regulatory expectations will enable the safe and effective integration of cutting-edge AI technologies into the pharmaceutical landscape.