Involving quality and regulatory early in AI proof of concept projects


Published on 04/12/2025

The introduction of Artificial Intelligence (AI) and Machine Learning (ML) into Good Practice (GxP) quality systems represents a significant evolution in the pharmaceutical and biotech industries. Given the complexities involved, regulatory professionals must understand the expectations set forth by regulatory agencies such as the FDA, EMA, and MHRA. Early involvement of quality and regulatory affairs in AI proof of concept projects is critical to ensuring compliance and addressing potential deficiencies proactively.

Context

In GxP settings, AI systems can enhance data management, predictive analytics, and decision-making processes. However, these technologies need to align with regulatory requirements to maintain product quality and patient safety. Regulating AI in pharma involves considering its application across various functions, including Quality Assurance (QA), Quality Control (QC), Clinical Development, and Pharmacovigilance (PV).

Legal/Regulatory Basis

The regulatory landscape surrounding AI and ML is evolving as agencies strive to provide clarity on acceptable practices. Key regulations and guidelines include:

  • FDA Guidance on the Use of AI and ML: The FDA has issued guidance documents that outline expectations for the development and validation of AI and ML technologies in the context of GxP. This includes a focus on safety, effectiveness, and data integrity.
  • ICH Guidelines: ICH guidelines, particularly Q8 (Pharmaceutical Development), Q9 (Quality Risk Management), and Q10 (Pharmaceutical Quality System), emphasize the importance of a robust quality framework that accommodates innovations like AI.
  • EU Regulations: In the EU, the General Data Protection Regulation (GDPR) also interacts with AI applications, especially regarding data privacy and security measures that must be incorporated in quality management systems.
  • MHRA’s AI Strategy: The MHRA has published strategies for regulating digital health technologies, providing insights into expectations for compliance with UK regulations.
Documentation Requirements

Effective documentation is essential when integrating AI into GxP-compliant systems. Key documents that should be developed include:

  • Project Plan: Outline the purpose, scope, and objectives of the AI proof of concept project.
  • Quality Plan: Define quality assurance measures throughout the project, including data validation processes and responsible personnel.
  • Risk Assessment: Document a comprehensive risk evaluation identifying potential hazards associated with AI usage and strategies for mitigation.
  • Validation Protocols: Establish protocols for validating AI algorithms, ensuring their performance aligns with established benchmarks and regulatory expectations.
  • Change Control Documents: Develop change control documentation to track modifications made to AI systems during development and deployment.
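One way to keep these deliverables auditable during a proof of concept is to track them programmatically. The sketch below is a minimal, hypothetical checklist; the document names mirror the list above, but the class and field names are illustrative, not a regulatory requirement or any specific tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class GxpDocument:
    """One controlled document in the AI proof of concept file."""
    name: str
    approved: bool = False

@dataclass
class DocumentChecklist:
    """Tracks whether the core project documents exist and are approved."""
    documents: list = field(default_factory=lambda: [
        GxpDocument("Project Plan"),
        GxpDocument("Quality Plan"),
        GxpDocument("Risk Assessment"),
        GxpDocument("Validation Protocol"),
        GxpDocument("Change Control Record"),
    ])

    def outstanding(self):
        """Return the names of documents still awaiting approval."""
        return [d.name for d in self.documents if not d.approved]

checklist = DocumentChecklist()
checklist.documents[0].approved = True  # e.g., Project Plan signed off
print(checklist.outstanding())
```

In practice such a record would live inside a validated document management system; the point of the sketch is only that "complete documentation" is a checkable property, not a vague aspiration.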

Review/Approval Flow

The review and approval flow for integrating AI solutions into GxP quality systems necessitates thorough communication and collaboration between various functions. The following steps outline a recommended workflow:

  1. Initiate Project: Engage relevant stakeholders (QA, regulatory, IT, etc.) to discuss the feasibility of the AI project.
  2. Prepare Documentation: Develop necessary documentation as outlined above.
  3. Internal Review: Conduct an internal review involving QA and regulatory teams to assess compliance and identify gaps.
  4. Submit for Regulatory Approval: If applicable, submit the project documentation to regulators for review, being mindful to address areas of particular agency interest.
  5. Implement AI Solution: Deploy the AI project following approved protocols and perform ongoing monitoring.
  6. Post-Implementation Review: After deployment, evaluate the AI system’s performance relative to predefined metrics and regulatory standards.
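The essential property of this workflow is that stages gate one another: implementation cannot legitimately begin before internal review, for example. The steps above can be sketched as an ordered gate sequence; the stage names and gating logic below are illustrative only, not a prescribed system design:

```python
# Minimal sketch of the review/approval flow as an ordered gate sequence.
STAGES = [
    "initiate_project",
    "prepare_documentation",
    "internal_review",
    "regulatory_submission",
    "implementation",
    "post_implementation_review",
]

class ApprovalFlow:
    def __init__(self):
        self.completed = []  # stages finished so far, in order

    def complete(self, stage: str) -> None:
        """Mark a stage done, enforcing that stages finish in sequence."""
        expected = STAGES[len(self.completed)]
        if stage != expected:
            raise ValueError(f"Cannot complete {stage!r}; next gate is {expected!r}")
        self.completed.append(stage)

flow = ApprovalFlow()
flow.complete("initiate_project")
flow.complete("prepare_documentation")
# flow.complete("implementation")  # would raise: internal review not yet done
```

Encoding the order explicitly makes skipped gates visible, which is exactly what an inspector looks for when reconstructing a project timeline.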

Common Deficiencies

During agency inspections and audits, certain common deficiencies are often identified in the context of AI implementations in GxP quality systems. Awareness of these pitfalls can aid in crafting a robust proof of concept project:

  • Lack of Clear Objectives: Projects lacking clear and measurable objectives may struggle to define success, leading to challenges in compliance and validation.
  • Inadequate Documentation: Insufficient documentation can raise significant concerns during regulatory reviews. Ensure comprehensive records are maintained throughout the project lifecycle.
  • Overlooked Training Needs: Personnel involved in managing AI systems must be adequately trained to understand system operation within GxP contexts. Failing to provide training can result in suboptimal usage and quality outcomes.
  • Failure to Monitor Performance: Continuous monitoring of AI algorithm performance is crucial. Agencies look for evidence of ongoing evaluation against performance targets.
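The last deficiency, absent performance monitoring, is one that lends itself to a concrete sketch. Below is a minimal, hypothetical rolling-accuracy monitor that flags an AI model for quality review when it drops below a predefined acceptance target; the target value and window size are illustrative assumptions, not regulatory thresholds:

```python
from collections import deque

class PerformanceMonitor:
    """Illustrative rolling check of model accuracy against an acceptance target."""

    def __init__(self, target: float = 0.95, window: int = 100):
        self.target = target
        self.results = deque(maxlen=window)  # rolling window of pass/fail outcomes

    def record(self, correct: bool) -> None:
        """Record whether the model's latest output was judged correct."""
        self.results.append(1 if correct else 0)

    def rolling_accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_review(self) -> bool:
        """True when rolling accuracy falls below the acceptance target."""
        return self.rolling_accuracy() < self.target

monitor = PerformanceMonitor(target=0.9, window=10)
for outcome in [True] * 8 + [False] * 2:
    monitor.record(outcome)
print(monitor.rolling_accuracy(), monitor.needs_review())
```

Whatever the actual metric (accuracy, precision, drift statistics), the key is that the target is predefined in the validation protocol and the ongoing evaluation is documented, so the "evidence of ongoing evaluation" agencies expect exists by construction.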

Regulatory Affairs-Specific Decision Points

As regulatory professionals engage in AI proof of concept projects, several critical decision points must be addressed:

When to File as Variation vs. New Application

Determining whether an AI-related change requires filing a variation or a new application is a common challenge. The following considerations can guide this decision:

  • If the AI system affects the drug’s quality, safety, or efficacy in a way that necessitates reevaluation of the entire application, filing as a new application may be warranted.
  • Conversely, if the AI implementation improves operational efficiency without altering the product’s core characteristics or intended use, a variation might suffice.
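The two considerations above can be compressed into a simple decision sketch. This is deliberately naive: the function name and boolean inputs are hypothetical, and a real filing decision rests on case-by-case assessment and agency consultation rather than two flags:

```python
def filing_route(affects_quality_safety_efficacy: bool,
                 changes_intended_use: bool) -> str:
    """Illustrative helper for the variation-vs-new-application decision.

    Inputs compress the two considerations above; a real decision
    cannot be fully automated and should involve agency consultation.
    """
    if affects_quality_safety_efficacy or changes_intended_use:
        return "new application (reevaluation likely required)"
    return "variation (operational change only)"

# An AI tool that only streamlines batch-record review:
print(filing_route(affects_quality_safety_efficacy=False,
                   changes_intended_use=False))
```

Even as a toy, writing the decision down this way forces the team to state explicitly which product attributes the AI system can touch.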

How to Justify Bridging Data

In instances where the AI system utilizes bridging data to predict outcomes for validation or regulatory purposes, the rationale must be clearly articulated:

  • Identify and justify the applicability of bridging data in supporting the AI algorithm’s outcomes.
  • Present a robust rationale that establishes the scientific soundness of leveraging bridging data within the context of the specific proof of concept.
  • Ensure that all bridging data sources are compliant with existing regulatory standards to avert deficiencies during inspections.

Practical Tips for Regulatory Documentation and Agency Queries

When preparing documentation and formulating responses to agency inquiries, consider the following practical recommendations:

  • Anticipate Questions: Assess previous inspections for patterns in agency inquiries related to AI implementations and prepare thorough responses accordingly.
  • Maintain Clarity: Regulatory submissions should be concise and clear, avoiding technical jargon that may obscure key points.
  • Engage in Early Communication: Early discussions with regulatory authorities can provide insight into potential pitfalls and clarify expectations.

Conclusion

As AI and ML technologies increasingly permeate GxP quality systems, it is essential for regulatory affairs professionals to be proactive in their engagement with these innovations. By understanding the legal and regulatory landscape, effectively documenting processes, navigating review workflows, and addressing common deficiencies, organizations can better position themselves for compliance and success. The early inclusion of quality and regulatory considerations will streamline the path from concept to implementation, ensuring that the deployment of AI technologies meets the high standards required in the pharmaceutical and biotechnology sectors.

See also: FDA expectations for AI and machine learning in GxP quality systems