Preparing Briefing Packages on AI Use for FDA and Health Authority Meetings

Published on 06/12/2025
The integration of Artificial Intelligence (AI) and Machine Learning (ML) into good practice (GxP) quality systems has created a paradigm shift in the pharmaceutical and biotech industries. As regulatory professionals work to meet FDA expectations and guidelines, producing comprehensive briefing packages that address those expectations becomes crucial for successful dialogue with health authorities. This regulatory explainer clarifies the relevant regulations, guidelines, and agency expectations surrounding the use of AI/ML in GxP quality systems.

Regulatory Affairs Context

As AI technologies become more prevalent in drug development, manufacturing, and quality assurance, understanding how these technologies align with regulatory requirements is imperative. Regulatory Affairs (RA) professionals must not only grasp the nature and capabilities of AI but also appreciate how these systems can meet Good Manufacturing Practice (GMP), Good Clinical Practice (GCP), and Good Distribution Practice (GDP) requirements.

Understanding the intersection of AI technology with various regulatory frameworks is essential for compliance and operational excellence. This involves a robust understanding of:

  • The U.S. Food and Drug Administration (FDA) regulations.
  • The European Medicines Agency (EMA) guidelines.
  • The Medicines and Healthcare products Regulatory Agency (MHRA) expectations.
  • International Council for Harmonisation (ICH) standards.
Legal/Regulatory Basis

    The deployment of AI/ML technologies within GxP settings must adhere to the legal framework that governs pharmaceutical and biopharmaceutical activities. Key regulations and guidelines are as follows:

    FDA Regulations

    The FDA has provided guidance on software as a medical device (SaMD), which includes AI/ML technologies. The relevant sections include:

    • 21 CFR Part 820: Quality System Regulation (QSR), whose design-control requirements extend to lifecycle management of AI software.
    • FDA Draft Guidance on Artificial Intelligence/Machine Learning: This outlines the FDA’s approach to the regulation of AI technologies, focusing on a total product lifecycle approach.

    EMA Guidelines

    The EMA has issued specific recommendations on the use of AI in quality management that focus on:

    • The robustness of data handling practices in AI algorithms.
    • Performance validation of AI systems in a regulated environment.

    MHRA Expectations

    The MHRA emphasizes the need for AI applications to maintain data integrity and quality assurance. The agency’s published guidelines encourage robust risk management practices to ensure that AI outputs are sufficiently reliable.

    ICH Standards

    ICH guidelines, particularly ICH Q8, Q9, and Q10, are pivotal in ensuring the quality of pharmaceuticals and biopharmaceuticals. They stress the importance of incorporating quality by design (QbD) principles, management of quality risks associated with AI, and continuous quality improvement processes when using AI/ML.

    Documentation Requirements

    Creating a thorough briefing package for FDA and other health authority meetings involves compiling comprehensive documentation. This documentation not only supports compliance but also aids regulatory reviewers in understanding the implications of AI usage. Key components include:

    Technical Files

    Technical files should detail:

    • Architecture and algorithms utilized in the AI/ML systems.
    • Data sources for training and validation of the AI systems.
    • Risk assessment and mitigation strategies regarding data security and quality impact.
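The risk-assessment component above is often operationalized with FMEA-style scoring in the spirit of ICH Q9 quality risk management. A minimal Python sketch follows; the hazard names, rating scales, and the threshold of 100 are illustrative assumptions, not drawn from any specific guidance:

```python
# FMEA-style risk scoring for an AI-system hazard log (illustrative).
# Each factor is rated 1 (low risk) to 10 (high risk).

def risk_priority(severity, occurrence, detectability):
    """Risk Priority Number (RPN) = severity x occurrence x detectability."""
    return severity * occurrence * detectability

# Hypothetical hazards for an AI/ML system in a GxP setting.
hazards = [
    ("training data drift", 7, 4, 5),
    ("mislabelled validation set", 8, 2, 3),
]

for name, s, o, d in hazards:
    rpn = risk_priority(s, o, d)
    needs_mitigation = rpn >= 100  # illustrative escalation threshold
    print(f"{name}: RPN={rpn}, mitigate={needs_mitigation}")
```

A real hazard log would also record the mitigation owner and residual risk after controls, so the technical file shows a closed loop from identification to mitigation.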

    Validation and Verification Reports

    Companies should present:

    • Evidence of system validation following relevant regulations.
    • Details on ongoing monitoring of AI decision-making processes, including performance metrics.
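At its simplest, the ongoing monitoring described above compares live performance against the validated baseline and flags drift for quality review. A hedged sketch, with hypothetical metric names, tolerance, and sample data:

```python
# Minimal sketch of periodic AI performance monitoring against a
# validated baseline. The 0.05 drift tolerance is an assumption.

def monitor_performance(predictions, ground_truth, baseline_accuracy,
                        tolerance=0.05):
    """Return a record suitable for a periodic review report."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    accuracy = correct / len(predictions)
    drift = baseline_accuracy - accuracy
    return {
        "accuracy": round(accuracy, 3),
        "baseline": baseline_accuracy,
        "drift": round(drift, 3),
        "action_required": drift > tolerance,  # escalate to quality review
    }

# Illustrative batch of reviewed predictions.
report = monitor_performance(
    predictions=[1, 0, 1, 1, 0, 1, 0, 1],
    ground_truth=[1, 0, 1, 0, 0, 1, 0, 1],
    baseline_accuracy=0.95,
)
```

In practice the output of each review cycle would be retained as a GxP record, so the briefing package can show a trended history rather than a single snapshot.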

    Quality Management Plan (QMP)

    A comprehensive QMP specific to the AI systems should demonstrate compliance with GxP requirements and explain quality oversight, including:

    • AI-specific quality objectives and performance indicators.
    • Responsibilities and roles of personnel interacting with the AI systems.

    Review/Approval Flow

    Understanding the regulatory pathways related to AI/ML applications is critical. The flow for review and approval can be outlined as follows:

    Pre-Submission Activities

    Prior to seeking regulatory approval, companies should:

    • Engage in informal meetings or consultations with regulatory authorities to clarify expectations.
    • Prepare comprehensive introductory materials that clearly capture the innovative aspects of the AI technology.

    Submission to Regulatory Authorities

    Once the initial documentation is ready, the formal submission process involves:

    • Filing a pre-market submission or an investigational new drug (IND) application with the FDA.
    • Seeking a scientific advice meeting with the EMA for proactive engagement.
    • Utilizing the MHRA’s submission pathway for innovative technologies in biopharmaceuticals.

    Regulatory Review Process

    Following submission, the review process typically encompasses:

    • Assessment of the technical package focusing on validation, performance metrics, and overall compliance.
    • Communications from regulatory authorities, which may include requests for additional information (RAIs) regarding AI algorithm performance and safety.

    Post-Approval Monitoring

    After approval, ongoing monitoring of AI systems is mandated to ensure continued compliance, which includes:

    • Regular reporting of any significant deviations or changes in AI performance.
    • Adapting quality oversight to align with the evolving landscape of AI technology.

    Common Deficiencies and How to Avoid Them

    In preparing briefing packages and submissions, companies must be vigilant regarding common deficiencies identified by regulatory authorities. Key deficiencies can include:

    Insufficient Evidence of Performance

    Regulatory agencies often request evidence demonstrating the reliability and accuracy of AI systems. Deficiencies here can be mitigated by:

    • Providing comprehensive validation studies and comparative analyses against existing methodologies.
    • Collaborating with statisticians for robust data analysis and interpretation methods.
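A comparative analysis against an existing methodology typically begins with agreement statistics on paired samples. The sketch below assumes binary (positive/negative) outputs and an illustrative 90% acceptance criterion; a formal study would add confidence intervals and a pre-specified protocol:

```python
# Hedged sketch: comparing AI outputs with the existing (reference)
# method on the same samples. Data and criterion are illustrative.

def method_comparison(ai_results, reference_results, min_agreement=0.90):
    pairs = list(zip(ai_results, reference_results))
    agree = sum(a == r for a, r in pairs)
    ai_only = sum(a and not r for a, r in pairs)    # AI positive, reference negative
    ref_only = sum(r and not a for a, r in pairs)   # reference positive, AI negative
    return {
        "n": len(pairs),
        "overall_agreement": agree / len(pairs),
        "ai_positive_only": ai_only,
        "reference_positive_only": ref_only,
        "meets_criterion": agree / len(pairs) >= min_agreement,
    }

result = method_comparison(
    ai_results=[True, True, False, True, False, True, True, False, True, True],
    reference_results=[True, True, False, True, True, True, True, False, True, True],
)
```

Reporting the discordant counts separately (AI-only vs. reference-only positives) matters, because regulators generally weigh the two error directions differently.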

    Lack of Risk Management Documentation

    Agencies expect robust risk assessments related to software used in GxP environments. To mitigate deficiencies:

    • Implement a proactive risk management framework that encompasses identification, assessment, and mitigation strategies tailored to AI.
    • Incorporate real-time monitoring and feedback mechanisms to adapt to unforeseen challenges.

    Inadequate Training and Oversight Procedures

    Another common concern is inadequate training of personnel using AI systems. Companies must:

    • Ensure comprehensive training programs that encompass ethical AI usage, data governance responsibilities, and compliance oversight.
    • Maintain detailed records of training attendance and competency assessments.

    RA-Specific Decision Points

    As regulatory professionals navigate the complexities of AI/ML integrations, certain decision points become critical:

    When to File as a Variation vs. New Application

    The determination of whether to file as a variation or a new application hinges on the extent of AI integration. Consider:

    • Filing a variation if the AI system is an enhancement of existing processes without altering intended use.
    • Opting for a new application if the AI introduces significant changes in functionality that impact safety or efficacy.

    How to Justify Bridging Data

    Bridging data may be required when introducing AI. The justification should include:

    • Scientific rationale for the data bridging approach, outlining similarities between AI outputs and traditional methods.
    • Data analytics demonstrating minimal variance in quality outcomes aligned with GxP standards.
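The minimal-variance claim above is usually supported by summary statistics on paired quality outcomes; a formal bridging study would apply a pre-specified equivalence design (e.g., two one-sided tests). An illustrative sketch with hypothetical assay data and an assumed equivalence margin:

```python
import statistics

# Illustrative bridging comparison: quality outcomes (e.g. assay
# results, % of target) from the traditional method vs. the
# AI-supported one. The +/-2.0 margin is a hypothetical assumption.

def bridging_summary(traditional, ai_supported, margin=2.0):
    diff = statistics.mean(ai_supported) - statistics.mean(traditional)
    return {
        "mean_difference": round(diff, 2),
        "traditional_sd": round(statistics.stdev(traditional), 2),
        "ai_sd": round(statistics.stdev(ai_supported), 2),
        "within_margin": abs(diff) < margin,
    }

summary = bridging_summary(
    traditional=[98.1, 99.4, 100.2, 98.7, 99.9],
    ai_supported=[98.5, 99.0, 100.4, 99.1, 99.6],
)
```

The scientific rationale in the briefing package would then tie the chosen margin to product specifications, rather than leaving it as a purely statistical choice.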

    Conclusion

    Preparing briefing packages on the use of AI in GxP quality systems involves comprehensive documentation, adherence to regulatory guidelines, and proactive engagement with regulatory authorities. Understanding the legal basis and submission requirements, and avoiding common pitfalls, are essential strategies for regulatory professionals. By navigating this complex landscape with diligence and foresight, companies can position themselves to leverage AI technologies while maintaining compliance and operational integrity.

    For further insights and detailed guidelines on the expectations surrounding AI in GxP quality systems, organizations are encouraged to consult the FDA's AI guidance documents, EMA guidance, and the MHRA website.
