How to document AI use cases in quality manuals and procedures


Published on 04/12/2025


Context

As artificial intelligence (AI) and machine learning (ML) technologies become increasingly prevalent in the pharmaceutical and biotech sectors, understanding the regulatory landscape surrounding their use in Good Practice (GxP) quality systems is critical. Regulatory authorities such as the FDA, the European Medicines Agency (EMA), and the UK's Medicines and Healthcare products Regulatory Agency (MHRA) have issued guidelines that delineate expectations for the documentation of AI use cases. This article serves as a practical guide for regulatory affairs professionals navigating the complexities of integrating AI into quality systems while remaining compliant with applicable regulations.

Legal/Regulatory Basis

The legal and regulatory landscape governing the use of AI in quality systems encompasses a variety of guidelines and frameworks. Key documents include:

  • 21 CFR Part 11: This regulation outlines the criteria under which electronic records and electronic signatures are considered trustworthy, reliable, and equivalent to paper records.
  • ICH Q10: This guideline provides a comprehensive approach to effective pharmaceutical quality systems, emphasizing a holistic view of quality throughout the product lifecycle.
  • FDA AI/ML Software as a Medical Device (SaMD) Action Plan: This action plan describes how AI/ML-based software should be developed, validated, and monitored, explicitly addressing the need for appropriate documentation and reporting.

In the context of AI, regulatory expectations involve ensuring that algorithms are validated, that data used is representative and robust, and that transparency is maintained throughout the deployment of AI systems.

Documentation Requirements

Documentation is a critical aspect of integrating AI into GxP quality systems. The following guidelines should be adhered to:

  • Use Case Definitions: Clearly articulate the objectives of the AI implementation, including intended use, domain of application, and performance criteria.
  • Risk Assessment: Conduct a thorough risk assessment as per ICH Q9 guidelines. This should detail potential risks associated with the AI use case, including risks to data integrity, patient safety, and process outputs.
  • Validation Strategy: Document the validation strategy to be employed for the AI models, including pre-validation, validation phases, and post-deployment monitoring plans.
  • Data Governance: Ensure a comprehensive data governance framework that covers data collection, processing, storage, and sharing practices. Document the data sources, datasets used for training, and any data preprocessing methods applied.
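The four documentation elements above lend themselves to a structured record. The following is a minimal sketch in Python; the class and field names (`AIUseCaseRecord`, `validation_phases`, etc.) are illustrative assumptions, not a regulatory template.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """Hypothetical record mirroring the four documentation elements."""
    use_case_id: str
    intended_use: str               # objective and domain of application
    performance_criteria: dict      # e.g. {"recall": 0.95}
    risks: list = field(default_factory=list)              # ICH Q9-style risk entries
    validation_phases: list = field(default_factory=list)  # pre-validation, validation, monitoring
    data_sources: list = field(default_factory=list)       # provenance of training data

    def is_complete(self) -> bool:
        """True only if every documentation element has been filled in."""
        return all([self.intended_use, self.performance_criteria,
                    self.risks, self.validation_phases, self.data_sources])

record = AIUseCaseRecord(
    use_case_id="UC-001",
    intended_use="Predict QC inspection failures from historical batch data",
    performance_criteria={"recall": 0.95, "precision": 0.90},
    risks=["data integrity", "process output drift"],
    validation_phases=["pre-validation", "validation", "post-deployment monitoring"],
    data_sources=["LIMS batch records 2020-2024"],
)
```

A completeness check like `is_complete()` can back a documentation SOP: a use case missing any of the four elements is flagged before review.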

Use Case Examples

To effectively demonstrate the application of AI in specific areas, detailed examples may include:

  1. Quality Control (QC): Implementing AI to predict inspection failures based on historical data.
  2. Risk Management: Utilizing ML algorithms to predict patient adverse events based on clinical trial data.
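The QC example above can be sketched with a simple historical-rate model. The records, field names, and 50% threshold below are hypothetical choices for illustration; a production model would be validated per the strategy documented earlier.

```python
from collections import defaultdict

# Hypothetical historical inspection records for two production lines.
history = [
    {"line": "A", "failed": False}, {"line": "A", "failed": True},
    {"line": "A", "failed": False}, {"line": "B", "failed": True},
    {"line": "B", "failed": True},  {"line": "B", "failed": False},
]

def failure_rates(records):
    """Per-line failure rate estimated from historical inspections."""
    counts = defaultdict(lambda: [0, 0])  # line -> [failures, total]
    for r in records:
        counts[r["line"]][0] += int(r["failed"])
        counts[r["line"]][1] += 1
    return {line: f / n for line, (f, n) in counts.items()}

def flag_lines(records, threshold=0.5):
    """Lines whose historical failure rate meets or exceeds the threshold."""
    return [line for line, rate in failure_rates(records).items()
            if rate >= threshold]
```

Here line B (2 of 3 inspections failed) would be flagged for additional scrutiny while line A (1 of 3) would not.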

Review/Approval Flow

Integrating AI into GxP quality processes requires a structured review and approval flow, which may include:

  • Internal Review Board: Establish an internal governance body that includes stakeholders from QA, clinical, IT, and regulatory affairs to review AI use cases.
  • Regulatory Submission: Determine whether the AI implementation requires a submission to regulatory authorities, such as a new application or a variation. Address appropriate justifications for the chosen pathway.

Decision Points

Key decision points in the review and approval process include:

  1. Filing as Variation vs. New Application: Assess whether the AI application alters the intended use of the product. If it does not, it may be filed as a variation; otherwise, a new application might be necessary.
  2. Bridging Data Justification: If the AI model uses extrapolated data, thorough justification must be presented, including the scientific rationale for bridging data based on existing datasets.
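The first decision point can be captured as explicit logic so the chosen pathway is documented consistently. This is a simplified sketch; the pathway labels and the two-input rule are assumptions for illustration, not regulatory advice.

```python
def filing_pathway(alters_intended_use: bool, uses_bridging_data: bool) -> str:
    """Simplified filing decision: new application if the AI implementation
    alters the product's intended use, otherwise a variation (with a
    bridging-data justification attached where extrapolated data is used)."""
    if alters_intended_use:
        return "new application"
    if uses_bridging_data:
        return "variation (with bridging-data justification)"
    return "variation"
```

Encoding the rule this way makes the rationale auditable: the inputs to the decision are recorded alongside the outcome.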

Common Deficiencies and How to Avoid Them

As regulatory submissions are scrutinized, common deficiencies can arise if documentation practices are not strictly followed. Some of the most prevalent issues include:

  • Inadequate Validation Documentation: Ensure a robust validation plan is established, including training data sets, performance metrics, and any necessary adjustments to algorithms.
  • Poor Risk Management Practices: Engage in continuous risk assessment as models evolve, and document all risk mitigation measures taken to address previously identified risks.
  • Insufficient Data Governance: Implement a strong data governance policy. Document the provenance of data and ensure compliance with data protection regulations, such as GDPR in the EU.

Common Agency Questions

When reviewing submissions involving AI use, regulatory bodies like the FDA may raise questions such as:

  • How does the algorithm ensure data integrity?
  • What measures have been taken to validate the model’s predictive capabilities?
  • How are biases in training data addressed?
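Answering the third question usually starts with a measurement of subgroup representation in the training set. The sketch below computes subgroup shares; the `site` field and the 30% threshold are illustrative assumptions.

```python
from collections import Counter

def subgroup_balance(records, group_key):
    """Share of each subgroup in the training set."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training set: 8 EU-site records, 2 US-site records.
training = [{"site": "EU"}] * 8 + [{"site": "US"}] * 2
shares = subgroup_balance(training, "site")
underrepresented = [g for g, s in shares.items() if s < 0.3]
```

A check like this, documented with the dataset description, gives the reviewer concrete evidence of how representativeness was assessed and which subgroups triggered mitigation.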

Practical Tips for Documentation

To ensure regulatory compliance when integrating AI into quality systems, consider the following practical documentation strategies:

  • Regular Updates: Keep all documentation up to date, reflecting changes in technology, policy, or regulations.
  • Templates and Standard Operating Procedures (SOPs): Utilize standardized templates to ensure consistency and comprehensiveness across all documentation.
  • Audit Trails: Establish a clear audit trail that records all modifications to AI models and associated documentation.
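One common way to make an audit trail tamper-evident is hash chaining: each entry stores the hash of the previous entry, so any retroactive edit breaks the chain. The sketch below illustrates the idea; it is not a Part 11-validated implementation, and the class and field names are assumptions.

```python
import datetime
import hashlib
import json

class AuditTrail:
    """Append-only log where each entry is chained to its predecessor's hash."""

    def __init__(self):
        self.entries = []

    def record(self, user, action):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"user": user, "action": action,
                 "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                 "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute every hash and confirm the chain is intact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("qa.lead", "approved model v1.2")
trail.record("data.scientist", "retrained on 2024 dataset")
```

After these two entries, `trail.verify()` returns True; changing any earlier entry invalidates its hash and every entry after it.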

Conclusion

As AI and ML continue to impact the quality landscape within the pharmaceutical and biotech industries, aligning with regulatory expectations is essential. By adhering to the guidelines set forth by regulatory authorities and maintaining comprehensive documentation practices, professionals can navigate the complexities of AI integration into GxP quality systems with confidence.
