Published on 05/12/2025
Regulatory considerations when citing AI outputs in investigation reports
In the rapidly evolving pharmaceutical and biotechnology industries, Artificial Intelligence (AI) is becoming a pivotal tool, particularly in deviation investigations, root cause analysis, and Quality Management System (QMS) workflows. This article is a regulatory explainer for regulatory affairs (RA) professionals navigating the complexities of citing AI outputs in investigation reports across jurisdictions including the US, UK, and EU.
Context
As global regulatory bodies like the FDA, EMA, and MHRA increasingly recognize the potential of AI and machine learning (ML) in enhancing investigation processes, it is critical for professionals to understand the regulatory implications of using AI outputs. This article aims to articulate the expectations and requirements that govern how AI-enabled deviation investigations are conducted, documented, and justified within the regulated pharmaceutical landscape.
Legal/Regulatory Basis
Understanding the regulatory frameworks applicable to AI-enabled investigations is fundamental for compliance. The relevant regulations include:
- 21 CFR Part 820: The US Quality System Regulation (QSR), which sets quality management system requirements for medical devices. Note that FDA's 2024 Quality Management System Regulation (QMSR) final rule amends Part 820 to align with ISO 13485:2016, with an effective date of February 2026.
- EU Regulation 2017/745 (Medical Device Regulation, MDR): This regulation governs medical devices in the EU and emphasizes risk management, clinical evaluation, and post-market surveillance across the device lifecycle.
AI tools should be assessed under these frameworks: their outputs must meet documentation and validation requirements, be supported by reliable quality systems, and align with data integrity principles. The intersections of these guidelines should inform RA interactions with adjacent domains, including Clinical, Pharmacovigilance (PV), and Quality Assurance (QA).
Documentation
When integrating AI outputs into investigation reports, meticulous documentation is essential. The documentation should encapsulate the following:
- AI Model Description: Contextualize the AI model used, including its algorithms, training datasets, and intended application. Ensure compliance with guidelines for ML and AI in quality systems.
- Reproducibility of Results: Ensure the results from the AI-enabled analysis are reproducible and verifiable, with clear links to the source data.
- Evidence of Validation: Document any validation conducted to assess the performance of the AI model, including bias and impact assessments.
- Regulatory Justification: Include reasoning for the inclusion of AI outputs in the investigation and how these outputs supplement traditional methodologies of deviation triage and root cause analysis.
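The documentation elements above can be captured as a structured record so that gaps are visible before a report is finalized. The sketch below is illustrative only; the class name, fields, and the choice of which items count as "gaps" are assumptions, not a prescribed regulatory schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIModelDocumentation:
    """Minimal record of the AI-related facts an investigation report should capture.

    Field names are hypothetical; adapt them to the site's own QMS templates.
    """
    model_name: str
    algorithm: str                      # e.g. "gradient-boosted trees"
    training_data_summary: str          # provenance of the training dataset
    intended_use: str                   # scope within the deviation investigation
    validation_evidence: list = field(default_factory=list)  # refs to validation reports
    regulatory_justification: str = ""  # why AI outputs are included

    def missing_items(self) -> list:
        """Return the documentation elements still absent from this record."""
        gaps = []
        if not self.validation_evidence:
            gaps.append("validation evidence")
        if not self.regulatory_justification:
            gaps.append("regulatory justification")
        return gaps
```

A report author could instantiate this record per model and refuse to release the report while `missing_items()` is non-empty.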
Review/Approval Flow
The incorporation of AI outputs requires modifications in the standard review and approval workflows. Here’s a generalized flow:
- Preparation of Investigation Report: Following a deviation, the first step is to document the initial investigation using traditional methods, with AI outputs clearly indicated.
- Regulatory Submission: Determine if the submission is a variation or a new application. Use the principles outlined in ICH guidelines to argue the case for AI use, emphasizing supplementary information derived from historical data and CMC (Chemistry, Manufacturing, and Controls) insights.
- Response to Reviewer Queries: Be prepared to justify AI outputs. Anticipate common deficiencies related to data integrity, the robustness of models, and methodologies that support decision-making processes.
- Post-Approval Monitoring: Once approved, monitoring and periodic review of the AI system’s performance are necessary to validate ongoing effectiveness and safety compliance.
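The generalized flow above is effectively a small state machine, and encoding it that way makes out-of-order steps detectable. This is a minimal sketch under assumed stage names; real workflows, and whether a query loop returns to submission, will vary by jurisdiction and by the organization's own procedures.

```python
from enum import Enum

class ReviewStage(Enum):
    DRAFT = 1                      # investigation report prepared, AI outputs flagged
    REGULATORY_SUBMISSION = 2      # variation or new application filed
    QUERY_RESPONSE = 3             # responding to reviewer deficiencies
    APPROVED = 4
    POST_APPROVAL_MONITORING = 5   # periodic review of AI system performance

# Allowed forward transitions; query responses may loop back to submission.
TRANSITIONS = {
    ReviewStage.DRAFT: {ReviewStage.REGULATORY_SUBMISSION},
    ReviewStage.REGULATORY_SUBMISSION: {ReviewStage.QUERY_RESPONSE, ReviewStage.APPROVED},
    ReviewStage.QUERY_RESPONSE: {ReviewStage.REGULATORY_SUBMISSION, ReviewStage.APPROVED},
    ReviewStage.APPROVED: {ReviewStage.POST_APPROVAL_MONITORING},
    ReviewStage.POST_APPROVAL_MONITORING: set(),
}

def advance(current: ReviewStage, target: ReviewStage) -> ReviewStage:
    """Move the report to `target`, rejecting out-of-order transitions."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target
```

For example, attempting to jump from `DRAFT` directly to `APPROVED` raises an error, mirroring the requirement that submission and review precede approval.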
Common Deficiencies
During regulatory reviews, common questions and deficiencies often arise concerning AI-enabled investigations. Below are key areas to focus on:
- Data Quality and Integrity: Ensure the underlying data fed into the AI model is of high quality. Regulatory bodies will scrutinize data provenance and its impact on model outputs.
- Justification of AI Use: Clearly justify the necessity and benefit of AI in the investigation process. This includes arguments on efficiency, timeliness, and enhanced analytical capabilities.
- Validation Gaps: Demonstrate thorough validation of AI models, addressing any identified limitations or biases and how they have been mitigated.
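Data provenance, the first deficiency area above, lends itself to automated pre-checks on the records fed into an AI model. The helper below is a simplified sketch; the required field names (`source_system`, `capture_date`, `operator_id`) are assumptions standing in for whatever provenance metadata a given site's data integrity procedures mandate.

```python
def provenance_gaps(records):
    """Return indices of input records lacking the provenance fields
    reviewers typically scrutinize. Field names are illustrative."""
    required = ("source_system", "capture_date", "operator_id")
    return [i for i, rec in enumerate(records)
            if any(not rec.get(f) for f in required)]
```

Running such a check before model input, and retaining its output, gives the investigation file direct evidence that data quality was assessed.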
RA-Specific Decision Points
RA professionals must navigate critical decision points with regard to AI integration:
1. When to File as Variation vs. New Application
Determine the impact level of AI implementation on existing processes. If the AI model significantly changes the conclusion of the investigation or the interpretation of results, this may warrant a new application. Conversely, if the AI assists but does not alter fundamental conclusions, it may qualify as a variation. Maintain communication with regulatory contacts for clarifications.
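The decision logic above can be summarized as a rough triage helper. This is purely a sketch of the two criteria named in the text; the inputs are illustrative yes/no assessments, and the actual filing route always depends on the jurisdiction and should be confirmed with the relevant agency contact.

```python
def filing_route(changes_conclusion: bool, alters_registered_process: bool) -> str:
    """Rough triage of the filing route for AI adoption in investigations.

    Returns an indicative route label, not a regulatory determination.
    """
    if changes_conclusion or alters_registered_process:
        # AI materially changes outcomes: treat as the heavier filing.
        return "new application / prior-approval change"
    # AI assists without altering fundamental conclusions.
    return "variation / notifiable change"
```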
2. Justifying Bridging Data
Data linking AI outputs to traditional investigational frameworks can be facilitated through bridging studies. During the justification stage, demonstrate how preliminary results from AI tools can corroborate or challenge conventional findings, thereby providing a fuller picture of the situation. Ensure that all implications for patient safety and product quality remain within regulatory thresholds.
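One simple, quantifiable way to show that AI outputs corroborate conventional findings is a concordance rate between the two sets of identified root causes. The metric below is an assumed illustration, not a regulator-specified measure; the convention for the empty case is a deliberate design choice.

```python
def concordance(ai_causes, traditional_causes):
    """Fraction of traditionally identified root causes also flagged by the AI tool."""
    traditional = set(traditional_causes)
    if not traditional:
        return 1.0  # convention: nothing to corroborate
    overlap = set(ai_causes) & traditional
    return len(overlap) / len(traditional)
```

For example, if the AI flags {contamination, operator error} and the traditional investigation finds {operator error, calibration drift}, concordance is 0.5, prompting a documented reconciliation of the divergent causes.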
3. Risk Management Assessment
Perform a detailed risk management assessment for the AI model in use. This includes assessing potential risks associated with AI outputs and how these risks align with existing regulatory expectations in Risk Analysis and Management Guidelines (e.g., ICH Q9). Adapt risk mitigation strategies and ensure they are documented throughout the lifecycle stage.
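A common ICH Q9-aligned technique for such assessments is FMEA-style scoring, where severity, occurrence, and detectability are each rated and multiplied into a risk priority number (RPN). The sketch below uses an assumed 1-10 scale and an illustrative mitigation threshold; real acceptance limits come from the site's own quality risk management procedure.

```python
def risk_priority_number(severity: int, occurrence: int, detectability: int) -> int:
    """FMEA-style risk priority number; each factor scored 1-10 (assumed scale)."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("scores must be between 1 and 10")
    return severity * occurrence * detectability

def needs_mitigation(rpn: int, threshold: int = 100) -> bool:
    """Illustrative threshold only; real limits are set by the site's QRM procedure."""
    return rpn > threshold
```

For instance, severity 7, occurrence 4, detectability 5 gives an RPN of 140, which under the example threshold would trigger a documented mitigation before the AI output is relied upon.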
Conclusion
The integration of AI into deviation investigations presents new challenges and opportunities for regulatory professionals. Understanding how to leverage AI outputs while satisfying regulatory requirements is paramount. By adhering to guidelines, ensuring robust documentation, and being prepared to address agency concerns, RA professionals can navigate inspections with confidence. In a landscape where AI is becoming increasingly central, RA professionals must engage proactively to ensure seamless compliance.
For further guidance on regulatory considerations involving AI technology, you may reference the FDA’s guidance on artificial intelligence/machine learning in medical devices.