Published on 04/12/2025
Risk controls to prevent over-reliance on AI during investigations
Context
As the pharmaceutical and biotech industries increasingly integrate artificial intelligence (AI) and machine learning (ML) into quality management systems (QMS), regulatory affairs (RA) professionals must ensure these technologies bolster rather than undermine the integrity of deviation investigations and root cause analyses. While AI has the potential to streamline operations, it can also introduce new risks if not effectively managed and governed. Thus, understanding the regulatory landscape surrounding AI-enabled deviation investigations is crucial for compliance and patient safety.
Legal/Regulatory Basis
The use of AI in investigations and root cause analyses is shaped by several key regulatory frameworks and guidelines in the US, EU, and UK, including:
- 21 CFR Part 11: Governs electronic records and electronic signatures in the US, emphasizing data integrity and accountability.
- EU GDPR: Regulates the processing of personal data, impacting the use of AI-driven data analytics methods.
- ICH Guideline Q10: Addresses pharmaceutical quality systems, highlighting the need for robust quality risk management processes that include AI technologies.
- EMA’s Reflection Paper on Digital Health Technologies: Offers guidance on the use of digital technologies, including AI, in clinical and non-clinical settings.
These regulations create a framework within which AI technologies must operate, ensuring their implementation does not compromise the safety, quality, and efficacy of pharmaceutical products.
Documentation
A comprehensive documentation strategy is essential when implementing AI solutions in deviation investigations. Key documents include:
- AI Validation Report: A thorough assessment of AI models to verify they meet intended uses and regulatory requirements.
- Standard Operating Procedures (SOPs): Detailed descriptions of processes incorporating AI, including validation, monitoring, and usage guidelines.
- Risk Management Plan: Outlines identified risks associated with AI use in investigations, including decision-making thresholds related to AI inputs.
- Training Records: Documentation of personnel training on AI tools and methodologies to ensure qualified use.
- Audit Trail: Captures all interactions with AI systems, ensuring compliance with data integrity requirements from 21 CFR Part 11.
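The audit-trail requirement above can be sketched in code. The snippet below is a minimal, illustrative implementation (not a validated Part 11 system): each entry embeds the SHA-256 hash of the previous entry, so any retroactive edit breaks the chain and is detectable on verification. All names (`AuditTrail`, `record`, the sample users and deviation ID) are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only audit trail sketch: each entry stores the hash of the
    previous entry, so later tampering breaks the chain."""

    def __init__(self):
        self.entries = []

    def record(self, user, action, detail):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        # Hash the entry body (everything except the hash itself).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute every hash; return True only if the chain is intact."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

trail = AuditTrail()
trail.record("analyst_01", "AI_TRIAGE", "Deviation DEV-1042 classified as major")
trail.record("qa_lead_02", "HUMAN_REVIEW", "Classification confirmed")
trail.verify()  # True while the chain is untouched
```

A production system would add secure timestamps, access controls, and durable storage; the point here is only that every AI interaction leaves an immutable, attributable record.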
Review/Approval Flow
Initial Assessment and Strategy Development
The initiation of an AI-enabled deviation investigation requires an established workflow that entails:
- Identifying the deviation and its context within the QMS.
- Engaging a cross-functional team that includes members from Quality Assurance (QA), Regulatory Affairs (RA), and IT to assess the appropriate use of AI.
- Developing an AI strategy that specifies its role in the investigation and decision-making process.
Implementation
Once the strategy is established, the implementation process includes:
- Data collection: Gathering relevant data to feed into the AI system for analysis.
- Deviation triage: Utilizing AI to categorize and prioritize deviation investigations based on severity and impact.
- AI model application: Deploying the ML model to identify potential root causes, ensuring oversight from experienced personnel.
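The triage step above can be made concrete with a small sketch. This is a hypothetical rule-based scorer, not a trained model: it weights severity against impact and maps the product to a priority band. All category names and thresholds are illustrative assumptions that a real QMS would define in its risk management plan.

```python
# Illustrative weights; a real system would take these from the
# organization's documented risk management plan.
SEVERITY_WEIGHT = {"minor": 1, "major": 3, "critical": 5}
IMPACT_WEIGHT = {"single_batch": 1, "multiple_batches": 2, "patient_safety": 4}

def triage(severity: str, impact: str) -> str:
    """Map a deviation's severity and impact to a priority band."""
    score = SEVERITY_WEIGHT[severity] * IMPACT_WEIGHT[impact]
    if score >= 10:
        return "high"    # escalate immediately; human review mandatory
    if score >= 3:
        return "medium"
    return "low"

triage("critical", "patient_safety")  # "high"
```

Even for a rule set this simple, the point stands that the AI output assigns priority only; the investigation itself remains under human control.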
Review and Final Approval
Upon completion of the investigation using AI tools, the outputs must undergo review by qualified personnel:
- Evaluation of AI-generated insights and conclusions against existing knowledge and data.
- Assessment of the decision-making process to ensure robust justification for actions taken based on AI inputs.
- Final sign-off by senior quality or regulatory leaders prior to reporting findings.
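The review gate described above can be expressed as a simple data structure: an AI-generated finding cannot be reported until a named reviewer has approved it with a written justification, regardless of the model's confidence. This is a minimal sketch with hypothetical names (`AIFinding`, `review`, `can_report`), not a prescribed workflow.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIFinding:
    """A root-cause hypothesis proposed by the AI, pending human review."""
    deviation_id: str
    proposed_root_cause: str
    model_confidence: float           # 0.0-1.0, as reported by the model
    reviewer: Optional[str] = None
    approved: bool = False
    justification: str = ""

def review(finding: AIFinding, reviewer: str, approve: bool,
           justification: str) -> AIFinding:
    # Regardless of model confidence, sign-off requires a named reviewer
    # and a documented justification for the decision.
    if not justification.strip():
        raise ValueError("A written justification is required for sign-off.")
    finding.reviewer = reviewer
    finding.approved = approve
    finding.justification = justification
    return finding

def can_report(finding: AIFinding) -> bool:
    """Findings may only be reported after explicit human approval."""
    return finding.approved and finding.reviewer is not None

finding = AIFinding("DEV-1042", "pump seal degradation", 0.91)
can_report(finding)  # False until a reviewer signs off
```

Encoding the gate in the system itself, rather than relying on procedure alone, is one practical control against the over-reliance risk this article addresses.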
Common Deficiencies
Efforts to integrate AI into deviation investigations may encounter several common deficiencies, each presenting potential compliance risks:
- Lack of Validation: Failing to validate AI models adequately can result in unreliable outputs leading to misguided conclusions. Validation should be comprehensive, covering model accuracy, robustness, and reproducibility.
- Inadequate Documentation: Poor record-keeping of AI outputs and decision-making can raise red flags during inspections, jeopardizing compliance with regulatory requirements.
- Unclear Roles and Responsibilities: Not defining who is responsible for the oversight and review of AI processes can lead to accountability issues, diluting the significance of human judgment in critical investigations.
- Over-Reliance on Automation: Placing too much trust in AI features can result in automated decisions without sufficient human scrutiny, increasing the risk of undetected quality issues.
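The validation deficiency above (accuracy, robustness, reproducibility) can be illustrated with a toy harness. The sketch below checks a model's accuracy against labelled historical deviations and verifies that repeated runs agree; the keyword-lookup "model" and the sample cases are invented stand-ins, and a real validation protocol would cover far more (challenge data, edge cases, drift monitoring).

```python
def validate_model(predict, cases):
    """Minimal validation sketch: accuracy against labelled historical
    deviations, plus a reproducibility check (repeated calls must agree).
    `predict` is any callable mapping a description to a root-cause label."""
    correct = sum(1 for desc, label in cases if predict(desc) == label)
    accuracy = correct / len(cases)
    # For a stochastic model, compare two complete runs instead.
    reproducible = all(predict(desc) == predict(desc) for desc, _ in cases)
    return {"accuracy": accuracy, "reproducible": reproducible}

# Illustrative stand-in for a trained classifier: keyword lookup.
def toy_predict(description):
    return "equipment" if "pump" in description else "human_error"

history = [
    ("pump pressure drift on line 3", "equipment"),
    ("operator skipped line clearance", "human_error"),
    ("pump seal failure", "equipment"),
]
validate_model(toy_predict, history)
```

Capturing results like these in the AI Validation Report mentioned earlier is what turns an ad hoc check into inspectable evidence.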
RA-Specific Decision Points
As regulatory professionals utilize AI-enabled tools, they must remain vigilant about several decision points that could affect compliance strategy:
When to File as Variation vs. New Application
Understanding the distinction between variations and new applications is vital when integrating AI technologies. If AI tools lead to significant changes in manufacturing or quality procedures impacting product safety, efficacy, or quality, a variation may need to be submitted. Conversely, if the AI system fundamentally alters product formulation or intended use, a new application may be warranted. Determining the regulatory implications early streamlines the review process.
Justifying Bridging Data
When transitioning from traditional processes to AI-driven investigations, providing sufficient bridging data to regulatory authorities justifying the AI system is essential. Bridging data should demonstrate:
- Comparative analysis of traditional investigation results versus AI outcomes to ensure alignment.
- Clear documentation of AI model accuracy and reliability through validation studies.
- Evidence of historical performance that underscores the AI system’s effectiveness in supporting compliant investigations.
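The comparative analysis in the first bullet above needs a quantitative footing. One common choice (an assumption here, not a regulatory mandate) is Cohen's kappa: chance-corrected agreement between root-cause assignments from traditional investigations and those produced by the AI. The sample labels below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two sets of categorical labels,
    e.g. traditional vs. AI-assigned root causes. Assumes the two raters
    do not agree perfectly by chance (expected agreement < 1)."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

traditional = ["equipment", "human_error", "equipment", "materials", "equipment"]
ai_outcome  = ["equipment", "human_error", "equipment", "equipment", "equipment"]
cohens_kappa(traditional, ai_outcome)  # ~0.58: moderate agreement
```

Reporting an agreement statistic alongside raw concordance counts gives reviewers a defensible basis for the bridging claim, rather than an unquantified assertion of "alignment."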
Integration with CMC, Clinical, and Quality Systems
AI’s implementation in deviation investigations must integrate with other critical areas, such as Chemistry, Manufacturing and Controls (CMC), clinical processes, and overall Quality Systems (QS). Close collaboration ensures that:
- Regulatory submissions harmonize with AI-generated insights.
- Clinical assessments appropriately consider data derived from AI analytics.
- Quality systems maintain coherence in operational processes, minimizing risk during audits.
Practical Tips for Documentation and Agency Responses
To optimize effectiveness in regulatory affairs when utilizing AI in deviation investigations, consider the following practical tips:
- Establish a Standard Protocol: Develop standardized methods for documenting all aspects of AI investigations, so that records are easy to retrieve during audits.
- Regular Training Sessions: Implement training for team members on AI tools, ensuring they understand both the technology and the regulatory implications.
- Engage with Regulatory Bodies Early: Initiate dialogues with regulatory representatives early in the adoption process to clarify expectations and criteria for compliance.
- Conduct Mock Audits: Regularly assess the AI framework’s compliance with existing regulations through internal audits to identify and rectify potential gaps.
- Create Interdisciplinary Review Teams: Form teams that bring together experts from different fields (Quality, Regulatory, Clinical) to collectively evaluate and guide AI investigations.
Conclusion
The integration of AI into deviation investigations offers significant potential for enhancing efficiency and accuracy in quality management. However, to prevent over-reliance on AI, regulatory professionals must embrace a thorough governance framework that includes robust validation, comprehensive documentation, and diligent oversight. By adhering to regulatory expectations and substantiating AI methodologies with effective monitoring and human expertise, organizations can mitigate risks and ensure compliance during the evolving landscape of investigation automation.