Supplier assessments and audits focused on AI data integrity controls

Published on 03/12/2025

Regulatory Affairs Context

In the evolving landscape of the pharmaceutical and biotech sectors, the integration of Artificial Intelligence (AI) into quality systems presents unique challenges and opportunities. The regulatory framework governing these technologies, particularly in relation to data governance, validation, and compliance under 21 CFR Part 11, is critical for safeguarding data integrity and ensuring patient safety. Regulatory agencies such as the FDA, EMA, and MHRA have established guidelines that must be adhered to when integrating AI into quality assurance (QA) processes, and these expectations shape how organizations perform supplier assessments and audits.

Legal/Regulatory Basis

Data governance within the domain of AI must comply with a variety of regulations, including:

  • 21 CFR Part 11: This regulation addresses electronic records and electronic signatures. It establishes the criteria under which electronic records and signatures are considered trustworthy, reliable, and equivalent to traditional paper records.
  • Annex 11: This section of the EU GMP guidelines focuses on the validation of computer systems, including AI applications, stressing that systems should be validated for their intended use and comply with GDPR when handling personal data.
  • ICH Guidelines: The International Council for Harmonisation (ICH) has established guidelines that influence data integrity and management throughout the product lifecycle, particularly in clinical trials and Good Manufacturing Practice (GMP).

Documentation Requirements

Effective supplier assessments and audits necessitate comprehensive documentation to meet regulatory standards. Key documentation elements include:

  • Supplier Qualification Files: These should include evidence of the supplier’s compliance with regulatory requirements, previous performance evaluations, and quality system certifications.
  • Risk Assessment Reports: A thorough risk analysis should address potential risks associated with AI implementation and how these risks will be managed or mitigated.
  • Validation Plans: These should outline the approach to validating AI applications within the quality system, detailing methodologies for both initial validation and continuous monitoring.
  • Data Integrity Assessments: Documentation must demonstrate how data integrity is maintained across AI systems, ensuring compliance with both 21 CFR Part 11 and Annex 11.
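A data integrity assessment of the kind described above often includes a technical check that records have not been silently altered after creation. The sketch below is a minimal, hypothetical illustration (function names and record fields are invented for this example, not drawn from any regulation): it fingerprints each record with a SHA-256 hash at creation time, so a later audit can detect modifications.

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Hash a record's canonical JSON form so later tampering is detectable."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_records(records: list[dict], fingerprints: list[str]) -> list[int]:
    """Return indices of records whose current hash no longer matches the
    fingerprint captured when the record was created."""
    return [
        i for i, (rec, fp) in enumerate(zip(records, fingerprints))
        if record_fingerprint(rec) != fp
    ]

# Capture fingerprints when AI-related records are created...
batch = [{"lot": "A1", "result": 0.98}, {"lot": "A2", "result": 0.91}]
baseline = [record_fingerprint(r) for r in batch]

# ...then a later audit detects any silent modification.
batch[1]["result"] = 0.99  # simulated unauthorized change
assert verify_records(batch, baseline) == [1]
```

In practice such checks would complement, not replace, the audit-trail and access-control requirements of 21 CFR Part 11.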

Review/Approval Flow

The review and approval process for AI-related data governance must be clearly defined and should include the following steps:

  1. Initial Assessment: Conduct a preliminary evaluation of the supplier’s AI systems to determine potential risks and confirm readiness for audit.
  2. Detailed Audit: Perform an in-depth audit of the supplier’s systems, focusing on compliance with regulatory requirements, including data integrity controls and system validation.
  3. Documentation Review: Ensure that all necessary documentation is in place and meets regulatory expectations, including evidence of system validation and risk management strategies.
  4. Approval Decision: Issue a formal approval or recommendations for corrective actions based on findings from the audit.
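The four steps above form an ordered workflow, and organizations sometimes encode such flows in their quality-management tooling so that no step can be skipped. The following is an illustrative sketch only; the stage names and transition rules are assumptions for this example, not prescribed by any regulation.

```python
# Allowed transitions between assessment stages (hypothetical names).
ALLOWED = {
    "initial_assessment": {"detailed_audit", "rejected"},
    "detailed_audit": {"documentation_review", "rejected"},
    "documentation_review": {"approval_decision", "detailed_audit"},
    "approval_decision": {"approved", "corrective_actions"},
}

def advance(state: str, next_state: str) -> str:
    """Move the supplier assessment to the next stage, refusing any
    transition that skips a required step."""
    if next_state not in ALLOWED.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {next_state!r}")
    return next_state

# Walk the flow in order; an out-of-order jump raises an error.
state = "initial_assessment"
for step in ("detailed_audit", "documentation_review", "approval_decision"):
    state = advance(state, step)
assert state == "approval_decision"
```

Modeling the flow this way makes audit findings easier to trace, because every stage change is an explicit, checkable event.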

Common Deficiencies

Organizations often encounter specific deficiencies during supplier assessments and audits focused on AI data integrity controls. These include:

  • Inadequate Validation: Failing to validate the AI system properly, particularly regarding its algorithms and output integrity.
  • Insufficient Documentation: Lack of adequate records demonstrating compliance with 21 CFR Part 11 and insufficient details in validation reports.
  • Poor Data Governance Practices: Inconsistent application of data governance principles, leading to potential data integrity issues.
  • Unclear Roles and Responsibilities: Ambiguities regarding who is responsible for different aspects of data governance and compliance can lead to accountability gaps.

RA-Specific Decision Points

When to File as a Variation vs. New Application

Decisions around modifications in AI systems can have significant regulatory implications. Organizations must distinguish between filing for variations or new applications based on:

  • Nature of Change: If the modification in the AI system impacts the intended use or significantly alters its performance, it may warrant a new application.
  • Scope of Use: Changes applicable to existing systems or operational contexts might only require a variation.
  • Regulatory Feedback: Engaging with regulatory bodies early in the process can clarify whether a modification requires a new submission or can be addressed through a variation.
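The three criteria above can be captured as a provisional decision aid. The sketch below is purely illustrative: the parameter names and routing logic are assumptions, and the actual filing route must always be confirmed with the relevant regulatory authority.

```python
def filing_route(impacts_intended_use: bool,
                 alters_performance_significantly: bool,
                 within_existing_scope: bool) -> str:
    """Map the decision criteria onto a provisional filing route.

    A change to intended use or a significant performance change points
    toward a new application; a change within the existing scope of use
    points toward a variation; anything else warrants regulator feedback.
    """
    if impacts_intended_use or alters_performance_significantly:
        return "new_application"
    if within_existing_scope:
        return "variation"
    return "consult_regulator"

assert filing_route(False, False, True) == "variation"
assert filing_route(True, False, True) == "new_application"
```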

How to Justify Bridging Data

Bridging data are essential when there are gaps in studies or when utilizing data from different sources. Justifications for using bridging data might include:

  • Scientific Rationale: Clear scientific justification for bridging data based on recognized standards and practices.
  • Regulatory Precedent: Evidence of prior approvals utilizing similar bridging data can bolster the case.
  • Robust Statistical Models: Employing validated statistical methods and models to demonstrate the reliability of the bridging data supporting the AI application.

Interactions with CMC, Clinical, PV, QA, and Commercial Teams

Collaboration between Regulatory Affairs and other departments is crucial to ensure compliance across all functions influenced by AI and data governance. Key interactions include:

  • Chemistry, Manufacturing, and Controls (CMC): Close coordination with CMC teams is essential to validate that manufacturing processes utilizing AI adhere to regulatory requirements.
  • Clinical Teams: Clinical trial data management must ensure that AI systems utilized for data handling, analysis, and reporting comply with prescribed data integrity standards.
  • Pharmacovigilance (PV): AI may analyze post-marketing data; thus, clear regulations around data governance and reporting must be established with PV teams.
  • Quality Assurance (QA): QA teams should evaluate AI tools within the broader quality framework, ensuring compliance with relevant standards.
  • Commercial Teams: Awareness of regulatory requirements must extend to commercial teams when utilizing AI in marketing and sales strategies.

Practical Tips for Documentation and Justifications

To navigate regulatory expectations successfully in the context of AI and data governance, consider the following practical tips:

  • Develop a Comprehensive Validation Framework: Establish a structured approach for validating AI applications, including defining success criteria and periodic review procedures.
  • Maintain Consistent Communication: Regular discussions across departments can foster an understanding of compliance issues and prevent bottlenecks during audits.
  • Stay Updated on Regulatory Changes: Keep abreast of evolving regulations and guidelines to ensure proactive compliance strategies.
  • Implement Robust Data Management Practices: Design controls to ensure the integrity of data generated by AI systems, encompassing a well-structured data governance policy.
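One concrete control for the last tip is a tamper-evident audit trail, a common expectation under 21 CFR Part 11. The sketch below is a minimal, hypothetical example (class and field names are invented here): each entry chains the hash of the previous entry, so deleting, reordering, or editing entries is detectable on verification.

```python
import hashlib

class AuditTrail:
    """Minimal hash-chained audit trail: each entry embeds the hash of the
    previous entry, so any edit, deletion, or reordering breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, user: str, action: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = f"{prev}|{user}|{action}"
        self.entries.append({
            "user": user,
            "action": action,
            "prev": prev,
            "hash": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
        })

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            payload = f"{prev}|{e['user']}|{e['action']}"
            expected = hashlib.sha256(payload.encode("utf-8")).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append("analyst1", "model retrained")
trail.append("qa_lead", "output reviewed")
assert trail.verify()
trail.entries[0]["action"] = "tampered"  # simulated alteration
assert not trail.verify()
```

A production system would also need secure storage, access controls, and time-stamping; this sketch shows only the chaining idea.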

In conclusion, effective supplier assessments and audits focused on AI data integrity controls are fundamental to establishing trustworthy systems that support regulatory compliance. Organizations in the pharma and biotech sectors must align their practices with the regulations outlined above and develop holistic data governance strategies that satisfy authorities such as the FDA, EMA, and MHRA, while upholding the highest standards of quality and compliance.
