Published on 05/12/2025
Case Examples of RWE Rejected Due to RWD Quality Concerns
In recent years, real-world evidence (RWE) has gained prominence in regulatory decision-making processes, steering the pharmaceutical and medical device industries toward evidence-based practices. However, the use of real-world data (RWD) necessitates an unwavering commitment to data quality, integrity, and bias management. This article presents a comprehensive tutorial on RWE, emphasizing case examples where RWD submissions were rejected due to quality concerns, thereby providing regulatory, biostatistics, health economics and outcomes research (HEOR), and data standards professionals with critical insights into fitness for purpose in RWD.
Understanding Real-World Evidence and Real-World Data
Real-world evidence refers to clinical evidence derived from data collected in real-world settings, as opposed to traditional randomized controlled trials. RWD, on the other hand, includes information collected outside of formal clinical trials, such as electronic health records (EHRs), insurance claims data, and patient registries.
However, the credibility of RWE is dependent on the underlying quality of RWD. Issues such as selection bias, misclassification, and data provenance can significantly impact the validity of RWE findings. Therefore, understanding these elements is crucial for any organization aiming to leverage RWD in regulatory submissions.
Common Quality Concerns in Real-World Data
Several factors contribute to the quality of RWD and, consequently, the reliability of RWE generated from such data. These factors include:
- Fitness for Purpose: RWD must be appropriate for the intended analysis. Data not specifically collected for the research question may not yield relevant insights.
- Selection Bias: If the population from which data is drawn does not adequately represent the target population, estimated treatment effects may be skewed or misleading.
- Misclassification: Errors in data entry or categorization can lead to incorrect conclusions and erode confidence in the findings.
- Data Provenance: Understanding the source, ownership, and collection methods of data is vital for establishing its credibility and relevance.
- Causal Inference: Traditional statistical methods may not adequately address the complexities of RWD, making causal inference particularly challenging.
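One common first screen for the selection-bias concern above is to compare covariate distributions between the RWD cohort and the target population using standardized mean differences (SMDs), with values above roughly 0.1 often read as meaningful imbalance. The sketch below uses hypothetical age data; the cohorts, values, and the 0.1 threshold are illustrative assumptions, not drawn from any specific submission.

```python
from math import sqrt
from statistics import mean, stdev

def standardized_mean_difference(sample_a, sample_b):
    """Standardized mean difference (SMD) between two cohorts.

    SMD = (mean_a - mean_b) / pooled SD. Values above ~0.1 are
    commonly treated as a flag for covariate imbalance, i.e. a
    quick screen for selection bias on a measured covariate.
    """
    pooled_sd = sqrt((stdev(sample_a) ** 2 + stdev(sample_b) ** 2) / 2)
    return (mean(sample_a) - mean(sample_b)) / pooled_sd

# Hypothetical ages: RWD cohort vs. the target (trial-like) population
rwd_ages = [71, 74, 68, 77, 73, 70, 75, 72]
target_ages = [62, 65, 59, 66, 61, 63, 64, 60]

smd = standardized_mean_difference(rwd_ages, target_ages)
print(f"SMD for age: {smd:.2f}")  # values well above 0.1 flag imbalance
```

In practice this check would be repeated across all measured baseline covariates, with any imbalanced covariate becoming a candidate for adjustment or a documented limitation.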
Regulatory Expectations for RWD Quality
As highlighted in the FDA guidance, there are specific regulatory expectations for the quality of RWD. Data submissions must be subjected to rigorous scrutiny to ensure reliability and validity. Key regulatory documents emphasize the importance of data quality in the context of RWE applications:
- The FDA’s Framework for FDA’s Real-World Evidence Program
- The International Council for Harmonisation (ICH) E6 Guideline for Good Clinical Practice
- FDA’s guidance on Real-World Data and Real-World Evidence in clinical investigations
These documents collectively underscore the regulatory landscape’s emphasis on data integrity, requiring sponsors to conduct thorough assessments to ensure RWD is fit for its intended purpose.
Case Examples of RWE Rejected Due to RWD Quality Concerns
Examining case examples where RWE has faced rejection due to RWD quality concerns can provide invaluable lessons for organizations. The following scenarios serve as cautionary tales, showcasing the critical importance of adherence to regulatory expectations:
Case Example 1: Study on Drug Efficacy Using Claims Data
A pharmaceutical company submitted a study utilizing RWD derived from insurance claims data to support a drug’s efficacy in treating a specific condition. The analysis revealed significant outcomes showing improved patient results compared to historical controls. However, the FDA rejected the submission due to concerns over selection bias. The claims data did not adequately capture a representative cross-section of the patient population, leading to doubts regarding the generalizability of the findings. Consequently, without correcting for these biases or ensuring a more representative data set, the submission failed to demonstrate adequate RWD quality.
Case Example 2: Misclassification in Electronic Health Records
Another instance involved a clinical study relying on data sourced from electronic health records (EHRs) to assess the long-term safety of a medical device. The generated RWE indicated a lower rate of adverse events than observed in controlled settings. The FDA raised concerns regarding misclassification of patient statuses in the EHRs, particularly due to variability in physician documentation practices. This inconsistency undermined the robustness of the safety claims, resulting in a rejection until the sponsor could provide a comprehensive review of data provenance and rectify classification issues.
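When misclassification of this kind is suspected, quantitative bias analysis can show how sensitive a claimed rate is to imperfect coding. One standard tool is the Rogan-Gladen correction, which adjusts an observed prevalence for the sensitivity and specificity of the measurement. The numbers below (a 4% observed rate, 60% coding sensitivity, 99.5% specificity) are hypothetical, chosen only to illustrate how EHR undercoding can depress an observed adverse-event rate.

```python
def rogan_gladen(observed_prevalence, sensitivity, specificity):
    """Rogan-Gladen correction for outcome misclassification.

    corrected = (observed + specificity - 1) / (sensitivity + specificity - 1)
    Adjusts an observed event rate for the sensitivity and specificity
    of the coding used in the data source; result is clamped to [0, 1].
    """
    corrected = (observed_prevalence + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(corrected, 0.0), 1.0)

# Hypothetical: 4% of device patients carry an adverse-event code, but
# chart review suggests EHR coding is only 60% sensitive for this event.
corrected_rate = rogan_gladen(0.04, sensitivity=0.60, specificity=0.995)
print(f"Corrected rate: {corrected_rate:.3f}")  # exceeds the observed 4%
```

A gap between the observed and corrected rates of this size is exactly the kind of discrepancy a reviewer would expect a sponsor to quantify rather than dismiss.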
Case Example 3: Lack of Causal Inference Methodologies
A company seeking to establish a causal link between a treatment and improved outcomes provided data from a patient registry. While the initial analysis demonstrated a correlation, the FDA noted the absence of appropriate causal inference methodologies to support the claims made. RWD without methodological rigor cannot reliably inform conclusions about treatment effects. Here, the failure to follow recommended statistical approaches, including propensity score matching or instrumental variable analysis, led to a rejection based on inadequate demonstration of data quality and integrity.
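To make the propensity-score idea concrete, the sketch below implements 1:1 greedy nearest-neighbor matching with a caliper, one common way such methods pair treated and control patients. The unit IDs, scores, and caliper are invented for illustration, and the propensity scores are assumed to come from a separate model fit (e.g. logistic regression on baseline covariates), which is not shown.

```python
def greedy_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbor matching on precomputed propensity scores.

    `treated` and `controls` map unit IDs to propensity scores. Each
    control is used at most once, and candidate pairs whose scores differ
    by more than `caliper` are discarded (those treated units go unmatched).
    """
    available = dict(controls)
    pairs = []
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Nearest remaining control by absolute score distance
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs

treated = {"T1": 0.62, "T2": 0.48, "T3": 0.90}
controls = {"C1": 0.60, "C2": 0.50, "C3": 0.45, "C4": 0.30}
print(greedy_match(treated, controls))  # T3 has no control within the caliper
```

Note that the high-score treated unit goes unmatched: off-support patients that cannot be paired are themselves evidence that the comparison groups are not exchangeable, which is part of what regulators look for in a causal-inference plan.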
Strategies for Ensuring RWD Quality
To avoid similar pitfalls, organizations must implement robust strategies to enhance the quality of RWD and ensure its integrity. These strategies include:
- Comprehensive Data Management Plans: Develop and maintain data management plans that outline data collection methodologies, sources, and analysis plans, ensuring consistency and reliability across study endpoints.
- Bias Mitigation Techniques: Employ statistical techniques to identify and mitigate selection bias and other forms of bias that could compromise the findings. Consider sensitivity analyses to assess how changes in data impact results.
- Training on Data Entry & Classification: Invest in training personnel on accurate data entry and classification to minimize errors and misclassification risks.
- Engagement with Regulatory Bodies: Foster open communication with regulators during study design phases to seek guidance regarding data collection strategies and relevance of planned analyses.
- Leveraging Technology: Utilize advanced analytics and machine learning techniques to enhance data interpretation and identify potential biases in RWD datasets.
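The sensitivity analyses mentioned above can take many forms; one simple, widely used summary is the E-value, which states how strong an unmeasured confounder would have to be (in risk-ratio terms, with both treatment and outcome) to fully explain away an observed association. A minimal sketch, using an illustrative risk ratio of 1.8:

```python
from math import sqrt

def e_value(risk_ratio):
    """E-value for an observed risk ratio.

    E = RR + sqrt(RR * (RR - 1)), computed on the RR >= 1 scale
    (protective effects are inverted first). Larger E-values mean the
    finding is more robust to unmeasured confounding.
    """
    rr = risk_ratio if risk_ratio >= 1 else 1 / risk_ratio
    return rr + sqrt(rr * (rr - 1))

# Illustrative observed risk ratio of 1.8
print(round(e_value(1.8), 2))  # -> 3.0
```

Reporting an E-value alongside the primary estimate gives reviewers a transparent, quantitative statement of robustness rather than a qualitative assurance that bias was "considered."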
Conclusion: The Path Forward for RWD Quality Integrity
As the integration of RWE into regulatory frameworks continues to evolve, ensuring the quality, integrity, and appropriate management of bias in RWD will remain critical. This article has outlined essential considerations for regulatory affairs professionals, highlighting case examples of RWE rejections and providing actionable strategies to uphold RWD quality standards. By implementing rigorous quality assurance measures, collaborating with regulatory authorities, and embracing advanced analytical techniques, organizations can enhance their ability to submit robust and credible RWE supporting their products’ efficacy and safety.
Investing efforts in addressing data quality concerns not only aligns with regulatory mandates but also strengthens the overall credibility of RWE as a valuable component in evidence-based healthcare decision-making.