Purpose: Blinded Independent Central Review (BICR) is a well-accepted method employed in many oncology registration trials. Ongoing monitoring of radiologist “reader” performance is both good clinical trial practice and a requirement of regulatory authorities. We continue to use the Reader Disagreement Index (RDI) as an important measure in BICR. In this work we studied RDI as an early indicator for identifying an outlier reader during the monitoring of reader performance in BICR. Early indication would enable early intervention and thus possibly improve trial outcomes.
Methods: We performed a retrospective analysis of readers’ RDIs in nineteen different clinical trials. Ninety-two reader performances were examined at five intervals in each trial. These individual trial reviews were conducted by forty-three board-certified radiologist readers using several established imaging assessment trial criteria. The objective was to determine how well RDI performance above a threshold at progressive monitoring intervals would “flag” a potential overall end-point performance “issue” for that specific reader.
Results: We present results for the prediction of exceeding a threshold (one standard deviation above the study mean RDI). Sensitivity, Specificity, Positive Predictive Value (PPV) and Negative Predictive Value (NPV) were determined for the predicted performance outcomes. We explored interpreting multiple “flags” within each trial to improve these metrics.
Conclusions: One would expect that a “flag” of RDI exceeding the threshold at an early stage would give a useful prediction of end-point reader performance. We refined our methods to use multiple flags, which enables statistically improved Specificity and PPV. Improved predictive capability at early-stage intervals, coupled with persistent monitoring across subsequent intervals, will enable trial managers to focus on specific readers. An earlier indication of possible reader performance issues permits proactive intervention and enhances good trial practices.
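As a concrete illustration of this flagging scheme, the following R sketch uses simulated data (not the study data) to flag readers whose interim RDI exceeds the study mean plus one standard deviation, and then scores the interim flag against the end-point flag using the four metrics named above.

```r
# Minimal sketch on simulated data (not the study data): an interim RDI
# "flag" is an RDI above the study mean plus one standard deviation; we
# score the interim flag against the end-point flag.
set.seed(42)
interim_rdi  <- rnorm(92, mean = 20, sd = 5)    # hypothetical interim RDIs (%)
endpoint_rdi <- interim_rdi + rnorm(92, 0, 3)   # hypothetical end-point RDIs (%)

flag <- function(rdi) rdi > mean(rdi) + sd(rdi) # threshold: mean + 1 SD

predicted <- flag(interim_rdi)    # interim flag
actual    <- flag(endpoint_rdi)   # end-point performance "issue"

tp <- sum(predicted & actual);  fp <- sum(predicted & !actual)
fn <- sum(!predicted & actual); tn <- sum(!predicted & !actual)

c(Sensitivity = tp / (tp + fn),
  Specificity = tn / (tn + fp),
  PPV         = tp / (tp + fp),
  NPV         = tn / (tn + fn))
```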
Purpose: Blinded Independent Central Review (BICR) is highly recommended by regulatory authorities for oncology registration trials. The “adjudication rate” produced by the “Two Reviewers and Adjudicator” paradigm of BICR has been part of reviewer performance metrics and trial efficacy assessments. However, the adjudication rate does not account for the adjudicator’s agreement or disagreement rate with a reviewer. We analyzed whether the Reader Disagreement Index (RDI) is a better measure than the adjudication rate for monitoring reviewer performance in BICR.

Methods: BICR adjudication data from 3 different clinical trials, involving 10 board-certified radiologist reviewers using Response Evaluation Criteria in Solid Tumors (RECIST) 1.1 criteria, were analyzed. The RDI for each reviewer was calculated using the following formula:

RDI (%) = (Number of cases where the adjudicator disagreed with the given reviewer / Total number of cases read) × 100

The adjudication rate and adjudicator agreement rate were also calculated for each reviewer. RDI was used to identify the discordant reviewer with the highest disagreement rate.

Results: RDI identified the discordant reviewer in all 3 studies. The discordant reviewers identified using RDI were not the reviewers with the highest adjudication rates or the lowest agreement rates. The adjudication rate identified the discordant reviewer in only 1 of the 3 studies. The reviewer with the lowest adjudicator agreement could not have been identified as the discordant reviewer using the adjudication rate alone. RDI is more robust in identifying a discordant reviewer who has neither the highest adjudication rate nor the lowest agreement rate.

Conclusions: RDI is a more reliable measure of reviewer performance than the adjudication rate and can be used effectively to monitor reviewer performance, as it combines both the reviewer’s adjudication percentage and the adjudicator agreement percentage.
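A minimal R sketch of these per-reviewer metrics, assuming a hypothetical per-case adjudication table; the column names (reviewer, adjudicated, adjudicator_agreed) are illustrative, not from the study:

```r
# Minimal sketch, not the study code: per-reviewer adjudication rate,
# adjudicator agreement rate and RDI from a hypothetical per-case table.
library(dplyr)

set.seed(1)
adjudications <- data.frame(
  reviewer           = rep(c("R1", "R2"), each = 50),
  adjudicated        = sample(c(TRUE, FALSE), 100, replace = TRUE),
  adjudicator_agreed = sample(c(TRUE, FALSE), 100, replace = TRUE)
)

adjudications %>%
  group_by(reviewer) %>%
  summarise(
    adjudication_rate = 100 * mean(adjudicated),
    # agreement is only meaningful on adjudicated (discordant) cases
    agreement_rate    = 100 * mean(adjudicator_agreed[adjudicated]),
    # RDI: cases where the adjudicator disagreed with this reviewer,
    # over all cases the reviewer read
    rdi               = 100 * mean(adjudicated & !adjudicator_agreed)
  )
```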
Purpose: To develop novel monitoring methods for Blinded Independent Central Review (BICR) imaging trials in which two radiologist reviewers assess the same images. In this project we aimed to ‘flag’ any reviewer whose assessments might be biased relative to those of the other reviewers on a specific study.

Methods: Retrospective data analysis using R programming scripts was used to evaluate discordant assessments between two reviewers. We used a binomial test to determine the probability that an observed low adjudication agreement rate is statistically less than the expected rate across all of a reviewer’s discordant assessment pairs.

Results: We determined that, for five or more discordant cases, we can calculate the probability that an individual reviewer has a statistically significant low adjudication agreement rate across discordant assessment pairs. We then analyzed the assessment data for sixteen oncological BICR clinical trials.

Conclusions: The basic methods described can ‘flag’ or ‘signal’ a potential assessment ‘bias’. Although we initially focused on studies following one published clinical trial criterion for evaluating solid tumors, we have applied the methods to other oncological studies with different published criteria that may also require double radiological reviews.
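A minimal R sketch of the binomial-test flag, assuming an expected adjudication agreement rate of 50% under no bias (an illustrative assumption; the study’s expected rate may differ):

```r
# Minimal sketch (assumed setup, not the study code): for a reviewer with
# n discordant (adjudicated) cases and k adjudicator agreements, test whether
# the agreement rate is statistically below an assumed expected rate of 50%.
flag_reviewer <- function(k_agreed, n_discordant, expected = 0.5, alpha = 0.05) {
  if (n_discordant < 5) return(NA)  # too few discordant cases to test
  p <- binom.test(k_agreed, n_discordant, p = expected,
                  alternative = "less")$p.value
  p < alpha  # TRUE flags a potential assessment bias
}

flag_reviewer(k_agreed = 3, n_discordant = 20)  # e.g., 3 agreements in 20
```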
Purpose: Perform a retrospective review of a number of studies (n=20) to propose basic likelihood metrics for evaluating discordance between two reviewers performing RECIST (Response Evaluation Criteria in Solid Tumors) assessments in a Blinded Independent Central Review (BICR).
Methods: Retrospective data analysis using R programming scripts was performed to determine discordance subsets of interest and to analyze these datasets for both time point discordance and case discordance.
Results: We present a basic time point discordance ratio and a case discordance ratio based on a range of aggregated time points per case for RECIST datasets.
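One plausible reading of these two ratios, sketched in R on simulated data (the exact aggregation used in the study may differ): the time point discordance ratio as the fraction of discordant time points, and the case discordance ratio as the fraction of cases with at least one discordant time point.

```r
# Minimal sketch with assumed column names (case_id, timepoint, discordant);
# illustrative definitions, not the authors' exact ones.
set.seed(7)
tp_data <- data.frame(
  case_id    = rep(1:20, each = 5),   # 20 cases, 5 time points per case
  timepoint  = rep(1:5, times = 20),
  discordant = sample(c(TRUE, FALSE), 100, replace = TRUE, prob = c(0.2, 0.8))
)

# Time point discordance ratio: discordant time points over all time points
tp_ratio <- mean(tp_data$discordant)

# Case discordance ratio: cases with at least one discordant time point
case_ratio <- mean(tapply(tp_data$discordant, tp_data$case_id, any))

c(timepoint_ratio = tp_ratio, case_ratio = case_ratio)
```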
Conclusions: We propose basic ratios that might be useful for improving reviewer performance monitoring models.