RADPEER quality assurance program: a multifacility study of interpretive disagreement rates.
J Am Coll Radiol. 2004 Jan; 1(1):59-65.

Abstract

PURPOSE

To develop and test a radiology peer review system that adds minimally to workload, is confidential, uniform across practices, and provides useful information to meet the mandate for "evaluation of performance in practice" that is forthcoming from the American Board of Medical Specialties as one of the four elements of maintenance of certification.

METHOD

In RADPEER, radiologists who review previous images as part of a new interpretation record their ratings of the previous interpretations on a 4-point scale. Reviewing radiologists' ratings of 3 and 4 (disagreements in nondifficult cases) are reviewed by a peer review committee in each practice to judge whether they are misinterpretations by the original radiologists. Final ratings are sent for central data entry and analysis. A pilot test of RADPEER was conducted in 2002.
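
A minimal sketch of this routing, assuming the scale implied by the abstract (1 presumably indicates agreement, 2 marks a disagreement in a difficult case, and 3 and 4 mark disagreements in nondifficult cases). The Python names below are hypothetical and only illustrate how scores might be held for the local peer review committee before central submission; they are not the authors' implementation.

from dataclasses import dataclass

@dataclass
class PeerReview:
    case_id: str
    reviewer: str
    score: int  # 1-4 rating recorded during the new interpretation

def needs_committee_review(review: PeerReview) -> bool:
    # Per the abstract, only ratings of 3 and 4 are re-examined by the
    # practice's peer review committee before final ratings are submitted.
    return review.score in (3, 4)

def route(reviews):
    # Split reviews into those sent directly for central data entry
    # and those first held for committee validation.
    direct, committee = [], []
    for r in reviews:
        (committee if needs_committee_review(r) else direct).append(r)
    return direct, committee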

RESULTS

Fourteen facilities participated in the pilot test, submitting a total of 20,286 cases. Disagreements in difficult cases (ratings of 2) averaged 2.9% of all cases. Committee-validated misinterpretations in nondifficult cases averaged 0.8% of all cases. There were considerable differences by modality. There were substantial differences across facilities; few of these differences were explicable by mix of modalities, facility size or type, or being early or late in the pilot test. Of 31 radiologists who interpreted over 200 cases, 2 had misinterpretation rates significantly (P < .05) above what would be expected given their individual mix of modalities and the average misinterpretation rate for each modality in their practice.
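
The abstract does not name the statistical test, so the following Python sketch only illustrates one plausible reading of the comparison: a radiologist's observed misinterpretations set against the count expected from his or her own modality mix and the practice-wide rate per modality, checked with a one-sided binomial test. All rates, case counts, and the use of scipy's binomtest are assumptions for illustration, not the authors' method.

from scipy.stats import binomtest

# Hypothetical practice-wide misinterpretation rates per modality
practice_rate = {"CT": 0.012, "MR": 0.010, "radiography": 0.006}
# Hypothetical case mix for one radiologist who read more than 200 cases
cases_read = {"CT": 120, "MR": 40, "radiography": 80}
observed_misreads = 6

n_total = sum(cases_read.values())
# Expected misinterpretation probability, weighted by this radiologist's mix
expected_p = sum(practice_rate[m] * n for m, n in cases_read.items()) / n_total

# One-sided test: is the observed count significantly above the expected rate?
result = binomtest(observed_misreads, n_total, expected_p, alternative="greater")
print(f"expected {expected_p:.2%}, observed {observed_misreads}/{n_total}, p = {result.pvalue:.3f}")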

CONCLUSIONS

A substantial number of facilities participated in the pilot test, and all maintained their participation throughout the year. Data generated are useful for the peer review of individual radiologists and for showing differences by modality. RADPEER is now operational and is a good solution to the need for a peer review system with the desirable characteristics listed above.

Authors

James P. Borgstede (Colorado Springs, CO 80908-3239, USA; borgrad@aol.com), Rebecca S. Lewis, Mythreyi Bhargavan, Jonathan H. Sunshine. Affiliation information is available only for the first author.

Pub Type(s)

Journal Article
Multicenter Study

Language

eng

PubMed ID

17411521

Citation

Borgstede, James P., et al. "RADPEER Quality Assurance Program: a Multifacility Study of Interpretive Disagreement Rates." Journal of the American College of Radiology : JACR, vol. 1, no. 1, 2004, pp. 59-65.
Borgstede JP, Lewis RS, Bhargavan M, et al. RADPEER quality assurance program: a multifacility study of interpretive disagreement rates. J Am Coll Radiol. 2004;1(1):59-65.
Borgstede, J. P., Lewis, R. S., Bhargavan, M., & Sunshine, J. H. (2004). RADPEER quality assurance program: a multifacility study of interpretive disagreement rates. Journal of the American College of Radiology : JACR, 1(1), 59-65.
Borgstede JP, et al. RADPEER Quality Assurance Program: a Multifacility Study of Interpretive Disagreement Rates. J Am Coll Radiol. 2004;1(1):59-65. PubMed PMID: 17411521.
* Article titles in AMA citation format should be in sentence-case
TY - JOUR
T1 - RADPEER quality assurance program: a multifacility study of interpretive disagreement rates.
AU - Borgstede, James P
AU - Lewis, Rebecca S
AU - Bhargavan, Mythreyi
AU - Sunshine, Jonathan H
PY - 2007/4/7/pubmed
PY - 2007/8/24/medline
PY - 2007/4/7/entrez
SP - 59
EP - 65
JF - Journal of the American College of Radiology : JACR
JO - J Am Coll Radiol
VL - 1
IS - 1
SN - 1558-349X
UR - https://www.unboundmedicine.com/medline/citation/17411521/RADPEER_quality_assurance_program:_a_multifacility_study_of_interpretive_disagreement_rates_
L2 - https://linkinghub.elsevier.com/retrieve/pii/S1546-1440(03)00002-4
DB - PRIME
DP - Unbound Medicine
ER -