
Quality assurance in radiology: peer review and peer feedback.
Clin Radiol. 2015 Nov;70(11):1158-64.

Abstract

Peer review in radiology means an assessment of the accuracy of a report issued by another radiologist. Inevitably, this involves a judgement opinion from the reviewing radiologist. Peer feedback is the means by which any form of peer review is communicated back to the original author of the report. This article defines terms, discusses the current status, identifies problems, and provides some recommendations as to the way forward, concentrating upon the software requirements for efficient peer review and peer feedback of reported imaging studies. Radiologists undertake routine peer review in their everyday clinical practice, particularly when reporting and preparing for multidisciplinary team meetings. More formal peer review of reported imaging studies has been advocated as a quality assurance measure to promote good clinical practice. It is also a way of assessing the competency of reporting radiologists referred for investigation to bodies such as the General Medical Council (GMC). The literature shows, firstly, that there is a very wide reported range of discrepancy rates in many studies, which have used a variety of non-comparable methodologies; and secondly, that applying scoring systems in formal peer review is often meaningless, unhelpful, and can even be detrimental. There is currently a lack of electronic peer feedback system software on the market to inform radiologists of any review of their work that has occurred or to provide them with clinical outcome information on cases they have previously reported. Learning opportunities are therefore missed. Radiologists should actively engage with the medical informatics industry to design optimal peer review and feedback software with features to meet their needs. Such a system should be easy to use, be fully integrated with the radiological information and picture archiving systems used clinically, and contain a free-text comment box, without a numerical scoring system. 
It should form a temporary record that cannot be permanently archived. It must provide automated feedback to the original author. Peer feedback, as part of everyday reporting, should enhance daily learning for radiologists. Software requirements for everyday peer feedback differ from those needed for a formal peer review process, which might only be necessary in the setting of a formal GMC enquiry into a particular radiologist's reporting competence, for example.

Authors+Show Affiliations

Imaging Department, Hammersmith Hospital, Imperial College Healthcare NHS Trust, Du Cane Road, London W12 0HS, UK. Electronic address: nicola.strickland@imperial.nhs.uk.

Pub Type(s)

Journal Article
Review

Language

eng

PubMed ID

26223739

Citation

Strickland, N. H. "Quality Assurance in Radiology: Peer Review and Peer Feedback." Clinical Radiology, vol. 70, no. 11, 2015, pp. 1158-64.
Strickland NH. Quality assurance in radiology: peer review and peer feedback. Clin Radiol. 2015;70(11):1158-64.
Strickland, N. H. (2015). Quality assurance in radiology: peer review and peer feedback. Clinical Radiology, 70(11), 1158-64. https://doi.org/10.1016/j.crad.2015.06.091
Strickland NH. Quality Assurance in Radiology: Peer Review and Peer Feedback. Clin Radiol. 2015;70(11):1158-64. PubMed PMID: 26223739.
* Article titles in AMA citation format should be in sentence-case
TY  - JOUR
T1  - Quality assurance in radiology: peer review and peer feedback.
A1  - Strickland,N H,
Y1  - 2015/07/26/
PY  - 2015/03/29/received
PY  - 2015/06/17/revised
PY  - 2015/06/25/accepted
PY  - 2015/7/31/entrez
PY  - 2015/8/1/pubmed
PY  - 2016/1/5/medline
SP  - 1158
EP  - 64
JF  - Clinical radiology
JO  - Clin Radiol
VL  - 70
IS  - 11
N2  - Peer review in radiology means an assessment of the accuracy of a report issued by another radiologist. Inevitably, this involves a judgement opinion from the reviewing radiologist. Peer feedback is the means by which any form of peer review is communicated back to the original author of the report. This article defines terms, discusses the current status, identifies problems, and provides some recommendations as to the way forward, concentrating upon the software requirements for efficient peer review and peer feedback of reported imaging studies. Radiologists undertake routine peer review in their everyday clinical practice, particularly when reporting and preparing for multidisciplinary team meetings. More formal peer review of reported imaging studies has been advocated as a quality assurance measure to promote good clinical practice. It is also a way of assessing the competency of reporting radiologists referred for investigation to bodies such as the General Medical Council (GMC). The literature shows, firstly, that there is a very wide reported range of discrepancy rates in many studies, which have used a variety of non-comparable methodologies; and secondly, that applying scoring systems in formal peer review is often meaningless, unhelpful, and can even be detrimental. There is currently a lack of electronic peer feedback system software on the market to inform radiologists of any review of their work that has occurred or to provide them with clinical outcome information on cases they have previously reported. Learning opportunities are therefore missed. Radiologists should actively engage with the medical informatics industry to design optimal peer review and feedback software with features to meet their needs. Such a system should be easy to use, be fully integrated with the radiological information and picture archiving systems used clinically, and contain a free-text comment box, without a numerical scoring system. It should form a temporary record that cannot be permanently archived. It must provide automated feedback to the original author. Peer feedback, as part of everyday reporting, should enhance daily learning for radiologists. Software requirements for everyday peer feedback differ from those needed for a formal peer review process, which might only be necessary in the setting of a formal GMC enquiry into a particular radiologist's reporting competence, for example.
SN  - 1365-229X
UR  - https://www.unboundmedicine.com/medline/citation/26223739/Quality_assurance_in_radiology:_peer_review_and_peer_feedback_
L2  - https://linkinghub.elsevier.com/retrieve/pii/S0009-9260(15)00284-6
DB  - PRIME
DP  - Unbound Medicine
ER  -